Dataset schema (field, type, observed min/max of values):

  id              stringlengths   9       10
  submitter       stringlengths   2       52
  authors         stringlengths   4       6.51k
  title           stringlengths   4       246
  comments        stringlengths   1       523
  journal-ref     stringlengths   4       345
  doi             stringlengths   11      120
  report-no       stringlengths   2       243
  categories      stringlengths   5       98
  license         stringclasses   9 values
  abstract        stringlengths   33      3.33k
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      stringclasses   1 value
  probability     float64         0.95    1
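The records below follow this schema, one field per line, in the order listed above. As a minimal sketch, assuming the underlying data is stored as a JSON Lines file with these field names (the file name is hypothetical), the records could be loaded and filtered like this:

```python
import json

FIELDS = ["id", "submitter", "authors", "title", "comments", "journal-ref",
          "doi", "report-no", "categories", "license", "abstract",
          "versions", "update_date", "authors_parsed",
          "prediction", "probability"]

def load_records(path):
    """Yield one metadata record (dict) per line of a JSON Lines file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            # Missing optional fields (doi, journal-ref, ...) come back as None.
            yield {k: record.get(k) for k in FIELDS}

if __name__ == "__main__":
    # Example: keep only high-confidence "new_dataset" predictions.
    kept = [r for r in load_records("arxiv_predictions.jsonl")  # hypothetical path
            if r["prediction"] == "new_dataset" and r["probability"] >= 0.95]
    print(len(kept), "records")
```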
2303.01091
Gaochao Song
Gaochao Song, Luo Zhang, Ran Su, Jianfeng Shi, Ying He, Qian Sun
OPE-SR: Orthogonal Position Encoding for Designing a Parameter-free Upsampling Module in Arbitrary-scale Image Super-Resolution
Accepted by CVPR 2023. 11 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Implicit neural representation (INR) is a popular approach for arbitrary-scale image super-resolution (SR); position encoding, a key component of INR, improves its representation ability. Motivated by position encoding, we propose orthogonal position encoding (OPE), an extension of position encoding, and an OPE-Upscale module to replace the INR-based upsampling module for arbitrary-scale image super-resolution. Like INR, our OPE-Upscale module takes 2D coordinates and a latent code as inputs; however, it requires no training parameters. Being parameter-free allows the OPE-Upscale module to reconstruct an image directly through linear combination operations in a continuous manner, achieving arbitrary-scale image reconstruction. As a concise SR framework, our method is computationally efficient and consumes less memory than the state of the art (SOTA), which extensive experiments and evaluations confirm. In addition, our method achieves results comparable to SOTA in arbitrary-scale image super-resolution. Last but not least, we show that OPE corresponds to a set of orthogonal basis functions, justifying our design principle.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 09:26:14 GMT" } ]
2023-03-03T00:00:00
[ [ "Song", "Gaochao", "" ], [ "Zhang", "Luo", "" ], [ "Su", "Ran", "" ], [ "Shi", "Jianfeng", "" ], [ "He", "Ying", "" ], [ "Sun", "Qian", "" ] ]
new_dataset
0.958137
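For context on the abstract above, here is a minimal sketch of the classic Fourier position encoding that OPE extends; the dyadic frequencies and feature layout are the standard NeRF-style construction, not the paper's OPE itself:

```python
import numpy as np

def fourier_position_encoding(coords, num_freqs=8):
    """Map 2D coordinates in [-1, 1] to sin/cos features at dyadic frequencies.

    coords: array of shape (N, 2). Returns shape (N, 4 * num_freqs).
    """
    feats = []
    for k in range(num_freqs):
        for fn in (np.sin, np.cos):
            feats.append(fn((2.0 ** k) * np.pi * coords))
    return np.concatenate(feats, axis=-1)

# A parameter-free upsampler in this spirit evaluates the encoding at the
# target-resolution coordinates and linearly combines it with a latent code,
# with no learned upsampling weights.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4)), -1)
print(fourier_position_encoding(grid.reshape(-1, 2)).shape)  # (16, 32)
```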
2303.01162
Vít Krátký
Vít Krátký, Pavel Petráček, Vojtěch Spurný, Martin Saska
Autonomous Reflectance Transformation Imaging by a Team of Unmanned Aerial Vehicles
null
IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2302-2309, 2020
10.1109/LRA.2020.2970646
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Reflectance Transformation Imaging (RTI) technique realized by multi-rotor Unmanned Aerial Vehicles (UAVs), with a focus on deployment in difficult-to-access buildings, is presented in this letter. RTI is a computational photographic method that captures the surface shape and color of a subject and enables its interactive re-lighting from any direction in a software viewer, revealing details that are not visible to the naked eye. The input of RTI is a set of images captured by a static camera, each one under illumination from a different known direction. We present an innovative approach applying two multi-rotor UAVs to perform this scanning procedure in locations that are hardly accessible or even inaccessible to people. The proposed system is designed for safe deployment in real-world scenarios in buildings of priceless historical value.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 11:09:14 GMT" } ]
2023-03-03T00:00:00
[ [ "Krátký", "Vít", "" ], [ "Petráček", "Pavel", "" ], [ "Spurný", "Vojtěch", "" ], [ "Saska", "Martin", "" ] ]
new_dataset
0.998598
2303.01166
Zhixing Hou
Zhixing Hou, Yuzhang Shang, Tian Gao, Yan Yan
BPT: Binary Point Cloud Transformer for Place Recognition
Submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Place recognition, the task of recognizing re-visited places, serves as the back-end optimization trigger in a full SLAM system. Many works equipped with deep learning tools, such as MLPs, CNNs, and transformers, have achieved great improvements in this research field. The point cloud transformer is one of the excellent frameworks for place recognition in robotics, but its large memory consumption and expensive computation make it hard to widely deploy the various point cloud transformer networks on mobile or embedded devices. To solve this issue, we propose a binary point cloud transformer for place recognition. As a result, a 32-bit full-precision model can be reduced to a 1-bit model with lower memory occupation and faster binarized bitwise operations. To the best of our knowledge, this is the first binary point cloud transformer that can be deployed on mobile devices for online applications such as place recognition. Experiments on several standard benchmarks demonstrate that the proposed method achieves results comparable to the corresponding full-precision transformer model and even outperforms some full-precision deep learning methods. For example, the proposed method achieves an average recall rate of 93.28% at top 1% and 85.74% at top 1 on the Oxford RobotCar dataset. Meanwhile, the size and floating point operations of the model with the same transformer structure are reduced by 56.1% and 34.1%, respectively, from full precision to binary precision.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 11:15:59 GMT" } ]
2023-03-03T00:00:00
[ [ "Hou", "Zhixing", "" ], [ "Shang", "Yuzhang", "" ], [ "Gao", "Tian", "" ], [ "Yan", "Yan", "" ] ]
new_dataset
0.997985
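A minimal sketch of the sign-based weight binarization underlying such 1-bit models (XNOR-Net-style per-tensor scaling; the exact quantizer in the paper may differ):

```python
import numpy as np

def binarize(weights):
    """Approximate a float weight tensor by alpha * sign(W).

    alpha = mean(|W|) minimizes ||W - alpha * sign(W)||^2 given sign(W),
    so the 1-bit tensor plus one scale replaces the 32-bit weights.
    """
    alpha = np.mean(np.abs(weights))
    w_bin = np.where(weights >= 0, 1.0, -1.0)
    return alpha, w_bin

w = np.random.randn(4, 4).astype(np.float32)
alpha, w_bin = binarize(w)
print("reconstruction error:", np.mean((w - alpha * w_bin) ** 2))
```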
2303.01173
Jack Saunders Mr
Jack Saunders, Loïc Prenevost, Özgür Şimşek, Alan Hunter, and Wenbin Li
Resource-Constrained Station-Keeping for Helium Balloons using Reinforcement Learning
null
null
null
null
cs.RO cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
High altitude balloons have proved useful for ecological aerial surveys, atmospheric monitoring, and communication relays. However, due to weight and power constraints, there is a need to investigate alternate modes of propulsion to navigate in the stratosphere. Very recently, reinforcement learning has been proposed as a control scheme to maintain the balloon in the region of a fixed location, facilitated through diverse opposing wind-fields at different altitudes. Although air-pump based station keeping has been explored, there is no research on the control problem for venting- and ballasting-actuated balloons, which are commonly used as a low-cost alternative. We show how reinforcement learning can be used for this type of balloon. Specifically, we use the soft actor-critic algorithm, which on average is able to station-keep within 50 km for 25% of the flight, consistent with the state of the art. Furthermore, we show that the proposed controller effectively minimises the consumption of resources, thereby supporting long duration flights. We frame the controller as a continuous control reinforcement learning problem, which allows for a more diverse range of trajectories, as opposed to current state-of-the-art work, which uses discrete action spaces. Furthermore, through continuous control, we can make use of larger ascent rates that are not possible using air pumps. The desired ascent rate is decoupled into a desired altitude and a time factor to provide a more transparent policy, compared to the low-level control commands used in previous works. Finally, by applying the equations of motion, we establish appropriate thresholds for venting and ballasting to prevent the agent from exploiting the environment. More specifically, we ensure actions are physically feasible by enforcing constraints on venting and ballasting.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 11:35:59 GMT" } ]
2023-03-03T00:00:00
[ [ "Saunders", "Jack", "" ], [ "Prenevost", "Loïc", "" ], [ "Şimşek", "Özgür", "" ], [ "Hunter", "Alan", "" ], [ "Li", "Wenbin", "" ] ]
new_dataset
0.993163
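A minimal sketch of the decoupled action interface described above, where the policy's (desired altitude, time factor) output is turned into a bounded ascent-rate command; the rate limits and numbers are illustrative assumptions, not values from the paper:

```python
import numpy as np

MAX_VENT_RATE = 3.0      # m/s, assumed physical descent limit (venting)
MAX_BALLAST_RATE = 4.0   # m/s, assumed physical ascent limit (ballasting)

def ascent_rate_command(current_alt, target_alt, time_factor):
    """Convert (target altitude, time factor) into a clipped ascent rate.

    time_factor > 0 stretches how quickly the altitude error is closed,
    giving a more transparent policy than raw low-level actuation commands.
    """
    desired_rate = (target_alt - current_alt) / max(time_factor, 1e-3)
    return float(np.clip(desired_rate, -MAX_VENT_RATE, MAX_BALLAST_RATE))

print(ascent_rate_command(current_alt=18_000.0, target_alt=19_000.0,
                          time_factor=600.0))  # ~1.67 m/s ascent
```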
2303.01177
Vít Krátký
Vít Krátký, Alfonso Alcántara, Jesús Capitán, Petr Štěpán, Martin Saska, Aníbal Ollero
Autonomous Aerial Filming With Distributed Lighting by a Team of Unmanned Aerial Vehicles
null
IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7580-7587, 2021
10.1109/LRA.2021.3098811
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This letter describes a method for autonomous aerial cinematography with distributed lighting by a team of unmanned aerial vehicles (UAVs). Although camera-carrying multi-rotor helicopters have become commonplace in cinematography, their usage is limited to scenarios with sufficient natural light or with lighting provided by static artificial lights. We propose to use a formation of unmanned aerial vehicles as a tool for filming a target under illumination from various directions, which is one of the fundamental techniques of traditional cinematography. We decompose the multi-UAV trajectory optimization problem to tackle non-linear cinematographic aspects and obstacle avoidance at separate stages, which allows us to re-plan in real time and react to changes in dynamic environments. The performance of our method has been evaluated in realistic simulation scenarios and field experiments, where we show how it increases the quality of the shots and that it is capable of planning safe trajectories even in cluttered environments.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 11:47:33 GMT" } ]
2023-03-03T00:00:00
[ [ "Krátký", "Vít", "" ], [ "Alcántara", "Alfonso", "" ], [ "Capitán", "Jesús", "" ], [ "Štěpán", "Petr", "" ], [ "Saska", "Martin", "" ], [ "Ollero", "Aníbal", "" ] ]
new_dataset
0.997127
2303.01241
Runcong Zhao
Runcong Zhao, Miguel Arana-Catania, Lixing Zhu, Elena Kochkina, Lin Gui, Arkaitz Zubiaga, Rob Procter, Maria Liakata and Yulan He
PANACEA: An Automated Misinformation Detection System on COVID-19
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
In this demo, we introduce PANACEA, a web-based misinformation detection system for COVID-19-related claims, which has two modules: fact-checking and rumour detection. Our fact-checking module, which is supported by novel natural language inference methods with a self-attention network, outperforms state-of-the-art approaches. It is also able to give an automated veracity assessment and ranked supporting evidence with the stance towards the claim being checked. In addition, PANACEA adapts the bi-directional graph convolutional networks model, which is able to detect rumours based on comment networks of related tweets instead of relying on a knowledge base. This rumour detection module assists by warning users in the early stages, when a knowledge base may not yet be available.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 21:53:48 GMT" } ]
2023-03-03T00:00:00
[ [ "Zhao", "Runcong", "" ], [ "Arana-Catania", "Miguel", "" ], [ "Zhu", "Lixing", "" ], [ "Kochkina", "Elena", "" ], [ "Gui", "Lin", "" ], [ "Zubiaga", "Arkaitz", "" ], [ "Procter", "Rob", "" ], [ "Liakata", "Maria", "" ], [ "He", "Yulan", "" ] ]
new_dataset
0.991103
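A minimal numpy sketch of the bi-directional graph-convolution idea that the rumour detection module adapts: features propagate over a tweet reply tree top-down and bottom-up, and the two directions are concatenated. The normalization and the toy graph are illustrative assumptions:

```python
import numpy as np

def normalize(adj):
    """Row-normalize an adjacency matrix with self-loops: D^-1 (A + I)."""
    a = adj + np.eye(adj.shape[0])
    return a / a.sum(axis=1, keepdims=True)

def bigcn_layer(x, adj, w_td, w_bu):
    """One bi-directional graph-convolution layer over a reply tree.

    Top-down uses the adjacency as-is (source -> replies); bottom-up uses
    its transpose. The two directions are concatenated, as in BiGCN.
    """
    top_down = np.maximum(normalize(adj) @ x @ w_td, 0.0)       # ReLU
    bottom_up = np.maximum(normalize(adj.T) @ x @ w_bu, 0.0)
    return np.concatenate([top_down, bottom_up], axis=1)

# Toy comment network: node 0 is the source tweet, 1 and 2 reply to it.
adj = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]], dtype=float)
x = np.random.randn(3, 8)                       # per-tweet text features
h = bigcn_layer(x, adj, np.random.randn(8, 4), np.random.randn(8, 4))
print(h.shape)  # (3, 8)
```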
2303.01243
Nicolas Kourtellis Ph.D.
Souvik Paul and Nicolas Kourtellis
Poster: Sponge ML Model Attacks of Mobile Apps
2 pages, 6 figures. Proceedings of the 24th International Workshop on Mobile Computing Systems and Applications (HotMobile). Feb. 2023
null
10.1145/3572864.3581586
null
cs.LG cs.CR cs.PF
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine Learning (ML)-powered apps are used in pervasive devices such as phones, tablets, smartwatches, and IoT devices. Recent advances in collaborative, distributed ML such as Federated Learning (FL) attempt to solve the privacy concerns of users and data owners, and are thus used by tech industry leaders such as Google, Facebook, and Apple. However, FL systems and models are still vulnerable to adversarial membership and attribute inference and model poisoning attacks, especially in the recently proposed FL-as-a-Service ecosystems, which can enable attackers to access multiple ML-powered apps. In this work, we focus on the recently proposed Sponge attack: it is designed to soak up the energy consumed while executing inference (not training) of an ML model, without hampering the classifier's performance. Recent work has shown that sponge attacks on ASIC-enabled GPUs can potentially escalate power consumption and inference time. For the first time, in this work, we investigate this attack in the mobile setting and measure the effect it can have on ML models running inside apps on mobile devices.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 15:12:56 GMT" } ]
2023-03-03T00:00:00
[ [ "Paul", "Souvik", "" ], [ "Kourtellis", "Nicolas", "" ] ]
new_dataset
0.993398
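A minimal sketch of the measurement loop such a study needs: timing repeated inference as a proxy for energy cost. The placeholder model below just burns CPU proportional to input size; real energy measurement would use platform-specific counters:

```python
import time

def mean_inference_latency(predict, batch, warmup=5, runs=50):
    """Average wall-clock latency of predict(batch) over several runs.

    Longer latency generally implies more energy drawn, which is exactly
    what a sponge attack tries to inflate without hurting accuracy.
    """
    for _ in range(warmup):          # let caches and JITs settle
        predict(batch)
    start = time.perf_counter()
    for _ in range(runs):
        predict(batch)
    return (time.perf_counter() - start) / runs

# Placeholder "model": cost grows with the number of non-zero activations.
dense = [1.0] * 10_000
print(f"{mean_inference_latency(lambda xs: sum(x * x for x in xs), dense):.6f} s")
```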
2303.01330
Jingping Wang
Tingrui Zhang, Jingping Wang, Chao Xu, Alan Gao, Fei Gao
Continuous Implicit SDF Based Any-shape Robot Trajectory Optimization
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimization-based trajectory generation methods are widely used in whole-body planning for robots. However, existing work either oversimplifies the robot's geometry and environment representation, resulting in conservative trajectories, or suffers from a huge overhead in maintaining additional information such as a Signed Distance Field (SDF). To bridge the gap, we consider the robot as an implicit function, with its surface boundary represented by the zero-level set of its SDF. Based on this, we further employ another implicit function to lazily compute the signed distance to the swept volume generated by the robot and its trajectory. The computation is made efficient by exploiting continuity in space-time, and the implicit function guarantees precise and continuous collision evaluation even for nonconvex robots with complex surfaces. Furthermore, we propose a trajectory optimization pipeline applicable to the implicit SDF. Simulation and real-world experiments validate the high performance of our approach for arbitrarily shaped robot trajectory optimization.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 15:08:00 GMT" } ]
2023-03-03T00:00:00
[ [ "Zhang", "Tingrui", "" ], [ "Wang", "Jingping", "" ], [ "Xu", "Chao", "" ], [ "Gao", "Alan", "" ], [ "Gao", "Fei", "" ] ]
new_dataset
0.985251
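A minimal sketch of the swept-volume signed distance described above: the distance from a query point to the volume swept by a moving body is the minimum over time of the body's SDF evaluated in the body frame. A unit circle on a straight-line trajectory stands in for the robot, and dense time sampling stands in for the paper's continuity-exploiting computation:

```python
import numpy as np

def robot_sdf(p):
    """SDF of a unit circle centered at the body origin (stand-in robot)."""
    return np.linalg.norm(p) - 1.0

def trajectory(t):
    """Body position at time t in [0, 1]: straight line from (0,0) to (5,0)."""
    return np.array([5.0 * t, 0.0])

def swept_sdf(p, samples=200):
    """min over time of the robot SDF of the query point in the body frame."""
    ts = np.linspace(0.0, 1.0, samples)
    return min(robot_sdf(p - trajectory(t)) for t in ts)

print(swept_sdf(np.array([2.5, 2.0])))  # ~1.0: one unit outside the sweep
```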
2303.01331
Benjamin Joffe
Benjamin Joffe and Konrad Ahlin
Canonical mapping as a general-purpose object descriptor for robotic manipulation
null
null
null
null
cs.RO cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Perception is an essential part of robotic manipulation in a semi-structured environment. Traditional approaches produce a narrow task-specific prediction (e.g., an object's 6D pose) that cannot be adapted to other tasks and is ill-suited for deformable objects. In this paper, we propose using canonical mapping as a near-universal and flexible object descriptor. We demonstrate that common object representations can be derived from a single pre-trained canonical mapping model, which in turn can be generated with minimal manual effort using an automated data generation and training pipeline. We perform a multi-stage experiment using two robot arms that demonstrates the robustness of the perception approach and the ways it can inform the manipulation strategy, thus serving as a powerful foundation for general-purpose robotic manipulation.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 15:09:25 GMT" } ]
2023-03-03T00:00:00
[ [ "Joffe", "Benjamin", "" ], [ "Ahlin", "Konrad", "" ] ]
new_dataset
0.952518
2303.01377
Daniel Sens
Daniel Sens and Ario Sadafi, Francesco Paolo Casale, Nassir Navab, Carsten Marr
BEL: A Bag Embedding Loss for Transformer enhances Multiple Instance Whole Slide Image Classification
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple Instance Learning (MIL) has become the predominant approach for classification tasks on gigapixel histopathology whole slide images (WSIs). Within the MIL framework, single WSIs (bags) are decomposed into patches (instances), with only WSI-level annotation available. Recent MIL approaches produce highly informative bag-level representations by utilizing the transformer architecture's ability to model the dependencies between instances. However, when applied to high-magnification datasets, problems emerge due to the large number of instances and the weak supervisory learning signal. To address this problem, we propose to additionally train transformers with a novel Bag Embedding Loss (BEL). BEL forces the model to learn a discriminative bag-level representation by minimizing the distance between bag embeddings of the same class and maximizing the distance between those of different classes. We evaluate BEL with the transformer architecture TransMIL on two publicly available histopathology datasets, BRACS and CAMELYON17. We show that with BEL, TransMIL outperforms the baseline models on both datasets, thus contributing to the clinically highly relevant AI-based tumor classification of histological patient material.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 16:02:55 GMT" } ]
2023-03-03T00:00:00
[ [ "Sens", "Daniel", "" ], [ "Sadafi", "Ario", "" ], [ "Casale", "Francesco Paolo", "" ], [ "Navab", "Nassir", "" ], [ "Marr", "Carsten", "" ] ]
new_dataset
0.995608
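A minimal PyTorch sketch of a contrastive bag-embedding objective in the spirit of BEL, pulling same-class bag embeddings together and pushing different-class ones apart with a margin; the margin and exact pairwise form are illustrative assumptions, not the paper's loss:

```python
import torch

def bag_embedding_loss(embeddings, labels, margin=1.0):
    """Pairwise contrastive loss over bag embeddings.

    embeddings: (B, D), one embedding per bag (WSI); labels: (B,) class ids.
    Same-class pairs are penalized by squared distance; different-class
    pairs by a squared hinge on (margin - distance).
    """
    dists = torch.cdist(embeddings, embeddings)          # (B, B) L2 distances
    same = (labels[:, None] == labels[None, :]).float()
    off_diag = 1.0 - torch.eye(len(labels))
    pull = same * dists.pow(2)
    push = (1.0 - same) * (margin - dists).clamp(min=0.0).pow(2)
    return ((pull + push) * off_diag).sum() / off_diag.sum()

emb = torch.randn(8, 16, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
print(bag_embedding_loss(emb, labels))
```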
2303.01396
Zongtao He
Zongtao He, Liuyi Wang, Shu Li, Qingqing Yan, Chengju Liu and Qijun Chen
MLANet: Multi-Level Attention Network with Sub-instruction for Continuous Vision-and-Language Navigation
null
null
null
null
cs.CV cs.CL cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-and-Language Navigation (VLN) aims to develop intelligent agents that navigate unseen environments using only language and vision supervision. In the recently proposed continuous setting (continuous VLN), the agent must act in free 3D space and faces tougher challenges such as real-time execution, complex instruction understanding, and long action sequence prediction. For better performance in continuous VLN, we design a multi-level instruction understanding procedure and propose a novel model, the Multi-Level Attention Network (MLANet). The first step of MLANet is to generate sub-instructions efficiently. We design a Fast Sub-instruction Algorithm (FSA) to segment the raw instruction into sub-instructions and generate a new sub-instruction dataset named "FSASub". FSA is annotation-free and 70 times faster than the current method, thus meeting the real-time requirement of continuous VLN. To solve the complex instruction understanding problem, MLANet needs a global perception of the instruction and observations. We propose a Multi-Level Attention (MLA) module to fuse vision, low-level semantics, and high-level semantics, producing features that contain a dynamic and global comprehension of the task. MLA also mitigates the adverse effects of noise words, thus ensuring a robust understanding of the instruction. To correctly predict actions in long trajectories, MLANet needs to focus on which sub-instruction is being executed at every step. We propose a Peak Attention Loss (PAL) to improve the flexible and adaptive selection of the current sub-instruction. PAL benefits the navigation agent by concentrating its attention on the local information, thus helping the agent predict the most appropriate actions. We train and test MLANet on the standard benchmark. Experimental results show MLANet outperforms baselines by a significant margin.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 16:26:14 GMT" } ]
2023-03-03T00:00:00
[ [ "He", "Zongtao", "" ], [ "Wang", "Liuyi", "" ], [ "Li", "Shu", "" ], [ "Yan", "Qingqing", "" ], [ "Liu", "Chengju", "" ], [ "Chen", "Qijun", "" ] ]
new_dataset
0.997264
2303.01428
Christoforos Mavrogiannis
Sidharth Talia, Arnav Thareja, Christoforos Mavrogiannis, Matt Schmittle, Siddhartha S. Srinivasa
PuSHR: A Multirobot System for Nonprehensile Rearrangement
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We focus on the problem of rearranging a set of objects with a team of car-like robot pushers built using off-the-shelf components. Maintaining control of pushed objects while avoiding collisions in a tight space demands highly coordinated motion that is challenging to execute on constrained hardware. Centralized replanning approaches become intractable even for small-sized problems, whereas decentralized approaches often get stuck in deadlocks. Our key insight is that by carefully assigning pushing tasks to robots, we can reduce the complexity of the rearrangement task, enabling robust performance via scalable decentralized control. Based on this insight, we built PuSHR, a system that optimally assigns pushing tasks and trajectories to robots offline, and performs trajectory tracking via decentralized control online. Through an ablation study in simulation, we demonstrate that PuSHR dominates baselines ranging from purely centralized to fully decentralized in terms of success rate and time efficiency across challenging tasks with up to 4 robots. Hardware experiments demonstrate the transfer of our system to the real world and highlight its robustness to model inaccuracies. Our code can be found at https://github.com/prl-mushr/pushr, and videos from our experiments at https://youtu.be/DIWmZerF_O8.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 17:31:42 GMT" } ]
2023-03-03T00:00:00
[ [ "Talia", "Sidharth", "" ], [ "Thareja", "Arnav", "" ], [ "Mavrogiannis", "Christoforos", "" ], [ "Schmittle", "Matt", "" ], [ "Srinivasa", "Siddhartha S.", "" ] ]
new_dataset
0.950084
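A minimal sketch of the offline assignment step: given a cost for each robot-object pairing (here just travel distance, a simplification of the paper's trajectory-aware costs), the optimal assignment is a linear sum assignment problem, solvable with scipy's Hungarian-algorithm implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

robots = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # robot positions
objects = np.array([[1.0, 1.0], [5.0, 1.0], [1.0, 5.0]])  # objects to push

# Cost of assigning robot i to object j: Euclidean travel distance.
cost = np.linalg.norm(robots[:, None, :] - objects[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)   # minimum-total-cost assignment
for r, c in zip(rows, cols):
    print(f"robot {r} -> object {c} (cost {cost[r, c]:.2f})")
print("total cost:", cost[rows, cols].sum())
```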
2303.01432
Ryo Kamoi
Ryo Kamoi, Tanya Goyal, Juan Diego Rodriguez, Greg Durrett
WiCE: Real-World Entailment for Claims in Wikipedia
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models for textual entailment have increasingly been applied to settings like fact-checking, presupposition verification in question answering, and validating that generation models' outputs are faithful to a source. However, such applications are quite far from the settings that existing datasets are constructed in. We propose WiCE, a new textual entailment dataset centered around verifying claims in text, built on real-world claims and evidence in Wikipedia with fine-grained annotations. We collect sentences in Wikipedia that cite one or more webpages and annotate whether the content on those pages entails those sentences. Negative examples arise naturally, from slight misinterpretation of text to minor aspects of the sentence that are not attested in the evidence. Our annotations are over sub-sentence units of the hypothesis, decomposed automatically by GPT-3, each of which is labeled with a subset of evidence sentences from the source document. We show that real claims in our dataset involve challenging verification problems, and we benchmark existing approaches on this dataset. In addition, we show that reducing the complexity of claims by decomposing them by GPT-3 can improve entailment models' performance on various domains.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 17:45:32 GMT" } ]
2023-03-03T00:00:00
[ [ "Kamoi", "Ryo", "" ], [ "Goyal", "Tanya", "" ], [ "Rodriguez", "Juan Diego", "" ], [ "Durrett", "Greg", "" ] ]
new_dataset
0.993539
2303.01480
Jiaming Zhang
Jiaming Zhang, Ruiping Liu, Hao Shi, Kailun Yang, Simon Reiß, Kunyu Peng, Haodong Fu, Kaiwei Wang, Rainer Stiefelhagen
Delivering Arbitrary-Modal Semantic Segmentation
Accepted by CVPR 2023. Dataset and our code are at: https://jamycheung.github.io/DELIVER.html
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal fusion can make semantic segmentation more robust. However, fusing an arbitrary number of modalities remains underexplored. To delve into this problem, we create the DeLiVER arbitrary-modal segmentation benchmark, covering Depth, LiDAR, multiple Views, Events, and RGB. In addition, we provide the dataset under four severe weather conditions as well as five sensor failure cases to exploit modal complementarity and resolve partial outages. To make this possible, we present the arbitrary cross-modal segmentation model CMNeXt. It encompasses a Self-Query Hub (SQ-Hub) designed to extract effective information from any modality for subsequent fusion with the RGB representation, and adds only negligible amounts of parameters (~0.01M) per additional modality. On top of that, to efficiently and flexibly harvest discriminative cues from the auxiliary modalities, we introduce the simple Parallel Pooling Mixer (PPX). With extensive experiments on a total of six benchmarks, our CMNeXt achieves state-of-the-art performance on the DeLiVER, KITTI-360, MFNet, NYU Depth V2, UrbanLF, and MCubeS datasets, allowing it to scale from 1 to 81 modalities. On the freshly collected DeLiVER, the quad-modal CMNeXt reaches up to 66.30% mIoU, a +9.10% gain over the mono-modal baseline. The DeLiVER dataset and our code are at: https://jamycheung.github.io/DELIVER.html.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 18:41:41 GMT" } ]
2023-03-03T00:00:00
[ [ "Zhang", "Jiaming", "" ], [ "Liu", "Ruiping", "" ], [ "Shi", "Hao", "" ], [ "Yang", "Kailun", "" ], [ "Reiß", "Simon", "" ], [ "Peng", "Kunyu", "" ], [ "Fu", "Haodong", "" ], [ "Wang", "Kaiwei", "" ], [ "Stiefelhagen", "Rainer", "" ] ]
new_dataset
0.988953
2203.09655
Jiehua Chen
Jiehua Chen and Gergely Csáji and Sanjukta Roy and Sofia Simola
Hedonic Games With Friends, Enemies, and Neutrals: Resolving Open Questions and Fine-Grained Complexity
extended abstract appeared at AAMAS 2023
null
null
null
cs.GT cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate verification and existence problems for prominent stability concepts in hedonic games with friends, enemies, and optionally with neutrals [8, 16]. We resolve several (long-standing) open questions [4, 16, 20, 23] and show that for friend-oriented preferences, under the friends-and-enemies model, it is coNP-complete to verify whether a given agent partition is (strictly) core stable, while under the friends, enemies, and neutrals model, it is NP-complete to determine whether an individually stable partition exists. We further look into natural restricted cases from the literature, such as when the friend and enemy relationships are symmetric, when the initial coalitions have bounded size, when the vertex degree in the friendship graph (resp. the union of the friendship and enemy graphs) is bounded, or when such a graph is acyclic or close to being acyclic. We obtain a complete (parameterized) complexity picture for these cases.
[ { "version": "v1", "created": "Thu, 17 Mar 2022 23:31:48 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 02:56:48 GMT" } ]
2023-03-02T00:00:00
[ [ "Chen", "Jiehua", "" ], [ "Csáji", "Gergely", "" ], [ "Roy", "Sanjukta", "" ], [ "Simola", "Sofia", "" ] ]
new_dataset
0.999819
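For intuition on the verification problem shown coNP-complete above, here is a brute-force sketch for tiny games under the friends-and-enemies model with friend-oriented preferences, scoring a coalition as n times its friends minus its enemies (the standard friend-oriented utility); exponential in the number of agents, for illustration only:

```python
from itertools import chain, combinations

def friend_oriented_utility(agent, coalition, friends, enemies, n):
    """Friend-oriented score: n * (#friends in C) - (#enemies in C)."""
    others = coalition - {agent}
    return n * len(others & friends[agent]) - len(others & enemies[agent])

def is_core_stable(partition, friends, enemies):
    """Brute-force core-stability check (exponential; fine for tiny games).

    A partition is core stable iff no nonempty coalition B exists in which
    every member is strictly better off than in their current coalition.
    """
    agents = set().union(*partition)
    n = len(agents)
    current = {a: next(c for c in partition if a in c) for a in agents}
    subsets = chain.from_iterable(combinations(sorted(agents), k)
                                  for k in range(1, n + 1))
    for cand in subsets:
        b = set(cand)
        if all(friend_oriented_utility(a, b, friends, enemies, n) >
               friend_oriented_utility(a, current[a], friends, enemies, n)
               for a in b):
            return False  # b is a blocking coalition
    return True

# Toy game: 1 and 2 are mutual friends, 3 is everyone's enemy.
friends = {1: {2}, 2: {1}, 3: set()}
enemies = {1: {3}, 2: {3}, 3: {1, 2}}
print(is_core_stable([{1, 2}, {3}], friends, enemies))  # True
```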
2209.08196
Jordan Ford
Jeff Ford and Jordan Ford
Lossless SIMD Compression of LiDAR Range and Attribute Scan Sequences
null
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
As LiDAR sensors have become ubiquitous, the need for an efficient LiDAR data compression algorithm has increased. Modern LiDARs produce gigabytes of scan data per hour and are often used in applications with limited compute, bandwidth, and storage resources. We present a fast, lossless compression algorithm for LiDAR range and attribute scan sequences including multiple-return range, signal, reflectivity, and ambient infrared. Our algorithm -- dubbed "Jiffy" -- achieves substantial compression by exploiting spatiotemporal redundancy and sparsity. Speed is accomplished by maximizing use of single-instruction-multiple-data (SIMD) instructions. In autonomous driving, infrastructure monitoring, drone inspection, and handheld mapping benchmarks, the Jiffy algorithm consistently outcompresses competing lossless codecs while operating at speeds in excess of 65M points/sec on a single core. In a typical autonomous vehicle use case, single-threaded Jiffy achieves 6x compression of centimeter-precision range scans at 500+ scans per second. To ensure reproducibility and enable adoption, the software is freely available as an open source library.
[ { "version": "v1", "created": "Fri, 16 Sep 2022 23:29:48 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 21:30:32 GMT" } ]
2023-03-02T00:00:00
[ [ "Ford", "Jeff", "" ], [ "Ford", "Jordan", "" ] ]
new_dataset
0.999451
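A minimal sketch of the temporal-redundancy idea behind such codecs: predict each scan from the previous one and zigzag-map the signed residuals so small magnitudes become small unsigned codes for a downstream entropy coder (not shown). Jiffy's SIMD batching and attribute handling are omitted:

```python
import numpy as np

def zigzag(residuals):
    """Map signed int16 values to unsigned codes; small magnitudes stay small."""
    return (residuals >> 15) ^ (residuals << 1)

def delta_encode(prev_scan, scan):
    """Temporal prediction: transmit only the change from the previous scan."""
    return zigzag((scan - prev_scan).astype(np.int16))

prev_scan = np.array([1000, 1002, 1001, 999], dtype=np.int16)
scan = np.array([1001, 1002, 1003, 998], dtype=np.int16)
print(delta_encode(prev_scan, scan))  # small values, cheap to entropy-code
```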
2209.09359
Onur Selim Kilic
Onur Selim Kılıç, Ahmet Akman and A. Aydın Alatan
E-VFIA : Event-Based Video Frame Interpolation with Attention
Accepted to 2023 IEEE International Conference on Robotics and Automation (ICRA 2023)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video frame interpolation (VFI) is a fundamental vision task that aims to synthesize several frames between two consecutive original video images. Most algorithms aim to accomplish VFI using only keyframes, which is an ill-posed problem since the keyframes usually do not yield accurate information about the trajectories of the objects in the scene. On the other hand, event-based cameras provide more precise information between the keyframes of a video. Some recent state-of-the-art event-based methods approach this problem by utilizing event data for better optical flow estimation to interpolate video frames by warping. Nonetheless, those methods suffer heavily from the ghosting effect. On the other hand, some kernel-based VFI methods that use only frames as input have shown that deformable convolutions, when backed by transformers, can be a reliable way of dealing with long-range dependencies. We propose event-based video frame interpolation with attention (E-VFIA) as a lightweight kernel-based method. E-VFIA fuses event information with standard video frames via deformable convolutions to generate high-quality interpolated frames. The proposed method represents events with high temporal resolution and uses a multi-head self-attention mechanism to better encode event-based information, while being less vulnerable to blurring and ghosting artifacts, thus generating crisper frames. Simulation results show that the proposed technique outperforms current state-of-the-art methods (both frame- and event-based) with a significantly smaller model size.
[ { "version": "v1", "created": "Mon, 19 Sep 2022 21:40:32 GMT" }, { "version": "v2", "created": "Wed, 1 Feb 2023 22:10:17 GMT" }, { "version": "v3", "created": "Wed, 1 Mar 2023 12:52:16 GMT" } ]
2023-03-02T00:00:00
[ [ "Kılıç", "Onur Selim", "" ], [ "Akman", "Ahmet", "" ], [ "Alatan", "A. Aydın", "" ] ]
new_dataset
0.992995
2209.13657
Neelay Joglekar
Neelay Joglekar, Fei Liu, Ryan Orosco, Michael Yip
Suture Thread Spline Reconstruction from Endoscopic Images for Robotic Surgery with Reliability-driven Keypoint Detection
To be published in ICRA 2023
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automating the process of manipulating and delivering sutures during robotic surgery is a prominent problem at the frontier of surgical robotics, as automating this task can significantly reduce surgeons' fatigue during tele-operated surgery and allow them to spend more time addressing higher-level clinical decision making. Accomplishing autonomous suturing and suture manipulation in the real world requires accurate suture thread localization and reconstruction, the process of creating a 3D shape representation of suture thread from 2D stereo camera surgical image pairs. This is a very challenging problem due to how limited pixel information is available for the threads, as well as their sensitivity to lighting and specular reflection. We present a suture thread reconstruction work that uses reliable keypoints and a Minimum Variation Spline (MVS) smoothing optimization to construct a 3D centerline from a segmented surgical image pair. This method is comparable to previous suture thread reconstruction works, with the possible benefit of increased accuracy of grasping point estimation. Our code and datasets will be available at: https://github.com/ucsdarclab/thread-reconstruction.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 19:48:20 GMT" }, { "version": "v2", "created": "Sun, 16 Oct 2022 10:12:01 GMT" }, { "version": "v3", "created": "Tue, 28 Feb 2023 22:42:12 GMT" } ]
2023-03-02T00:00:00
[ [ "Joglekar", "Neelay", "" ], [ "Liu", "Fei", "" ], [ "Orosco", "Ryan", "" ], [ "Yip", "Michael", "" ] ]
new_dataset
0.996334
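A minimal sketch of the centerline-fitting step, using scipy's smoothing B-spline as a stand-in for the paper's Minimum Variation Spline (a stated simplification); the noisy helix points play the role of triangulated keypoints from a stereo pair:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Noisy 3D keypoints along a thread-like helix (stand-in for reliable
# keypoints detected in a segmented surgical image pair).
t = np.linspace(0, 4 * np.pi, 40)
pts = np.stack([np.cos(t), np.sin(t), 0.05 * t]) + 0.01 * np.random.randn(3, 40)

# Fit a smoothing cubic B-spline; s trades data fidelity for smoothness.
tck, u = splprep(pts, s=0.01)

# Evaluate a dense, smooth centerline, e.g. to pick a grasping point.
centerline = np.array(splev(np.linspace(0, 1, 200), tck))
print(centerline.shape)  # (3, 200)
```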
2210.00120
Ruiqi Ni
Ruiqi Ni, Ahmed H. Qureshi
NTFields: Neural Time Fields for Physics-Informed Robot Motion Planning
null
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural Motion Planners (NMPs) have emerged as a promising tool for solving robot navigation tasks in complex environments. However, these methods often require expert data for learning, which limits their application to scenarios where data generation is time-consuming. Recent developments have also led to physics-informed deep neural models capable of representing complex dynamical Partial Differential Equations (PDEs). Inspired by these developments, we propose Neural Time Fields (NTFields) for robot motion planning in cluttered scenarios. Our framework represents a wave-propagation model that generates a continuous arrival-time field for finding path solutions, informed by a nonlinear first-order PDE called the Eikonal equation. We evaluate our method in various cluttered 3D environments, including the Gibson dataset, and demonstrate its ability to solve motion planning problems for 4-DOF and 6-DOF robot manipulators, where traditional grid-based Eikonal planners often face the curse of dimensionality. Furthermore, the results show that our method exhibits high success rates and significantly lower computational times than state-of-the-art methods, including NMPs that require training data from classical planners.
[ { "version": "v1", "created": "Fri, 30 Sep 2022 22:34:54 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 15:23:49 GMT" } ]
2023-03-02T00:00:00
[ [ "Ni", "Ruiqi", "" ], [ "Qureshi", "Ahmed H.", "" ] ]
new_dataset
0.994765
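A minimal PyTorch sketch of the physics-informed objective behind such a model: the predicted travel-time field must satisfy the Eikonal PDE |grad T(x)| = 1/s(x). The tiny network, 2D domain, and constant speed are illustrative assumptions (the paper's model conditions on start-goal pairs and is more elaborate):

```python
import torch

net = torch.nn.Sequential(            # toy stand-in for the time-field model
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def eikonal_residual(x, speed):
    """Penalize |grad_x T(x)| deviating from 1/speed(x) (the Eikonal PDE)."""
    x = x.requires_grad_(True)
    T = net(x)
    grad = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0 / speed) ** 2).mean()

x = torch.rand(128, 2)                 # random collocation points
speed = torch.ones(128)                # unit speed in free space (assumed)
loss = eikonal_residual(x, speed)
loss.backward()                        # gradients flow into the network
print(float(loss))
```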
2210.00312
Ningyu Zhang
Ningyu Zhang, Lei Li, Xiang Chen, Xiaozhuan Liang, Shumin Deng, Huajun Chen
Multimodal Analogical Reasoning over Knowledge Graphs
Accepted by ICLR 2023. The project website is https://zjunlp.github.io/project/MKG_Analogy/introduction.html
null
null
null
cs.CL cs.AI cs.CV cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analogical reasoning is fundamental to human cognition and holds an important place in various fields. However, previous studies mainly focus on single-modal analogical reasoning and neglect to take advantage of structured knowledge. Notably, research in cognitive psychology has demonstrated that information from multimodal sources always brings more powerful cognitive transfer than single-modality sources. To this end, we introduce the new task of multimodal analogical reasoning over knowledge graphs, which requires multimodal reasoning ability with the help of background knowledge. Specifically, we construct a Multimodal Analogical Reasoning dataSet (MARS) and a multimodal knowledge graph, MarKG. We evaluate multimodal knowledge graph embedding and pre-trained Transformer baselines, illustrating the potential challenges of the proposed task. We further propose a novel model-agnostic Multimodal analogical reasoning framework with Transformer (MarT), motivated by structure mapping theory, which obtains better performance. Code and datasets are available at https://github.com/zjunlp/MKG_Analogy.
[ { "version": "v1", "created": "Sat, 1 Oct 2022 16:24:15 GMT" }, { "version": "v2", "created": "Tue, 29 Nov 2022 10:40:00 GMT" }, { "version": "v3", "created": "Wed, 25 Jan 2023 05:26:39 GMT" }, { "version": "v4", "created": "Wed, 1 Mar 2023 02:51:12 GMT" } ]
2023-03-02T00:00:00
[ [ "Zhang", "Ningyu", "" ], [ "Li", "Lei", "" ], [ "Chen", "Xiang", "" ], [ "Liang", "Xiaozhuan", "" ], [ "Deng", "Shumin", "" ], [ "Chen", "Huajun", "" ] ]
new_dataset
0.998721
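For intuition, a minimal sketch of the classic single-modal analogy baseline (a : b :: c : ?) solved by vector offsets in an embedding space, which the multimodal task above generalizes; the random toy embeddings are placeholders for learned KG or multimodal embeddings:

```python
import numpy as np

def solve_analogy(a, b, c, vocab):
    """Return the vocab entry whose embedding best matches b - a + c (cosine)."""
    target = vocab[b] - vocab[a] + vocab[c]
    target /= np.linalg.norm(target)
    scores = {w: float(v @ target / np.linalg.norm(v))
              for w, v in vocab.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=16) for w in ["king", "queen", "man", "woman"]}
# With structured (e.g. KG-derived) embeddings, the offset king - man would
# encode the shared relation; with random vectors this is only a demo.
print(solve_analogy("man", "king", "woman", vocab))
```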
2210.10992
Zeyu Huang
Zeyu Huang, Juzhan Xu, Sisi Dai, Kai Xu, Hao Zhang, Hui Huang, Ruizhen Hu
NIFT: Neural Interaction Field and Template for Object Manipulation
ICRA 2023
null
null
null
cs.RO cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce NIFT, Neural Interaction Field and Template, a descriptive and robust interaction representation of object manipulations to facilitate imitation learning. Given a few object manipulation demos, NIFT guides the generation of the interaction imitation for a new object instance by matching the Neural Interaction Template (NIT) extracted from the demos in the target Neural Interaction Field (NIF) defined for the new object. Specifically, the NIF is a neural field that encodes the relationship between each spatial point and a given object, where the relative position is defined by a spherical distance function rather than occupancies or signed distances, which are commonly adopted by conventional neural fields but less informative. For a given demo interaction, the corresponding NIT is defined by a set of spatial points sampled in the demo NIF with associated neural features. To better capture the interaction, the points are sampled on the Interaction Bisector Surface (IBS), which consists of points that are equidistant to the two interacting objects and has been used extensively for interaction representation. With both point selection and pointwise features defined for better interaction encoding, NIT effectively guides the feature matching in the NIFs of the new object instances such that the relative poses are optimized to realize the manipulation while imitating the demo interactions. Experiments show that our NIFT solution outperforms state-of-the-art imitation learning methods for object manipulation and generalizes better to objects from new categories.
[ { "version": "v1", "created": "Thu, 20 Oct 2022 03:35:05 GMT" }, { "version": "v2", "created": "Fri, 21 Oct 2022 01:56:47 GMT" }, { "version": "v3", "created": "Wed, 1 Mar 2023 01:30:41 GMT" } ]
2023-03-02T00:00:00
[ [ "Huang", "Zeyu", "" ], [ "Xu", "Juzhan", "" ], [ "Dai", "Sisi", "" ], [ "Xu", "Kai", "" ], [ "Zhang", "Hao", "" ], [ "Huang", "Hui", "" ], [ "Hu", "Ruizhen", "" ] ]
new_dataset
0.988815
2212.10763
Zhiang Chen
Zhiang Chen, Devin Keating, Yash Shethwala, Aravind Adhith Pandian Saravanakumaran, Ramon Arrowsmith, Albert Kottke, Christine Wittich, Jnaneshwar Das
Shakebot: A Low-cost, Open-source Robotic Shake Table for Earthquake Research and Education
null
null
null
null
cs.RO physics.geo-ph
http://creativecommons.org/licenses/by/4.0/
Shake tables provide a critical tool for simulating earthquake events and testing the response of structures to seismic forces. However, existing shake tables are either expensive or proprietary. This paper presents the design and implementation of a low-cost, open-source shake table named Shakebot for earthquake engineering research and education, built using the Robot Operating System (ROS) and robotic concepts. The Shakebot adapts affordable and high-accuracy components from 3D printers, particularly a closed-loop stepper motor for actuation and a toothed belt for transmission. The stepper motor enables the bed to reach a maximum horizontal acceleration of 11.8 m/s^2 (1.2 g) and a velocity of 0.5 m/s with a 2 kg specimen. The Shakebot is equipped with an accelerometer and a high frame-rate camera for bed motion estimation. Its low cost and ease of use make the Shakebot accessible to a wide range of users, including students, educators, and researchers in low-resource settings. An important application of the Shakebot is to examine the dynamics of precariously balanced rocks (PBRs), which are negative indicators of earthquakes in nature. Our earlier research built a virtual shake robot in simulation for the PBR study. The Shakebot provides an approach to validate the simulation through physical experiments. The ROS-based perception and motion software facilitates the code transition from our virtual shake robot to the physical Shakebot. The reuse of the control programs ensures that the implemented ground motions are consistent between the simulation and the physical experiments, which is critical for validating our simulation experiments.
[ { "version": "v1", "created": "Wed, 21 Dec 2022 04:49:46 GMT" }, { "version": "v2", "created": "Mon, 27 Feb 2023 22:53:47 GMT" }, { "version": "v3", "created": "Wed, 1 Mar 2023 03:59:13 GMT" } ]
2023-03-02T00:00:00
[ [ "Chen", "Zhiang", "" ], [ "Keating", "Devin", "" ], [ "Shethwala", "Yash", "" ], [ "Saravanakumaran", "Aravind Adhith Pandian", "" ], [ "Arrowsmith", "Ramon", "" ], [ "Kottke", "Albert", "" ], [ "Wittich", "Christine", "" ], [ "Das", "Jnaneshwar", "" ] ]
new_dataset
0.999791
2301.07425
Pengyu Yin
Pengyu Yin, Shenghai Yuan, Haozhi Cao, Xingyu Ji, Shuyang Zhang, and Lihua Xie
Segregator: Global Point Cloud Registration with Semantic and Geometric Cues
6 pages, 5 figures. Accepted to ICRA2023
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents Segregator, a global point cloud registration framework that exploits both semantic information and geometric distribution to efficiently build outlier-robust correspondences and search for inliers. Current state-of-the-art algorithms rely on point features to set up putative correspondences and refine them by employing pairwise distance consistency checks. However, such a scheme suffers from degenerate cases, where the descriptive capability of local point features degrades, and from unconstrained cases, where length-preserving (l-TRIMs)-based checks cannot sufficiently constrain whether the current observation is consistent with others, resulting in a complexified NP-complete problem to solve. To tackle these problems, on the one hand, we propose a novel degeneracy-robust and efficient correspondence procedure consisting of both instance-level semantic clusters and geometric-level point features. On the other hand, Gaussian distribution-based translation and rotation invariant measurements (G-TRIMs) are proposed to conduct the consistency check and further constrain the problem size. We validate our proposed algorithm through extensive experiments on real-world data. The code is available at: https://github.com/Pamphlett/Segregator.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 10:47:45 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 02:10:45 GMT" } ]
2023-03-02T00:00:00
[ [ "Yin", "Pengyu", "" ], [ "Yuan", "Shenghai", "" ], [ "Cao", "Haozhi", "" ], [ "Ji", "Xingyu", "" ], [ "Zhang", "Shuyang", "" ], [ "Xie", "Lihua", "" ] ]
new_dataset
0.955307
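A minimal sketch of the pairwise (length-preserving) consistency check mentioned above: a rigid transform preserves distances, so two correspondences are mutually consistent only if the source-pair and target-pair distances agree; the paper's G-TRIMs replace this hard threshold with Gaussian-distribution-based measurements:

```python
import numpy as np

def consistency_matrix(src, dst, eps=0.05):
    """Boolean matrix: do correspondences i, j preserve pairwise length within eps?

    src, dst: (N, 3) matched points in the two clouds (row i matched to row i).
    """
    d_src = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None, :] - dst[None, :, :], axis=-1)
    return np.abs(d_src - d_dst) < eps

src = np.random.rand(5, 3)
R = np.eye(3)                        # identity rotation for the toy example
dst = src @ R.T + np.array([1.0, 0.0, 0.0])
dst[4] += 0.5                        # corrupt one correspondence
m = consistency_matrix(src, dst)
print(m.sum(axis=1))                 # the outlier row is consistent with fewest
```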
2301.12711
Elmurod Kuriyozov
Maksud Sharipov, Elmurod Kuriyozov, Ollabergan Yuldashev, Ogabek Sobirov
UzbekTagger: The rule-based POS tagger for Uzbek language
Preprint of the accepted paper to The 10th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, April 21-23, 2023, Poznan, Poland
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This research paper presents a part-of-speech (POS) annotated dataset and tagger tool for the low-resource Uzbek language. The dataset includes 12 tags, which were used to develop a rule-based POS-tagger tool. The corpus text used in the annotation process was balanced over 20 different fields to ensure its representativeness. Uzbek is an agglutinative language, so most words in an Uzbek sentence are formed by adding suffixes; this makes it difficult for POS tagging to find the stems of words and the right part of speech they belong to. The methodology proposed in this research stems words with an affix/suffix-stripping approach backed by a database of the stem forms of Uzbek words. The tagger tool was tested on the annotated dataset and showed high accuracy in identifying and tagging parts of speech in Uzbek text. This newly presented dataset and tagger tool can be used for a variety of natural language processing tasks such as language modeling, machine translation, and text-to-speech synthesis. The presented dataset is the first of its kind to be made publicly available for Uzbek, and the POS-tagger tool can also serve as a base for other closely related Turkic languages.
[ { "version": "v1", "created": "Mon, 30 Jan 2023 07:40:45 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 14:31:12 GMT" } ]
2023-03-02T00:00:00
[ [ "Sharipov", "Maksud", "" ], [ "Kuriyozov", "Elmurod", "" ], [ "Yuldashev", "Ollabergan", "" ], [ "Sobirov", "Ogabek", "" ] ]
new_dataset
0.999612
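A minimal sketch of the suffix-stripping-plus-stem-database approach the abstract describes; the tiny stem lexicon and suffix list below are illustrative placeholders, not the paper's resources:

```python
# Hypothetical stem lexicon mapping stems to POS tags, and common suffixes
# ordered longest-first so the greediest strip is tried first.
STEM_POS = {"kitob": "NOUN", "yoz": "VERB", "tez": "ADV"}
SUFFIXES = ["larni", "lar", "ni", "di", "moq"]

def tag_word(word):
    """Strip suffixes until a known stem is found; tag by the stem's POS."""
    candidate = word
    while True:
        if candidate in STEM_POS:
            return candidate, STEM_POS[candidate]
        for suffix in SUFFIXES:
            if candidate.endswith(suffix) and len(candidate) > len(suffix):
                candidate = candidate[: -len(suffix)]
                break
        else:
            return word, "UNK"  # no suffix matched and stem unknown

for w in ["kitoblarni", "yozdi", "tez"]:
    print(w, "->", tag_word(w))
```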
2302.06932
Richard Mitev
Marvin Saß, Richard Mitev, Ahmad-Reza Sadeghi
Oops..! I Glitched It Again! How to Multi-Glitch the Glitching-Protections on ARM TrustZone-M
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Voltage Fault Injection (VFI), also known as power glitching, has proven to be a severe threat to real-world systems. In VFI attacks, the adversary disturbs the power supply of the target device, forcing the device into illegitimate behavior. Various countermeasures have been proposed to address different types of fault injection attacks at different abstraction layers, either requiring modification of the underlying hardware or of the software/firmware at the machine instruction level. Moreover, only recently have individual chip manufacturers started to respond to this threat by integrating countermeasures into their products. Generally, these countermeasures aim at protecting against single fault injection (SFI) attacks, since Multiple Fault Injection (MFI) is believed to be challenging and sometimes even impractical. In this paper, we present μ-Glitch, the first Voltage Fault Injection (VFI) platform capable of injecting multiple, coordinated voltage faults into a target device while requiring only a single trigger signal. We provide a novel flow for Multiple Voltage Fault Injection (MVFI) attacks to significantly reduce the search complexity for fault parameters, as the search space increases exponentially with each additional fault injection. We evaluate and showcase the effectiveness and practicality of our attack platform on four real-world chips featuring TrustZone-M: the first two have interdependent backchecking mechanisms, while the second two additionally have integrated countermeasures against fault injection. Our evaluation revealed that μ-Glitch can successfully inject four consecutive faults within an average time of one day. Finally, we discuss potential countermeasures to mitigate VFI attacks and additionally propose two novel attack scenarios for MVFI.
[ { "version": "v1", "created": "Tue, 14 Feb 2023 09:40:09 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 08:13:42 GMT" } ]
2023-03-02T00:00:00
[ [ "Saß", "Marvin", "" ], [ "Mitev", "Richard", "" ], [ "Sadeghi", "Ahmad-Reza", "" ] ]
new_dataset
0.992663
2302.12840
Isabel Segura-Bedmar
Isabel Segura-Bedmar
HULAT at SemEval-2023 Task 10: Data augmentation for pre-trained transformers applied to the detection of sexism in social media
The experiments are not reproducible because I did not use a seed for replicability
null
null
null
cs.CL cs.AI cs.LG cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper describes our participation in SemEval-2023 Task 10, whose goal is the detection of sexism in social media. We explore some of the most popular transformer models such as BERT, DistilBERT, RoBERTa, and XLNet. We also study different data augmentation techniques to increase the training dataset. During the development phase, our best results were obtained by using RoBERTa and data augmentation for tasks B and C. However, the use of synthetic data does not improve the results for task C. We participated in the three subtasks. Our approach still has much room for improvement, especially in the two fine-grained classifications. All our code is available in the repository https://github.com/isegura/hulat_edos.
[ { "version": "v1", "created": "Fri, 24 Feb 2023 18:17:38 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 08:43:13 GMT" } ]
2023-03-02T00:00:00
[ [ "Segura-Bedmar", "Isabel", "" ] ]
new_dataset
0.995826
2302.13838
Naoya Takahashi
Naoya Takahashi, Mayank K. Singh, Yuki Mitsufuji
Cross-modal Face- and Voice-style Transfer
null
null
null
null
cs.CV cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image-to-image translation and voice conversion enable the generation of a new facial image and voice while maintaining some of the semantics, such as the pose in an image and the linguistic content in audio, respectively. They can aid the content-creation process in many applications. However, as they are limited to conversion within each modality, matching the impression of the generated face and voice remains an open question. We propose a cross-modal style transfer framework called XFaVoT that jointly learns four tasks: image translation and voice conversion tasks with audio or image guidance, which enables the generation of a "face that matches a given voice" and a "voice that matches a given face", and intra-modality translation tasks within a single framework. Experimental results on multiple datasets show that XFaVoT achieves cross-modal style translation of image and voice, outperforming baselines in terms of quality, diversity, and face-voice correspondence.
[ { "version": "v1", "created": "Mon, 27 Feb 2023 14:39:50 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 14:50:41 GMT" } ]
2023-03-02T00:00:00
[ [ "Takahashi", "Naoya", "" ], [ "Singh", "Mayank K.", "" ], [ "Mitsufuji", "Yuki", "" ] ]
new_dataset
0.958537
2302.14340
Zhihao Liang
Zhihao Liang, Zhangjin Huang, Changxing Ding, Kui Jia
HelixSurf: A Robust and Efficient Neural Implicit Surface Learning of Indoor Scenes with Iterative Intertwined Regularization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recovering the underlying scene geometry from multi-view images is a long-standing challenge in computer vision research. Recent promising methods leverage neural implicit surface learning and differentiable volume rendering, achieving both recovery of scene geometry and synthesis of novel views, with deep priors of neural models used as an inductive smoothness bias. While promising for object-level surfaces, these methods suffer when coping with complex scene surfaces. Meanwhile, traditional multi-view stereo can recover the geometry of scenes with rich textures by globally optimizing the local, pixel-wise correspondences across multiple views. We are thus motivated to make use of the complementary benefits of the two strategies, and we propose a method termed Helix-shaped neural implicit Surface learning, or HelixSurf. HelixSurf uses the intermediate prediction from one strategy as guidance to regularize the learning of the other, and conducts such intertwined regularization iteratively during the learning process. We also propose an efficient scheme for differentiable volume rendering in HelixSurf. Experiments on surface reconstruction of indoor scenes show that our method compares favorably with existing methods and is orders of magnitude faster, even when some of the existing methods are assisted with auxiliary training data. The source code is available at https://github.com/Gorilla-Lab-SCUT/HelixSurf.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 06:20:07 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 12:24:02 GMT" } ]
2023-03-02T00:00:00
[ [ "Liang", "Zhihao", "" ], [ "Huang", "Zhangjin", "" ], [ "Ding", "Changxing", "" ], [ "Jia", "Kui", "" ] ]
new_dataset
0.993082
2303.00050
Decai Chen
Decai Chen, Haofei Lu, Ingo Feldmann, Oliver Schreer, Peter Eisert
Dynamic Multi-View Scene Reconstruction Using Neural Implicit Surface
5 pages, accepted by ICASSP 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Reconstructing general dynamic scenes is important for many computer vision and graphics applications. Recent works represent the dynamic scene with neural radiance fields for photorealistic view synthesis, while their surface geometry is under-constrained and noisy. Other works introduce surface constraints to the implicit neural representation to disentangle the ambiguity of geometry and appearance field for static scene reconstruction. To bridge the gap between rendering dynamic scenes and recovering static surface geometry, we propose a template-free method to reconstruct surface geometry and appearance using neural implicit representations from multi-view videos. We leverage topology-aware deformation and the signed distance field to learn complex dynamic surfaces via differentiable volume rendering without scene-specific prior knowledge like template models. Furthermore, we propose a novel mask-based ray selection strategy to significantly boost the optimization on challenging time-varying regions. Experiments on different multi-view video datasets demonstrate that our method achieves high-fidelity surface reconstruction as well as photorealistic novel view synthesis.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 19:47:30 GMT" } ]
2023-03-02T00:00:00
[ [ "Chen", "Decai", "" ], [ "Lu", "Haofei", "" ], [ "Feldmann", "Ingo", "" ], [ "Schreer", "Oliver", "" ], [ "Eisert", "Peter", "" ] ]
new_dataset
0.978099
2303.00064
Richard Van Dijk
Richard van Dijk, Daniela Gawehns and Matthijs van Leeuwen
WEARDA: recording wearable sensor data for human activity monitoring
Submitted to the Journal of Open Research Software JORS, Jan 19th, 2023, 17 pages, 5 figures, 3 tables
null
null
null
cs.HC cs.CY
http://creativecommons.org/licenses/by/4.0/
We present WEARDA, the open source WEARable sensor Data Acquisition software package. WEARDA facilitates the acquisition of human activity data with smartwatches and is primarily aimed at researchers who require transparency, full control, and access to raw sensor data. It provides functionality to simultaneously record raw data from four sensors -- tri-axis accelerometer, tri-axis gyroscope, barometer, and GPS -- which should enable researchers to, for example, estimate energy expenditure and mine movement trajectories. A Samsung smartwatch running the Tizen OS was chosen because of 1) the required functionalities of the smartwatch software API, 2) the availability of software development tools and accessible documentation, 3) having the required sensors, and 4) the requirements on case design for acceptance by the target user group. WEARDA addresses five practical challenges concerning preparation, measurement, logistics, privacy preservation, and reproducibility to ensure efficient and errorless data collection. The software package was initially created for the project ``Dementia back at the heart of the community'', and has been successfully used in that context.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 20:07:46 GMT" } ]
2023-03-02T00:00:00
[ [ "van Dijk", "Richard", "" ], [ "Gawehns", "Daniela", "" ], [ "van Leeuwen", "Matthijs", "" ] ]
new_dataset
0.999498
2303.00069
Ajinkya Kulkarni
Ajinkya Kulkarni and Atharva Kulkarni and Sara Abedalmonem Mohammad Shatnawi and Hanan Aldarmaki
ClArTTS: An Open-Source Classical Arabic Text-to-Speech Corpus
null
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
At present, Text-to-Speech (TTS) systems trained with high-quality transcribed speech data using end-to-end neural models can generate speech that is intelligible, natural, and closely resembles human speech. These models are trained with relatively large, single-speaker, professionally recorded audio, typically extracted from audiobooks. Meanwhile, due to the scarcity of freely available speech corpora of this kind, a larger gap exists in Arabic TTS research and development. Most of the existing freely available Arabic speech corpora are not suitable for TTS training, as they contain multi-speaker casual speech with variations in recording conditions and quality, whereas corpora curated for speech synthesis are generally small in size and not suitable for training state-of-the-art end-to-end models. In a move towards filling this gap in resources, we present a speech corpus for Classical Arabic Text-to-Speech (ClArTTS) to support the development of end-to-end TTS systems for Arabic. The speech is extracted from a LibriVox audiobook, which is then processed, segmented, and manually transcribed and annotated. The final ClArTTS corpus contains about 12 hours of speech from a single male speaker sampled at 40100 Hz. In this paper, we describe the process of corpus creation and provide details of the corpus statistics and a comparison with existing resources. Furthermore, we develop two TTS systems based on Grad-TTS and Glow-TTS and illustrate the performance of the resulting systems via subjective and objective evaluations. The corpus will be made publicly available at www.clartts.com for research purposes, along with the baseline TTS system demos.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 20:18:59 GMT" } ]
2023-03-02T00:00:00
[ [ "Kulkarni", "Ajinkya", "" ], [ "Kulkarni", "Atharva", "" ], [ "Shatnawi", "Sara Abedalmonem Mohammad", "" ], [ "Aldarmaki", "Hanan", "" ] ]
new_dataset
0.999879
2303.00137
Yichen Sheng
Yichen Sheng, Jianming Zhang, Julien Philip, Yannick Hold-Geoffroy, Xin Sun, HE Zhang, Lu Ling, Bedrich Benes
PixHt-Lab: Pixel Height Based Light Effect Generation for Image Compositing
11 pages, 10 figures
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lighting effects such as shadows or reflections are key in making synthetic images realistic and visually appealing. To generate such effects, traditional computer graphics uses a physically-based renderer along with 3D geometry. To compensate for the lack of geometry in 2D image compositing, recent deep learning-based approaches introduced a pixel height representation to generate soft shadows and reflections. However, the lack of geometry limits the quality of the generated soft shadows and constrains reflections to purely specular ones. We introduce PixHt-Lab, a system leveraging an explicit mapping from the pixel height representation to 3D space. Using this mapping, PixHt-Lab reconstructs both the cutout and background geometry and renders realistic, diverse lighting effects for image compositing. Given a surface with physically-based materials, we can render reflections with varying glossiness. To generate more realistic soft shadows, we further propose to use 3D-aware buffer channels to guide a neural renderer. Both quantitative and qualitative evaluations demonstrate that PixHt-Lab significantly improves soft shadow generation.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 23:52:01 GMT" } ]
2023-03-02T00:00:00
[ [ "Sheng", "Yichen", "" ], [ "Zhang", "Jianming", "" ], [ "Philip", "Julien", "" ], [ "Hold-Geoffroy", "Yannick", "" ], [ "Sun", "Xin", "" ], [ "Zhang", "HE", "" ], [ "Ling", "Lu", "" ], [ "Benes", "Bedrich", "" ] ]
new_dataset
0.997116
2303.00152
Franck Cassez
Franck Cassez, Joanne Fuller, Milad K. Ghale, David J. Pearce, and Horacio M. A. Quiles
Formal and Executable Semantics of the Ethereum Virtual Machine in Dafny
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
The Ethereum protocol implements a replicated state machine. The network participants keep track of the system state by: 1) agreeing on the sequence of transactions to be processed and 2) computing the state transitions that correspond to the sequence of transactions. Ethereum transactions are programs, called smart contracts, and computing a state transition requires executing some code. The Ethereum Virtual Machine (EVM) provides this capability and can execute programs written in EVM bytecode. We present a formal and executable semantics of the EVM written in the verification-friendly language Dafny: it provides (i) a readable, formal and verified specification of the semantics of the EVM; (ii) a framework to formally reason about bytecode.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 00:55:33 GMT" } ]
2023-03-02T00:00:00
[ [ "Cassez", "Franck", "" ], [ "Fuller", "Joanne", "" ], [ "Ghale", "Milad K.", "" ], [ "Pearce", "David J.", "" ], [ "Quiles", "Horacio M. A.", "" ] ]
new_dataset
0.998535
2303.00168
Jingsen Zhang
Xu Chen, Jingsen Zhang, Lei Wang, Quanyu Dai, Zhenhua Dong, Ruiming Tang, Rui Zhang, Li Chen, Ji-Rong Wen
REASONER: An Explainable Recommendation Dataset with Multi-aspect Real User Labeled Ground Truths Towards more Measurable Explainable Recommendation
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explainable recommendation has attracted much attention from the industry and academic communities. It has shown great potential for improving the recommendation persuasiveness, informativeness and user satisfaction. Although many promising explainable recommender models have been proposed in the past few years, the evaluation strategies of these models suffer from several limitations. For example, the explanation ground truths are not labeled by real users, the explanations are mostly evaluated based on only one aspect, and the evaluation strategies can be hard to unify. To alleviate the above problems, we propose to build an explainable recommendation dataset with multi-aspect real user labeled ground truths. Specifically, we first develop a video recommendation platform, where a series of questions around the recommendation explainability are carefully designed. Then, we recruit about 3000 users with different backgrounds to use the system, and collect their behaviors and feedback on our questions. In this paper, we detail the construction process of our dataset and also provide extensive analysis of its characteristics. In addition, we develop a library, where ten well-known explainable recommender models are implemented in a unified framework. Based on this library, we build several benchmarks for different explainable recommendation tasks. Finally, we present many new opportunities brought by our dataset, which are expected to shed new light on the explainable recommendation field. Our dataset, library and the related documents have been released at https://reasoner2023.github.io/.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 01:46:52 GMT" } ]
2023-03-02T00:00:00
[ [ "Chen", "Xu", "" ], [ "Zhang", "Jingsen", "" ], [ "Wang", "Lei", "" ], [ "Dai", "Quanyu", "" ], [ "Dong", "Zhenhua", "" ], [ "Tang", "Ruiming", "" ], [ "Zhang", "Rui", "" ], [ "Chen", "Li", "" ], [ "Wen", "Ji-Rong", "" ] ]
new_dataset
0.994426
2303.00171
Raviteja Anantha
Raviteja Anantha, Kriti Bhasin, Daniela de la Parra Aguilar, Prabal Vashisht, Becci Williamson, Srinivas Chappidi
DTW-SiameseNet: Dynamic Time Warped Siamese Network for Mispronunciation Detection and Correction
Preprint version
null
null
null
cs.LG cs.AI eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
Personal Digital Assistants (PDAs) - such as Siri, Alexa and Google Assistant, to name a few - play an increasingly important role in accessing information and completing tasks spanning multiple domains, for diverse groups of users. A text-to-speech (TTS) module allows PDAs to interact in a natural, human-like manner, and plays a vital role when the interaction involves people with visual impairments or other disabilities. To cater to the needs of a diverse set of users, it is important for inclusive TTS to correctly recognize and pronounce text in different languages and dialects. Despite great progress in speech synthesis, the pronunciation accuracy of named entities in a multi-lingual setting still leaves large room for improvement. Existing approaches to correct named entity (NE) mispronunciations, like retraining Grapheme-to-Phoneme (G2P) models, or maintaining a TTS pronunciation dictionary, require expensive annotation of the ground truth pronunciation, which is also time-consuming. In this work, we present a highly-precise, PDA-compatible pronunciation learning framework for the task of TTS mispronunciation detection and correction. In addition, we also propose a novel mispronunciation detection model called DTW-SiameseNet, which employs metric learning with a Siamese architecture for Dynamic Time Warping (DTW) with triplet loss. We demonstrate that a locale-agnostic, privacy-preserving solution to the problem of TTS mispronunciation detection is feasible. We evaluate our approach on a real-world dataset, and on a corpus of NE pronunciations from an anonymized audio dataset of person names recorded by participants from 10 different locales. Human evaluation shows our proposed approach improves pronunciation accuracy on average by ~6% compared to strong phoneme-based and audio-based baselines.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 01:53:11 GMT" } ]
2023-03-02T00:00:00
[ [ "Anantha", "Raviteja", "" ], [ "Bhasin", "Kriti", "" ], [ "Aguilar", "Daniela de la Parra", "" ], [ "Vashisht", "Prabal", "" ], [ "Williamson", "Becci", "" ], [ "Chappidi", "Srinivas", "" ] ]
new_dataset
0.999631
2303.00193
Hanting Li
Hanting Li, Hongjing Niu, Zhaoqing Zhu, and Feng Zhao
CLIPER: A Unified Vision-Language Framework for In-the-Wild Facial Expression Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Facial expression recognition (FER) is an essential task for understanding human behaviors. As one of the most informative behaviors of humans, facial expressions are often compound and variable, which is manifested by the fact that different people may express the same expression in very different ways. However, most FER methods still use one-hot or soft labels as the supervision, which lack sufficient semantic descriptions of facial expressions and are less interpretable. Recently, contrastive vision-language pre-training (VLP) models (e.g., CLIP) use text as supervision and have injected new vitality into various computer vision tasks, benefiting from the rich semantics in text. Therefore, in this work, we propose CLIPER, a unified framework for both static and dynamic facial Expression Recognition based on CLIP. Besides, we introduce multiple expression text descriptors (METD) to learn fine-grained expression representations that make CLIPER more interpretable. We conduct extensive experiments on several popular FER benchmarks and achieve state-of-the-art performance, which demonstrates the effectiveness of CLIPER.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 02:59:55 GMT" } ]
2023-03-02T00:00:00
[ [ "Li", "Hanting", "" ], [ "Niu", "Hongjing", "" ], [ "Zhu", "Zhaoqing", "" ], [ "Zhao", "Feng", "" ] ]
new_dataset
0.984459
2303.00204
Zhenduo Zhao
Zhenduo Zhao, Zhuo Li, Wenchao Wang, Pengyuan Zhang
PCF: ECAPA-TDNN with Progressive Channel Fusion for Speaker Verification
Accepted by ICASSP 2023
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
ECAPA-TDNN is currently the most popular TDNN-series model for speaker verification, which refreshed the state-of-the-art (SOTA) performance of TDNN models. However, its one-dimensional convolution has a global receptive field over the feature channel, which destroys the time-frequency relevance of the spectrogram. Besides, as ECAPA-TDNN only has five layers, its much shallower structure compared to ResNet restricts the capability to generate deep representations. To further improve ECAPA-TDNN, we propose a progressive channel fusion strategy that splits the spectrogram across the feature channel and gradually expands the receptive field through the network. Secondly, we enlarge the model by extending the depth and adding branches. Our proposed model achieves an EER of 0.718 and a minDCF(0.01) of 0.0858 on Vox1-O, relative improvements of 16.1\% and 19.5\% compared with ECAPA-TDNN-large.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 03:12:28 GMT" } ]
2023-03-02T00:00:00
[ [ "Zhao", "Zhenduo", "" ], [ "Li", "Zhuo", "" ], [ "Wang", "Wenchao", "" ], [ "Zhang", "Pengyuan", "" ] ]
new_dataset
0.994437
2303.00207
Anna Karanika
Anna Karanika, Rui Yang, Xiaojuan Ma, Jiangran Wang, Shalni Sundram and Indranil Gupta
CoMesh: Fully-Decentralized Control for Sense-Trigger-Actuate Routines in Edge Meshes
12 pages, 12 figures
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While mesh networking for edge settings (e.g., smart buildings, farms, battlefields, etc.) has received much attention, the layer of control over such meshes remains largely centralized and cloud-based. This paper focuses on applications with sense-trigger-actuate (STA) workloads -- these are similar to the abstraction of routines popular in smart homes, but applied to larger-scale edge IoT deployments. We present CoMesh, which tackles the challenge of building local, non-cloud, and decentralized solutions for control of sense-trigger-actuate applications. At its core, CoMesh uses an abstraction called k-groups to spread the load of STA actions in a fine-grained way. Coordination within the k-group uses selective fast and cheap mechanisms rather than expensive off-the-shelf solutions. k-group selection is proactively dynamic, and occurs by using a combination of zero-message-exchange mechanisms (to reduce load) and locality sensitive hashing (to be aware of the physical layout of devices). We analyze and theoretically prove the safety of CoMesh's mechanisms. Our evaluations using both simulation and Raspberry Pi lab deployments show that CoMesh is load-balanced, fast, and fault-tolerant.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 03:18:43 GMT" } ]
2023-03-02T00:00:00
[ [ "Karanika", "Anna", "" ], [ "Yang", "Rui", "" ], [ "Ma", "Xiaojuan", "" ], [ "Wang", "Jiangran", "" ], [ "Sundram", "Shalni", "" ], [ "Gupta", "Indranil", "" ] ]
new_dataset
0.991736
2303.00235
Yun Fan
Yun Fan, Yue Leng
Consta-dihedral Codes over Finite Fields
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is proved in a reference (Fan, Lin, IEEE TIT, vol.67, pp.5016-5025) that the self-dual (respectively, LCD) dihedral codes over a finite field~$F$ with ${|F|=q}$ are asymptotically good if $q$ is even (respectively, odd). In this paper, we investigate the algebraic and asymptotic properties of consta-dihedral codes over $F$, and show that: if $q$ is even or $4\,|\,(q-1)$, then the self-dual consta-dihedral codes are asymptotically good; otherwise, the LCD consta-dihedral codes are asymptotically good. Moreover, with the help of a technique developed in this paper, some errors in the reference mentioned above are corrected.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 05:04:40 GMT" } ]
2023-03-02T00:00:00
[ [ "Fan", "Yun", "" ], [ "Leng", "Yue", "" ] ]
new_dataset
0.998117
2303.00260
Abhishek Verma
Sachin Kumar Verma, Abhishek Verma, Avinash Chandra Pandey
Addressing DAO Insider Attacks in IPv6-Based Low-Power and Lossy Networks
null
In 2022 IEEE Region 10 Symposium (TENSYMP) (pp. 1-6). IEEE (July, 2022)
10.1109/TENSYMP54529.2022.9864545
null
cs.CR cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Low-Power and Lossy Networks (LLNs) run on resource-constrained devices and play a key role in many Industrial Internet of Things and Cyber-Physical Systems based applications. However, achieving energy-efficient routing in LLNs remains a major challenge. This challenge is addressed by the Routing Protocol for Low-Power and Lossy Networks (RPL), which is specified in RFC 6550 as a "Proposed Standard" at present. In RPL, a client node uses Destination Advertisement Object (DAO) control messages to pass on the destination information towards the root node. An attacker may exploit the DAO sending mechanism of RPL to perform a DAO Insider attack in LLNs. In this paper, it is shown that an aggressive attacker can drastically degrade the network performance. To address the DAO Insider attack, a lightweight defense solution is proposed. The proposed solution uses an early blacklisting strategy to significantly mitigate the attack and restore RPL performance. The proposed solution is implemented and tested on the Cooja simulator.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 06:33:29 GMT" } ]
2023-03-02T00:00:00
[ [ "Verma", "Sachin Kumar", "" ], [ "Verma", "Abhishek", "" ], [ "Pandey", "Avinash Chandra", "" ] ]
new_dataset
0.997358
2303.00300
Mingming Zhang
Mingming Zhang, Ye Du, Zhenghui Hu, Qingjie Liu, Yunhong Wang
BiSVP: Building Footprint Extraction via Bidirectional Serialized Vertex Prediction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extracting building footprints from remote sensing images has been attracting extensive attention recently. Dominant approaches address this challenging problem by generating vectorized building masks with cumbersome refinement stages, which limits the application of such methods. In this paper, we introduce a new refinement-free and end-to-end building footprint extraction method, which is conceptually intuitive, simple, and effective. Our method, termed BiSVP, represents a building instance with ordered vertices and formulates the building footprint extraction as predicting the serialized vertices directly in a bidirectional fashion. Moreover, we propose a cross-scale feature fusion (CSFF) module to facilitate high resolution and rich semantic feature learning, which is essential for the dense building vertex prediction task. Without bells and whistles, our BiSVP outperforms state-of-the-art methods by considerable margins on three building instance segmentation benchmarks, clearly demonstrating its superiority. The code and datasets will be made publicly available.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 07:50:34 GMT" } ]
2023-03-02T00:00:00
[ [ "Zhang", "Mingming", "" ], [ "Du", "Ye", "" ], [ "Hu", "Zhenghui", "" ], [ "Liu", "Qingjie", "" ], [ "Wang", "Yunhong", "" ] ]
new_dataset
0.998812
2303.00322
Igor Sedl\'ar
Igor Sedl\'ar
Kleene Algebra With Tests for Weighted Programs
Full version of a paper accepted to ISMVL 2023
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Weighted programs generalize probabilistic programs and offer a framework for specifying and encoding mathematical models by means of an algorithmic representation. Kleene algebra with tests is an algebraic formalism based on regular expressions with applications in proving program equivalence. We extend the language of Kleene algebra with tests so that it is sufficient to formalize reasoning about a simplified version of weighted programs. We introduce relational semantics for the extended language, and we generalize the relational semantics to an appropriate extension of Kleene algebra with tests, called Kleene algebra with weights and tests. We demonstrate by means of an example that Kleene algebra with weights and tests offers a simple algebraic framework for reasoning about equivalence and optimal runs of weighted programs.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 08:35:56 GMT" } ]
2023-03-02T00:00:00
[ [ "Sedlár", "Igor", "" ] ]
new_dataset
0.990512
2303.00328
Luca Ferrarini
Yuri Faenza, Luca Ferrarini
The Total Matching Polytope of Complete Bipartite Graphs
17 pages
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The total matching polytope generalizes the stable set polytope and the matching polytope. In this paper, we first propose new facet-defining inequalities for the total matching polytope. We then give an exponential-sized, non-redundant description in the original space and a compact description in an extended space of the total matching polytope of complete bipartite graphs.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 08:45:36 GMT" } ]
2023-03-02T00:00:00
[ [ "Faenza", "Yuri", "" ], [ "Ferrarini", "Luca", "" ] ]
new_dataset
0.986341
2303.00337
Bilel Benjdira Dr.
Bilel Benjdira, Anis Koubaa, Ahmad Taher Azar, Zahid Khan, Adel Ammar, Wadii Boulila
TAU: A Framework for Video-Based Traffic Analytics Leveraging Artificial Intelligence and Unmanned Aerial Systems
This is the final proofread version submitted to Elsevier EAAI: please see the published version at: https://doi.org/10.1016/j.engappai.2022.105095
Engineering Applications of Artificial Intelligence, Volume 114, 2022, 105095, ISSN 0952-1976
10.1016/j.engappai.2022.105095
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Smart traffic engineering and intelligent transportation services are in increasing demand from governmental authorities to optimize traffic performance and thus reduce energy costs, increase the drivers' safety and comfort, ensure traffic law enforcement, and detect traffic violations. In this paper, we address this challenge, and we leverage Artificial Intelligence (AI) and Unmanned Aerial Vehicles (UAVs) to develop an AI-integrated video analytics framework, called TAU (Traffic Analysis from UAVs), for automated traffic analytics and understanding. Unlike previous works on traffic video analytics, we propose an automated object detection and tracking pipeline from video processing to advanced traffic understanding using high-resolution UAV images. TAU combines six main contributions. First, it proposes a pre-processing algorithm to adapt the high-resolution UAV image as input to the object detector without lowering the resolution. This ensures an excellent detection accuracy from high-quality features, particularly the small size of detected objects from UAV images. Second, it introduces an algorithm for recalibrating the vehicle coordinates to ensure that vehicles are uniquely identified and tracked across the multiple crops of the same frame. Third, it presents a speed calculation algorithm based on accumulating information from successive frames. Fourth, TAU counts the number of vehicles per traffic zone based on the Ray Tracing algorithm. Fifth, TAU has a fully independent algorithm for crossroad arbitration based on the data gathered from the different zones surrounding it. Sixth, TAU introduces a set of algorithms for extracting twenty-four types of insights from the raw data collected. The code is shared here: https://github.com/bilel-bj/TAU. Video demonstrations are provided here: https://youtu.be/wXJV0H7LviU and here: https://youtu.be/kGv0gmtVEbI.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 09:03:44 GMT" } ]
2023-03-02T00:00:00
[ [ "Benjdira", "Bilel", "" ], [ "Koubaa", "Anis", "" ], [ "Azar", "Ahmad Taher", "" ], [ "Khan", "Zahid", "" ], [ "Ammar", "Adel", "" ], [ "Boulila", "Wadii", "" ] ]
new_dataset
0.988837
2303.00344
Yash Kumar Atri
Priyanshi Gupta, Yash Kumar Atri, Apurva Nagvenkar, Sourish Dasgupta, Tanmoy Chakraborty
Inline Citation Classification using Peripheral Context and Time-evolving Augmentation
accepted to PAKDD 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Citation plays a pivotal role in determining the associations among research articles. It portrays essential information in indicative, supportive, or contrastive studies. The task of inline citation classification aids in extrapolating these relationships; however, existing studies are still immature and demand further scrutiny. Current datasets and methods used for inline citation classification only use citation-marked sentences, constraining the model to turn a blind eye to domain knowledge and neighboring contextual sentences. In this paper, we propose a new dataset, named 3Cext, which, along with the cited sentences, provides discourse information using the vicinal sentences to analyze the contrasting and entailing relationships as well as domain information. We propose PeriCite, a Transformer-based deep neural network that fuses peripheral sentences and domain knowledge. Our model achieves the state of the art on the 3Cext dataset with a +0.09 F1 improvement over the best baseline. We conduct extensive ablations to analyze the efficacy of the proposed dataset and model fusion methods.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 09:11:07 GMT" } ]
2023-03-02T00:00:00
[ [ "Gupta", "Priyanshi", "" ], [ "Atri", "Yash Kumar", "" ], [ "Nagvenkar", "Apurva", "" ], [ "Dasgupta", "Sourish", "" ], [ "Chakraborty", "Tanmoy", "" ] ]
new_dataset
0.995993
2303.00355
Liu Chenyang
Chenyang Liu, Jiajun Yang, Zipeng Qi, Zhengxia Zou and Zhenwei Shi
Progressive Scale-aware Network for Remote sensing Image Change Captioning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Remote sensing (RS) images contain numerous objects of different scales, which poses significant challenges for the RS image change captioning (RSICC) task: identifying visual changes of interest in complex scenes and describing them via language. However, current methods still have some weaknesses in sufficiently extracting and utilizing multi-scale information. In this paper, we propose a progressive scale-aware network (PSNet) to address the problem. PSNet is a pure Transformer-based model. To sufficiently extract multi-scale visual features, multiple progressive difference perception (PDP) layers are stacked to progressively exploit the difference features of the bitemporal features. To sufficiently utilize the extracted multi-scale features for captioning, we propose a scale-aware reinforcement (SR) module and combine it with the Transformer decoding layer to progressively utilize the features from different PDP layers. Experiments show that the PDP layers and the SR module are effective, and that our PSNet outperforms previous methods.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 09:33:49 GMT" } ]
2023-03-02T00:00:00
[ [ "Liu", "Chenyang", "" ], [ "Yang", "Jiajun", "" ], [ "Qi", "Zipeng", "" ], [ "Zou", "Zhengxia", "" ], [ "Shi", "Zhenwei", "" ] ]
new_dataset
0.952606
2303.00458
Manos Kamarianakis
Manos Kamarianakis, Antonis Protopsaltis, George Papagiannakis
AR-Assisted Surgical Care via 5G networks for First Aid Responders
3 pages, 2 figures, presented at IEEE International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD) 2022, 2-3 November 2022
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Surgeons should play a central role in disaster planning and management due to the overwhelming number of bodily injuries that are typically involved during most forms of disaster. In fact, various types of surgical procedures are performed by emergency medical teams after sudden-onset disasters, such as soft tissue wounds, orthopaedic traumas, abdominal surgeries, etc. HMD-based Augmented Reality (AR), using state-of-the-art hardware such as the Magic Leap or the Microsoft HoloLens, has long been foreseen as a key enabler for clinicians in surgical use cases, especially for procedures performed outside of the operating room. This paper describes the Use Case (UC) "AR-assisted emergency surgical care", identified in the context of the 5G-EPICENTRE EU-funded project. Specifically, the UC will experiment with holographic AR technology for emergency medical surgery teams, by overlaying deformable medical models directly on top of the patient body parts, effectively enabling surgeons to see inside (visualizing bones, blood vessels, etc.) and perform surgical actions following step-by-step instructions. The goal is to combine the computational and data-intensive nature of AR and Computer Vision algorithms with upcoming 5G network architectures deployed for edge computing so as to satisfy real-time interaction requirements and provide an efficient and powerful platform for the pervasive promotion of such applications. By developing the necessary Virtual Network Functions (VNFs) to manage data-intensive services (e.g., prerendering, caching, compression) and by exploiting available network resources and Multi-access Edge Computing (MEC) support provided by the 5G-EPICENTRE infrastructure, this UC aims to provide powerful AR-based tools, usable on site, to first-aid responders.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 12:33:31 GMT" } ]
2023-03-02T00:00:00
[ [ "Kamarianakis", "Manos", "" ], [ "Protopsaltis", "Antonis", "" ], [ "Papagiannakis", "George", "" ] ]
new_dataset
0.993352
2303.00502
Zhe Niu
Zhe Niu and Brian Mak
On the Audio-visual Synchronization for Lip-to-Speech Synthesis
null
null
null
null
cs.SD cs.CV eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most lip-to-speech (LTS) synthesis models are trained and evaluated under the assumption that the audio-video pairs in the dataset are perfectly synchronized. In this work, we show that the commonly used audio-visual datasets, such as GRID, TCD-TIMIT, and Lip2Wav, can have data asynchrony issues. Training lip-to-speech with such datasets may further cause the model asynchrony issue -- that is, the generated speech and the input video are out of sync. To address these asynchrony issues, we propose a synchronized lip-to-speech (SLTS) model with an automatic synchronization mechanism (ASM) to correct data asynchrony and penalize model asynchrony. We further demonstrate the limitation of the commonly adopted evaluation metrics for LTS with asynchronous test data and introduce an audio alignment frontend before the metrics sensitive to time alignment for better evaluation. We compare our method with state-of-the-art approaches on conventional and time-aligned metrics to show the benefits of synchronization training.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 13:35:35 GMT" } ]
2023-03-02T00:00:00
[ [ "Niu", "Zhe", "" ], [ "Mak", "Brian", "" ] ]
new_dataset
0.99704
2303.00532
Christian Lienen
Christian Lienen, Sorel Horst Middeke, and Marco Platzner
fpgaDDS: An Intra-FPGA Data Distribution Service for ROS 2 Robotics Applications
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Modern computing platforms for robotics applications comprise a set of heterogeneous elements, e.g., multi-core CPUs, embedded GPUs, and FPGAs. FPGAs are reprogrammable hardware devices that allow for fast and energy-efficient computation of many relevant tasks in robotics. ROS is the de-facto programming standard for robotics and decomposes an application into a set of communicating nodes. ReconROS is a previous approach that can map complete ROS nodes into hardware for acceleration. Since ReconROS relies on standard ROS communication layers, exchanging data between hardware-mapped nodes can lead to a performance bottleneck. This paper presents fpgaDDS, a lean data distribution service for hardware-mapped ROS 2 nodes. fpgaDDS relies on a customized and statically generated streaming-based communication architecture. We detail this communication architecture with its components and outline its benefits. We evaluate fpgaDDS on a test example and a larger autonomous vehicle case study. Compared to a ROS 2 application in software, we achieve speedups of up to 13.34 and reduce jitter by two orders of magnitude.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 14:13:52 GMT" } ]
2023-03-02T00:00:00
[ [ "Lienen", "Christian", "" ], [ "Middeke", "Sorel Horst", "" ], [ "Platzner", "Marco", "" ] ]
new_dataset
0.999554
2303.00534
Zheng Yuan
Zheng Yuan, Qiao Jin, Chuanqi Tan, Zhengyun Zhao, Hongyi Yuan, Fei Huang, Songfang Huang
RAMM: Retrieval-augmented Biomedical Visual Question Answering with Multi-modal Pre-training
null
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by/4.0/
Vision-and-language multi-modal pretraining and fine-tuning have shown great success in visual question answering (VQA). Compared to general domain VQA, the performance of biomedical VQA suffers from limited data. In this paper, we propose a retrieval-augmented pretrain-and-finetune paradigm named RAMM for biomedical VQA to overcome the data limitation issue. Specifically, we collect a new biomedical dataset named PMCPM which offers patient-based image-text pairs containing diverse patient situations from PubMed. Then, we pretrain the biomedical multi-modal model to learn visual and textual representations for image-text pairs and align these representations with an image-text contrastive (ITC) objective. Finally, we propose a retrieval-augmented method to better use the limited data. To this end, we retrieve similar image-text pairs based on ITC from the pretraining datasets and introduce a novel retrieval-attention module to fuse the representation of the image and the question with the retrieved images and texts. Experiments demonstrate that our retrieval-augmented pretrain-and-finetune paradigm obtains state-of-the-art performance on the Med-VQA2019, Med-VQA2021, VQARAD, and SLAKE datasets. Further analysis shows that the proposed RAMM and PMCPM can enhance biomedical VQA performance compared with previous resources and methods. We will open-source our dataset, codes, and pretrained model.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 14:21:19 GMT" } ]
2023-03-02T00:00:00
[ [ "Yuan", "Zheng", "" ], [ "Jin", "Qiao", "" ], [ "Tan", "Chuanqi", "" ], [ "Zhao", "Zhengyun", "" ], [ "Yuan", "Hongyi", "" ], [ "Huang", "Fei", "" ], [ "Huang", "Songfang", "" ] ]
new_dataset
0.994572
2303.00703
Renrui Zhang
Renrui Zhang, Liuhui Wang, Ziyu Guo, Jianbo Shi
Nearest Neighbors Meet Deep Neural Networks for Point Cloud Analysis
Accepted by WACV 2023
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 1246-1255
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Performance on standard 3D point cloud benchmarks has plateaued, resulting in oversized models and complex network designs that make only fractional improvements. We present an alternative that enhances existing deep neural networks without any redesigning or extra parameters, termed the Spatial-Neighbor Adapter (SN-Adapter). Building on any trained 3D network, we utilize its learned encoding capability to extract features of the training dataset and summarize them as prototypical spatial knowledge. For a test point cloud, the SN-Adapter retrieves k nearest neighbors (k-NN) from the pre-constructed spatial prototypes and linearly interpolates the k-NN prediction with that of the original 3D network. By providing complementary characteristics, the proposed SN-Adapter serves as a plug-and-play module to economically improve performance in a non-parametric manner. More importantly, our SN-Adapter can be effectively generalized to various 3D tasks, including shape classification, part segmentation, and 3D object detection, demonstrating its superiority and robustness. We hope our approach could show a new perspective for point cloud analysis and facilitate future research.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 17:57:09 GMT" } ]
2023-03-02T00:00:00
[ [ "Zhang", "Renrui", "" ], [ "Wang", "Liuhui", "" ], [ "Guo", "Ziyu", "" ], [ "Shi", "Jianbo", "" ] ]
new_dataset
0.992156
2303.00725
Saghir Alfasly
Saghir Alfasly, Zaid Al-huda, Saifullah Bello, Ahmed Elazab, Jian Lu, Chen Xu
OSRE: Object-to-Spot Rotation Estimation for Bike Parking Assessment
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Current deep models provide remarkable object detection in terms of object classification and localization. However, estimating object rotation with respect to other visual objects in the visual context of an input image still lacks in-depth study due to the unavailability of object datasets with rotation annotations. This paper tackles these two challenges to solve the rotation estimation of a parked bike with respect to its parking area. First, we leverage the power of 3D graphics to build a camera-agnostic, well-annotated Synthetic Bike Rotation Dataset (SynthBRSet). Then, we propose an object-to-spot rotation estimator (OSRE) by extending the object detection task to further regress the bike rotations in two axes. Since our model is purely trained on synthetic data, we adopt image smoothing techniques when deploying it on real-world images. The proposed OSRE is evaluated on synthetic and real-world data, providing promising results. Our data and code are available at \href{https://github.com/saghiralfasly/OSRE-Project}{https://github.com/saghiralfasly/OSRE-Project}.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 18:34:10 GMT" } ]
2023-03-02T00:00:00
[ [ "Alfasly", "Saghir", "" ], [ "Al-huda", "Zaid", "" ], [ "Bello", "Saifullah", "" ], [ "Elazab", "Ahmed", "" ], [ "Lu", "Jian", "" ], [ "Xu", "Chen", "" ] ]
new_dataset
0.999632
2303.00749
ZiYang Xie
Ziyang Xie, Junge Zhang, Wenye Li, Feihu Zhang, Li Zhang
S-NeRF: Neural Radiance Fields for Street Views
ICLR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural Radiance Fields (NeRFs) aim to synthesize novel views of objects and scenes, given object-centric camera views with large overlaps. However, we conjecture that this paradigm does not fit the nature of the street views that are collected by many self-driving cars from large-scale unbounded scenes. Also, the onboard cameras perceive scenes without much overlap. Thus, existing NeRFs often produce blurs, 'floaters' and other artifacts on street-view synthesis. In this paper, we propose a new street-view NeRF (S-NeRF) that considers novel view synthesis of both the large-scale background scenes and the foreground moving vehicles jointly. Specifically, we improve the scene parameterization function and the camera poses for learning better neural representations from street views. We also use the noisy and sparse LiDAR points to boost the training and learn a robust geometry and a reprojection-based confidence to address the depth outliers. Moreover, we extend our S-NeRF for reconstructing moving vehicles, which is impracticable for conventional NeRFs. Thorough experiments on the large-scale driving datasets (e.g., nuScenes and Waymo) demonstrate that our method beats the state-of-the-art rivals, reducing the mean-squared error in street-view synthesis by 7% to 40% and achieving a 45% PSNR gain for moving vehicle rendering.
[ { "version": "v1", "created": "Wed, 1 Mar 2023 18:59:30 GMT" } ]
2023-03-02T00:00:00
[ [ "Xie", "Ziyang", "" ], [ "Zhang", "Junge", "" ], [ "Li", "Wenye", "" ], [ "Zhang", "Feihu", "" ], [ "Zhang", "Li", "" ] ]
new_dataset
0.99257
2006.02854
Christian Anti\'c
Christian Anti\'c
Analogical proportions
null
null
null
null
cs.LO cs.AI cs.LG cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analogy-making is at the core of human and artificial intelligence and creativity with applications to such diverse tasks as proving mathematical theorems and building mathematical theories, common sense reasoning, learning, language acquisition, and story telling. This paper introduces from first principles an abstract algebraic framework of analogical proportions of the form `$a$ is to $b$ what $c$ is to $d$' in the general setting of universal algebra. This enables us to compare mathematical objects possibly across different domains in a uniform way which is crucial for AI-systems. It turns out that our notion of analogical proportions has appealing mathematical properties. As we construct our model from first principles using only elementary concepts of universal algebra, and since our model questions some basic properties of analogical proportions presupposed in the literature, to convince the reader of the plausibility of our model we show that it can be naturally embedded into first-order logic via model-theoretic types and prove from that perspective that analogical proportions are compatible with structure-preserving mappings. This provides conceptual evidence for its applicability. In a broader sense, this paper is a first step towards a theory of analogical reasoning and learning systems with potential applications to fundamental AI-problems like common sense reasoning and computational learning and creativity.
[ { "version": "v1", "created": "Thu, 4 Jun 2020 13:44:36 GMT" }, { "version": "v10", "created": "Sat, 4 Dec 2021 16:24:42 GMT" }, { "version": "v11", "created": "Fri, 18 Feb 2022 17:16:25 GMT" }, { "version": "v12", "created": "Mon, 14 Mar 2022 17:29:07 GMT" }, { "version": "v13", "created": "Sun, 8 May 2022 12:15:52 GMT" }, { "version": "v14", "created": "Tue, 28 Feb 2023 16:47:25 GMT" }, { "version": "v2", "created": "Sun, 7 Jun 2020 13:54:42 GMT" }, { "version": "v3", "created": "Tue, 25 Aug 2020 14:30:38 GMT" }, { "version": "v4", "created": "Thu, 10 Dec 2020 14:52:31 GMT" }, { "version": "v5", "created": "Sat, 17 Apr 2021 14:36:37 GMT" }, { "version": "v6", "created": "Tue, 25 May 2021 12:12:56 GMT" }, { "version": "v7", "created": "Sun, 15 Aug 2021 14:21:56 GMT" }, { "version": "v8", "created": "Mon, 22 Nov 2021 20:59:26 GMT" }, { "version": "v9", "created": "Wed, 24 Nov 2021 21:50:43 GMT" } ]
2023-03-01T00:00:00
[ [ "Antić", "Christian", "" ] ]
new_dataset
0.950269
2105.01306
Youngseo Son
Youngseo Son, Vasudha Varadarajan, H Andrew Schwartz
Discourse Relation Embeddings: Representing the Relations between Discourse Segments in Social Media
Published in EMNLP 2022 UM-IoS
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Discourse relations are typically modeled as a discrete class that characterizes the relation between segments of text (e.g. causal explanations, expansions). However, such predefined discrete classes limit the universe of potential relationships and their nuanced differences. Analogous to contextual word embeddings, we propose representing discourse relations as points in high dimensional continuous space. However, unlike words, discourse relations often have no surface form (relations are between two segments, often with no word or phrase in that gap), which presents a challenge for existing embedding techniques. We present a novel method for automatically creating discourse relation embeddings (DiscRE), addressing the embedding challenge through a weakly supervised, multitask approach to learn diverse and nuanced relations between discourse segments in social media. Results show DiscRE can: (1) obtain the best performance on the Twitter discourse relation classification task (macro F1=0.76), (2) improve the state of the art in social media causality prediction (from F1=.79 to .81), (3) perform beyond modern sentence and contextual word embeddings at traditional discourse relation classification, and (4) capture novel nuanced relations (e.g. relations semantically at the intersection of causal explanations and counterfactuals).
[ { "version": "v1", "created": "Tue, 4 May 2021 05:58:27 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 06:17:38 GMT" } ]
2023-03-01T00:00:00
[ [ "Son", "Youngseo", "" ], [ "Varadarajan", "Vasudha", "" ], [ "Schwartz", "H Andrew", "" ] ]
new_dataset
0.989213
2106.13201
Chengxi Li
Chengxi Li, Stanley H. Chan, Yi-Ting Chen
DROID: Driver-centric Risk Object Identification
Submitted to TPAMI
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification of high-risk driving situations is generally approached through collision risk estimation or accident pattern recognition. In this work, we approach the problem from the perspective of subjective risk. We operationalize subjective risk assessment by predicting driver behavior changes and identifying the cause of changes. To this end, we introduce a new task called driver-centric risk object identification (DROID), which uses egocentric video to identify object(s) influencing a driver's behavior, given only the driver's response as the supervision signal. We formulate the task as a cause-effect problem and present a novel two-stage DROID framework, taking inspiration from models of situation awareness and causal inference. A subset of data constructed from the Honda Research Institute Driving Dataset (HDD) is used to evaluate DROID. We demonstrate state-of-the-art DROID performance, even compared with strong baseline models using this dataset. Additionally, we conduct extensive ablative studies to justify our design choices. Moreover, we demonstrate the applicability of DROID for risk assessment.
[ { "version": "v1", "created": "Thu, 24 Jun 2021 17:27:32 GMT" }, { "version": "v2", "created": "Fri, 7 Oct 2022 05:22:19 GMT" }, { "version": "v3", "created": "Tue, 28 Feb 2023 17:36:38 GMT" } ]
2023-03-01T00:00:00
[ [ "Li", "Chengxi", "" ], [ "Chan", "Stanley H.", "" ], [ "Chen", "Yi-Ting", "" ] ]
new_dataset
0.999502
2110.06651
Linhan Zhang
Linhan Zhang, Qian Chen, Wen Wang, Chong Deng, Shiliang Zhang, Bing Li, Wei Wang, Xin Cao
MDERank: A Masked Document Embedding Rank Approach for Unsupervised Keyphrase Extraction
13 pages, 5 figures
Findings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Keyphrase extraction (KPE) automatically extracts phrases in a document that provide a concise summary of the core content, which benefits downstream information retrieval and NLP tasks. Previous state-of-the-art (SOTA) methods select candidate keyphrases based on the similarity between learned representations of the candidates and the document. They suffer performance degradation on long documents due to the discrepancy between sequence lengths, which causes a mismatch between the representations of keyphrase candidates and the document. In this work, we propose a novel unsupervised embedding-based KPE approach, Masked Document Embedding Rank (MDERank), to address this problem by leveraging a mask strategy and ranking candidates by the similarity between the embeddings of the source document and the masked document. We further develop a KPE-oriented BERT (KPEBERT) model by proposing a novel self-supervised contrastive learning method, which is more compatible with MDERank than vanilla BERT. Comprehensive evaluations on six KPE benchmarks demonstrate that the proposed MDERank outperforms the state-of-the-art unsupervised KPE approaches by an average improvement of 1.80 $F1@15$. MDERank further benefits from KPEBERT and overall achieves an average improvement of 3.53 $F1@15$ over the SOTA SIFRank. Our code is available at \url{https://github.com/LinhanZ/mderank}.
[ { "version": "v1", "created": "Wed, 13 Oct 2021 11:29:17 GMT" }, { "version": "v2", "created": "Tue, 29 Mar 2022 09:07:29 GMT" }, { "version": "v3", "created": "Tue, 28 Feb 2023 00:54:45 GMT" } ]
2023-03-01T00:00:00
[ [ "Zhang", "Linhan", "" ], [ "Chen", "Qian", "" ], [ "Wang", "Wen", "" ], [ "Deng", "Chong", "" ], [ "Zhang", "Shiliang", "" ], [ "Li", "Bing", "" ], [ "Wang", "Wei", "" ], [ "Cao", "Xin", "" ] ]
new_dataset
0.973285
2112.02500
Peng Zheng
Jie Qin, Peng Zheng, Yichao Yan, Rong Quan, Xiaogang Cheng, Bingbing Ni
MovieNet-PS: A Large-Scale Person Search Dataset in the Wild
ICASSP 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Person search aims to jointly localize and identify a query person from natural, uncropped images, which has been actively studied over the past few years. In this paper, we delve into the rich context information globally and locally surrounding the target person, which we refer to as scene and group context, respectively. Unlike previous works that treat the two types of context individually, we exploit them in a unified global-local context network (GLCNet) with the intuitive aim of feature enhancement. Specifically, re-ID embeddings and context features are simultaneously learned in a multi-stage fashion, ultimately leading to enhanced, discriminative features for person search. We conduct the experiments on two person search benchmarks (i.e., CUHK-SYSU and PRW) as well as extend our approach to a more challenging setting (i.e., character search on MovieNet). Extensive experimental results demonstrate the consistent improvement of the proposed GLCNet over the state-of-the-art methods on all three datasets. Our source codes, pre-trained models, and the new dataset are publicly available at: https://github.com/ZhengPeng7/GLCNet.
[ { "version": "v1", "created": "Sun, 5 Dec 2021 07:38:53 GMT" }, { "version": "v2", "created": "Fri, 25 Mar 2022 11:11:26 GMT" }, { "version": "v3", "created": "Tue, 12 Apr 2022 13:20:39 GMT" }, { "version": "v4", "created": "Tue, 28 Feb 2023 11:19:31 GMT" } ]
2023-03-01T00:00:00
[ [ "Qin", "Jie", "" ], [ "Zheng", "Peng", "" ], [ "Yan", "Yichao", "" ], [ "Quan", "Rong", "" ], [ "Cheng", "Xiaogang", "" ], [ "Ni", "Bingbing", "" ] ]
new_dataset
0.999767
2201.06435
Selahattin Cansiz
Selahattin Cansiz, Cem Kesim, Sevval Nur Bektas, Zeynep Kulali, Murat Hasanreisoglu, Cigdem Gunduz-Demir
FourierNet: Shape-Preserving Network for Henle's Fiber Layer Segmentation in Optical Coherence Tomography Images
null
IEEE Journal of Biomedical and Health Informatics, vol. 27, no. 2, pp. 1036-1047, Feb. 2023
10.1109/JBHI.2022.3225425
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Henle's fiber layer (HFL) in the retina carries valuable information on the macular condition of an eye. However, in common practice, this layer is not separately segmented but rather included in the outer nuclear layer, since it is difficult to perceive HFL contours on standard optical coherence tomography (OCT) imaging. Due to its variable reflectivity under an imaging beam, delineating the HFL contours necessitates directional OCT, which requires additional imaging. This paper addresses this issue by introducing a shape-preserving network, FourierNet, that achieves HFL segmentation in standard OCT scans with the target performance obtained when directional OCT scans are used. FourierNet is a new cascaded network design that puts forward the idea of exploiting the shape prior of the HFL in network training. This design proposes to represent the shape prior by extracting Fourier descriptors on the HFL contours and defining an additional regression task of learning these descriptors. It then formulates HFL segmentation as concurrent learning of regression and classification tasks, in which Fourier descriptors are estimated from an input image to encode the shape prior and used together with the input image to construct the HFL segmentation map. Our experiments on 1470 images of 30 OCT scans reveal that quantifying the HFL shape with Fourier descriptors and concurrently learning them with the main task of HFL segmentation lead to better results. This indicates the effectiveness of designing a shape-preserving network to improve HFL segmentation by reducing the need to perform directional OCT imaging.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 14:50:26 GMT" } ]
2023-03-01T00:00:00
[ [ "Cansiz", "Selahattin", "" ], [ "Kesim", "Cem", "" ], [ "Bektas", "Sevval Nur", "" ], [ "Kulali", "Zeynep", "" ], [ "Hasanreisoglu", "Murat", "" ], [ "Gunduz-Demir", "Cigdem", "" ] ]
new_dataset
0.998959
2203.13474
Erik Nijkamp Dr.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong
CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis
null
null
null
null
cs.LG cs.CL cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Program synthesis strives to generate a computer program as a solution to a given problem specification, expressed with input-output examples or natural language descriptions. The prevalence of large language models advances the state-of-the-art for program synthesis, though limited training resources and data impede open access to such models. To democratize this, we train and release a family of large language models up to 16.1B parameters, called CODEGEN, on natural language and programming language data, and open source the training library JAXFORMER. We show the utility of the trained model by demonstrating that it is competitive with the previous state-of-the-art on zero-shot Python code generation on HumanEval. We further investigate the multi-step paradigm for program synthesis, where a single program is factorized into multiple prompts specifying subproblems. To this end, we construct an open benchmark, Multi-Turn Programming Benchmark (MTPB), consisting of 115 diverse problem sets that are factorized into multi-turn prompts. Our analysis on MTPB shows that the same intent provided to CODEGEN in multi-turn fashion significantly improves program synthesis over that provided as a single turn. We make the training library JAXFORMER and model checkpoints available as open source contribution: https://github.com/salesforce/CodeGen.
[ { "version": "v1", "created": "Fri, 25 Mar 2022 06:55:15 GMT" }, { "version": "v2", "created": "Mon, 28 Mar 2022 17:10:30 GMT" }, { "version": "v3", "created": "Wed, 30 Mar 2022 06:57:04 GMT" }, { "version": "v4", "created": "Thu, 29 Sep 2022 20:43:54 GMT" }, { "version": "v5", "created": "Mon, 27 Feb 2023 21:26:48 GMT" } ]
2023-03-01T00:00:00
[ [ "Nijkamp", "Erik", "" ], [ "Pang", "Bo", "" ], [ "Hayashi", "Hiroaki", "" ], [ "Tu", "Lifu", "" ], [ "Wang", "Huan", "" ], [ "Zhou", "Yingbo", "" ], [ "Savarese", "Silvio", "" ], [ "Xiong", "Caiming", "" ] ]
new_dataset
0.996476
2206.07754
Katherine O'Toole
Katherine O'Toole and Em\H{o}ke-\'Agnes Horv\'at
Novelty and Cultural Evolution in Modern Popular Music
null
EPJ Data Science 12 (2023) 1-25
10.1140/epjds/s13688-023-00377-7
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
The ubiquity of digital music consumption has made it possible to extract information about modern music that allows us to perform large scale analysis of stylistic change over time. In order to uncover underlying patterns in cultural evolution, we examine the relationship between the established characteristics of different genres and styles, and the introduction of novel ideas that fuel this ongoing creative evolution. To understand how this dynamic plays out and shapes the cultural ecosystem, we compare musical artifacts to their contemporaries to identify novel artifacts, study the relationship between novelty and commercial success, and connect this to the changes in musical content that we can observe over time. Using Music Information Retrieval (MIR) data and lyrics from Billboard Hot 100 songs between 1974-2013, we calculate a novelty score for each song's aural attributes and lyrics. Comparing both scores to the popularity of the song following its release, we uncover key patterns in the relationship between novelty and audience reception. Additionally, we look at the link between novelty and the likelihood that a song was influential given where its MIR and lyrical features fit within the larger trends we observed.
[ { "version": "v1", "created": "Wed, 15 Jun 2022 18:25:39 GMT" }, { "version": "v2", "created": "Wed, 26 Oct 2022 19:05:54 GMT" }, { "version": "v3", "created": "Mon, 27 Feb 2023 21:34:59 GMT" } ]
2023-03-01T00:00:00
[ [ "O'Toole", "Katherine", "" ], [ "Horvát", "Emőke-Ágnes", "" ] ]
new_dataset
0.996752
2209.06628
Fangcheng Zhu
Fangcheng Zhu, Yunfan Ren, Fanze Kong, Huajie Wu, Siqi Liang, Nan Chen, Wei Xu, Fu Zhang
Swarm-LIO: Decentralized Swarm LiDAR-inertial Odometry
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate self and relative state estimation are critical preconditions for completing swarm tasks, e.g., collaborative autonomous exploration, target tracking, and search and rescue. This paper proposes Swarm-LIO: a fully decentralized state estimation method for aerial swarm systems, in which each drone performs precise ego-state estimation, exchanges ego-state and mutual observation information by wireless communication, and estimates its relative state with respect to (w.r.t.) the rest of the UAVs, all in real-time and only based on LiDAR-inertial measurements. A novel 3D LiDAR-based drone detection, identification and tracking method is proposed to obtain observations of teammate drones. The mutual observation measurements are then tightly coupled with IMU and LiDAR measurements to perform real-time and accurate estimation of the ego-state and relative state jointly. Extensive real-world experiments show broad adaptability to complicated scenarios, including GPS-denied scenes and degenerate scenes for cameras (dark night) or LiDAR (facing a single wall). Compared with the ground truth provided by a motion capture system, the results show centimeter-level localization accuracy, outperforming other state-of-the-art LiDAR-inertial odometry methods for single-UAV systems.
[ { "version": "v1", "created": "Wed, 14 Sep 2022 13:24:34 GMT" }, { "version": "v2", "created": "Sat, 25 Feb 2023 15:00:05 GMT" }, { "version": "v3", "created": "Tue, 28 Feb 2023 10:47:36 GMT" } ]
2023-03-01T00:00:00
[ [ "Zhu", "Fangcheng", "" ], [ "Ren", "Yunfan", "" ], [ "Kong", "Fanze", "" ], [ "Wu", "Huajie", "" ], [ "Liang", "Siqi", "" ], [ "Chen", "Nan", "" ], [ "Xu", "Wei", "" ], [ "Zhang", "Fu", "" ] ]
new_dataset
0.970019
2209.07734
Zhenhua Xu
Zhenhua Xu, Yuxuan Liu, Yuxiang Sun, Ming Liu, Lujia Wang
CenterLineDet: CenterLine Graph Detection for Road Lanes with Vehicle-mounted Sensors by Transformer for HD Map Generation
ICRA 2023
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the fast development of autonomous driving technologies, there is an increasing demand for high-definition (HD) maps, which provide reliable and robust prior information about the static part of the traffic environments. As one of the important elements in HD maps, road lane centerline is critical for downstream tasks, such as prediction and planning. Manually annotating centerlines for road lanes in HD maps is labor-intensive, expensive and inefficient, severely restricting the wide applications of autonomous driving systems. Previous work seldom explores the lane centerline detection problem due to the complicated topology and severe overlapping issues of lane centerlines. In this paper, we propose a novel method named CenterLineDet to detect lane centerlines for automatic HD map generation. Our CenterLineDet is trained by imitation learning and can effectively detect the graph of centerlines with vehicle-mounted sensors (i.e., six cameras and one LiDAR) through iterations. Due to the use of the DETR-like transformer network, CenterLineDet can handle complicated graph topology, such as lane intersections. The proposed approach is evaluated on the large-scale public dataset NuScenes. The superiority of our CenterLineDet is demonstrated by the comparative results. Our code, supplementary materials, and video demonstrations are available at \href{https://tonyxuqaq.github.io/projects/CenterLineDet/}{https://tonyxuqaq.github.io/projects/CenterLineDet/}.
[ { "version": "v1", "created": "Fri, 16 Sep 2022 06:15:26 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 10:44:34 GMT" } ]
2023-03-01T00:00:00
[ [ "Xu", "Zhenhua", "" ], [ "Liu", "Yuxuan", "" ], [ "Sun", "Yuxiang", "" ], [ "Liu", "Ming", "" ], [ "Wang", "Lujia", "" ] ]
new_dataset
0.999826
2209.08772
Jingxi Xu
Jingxi Xu, Han Lin, Shuran Song, Matei Ciocarlie
TANDEM3D: Active Tactile Exploration for 3D Object Recognition
7 pages. Accepted to International Conference on Robotics and Automation (ICRA) 2023
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tactile recognition of 3D objects remains a challenging task. Compared to 2D shapes, the complex geometry of 3D surfaces requires richer tactile signals, more dexterous actions, and more advanced encoding techniques. In this work, we propose TANDEM3D, a method that applies a co-training framework for exploration and decision making to 3D object recognition with tactile signals. Starting with our previous work, which introduced a co-training paradigm for 2D recognition problems, we introduce a number of advances that enable us to scale up to 3D. TANDEM3D is based on a novel encoder that builds 3D object representation from contact positions and normals using PointNet++. Furthermore, by enabling 6DOF movement, TANDEM3D explores and collects discriminative touch information with high efficiency. Our method is trained entirely in simulation and validated with real-world experiments. Compared to state-of-the-art baselines, TANDEM3D achieves higher accuracy and a lower number of actions in recognizing 3D objects and is also shown to be more robust to different types and amounts of sensor noise. Video is available at https://jxu.ai/tandem3d.
[ { "version": "v1", "created": "Mon, 19 Sep 2022 05:54:26 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 05:22:09 GMT" } ]
2023-03-01T00:00:00
[ [ "Xu", "Jingxi", "" ], [ "Lin", "Han", "" ], [ "Song", "Shuran", "" ], [ "Ciocarlie", "Matei", "" ] ]
new_dataset
0.990744
2209.09489
Zicheng Zhang
Zicheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Yuzhe Wu, Guangtao Zhai
Perceptual Quality Assessment for Digital Human Heads
null
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Digital humans have attracted increasing research interest over the last decade, with substantial effort devoted to their generation, representation, rendering, and animation. However, the quality assessment of digital humans has lagged behind. Therefore, to tackle the challenge of digital human quality assessment, we propose the first large-scale quality assessment database for three-dimensional (3D) scanned digital human heads (DHHs). The constructed database consists of 55 reference DHHs and 1,540 distorted DHHs along with subjective perceptual ratings. Then, a simple yet effective full-reference (FR) projection-based method is proposed to evaluate the visual quality of DHHs. The pretrained Swin Transformer (tiny) is employed for hierarchical feature extraction, and a multi-head attention module is utilized for feature fusion. The experimental results reveal that the proposed method exhibits state-of-the-art performance among mainstream FR metrics. The database is released at https://github.com/zzc-1998/DHHQA.
[ { "version": "v1", "created": "Tue, 20 Sep 2022 06:02:57 GMT" }, { "version": "v2", "created": "Thu, 22 Sep 2022 08:15:37 GMT" }, { "version": "v3", "created": "Tue, 10 Jan 2023 07:00:01 GMT" }, { "version": "v4", "created": "Sun, 26 Feb 2023 08:22:09 GMT" }, { "version": "v5", "created": "Tue, 28 Feb 2023 12:15:46 GMT" } ]
2023-03-01T00:00:00
[ [ "Zhang", "Zicheng", "" ], [ "Zhou", "Yingjie", "" ], [ "Sun", "Wei", "" ], [ "Min", "Xiongkuo", "" ], [ "Wu", "Yuzhe", "" ], [ "Zhai", "Guangtao", "" ] ]
new_dataset
0.993586
2210.09957
Virginie Do
Virginie Do, Elvis Dohmatob, Matteo Pirotta, Alessandro Lazaric and Nicolas Usunier
Contextual bandits with concave rewards, and an application to fair ranking
ICLR 2023
null
null
null
cs.LG cs.AI cs.CY cs.IR stat.ML
http://creativecommons.org/licenses/by/4.0/
We consider Contextual Bandits with Concave Rewards (CBCR), a multi-objective bandit problem where the desired trade-off between the rewards is defined by a known concave objective function, and the reward vector depends on an observed stochastic context. We present the first algorithm with provably vanishing regret for CBCR without restrictions on the policy space, whereas prior works were restricted to finite policy spaces or tabular representations. Our solution is based on a geometric interpretation of CBCR algorithms as optimization algorithms over the convex set of expected rewards spanned by all stochastic policies. Building on Frank-Wolfe analyses in constrained convex optimization, we derive a novel reduction from the CBCR regret to the regret of a scalar-reward bandit problem. We illustrate how to apply the reduction off-the-shelf to obtain algorithms for CBCR with both linear and general reward functions, in the case of non-combinatorial actions. Motivated by fairness in recommendation, we describe a special case of CBCR with rankings and fairness-aware objectives, leading to the first algorithm with regret guarantees for contextual combinatorial bandits with fairness of exposure.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 16:11:55 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 10:26:48 GMT" } ]
2023-03-01T00:00:00
[ [ "Do", "Virginie", "" ], [ "Dohmatob", "Elvis", "" ], [ "Pirotta", "Matteo", "" ], [ "Lazaric", "Alessandro", "" ], [ "Usunier", "Nicolas", "" ] ]
new_dataset
0.99876
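The geometric interpretation in the abstract above lends itself to a compact illustration. The sketch below is an assumption-laden toy, not the paper's bandit algorithm: the per-arm expected reward vectors are treated as fixed and known, and no contexts or confidence bounds appear. It runs plain Frank-Wolfe over the convex hull of the reward vectors with the classic 2/(k+2) step size to maximize a known concave scalarization.

import numpy as np

rewards = np.array([[1.0, 0.0],   # arm 0: good on objective 1 only
                    [0.0, 1.0],   # arm 1: good on objective 2 only
                    [0.6, 0.5]])  # arm 2: balanced

def f(mu):                        # concave objective over expected rewards
    return np.sqrt(mu[0]) + np.sqrt(mu[1])

def grad_f(mu):
    return 0.5 / np.sqrt(np.maximum(mu, 1e-9))

mu = rewards.mean(axis=0)         # start inside the hull
for k in range(200):
    g = grad_f(mu)
    a = int(np.argmax(rewards @ g))      # linear "oracle": best single arm
    gamma = 2.0 / (k + 2.0)              # classic Frank-Wolfe step size
    mu = (1 - gamma) * mu + gamma * rewards[a]

print("optimized reward vector:", mu, "objective:", f(mu))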
2212.08377
Yufeng Zheng
Yufeng Zheng, Wang Yifan, Gordon Wetzstein, Michael J. Black, Otmar Hilliges
PointAvatar: Deformable Point-based Head Avatars from Videos
Project page: https://zhengyuf.github.io/PointAvatar/ Code base: https://github.com/zhengyuf/pointavatar
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to create realistic, animatable and relightable head avatars from casual video sequences would open up wide-ranging applications in communication and entertainment. Current methods either build on explicit 3D morphable meshes (3DMM) or exploit neural implicit representations. The former are limited by fixed topology, while the latter are non-trivial to deform and inefficient to render. Furthermore, existing approaches entangle lighting in the color estimation and are thus limited when re-rendering the avatar in new environments. In contrast, we propose PointAvatar, a deformable point-based representation that disentangles the source color into intrinsic albedo and normal-dependent shading. We demonstrate that PointAvatar bridges the gap between existing mesh- and implicit representations, combining high-quality geometry and appearance with topological flexibility, ease of deformation and rendering efficiency. We show that our method is able to generate animatable 3D avatars using monocular videos from multiple sources including hand-held smartphones, laptop webcams and internet videos, achieving state-of-the-art quality in challenging cases where previous methods fail, e.g., thin hair strands, while being significantly more efficient in training than competing methods.
[ { "version": "v1", "created": "Fri, 16 Dec 2022 10:05:31 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 09:00:33 GMT" } ]
2023-03-01T00:00:00
[ [ "Zheng", "Yufeng", "" ], [ "Yifan", "Wang", "" ], [ "Wetzstein", "Gordon", "" ], [ "Black", "Michael J.", "" ], [ "Hilliges", "Otmar", "" ] ]
new_dataset
0.99781
2301.06958
Shusheng Yang
Shusheng Yang, Yixiao Ge, Kun Yi, Dian Li, Ying Shan, Xiaohu Qie, Xinggang Wang
RILS: Masked Visual Reconstruction in Language Semantic Space
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Both masked image modeling (MIM) and natural language supervision have facilitated the progress of transferable visual pre-training. In this work, we seek the synergy between the two paradigms and study the emerging properties when MIM meets natural language supervision. To this end, we present a novel masked visual Reconstruction In Language semantic Space (RILS) pre-training framework, in which sentence representations, encoded by the text encoder, serve as prototypes to transform the vision-only signals into patch-sentence probabilities as semantically meaningful MIM reconstruction targets. The vision models can therefore capture useful components with structured information by predicting the proper semantics of masked tokens. Better visual representations could, in turn, improve the text encoder via the image-text alignment objective, which is essential for the effective MIM target transformation. Extensive experimental results demonstrate that our method not only enjoys the best of previous MIM and CLIP but also achieves further improvements on various tasks due to their mutual benefits. RILS exhibits advanced transferability on downstream classification, detection, and segmentation, especially for low-shot regimes. Code will be made available at https://github.com/hustvl/RILS.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 15:32:59 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 15:59:30 GMT" } ]
2023-03-01T00:00:00
[ [ "Yang", "Shusheng", "" ], [ "Ge", "Yixiao", "" ], [ "Yi", "Kun", "" ], [ "Li", "Dian", "" ], [ "Shan", "Ying", "" ], [ "Qie", "Xiaohu", "" ], [ "Wang", "Xinggang", "" ] ]
new_dataset
0.950464
2302.04023
Yejin Bang
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung
A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity
52 pages
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper proposes a framework for quantitatively evaluating interactive LLMs such as ChatGPT using publicly available data sets. We carry out an extensive technical evaluation of ChatGPT using 23 data sets covering 8 different common NLP application tasks. We evaluate the multitask, multilingual and multi-modal aspects of ChatGPT based on these data sets and a newly designed multimodal dataset. We find that ChatGPT outperforms LLMs with zero-shot learning on most tasks and even outperforms fine-tuned models on some tasks. We find that it is better at understanding non-Latin script languages than generating them. It is able to generate multimodal content from textual prompts, via an intermediate code generation step. Moreover, we find that ChatGPT is 63.41% accurate on average in 10 different reasoning categories under logical reasoning, non-textual reasoning, and commonsense reasoning, hence making it an unreliable reasoner. It is, for example, better at deductive than inductive reasoning. ChatGPT suffers from hallucination problems like other LLMs and it generates more extrinsic hallucinations from its parametric memory as it does not have access to an external knowledge base. Finally, the interactive feature of ChatGPT enables human collaboration with the underlying LLM to improve its performance, i.e., 8% ROUGE-1 on summarization and 2% ChrF++ on machine translation, in a multi-turn "prompt engineering" fashion. We also release the codebase for evaluation set extraction.
[ { "version": "v1", "created": "Wed, 8 Feb 2023 12:35:34 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 15:20:21 GMT" } ]
2023-03-01T00:00:00
[ [ "Bang", "Yejin", "" ], [ "Cahyawijaya", "Samuel", "" ], [ "Lee", "Nayeon", "" ], [ "Dai", "Wenliang", "" ], [ "Su", "Dan", "" ], [ "Wilie", "Bryan", "" ], [ "Lovenia", "Holy", "" ], [ "Ji", "Ziwei", "" ], [ "Yu", "Tiezheng", "" ], [ "Chung", "Willy", "" ], [ "Do", "Quyet V.", "" ], [ "Xu", "Yan", "" ], [ "Fung", "Pascale", "" ] ]
new_dataset
0.98106
2302.12301
Rahul Deshmukh
Rahul Deshmukh, Constantine J. Roros, Amith Kashyap, Avinash C. Kak
An Aligned Multi-Temporal Multi-Resolution Satellite Image Dataset for Change Detection Research
8 pages, 4 figures, 3 tables, satellite image dataset
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an aligned multi-temporal and multi-resolution satellite image dataset for research in change detection. We expect our dataset to be useful to researchers who want to fuse information from multiple satellites for detecting changes on the surface of the earth that may not be fully visible in any single satellite. The dataset we present was created by augmenting the SpaceNet-7 dataset with temporally parallel stacks of Landsat and Sentinel images. The SpaceNet-7 dataset consists of time-sequenced Planet images recorded over 101 AOIs (Areas-of-Interest). In our dataset, for each of the 60 AOIs that are meant for training, we augment the Planet datacube with temporally parallel datacubes of Landsat and Sentinel images. The temporal alignments between the high-res Planet images, on the one hand, and the Landsat and Sentinel images, on the other, are approximate since the temporal resolution for the Planet images is one month -- each image being a mosaic of the best data collected over a month. Whenever we have a choice regarding which Landsat and Sentinel images to pair up with the Planet images, we have chosen those that had the least cloud cover. A particularly important feature of our dataset is that the high-res and the low-res images are spatially aligned together with our MuRA framework presented in this paper. Foundational to the alignment calculation is the modeling of inter-satellite misalignment errors with polynomials as in NASA's AROP algorithm. We have named our dataset MuRA-T for the MuRA framework that is used for aligning the cross-satellite images and "T" for the temporal dimension in the dataset.
[ { "version": "v1", "created": "Thu, 23 Feb 2023 19:43:20 GMT" }, { "version": "v2", "created": "Mon, 27 Feb 2023 20:50:27 GMT" } ]
2023-03-01T00:00:00
[ [ "Deshmukh", "Rahul", "" ], [ "Roros", "Constantine J.", "" ], [ "Kashyap", "Amith", "" ], [ "Kak", "Avinash C.", "" ] ]
new_dataset
0.999719
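To make the polynomial-misalignment idea concrete, here is a hedged sketch (synthetic control points and a first-order polynomial only; the dataset's actual MuRA framework and NASA's AROP algorithm are more involved) of fitting an affine warp between matched points from two satellite images by least squares.

import numpy as np

rng = np.random.default_rng(1)
src = rng.uniform(0, 1000, size=(30, 2))          # control points, image A
A_true = np.array([[1.001, 0.002], [-0.001, 0.999]])
b_true = np.array([3.5, -2.0])                    # small shift + distortion
dst = src @ A_true.T + b_true + rng.normal(0, 0.1, src.shape)

# Design matrix for x' = a0 + a1*x + a2*y (and the same form for y').
G = np.column_stack([np.ones(len(src)), src])
coef_x, *_ = np.linalg.lstsq(G, dst[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(G, dst[:, 1], rcond=None)
pred = np.column_stack([G @ coef_x, G @ coef_y])
print("RMS residual (pixels):", np.sqrt(((pred - dst) ** 2).mean()))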
2302.12746
\'Oscar Garc\'ia-Sierra
Miguel Ortega-Mart\'in, \'Oscar Garc\'ia-Sierra, Alfonso Ardoiz, Juan Carlos Armenteros, Jorge \'Alvarez and Adri\'an Alonso
Spanish Built Factual Freectianary (Spanish-BFF): the first AI-generated free dictionary
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Dictionaries are one of the oldest and most used linguistic resources. Building them is a complex task that, to the best of our knowledge, has yet to be explored with generative Large Language Models (LLMs). We introduce the "Spanish Built Factual Freectianary" (Spanish-BFF) as the first Spanish AI-generated dictionary. This first-of-its-kind free dictionary uses GPT-3. We also outline future steps we aim to follow to improve this initial contribution to the field, such as extending the dictionary to additional languages.
[ { "version": "v1", "created": "Fri, 24 Feb 2023 16:59:54 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 17:54:00 GMT" } ]
2023-03-01T00:00:00
[ [ "Ortega-Martín", "Miguel", "" ], [ "García-Sierra", "Óscar", "" ], [ "Ardoiz", "Alfonso", "" ], [ "Armenteros", "Juan Carlos", "" ], [ "Álvarez", "Jorge", "" ], [ "Alonso", "Adrián", "" ] ]
new_dataset
0.995342
2302.12921
Maximillian Chen
Maximillian Chen, Zhou Yu
Pre-Finetuning for Few-Shot Emotional Speech Recognition
5 pages, 4 figures. Code available at https://github.com/maxlchen/Speech-PreFinetuning
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speech models have long been known to overfit individual speakers for many classification tasks. This leads to poor generalization in settings where the speakers are out-of-domain or out-of-distribution, as is common in production environments. We view speaker adaptation as a few-shot learning problem and propose investigating transfer learning approaches inspired by recent success with pre-trained models in natural language tasks. We propose pre-finetuning speech models on difficult tasks to distill knowledge into few-shot downstream classification objectives. We pre-finetune Wav2Vec2.0 on every permutation of four multiclass emotional speech recognition corpora and evaluate our pre-finetuned models through 33,600 few-shot fine-tuning trials on the Emotional Speech Dataset.
[ { "version": "v1", "created": "Fri, 24 Feb 2023 22:38:54 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 02:28:41 GMT" } ]
2023-03-01T00:00:00
[ [ "Chen", "Maximillian", "" ], [ "Yu", "Zhou", "" ] ]
new_dataset
0.95199
2302.13506
Yu-Tsung Lee
Yu-Tsung Lee, Haining Chen, William Enck, Hayawardh Vijayakumar, Ninghui Li, Zhiyun Qian, Giuseppe Petracca, Trent Jaeger
PolyScope: Multi-Policy Access Control Analysis to Triage Android Scoped Storage
14 pages, 5 figures, submitted to IEEE TDSC. arXiv admin note: substantial text overlap with arXiv:2008.03593
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Android's filesystem access control is a crucial aspect of its system integrity. It utilizes a combination of mandatory access controls, such as SELinux, and discretionary access controls, like Unix permissions, along with specialized access controls such as Android permissions to safeguard OEM and Android services from third-party applications. However, when OEMs introduce differentiating features, they often create vulnerabilities due to their inability to properly reconfigure this complex policy combination. To address this, we introduce the POLYSCOPE tool, which triages Android filesystem access control policies to identify attack operations - authorized operations that may be exploited by adversaries to elevate their privileges. POLYSCOPE has three significant advantages over prior analyses: it allows for the independent extension and analysis of individual policy models, understands the flexibility untrusted parties have in modifying access control policies, and can identify attack operations that system configurations permit. We demonstrate the effectiveness of POLYSCOPE by examining the impact of Scoped Storage on Android, revealing that it reduces the number of attack operations possible on external storage resources by over 50%. However, because OEMs only partially adopt Scoped Storage, we also uncover two previously unknown vulnerabilities, demonstrating how POLYSCOPE can assess an ideal scenario where all apps comply with Scoped Storage, which can reduce the number of untrusted parties accessing attack operations by over 65% on OEM systems. POLYSCOPE thus helps Android OEMs evaluate complex access control policies to pinpoint the attack operations that require further examination.
[ { "version": "v1", "created": "Mon, 27 Feb 2023 04:03:23 GMT" }, { "version": "v2", "created": "Tue, 28 Feb 2023 02:10:29 GMT" } ]
2023-03-01T00:00:00
[ [ "Lee", "Yu-Tsung", "" ], [ "Chen", "Haining", "" ], [ "Enck", "William", "" ], [ "Vijayakumar", "Hayawardh", "" ], [ "Li", "Ninghui", "" ], [ "Qian", "Zhiyun", "" ], [ "Petracca", "Giuseppe", "" ], [ "Jaeger", "Trent", "" ] ]
new_dataset
0.998095
2302.14123
Kate Donahue
Kate Donahue and Jon Kleinberg
Private Blotto: Viewpoint Competition with Polarized Agents
null
null
null
null
cs.GT cs.CY cs.SI
http://creativecommons.org/licenses/by/4.0/
Colonel Blotto games are one of the oldest settings in game theory, originally proposed over a century ago in Borel 1921. However, they were originally designed to model two centrally-controlled armies competing over zero-sum "fronts", a specific scenario with limited modern-day application. In this work, we propose and study Private Blotto games, a variant connected to crowdsourcing and social media. One key difference in Private Blotto is that individual agents act independently, without being coordinated by a central "Colonel". This model naturally arises from scenarios such as activist groups competing over multiple issues, partisan fund-raisers competing over elections in multiple states, or politically-biased social media users labeling news articles as misinformation. In this work, we completely characterize the Nash Stability of the Private Blotto game. Specifically, we show that the outcome function has a critical impact on the outcome of the game: we study whether a front is won by majority rule (median outcome) or a smoother outcome taking into account all agents (mean outcome). We study how this impacts the amount of "misallocated effort", or agents whose choices do not influence the final outcome. In general, mean outcome ensures that, if a stable arrangement exists, agents are close to evenly spaced across fronts, minimizing misallocated effort. However, mean outcome functions also have chaotic patterns as to when stable arrangements do and do not exist. For median outcome, we exactly characterize when a stable arrangement exists, but show that this outcome function frequently results in extremely unbalanced allocation of agents across fronts.
[ { "version": "v1", "created": "Mon, 27 Feb 2023 20:12:13 GMT" } ]
2023-03-01T00:00:00
[ [ "Donahue", "Kate", "" ], [ "Kleinberg", "Jon", "" ] ]
new_dataset
0.998736
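As a toy reading of the mean-versus-median contrast above, the sketch below rests on illustrative assumptions only (binary stances and a crude "does removing this agent change any front's outcome" test; the paper's formal definitions and stability analysis are richer). It counts agents whose individual choice has no effect under each outcome function.

import numpy as np

def outcomes(alloc, stances, n_fronts, rule):
    outs = []
    for f in range(n_fronts):
        s = stances[alloc == f]
        if s.size == 0:
            outs.append(0.0)
        else:
            outs.append(float(np.mean(s) if rule == "mean" else np.median(s)))
    return outs

def misallocated(alloc, stances, n_fronts, rule):
    base = outcomes(alloc, stances, n_fronts, rule)
    count = 0
    for i in range(len(alloc)):
        keep = np.ones(len(alloc), bool); keep[i] = False
        if outcomes(alloc[keep], stances[keep], n_fronts, rule) == base:
            count += 1   # removing agent i changes nothing: misallocated
    return count

alloc = np.array([0, 0, 0, 1, 1, 1])        # six agents over two fronts
stances = np.array([1, 1, -1, 1, -1, -1])   # polarized viewpoints
for rule in ("mean", "median"):
    print(rule, "misallocated:", misallocated(alloc, stances, 2, rule))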
2302.14125
Manuel Wettstein
Bernd G\"artner, Manuel Wettstein
A Note on the Faces of the Dual Koch Arrangement
null
null
null
null
cs.CG math.CO
http://creativecommons.org/licenses/by/4.0/
We analyze the faces of the dual Koch arrangement, which is the arrangement of $2^s + 1$ lines obtained by projective duality from the Koch chain $K_s$. In particular, we show that this line arrangement does not contain any $k$-gons for $k > 5$, and that the number of pentagons is $3 \cdot 2^{s-1} - 3$.
[ { "version": "v1", "created": "Mon, 27 Feb 2023 20:16:42 GMT" } ]
2023-03-01T00:00:00
[ [ "Gärtner", "Bernd", "" ], [ "Wettstein", "Manuel", "" ] ]
new_dataset
0.978079
2302.14161
Yiyuan Lee
Yiyuan Lee, Wil Thomason, Zachary Kingston, Lydia E. Kavraki
Object Reconfiguration with Simulation-Derived Feasible Actions
Appears in IEEE International Conference on Robotics and Automation (ICRA) 2023
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
3D object reconfiguration encompasses common robot manipulation tasks in which a set of objects must be moved through a series of physically feasible state changes into a desired final configuration. Object reconfiguration is challenging to solve in general, as it requires efficient reasoning about environment physics that determine action validity. This information is typically manually encoded in an explicit transition system. Constructing these explicit encodings is tedious and error-prone, and is often a bottleneck for planner use. In this work, we explore embedding a physics simulator within a motion planner to implicitly discover and specify the valid actions from any state, removing the need for manual specification of action semantics. Our experiments demonstrate that the resulting simulation-based planner can effectively produce physically valid rearrangement trajectories for a range of 3D object reconfiguration problems without requiring more than an environment description and start and goal arrangements.
[ { "version": "v1", "created": "Mon, 27 Feb 2023 21:48:31 GMT" } ]
2023-03-01T00:00:00
[ [ "Lee", "Yiyuan", "" ], [ "Thomason", "Wil", "" ], [ "Kingston", "Zachary", "" ], [ "Kavraki", "Lydia E.", "" ] ]
new_dataset
0.994358
2302.14163
Prashant Pandey
Prashant Pandey, Mustafa Chasmai, Monish Natarajan, Brejesh Lall
A Language-Guided Benchmark for Weakly Supervised Open Vocabulary Semantic Segmentation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Increasing attention is being devoted to data-efficient problem settings like Open Vocabulary Semantic Segmentation (OVSS) which deals with segmenting an arbitrary object that may or may not be seen during training. The closest standard problems related to OVSS are Zero-Shot and Few-Shot Segmentation (ZSS, FSS) and their Cross-dataset variants where zero to few annotations are needed to segment novel classes. The existing FSS and ZSS methods utilize fully supervised pixel-labelled seen classes to segment unseen classes. Pixel-level labels are hard to obtain, and using weak supervision in the form of inexpensive image-level labels is often more practical. To this end, we propose a novel unified weakly supervised OVSS pipeline that can perform ZSS, FSS and Cross-dataset segmentation on novel classes without using pixel-level labels for either the base (seen) or the novel (unseen) classes in an inductive setting. We propose Weakly-Supervised Language-Guided Segmentation Network (WLSegNet), a novel language-guided segmentation pipeline that i) learns generalizable context vectors with batch aggregates (mean) to map class prompts to image features using frozen CLIP (a vision-language model) and ii) decouples weak ZSS/FSS into weak semantic segmentation and Zero-Shot segmentation. The learned context vectors avoid overfitting on seen classes during training and transfer better to novel classes during testing. WLSegNet avoids fine-tuning and the use of external datasets during training. The proposed pipeline beats existing methods for weak generalized Zero-Shot and weak Few-Shot semantic segmentation by 39 and 3 mIOU points respectively on PASCAL VOC and weak Few-Shot semantic segmentation by 5 mIOU points on MS COCO. On a harder setting of 2-way 1-shot weak FSS, WLSegNet beats the baselines by 13 and 22 mIOU points on PASCAL VOC and MS COCO, respectively.
[ { "version": "v1", "created": "Mon, 27 Feb 2023 21:55:48 GMT" } ]
2023-03-01T00:00:00
[ [ "Pandey", "Prashant", "" ], [ "Chasmai", "Mustafa", "" ], [ "Natarajan", "Monish", "" ], [ "Lall", "Brejesh", "" ] ]
new_dataset
0.968443
2302.14201
Alagappan Ramanathan
Alagappan Ramanathan, Sangeetha Abdu Jyothi
Nautilus: A Framework for Cross-Layer Cartography of Submarine Cables and IP Links
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Submarine cables constitute the backbone of the Internet. However, these critical infrastructure components are vulnerable to several natural and man-made threats, and during failures, are difficult to repair in their remote oceanic environments. In spite of their crucial role, we have a limited understanding of the impact of submarine cable failures on global connectivity, particularly on the higher layers of the Internet. In this paper, we present Nautilus, a framework for cross-layer cartography of submarine cables and IP links. Using a corpus of public datasets and Internet cartographic techniques, Nautilus identifies IP links that are likely traversing submarine cables and maps them to one or more potential cables. Nautilus also gives each IP to cable assignment a prediction score that reflects the confidence in the mapping. Nautilus generates a mapping for 3.05 million and 1.42 million IPv4 and IPv6 links respectively, covering 91% of all active cables. In the absence of ground truth data, we validate Nautilus mapping using three techniques: analyzing past cable failures, using targeted traceroute measurements, and comparing with public network maps of two operators.
[ { "version": "v1", "created": "Mon, 27 Feb 2023 23:35:55 GMT" } ]
2023-03-01T00:00:00
[ [ "Ramanathan", "Alagappan", "" ], [ "Jyothi", "Sangeetha Abdu", "" ] ]
new_dataset
0.997517
2302.14249
Boren Jiang
Boren Jiang, Ximeng Tao, Yuanfeng Han, Wanze Li, Gregory S.Chirikjian
Model-Free and Learning-Free Proprioceptive Humanoid Movement Control
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel model-free method for humanoid-robot quasi-static movement control. Traditional model-based methods often require precise robot model parameters. Additionally, existing learning-based frameworks often train the policy in simulation environments, thereby indirectly relying on a model. In contrast, we propose a proprioceptive framework based only on sensory outputs. It does not require prior knowledge of a robot's kinematic model or inertial parameters. Our method consists of three steps: 1. Planning different pairs of center of pressure (CoP) and foot position objectives within a single cycle. 2. Searching around the current configuration by slightly moving the robot's leg joints back and forth while recording the sensor measurements of its CoP and foot positions. 3. Updating the robot motion with an optimization algorithm until all objectives are achieved. We demonstrate our approach on a NAO humanoid robot platform. Experiment results show that it can successfully generate stable robot motions.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 02:20:55 GMT" } ]
2023-03-01T00:00:00
[ [ "Jiang", "Boren", "" ], [ "Tao", "Ximeng", "" ], [ "Han", "Yuanfeng", "" ], [ "Li", "Wanze", "" ], [ "Chirikjian", "Gregory S.", "" ] ]
new_dataset
0.995148
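The three-step probe-and-optimize loop described above can be sketched in a few lines. Below is a minimal, hedged stand-in: the sense_cop function is a made-up linear "sensor" used only so the loop runs, not a robot model, and the paper's objectives and optimizer differ. It shows how back-and-forth joint probes yield a sensitivity matrix, estimated from sensor readings alone, that is then used to step the sensed CoP toward a target.

import numpy as np

def sense_cop(q):              # stand-in "sensor"; unknown to the algorithm
    return np.array([0.4 * q[0] + 0.1 * q[1], 0.2 * q[1] - 0.1 * q[2]])

def step_toward(q, target, eps=1e-3, gain=0.5):
    y0 = sense_cop(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):    # probe each joint back and forth
        dq = np.zeros(len(q)); dq[i] = eps
        J[:, i] = (sense_cop(q + dq) - sense_cop(q - dq)) / (2 * eps)
    return q + gain * np.linalg.pinv(J) @ (target - y0)

q = np.zeros(3)
target = np.array([0.05, -0.02])   # desired CoP reading
for _ in range(20):
    q = step_toward(q, target)
print("final CoP:", sense_cop(q), "target:", target)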
2302.14251
Hyomin Kim
Hyomin Kim, Hyeonseo Nam, Jungeon Kim, Jaesik Park, and Seungyong Lee
LaplacianFusion: Detailed 3D Clothed-Human Body Reconstruction
null
ACM Transactions on Graphics (TOG) 41.6 (2022): 1-14
10.1145/3550454.3555511
null
cs.GR cs.CG
http://creativecommons.org/licenses/by/4.0/
We propose LaplacianFusion, a novel approach that reconstructs detailed and controllable 3D clothed-human body shapes from an input depth or 3D point cloud sequence. The key idea of our approach is to use Laplacian coordinates, well-known differential coordinates that have been used for mesh editing, for representing the local structures contained in the input scans, instead of implicit 3D functions or vertex displacements used previously. Our approach reconstructs a controllable base mesh using SMPL, and learns a surface function that predicts Laplacian coordinates representing surface details on the base mesh. For a given pose, we first build and subdivide a base mesh, which is a deformed SMPL template, and then estimate Laplacian coordinates for the mesh vertices using the surface function. The final reconstruction for the pose is obtained by integrating the estimated Laplacian coordinates as a whole. Experimental results show that our approach based on Laplacian coordinates successfully reconstructs more visually pleasing shape details than previous methods. The approach also enables various surface detail manipulations, such as detail transfer and enhancement.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 02:22:24 GMT" } ]
2023-03-01T00:00:00
[ [ "Kim", "Hyomin", "" ], [ "Nam", "Hyeonseo", "" ], [ "Kim", "Jungeon", "" ], [ "Park", "Jaesik", "" ], [ "Lee", "Seungyong", "" ] ]
new_dataset
0.994951
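For readers unfamiliar with Laplacian coordinates, here is a minimal sketch (a uniform combinatorial Laplacian on a toy 2D graph; the paper operates on subdivided SMPL meshes with coordinates predicted by a learned surface function) of encoding vertex positions as Laplacian coordinates and integrating them back by solving an anchored linear system.

import numpy as np

# A tiny "mesh": 5 vertices in 2D, edges as an adjacency list.
V = np.array([[0., 0.], [1., 0.2], [2., 0.], [1., 1.], [2., 1.]])
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]

n = len(V)
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A   # combinatorial (uniform-weight) Laplacian

delta = L @ V                    # Laplacian coordinates: local detail vectors

# Reconstruct: solve L X = delta with vertices 0 and 4 anchored, stacked
# as extra rows of a least-squares system.
anchors = [0, 4]
C = np.zeros((len(anchors), n))
for r, i in enumerate(anchors):
    C[r, i] = 1.0
A_sys = np.vstack([L, C])
b_sys = np.vstack([delta, V[anchors]])
X, *_ = np.linalg.lstsq(A_sys, b_sys, rcond=None)
print("max reconstruction error:", np.abs(X - V).max())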
2302.14261
Xueming Yan
Xueming Yan, Zhihang Fang, Yaochu Jin
Augmented Transformers with Adaptive n-grams Embedding for Multilingual Scene Text Recognition
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While vision transformers have been highly successful in improving the performance in image-based tasks, not much work has been reported on applying transformers to multilingual scene text recognition due to the complexities in the visual appearance of multilingual texts. To fill the gap, this paper proposes an augmented transformer architecture with n-grams embedding and cross-language rectification (TANGER). TANGER consists of a primary transformer with single patch embeddings of visual images, and a supplementary transformer with adaptive n-grams embeddings that aims to flexibly explore the potential correlations between neighbouring visual patches, which is essential for feature extraction from multilingual scene texts. Cross-language rectification is achieved with a loss function that takes into account both language identification and contextual coherence scoring. Extensive comparative studies are conducted on four widely used benchmark datasets as well as a new multilingual scene text dataset containing Indonesian, English, and Chinese collected from tourism scenes in Indonesia. Our experimental results demonstrate that TANGER is considerably better compared to the state-of-the-art, especially in handling complex multilingual scene texts.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 02:37:30 GMT" } ]
2023-03-01T00:00:00
[ [ "Yan", "Xueming", "" ], [ "Fang", "Zhihang", "" ], [ "Jin", "Yaochu", "" ] ]
new_dataset
0.985596
2302.14286
Jianing Wang
Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao
HugNLP: A Unified and Comprehensive Library for Natural Language Processing
8 Pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios. HugNLP consists of a hierarchical structure including models, processors and applications that unifies the learning process of pre-trained language models (PLMs) on different NLP tasks. Additionally, we present some featured NLP applications to show the effectiveness of HugNLP, such as knowledge-enhanced PLMs, universal information extraction, low-resource mining, and code understanding and generation. The source code will be released on GitHub (https://github.com/wjn1996/HugNLP).
[ { "version": "v1", "created": "Tue, 28 Feb 2023 03:38:26 GMT" } ]
2023-03-01T00:00:00
[ [ "Wang", "Jianing", "" ], [ "Chen", "Nuo", "" ], [ "Sun", "Qiushi", "" ], [ "Huang", "Wenkang", "" ], [ "Wang", "Chengyu", "" ], [ "Gao", "Ming", "" ] ]
new_dataset
0.997625
2302.14298
Zikang Yuan
Zikang Yuan, Fengtian Lang, Tianle Xu, Xin Yang
LIW-OAM: Lidar-Inertial-Wheel Odometry and Mapping
8 pages, 3 figures, submitted to IROS 2023. arXiv admin note: substantial text overlap with arXiv:2210.10424
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LiDAR-inertial odometry and mapping (LI-OAM), which fuses the complementary information of a LiDAR and an Inertial Measurement Unit (IMU), is an attractive solution for pose estimation and mapping. In LI-OAM, both pose and velocity are regarded as state variables that need to be solved. However, the widely-used Iterative Closest Point (ICP) algorithm can only provide constraints on the pose, while the velocity can only be constrained by IMU pre-integration. As a result, the velocity estimates tend to be updated merely in accordance with the pose results. In this paper, we propose LIW-OAM, an accurate and robust LiDAR-inertial-wheel odometry and mapping system, which fuses the measurements from LiDAR, IMU and wheel encoder in a bundle adjustment (BA) based optimization framework. The involvement of a wheel encoder provides velocity measurements as an important observation, which assists LI-OAM in making more accurate state predictions. In addition, constraining the velocity variable by the wheel-encoder observation during optimization can further improve the accuracy of state estimation. Experiment results on two public datasets demonstrate that our system outperforms all state-of-the-art LI-OAM systems in terms of smaller absolute trajectory error (ATE), and that embedding a wheel encoder can greatly improve the performance of BA-based LI-OAM.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 04:16:21 GMT" } ]
2023-03-01T00:00:00
[ [ "Yuan", "Zikang", "" ], [ "Lang", "Fengtian", "" ], [ "Xu", "Tianle", "" ], [ "Yang", "Xin", "" ] ]
new_dataset
0.999684
2302.14306
Srikanth Malla
Srikanth Malla, Yi-Ting Chen
CLR-GAM: Contrastive Point Cloud Learning with Guided Augmentation and Feature Mapping
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Point cloud data plays an essential role in robotics and self-driving applications. Yet, annotating point cloud data is time-consuming and nontrivial, even though such annotations enable learning discriminative 3D representations that empower downstream tasks, such as classification and segmentation. Recently, contrastive learning-based frameworks have shown promising results for learning 3D representations in a self-supervised manner. However, existing contrastive learning methods cannot precisely encode and associate structural features, nor search the high-dimensional augmentation space efficiently. In this paper, we present CLR-GAM, a novel contrastive learning-based framework with Guided Augmentation (GA) for an efficient dynamic exploration strategy and Guided Feature Mapping (GFM) for associating similar structural features between augmented point clouds. We empirically demonstrate that the proposed approach achieves state-of-the-art performance on both simulated and real-world 3D point cloud datasets for three different downstream tasks, i.e., 3D point cloud classification, few-shot learning, and object part segmentation.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 04:38:52 GMT" } ]
2023-03-01T00:00:00
[ [ "Malla", "Srikanth", "" ], [ "Chen", "Yi-Ting", "" ] ]
new_dataset
0.988716
2302.14331
Min-Ha Oh
Min-Ha Oh, Young-Hwan Kim, Seung-Min Lee, Gyeong-Seok Hwang, Kyung-Sub Kim, Jae-Young Bae, Ju-Young Kim, Ju-Yong Lee, Yu-Chan Kim, Sang Yup Kim, Seung-Kyun Kang
Lifetime-configurable soft robots via photodegradable silicone elastomer composites
58 pages, 6 figures, 2 Supplementary Text, 15 Supplementary figures, 1 movie
null
null
null
cs.RO cond-mat.mtrl-sci cond-mat.soft
http://creativecommons.org/licenses/by-nc-nd/4.0/
Developing soft robots that can control their own life-cycle and degrade on-demand while maintaining hyper-elasticity is a significant research challenge. On-demand degradable soft robots, which conserve their original functionality during operation and rapidly degrade under specific external stimulation, present the opportunity to self-direct the disappearance of temporary robots. This study proposes soft robots and materials that exhibit excellent mechanical stretchability and can degrade under ultraviolet (UV) light by mixing a fluoride-generating diphenyliodonium hexafluorophosphate (DPI-HFP) with a silicone resin. Spectroscopic analysis revealed the mechanism of Si-O-Si backbone cleavage using fluoride ions (F-), which were generated from UV-exposed DPI-HFP. Furthermore, photo-differential scanning calorimetry (DSC) based thermal analysis indicated increased decomposition kinetics at increased temperatures. Additionally, we demonstrated a robotics application of this composite by fabricating a gaiting robot. The integration of soft electronics, including strain sensors, temperature sensors, and photodetectors, expanded the robotic functionalities. This study provides a simple yet novel strategy for designing life-cycle-mimicking soft robots that can be applied to reduce soft-robotics waste, explore hazardous areas where retrieval of robots is impossible, and ensure hardware security with on-demand destructive material platforms.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 05:54:41 GMT" } ]
2023-03-01T00:00:00
[ [ "Oh", "Min-Ha", "" ], [ "Kim", "Young-Hwan", "" ], [ "Lee", "Seung-Min", "" ], [ "Hwang", "Gyeong-Seok", "" ], [ "Kim", "Kyung-Sub", "" ], [ "Bae", "Jae-Young", "" ], [ "Kim", "Ju-Young", "" ], [ "Lee", "Ju-Yong", "" ], [ "Kim", "Yu-Chan", "" ], [ "Kim", "Sang Yup", "" ], [ "Kang", "Seung-Kyun", "" ] ]
new_dataset
0.993558
2302.14334
Yuyang Chen
Yuyang Chen, Dingkang Wang, Lenworth Thomas, Karthik Dantu, Sanjeev J. Koppal
Design of an Adaptive Lightweight LiDAR to Decouple Robot-Camera Geometry
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
A fundamental challenge in robot perception is the coupling of the sensor pose and robot pose. This has led to research in active vision, where the robot pose is changed to reorient the sensor to areas of interest for perception. Further, egomotion such as jitter, and external effects such as wind and others, affect perception, requiring additional effort in software such as image stabilization. This effect is particularly pronounced in micro-air vehicles and micro-robots, which are typically lighter and subject to larger jitter but do not have the computational capability to perform stabilization in real-time. We present a novel microelectromechanical (MEMS) mirror LiDAR system to change the field of view of the LiDAR independent of the robot motion. Our design has the potential for use on small, low-power systems where the expensive components of the LiDAR can be placed external to the small robot. We show the utility of our approach in simulation and on prototype hardware mounted on a UAV. We believe that this LiDAR and its compact movable scanning design provide mechanisms to decouple robot and sensor geometry, allowing us to simplify robot perception. We also demonstrate examples of motion compensation using IMU and external odometry feedback in hardware.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 06:03:42 GMT" } ]
2023-03-01T00:00:00
[ [ "Chen", "Yuyang", "" ], [ "Wang", "Dingkang", "" ], [ "Thomas", "Lenworth", "" ], [ "Dantu", "Karthik", "" ], [ "Koppal", "Sanjeev J.", "" ] ]
new_dataset
0.998077
2302.14418
Ji Hou
Yu Zhang, Junle Yu, Xiaolin Huang, Wenhui Zhou, Ji Hou
PCR-CG: Point Cloud Registration via Deep Color and Geometry
accepted to ECCV2022; code at https://github.com/Gardlin/PCR-CG
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce PCR-CG: a novel 3D point cloud registration module explicitly embedding the color signals into the geometry representation. Different from previous methods that only use geometry representation, our module is specifically designed to effectively correlate color into geometry for the point cloud registration task. Our key contribution is a 2D-3D cross-modality learning algorithm that embeds the deep features learned from color signals into the geometry representation. With our designed 2D-3D projection module, the pixel features in a square region centered at correspondences perceived from images are effectively correlated with point clouds. In this way, the overlapped regions can be inferred not only from the point cloud but also from the texture appearances. Adding color is non-trivial. We compare against a variety of baselines designed for adding color to 3D, such as exhaustively adding per-pixel features or RGB values in an implicit manner. We leverage Predator [25] as the baseline method and incorporate our proposed module into it. To validate the effectiveness of 2D features, we ablate different 2D pre-trained networks and show a positive correlation between the pre-trained weights and the task performance. Our experimental results indicate a significant improvement of 6.5% registration recall over the baseline method on the 3DLoMatch benchmark. We additionally evaluate our approach on SOTA methods and observe consistent improvements, such as an improvement of 2.4% registration recall over GeoTransformer as well as 3.5% over CoFiNet. Our study reveals a significant advantage of correlating explicit deep color features to the point cloud in the registration task.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 08:50:17 GMT" } ]
2023-03-01T00:00:00
[ [ "Zhang", "Yu", "" ], [ "Yu", "Junle", "" ], [ "Huang", "Xiaolin", "" ], [ "Zhou", "Wenhui", "" ], [ "Hou", "Ji", "" ] ]
new_dataset
0.998657
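As a rough illustration of a 2D-3D projection module of this kind, the snippet below is a hedged sketch under simple pinhole-camera assumptions, not the authors' implementation: a nearest-pixel lookup stands in for their square-region feature correlation, and all names and shapes are illustrative. It attaches image features to 3D points by projecting them into the feature map.

import numpy as np

def lift_image_features(points, feat_map, K, R, t):
    # points: (N,3) world coords; feat_map: (H,W,C); K,R,t: pinhole camera.
    cam = points @ R.T + t                 # world -> camera frame
    uvw = cam @ K.T                        # camera -> homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3]
    H, W, C = feat_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    valid = uvw[:, 2] > 0                  # only points in front of camera
    feats = np.zeros((len(points), C))
    feats[valid] = feat_map[v[valid], u[valid]]
    return feats

K = np.array([[500., 0., 64.], [0., 500., 64.], [0., 0., 1.]])
R, t = np.eye(3), np.zeros(3)
pts = np.random.rand(100, 3) + np.array([0., 0., 2.])   # in front of camera
fmap = np.random.rand(128, 128, 16)                     # stand-in 2D features
print(lift_image_features(pts, fmap, K, R, t).shape)    # (100, 16)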
2302.14475
Zhiwu Huang
Yabin Wang, Zhiwu Huang, Xiaopeng Hong
Benchmarking Deepart Detection
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deepfake technologies have been blurring the boundaries between the real and the unreal, likely resulting in malicious events. By leveraging newly emerged deepfake technologies, researchers have begun creating deepfake artworks (deeparts), which further close the gap between reality and fantasy. To address the ethical questions that may arise, this paper establishes a deepart detection database (DDDB) that consists of a set of high-quality conventional art images (conarts) and five sets of deepart images generated by five state-of-the-art deepfake models. This database enables us to explore once-for-all deepart detection and continual deepart detection. For these two new problems, we suggest four benchmark evaluations and four families of solutions on the constructed DDDB. The comprehensive study demonstrates the effectiveness of the proposed solutions on the established benchmark dataset, paving the way toward more interesting directions in deepart detection. The constructed benchmark dataset and the source code will be made publicly available.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 10:34:44 GMT" } ]
2023-03-01T00:00:00
[ [ "Wang", "Yabin", "" ], [ "Huang", "Zhiwu", "" ], [ "Hong", "Xiaopeng", "" ] ]
new_dataset
0.960918
2302.14486
Gianluca D'Amico Dr
Gianluca D'Amico, Mauro Marinoni, Federico Nesti, Giulio Rossolini, Giorgio Buttazzo, Salvatore Sabina, Gianluigi Lauro
TrainSim: A Railway Simulation Framework for LiDAR and Camera Dataset Generation
Under review
null
null
null
cs.CV cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
The railway industry is searching for new ways to automate a number of complex train functions, such as object detection, track discrimination, and accurate train positioning, which require the artificial perception of the railway environment through different types of sensors, including cameras, LiDARs, wheel encoders, and inertial measurement units. A promising approach for processing such sensory data is the use of deep learning models, which proved to achieve excellent performance in other application domains, as robotics and self-driving cars. However, testing new algorithms and solutions requires the availability of a large amount of labeled data, acquired in different scenarios and operating conditions, which are difficult to obtain in a real railway setting due to strict regulations and practical constraints in accessing the trackside infrastructure and equipping a train with the required sensors. To address such difficulties, this paper presents a visual simulation framework able to generate realistic railway scenarios in a virtual environment and automatically produce inertial data and labeled datasets from emulated LiDARs and cameras useful for training deep neural networks or testing innovative algorithms. A set of experimental results are reported to show the effectiveness of the proposed approach.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 11:00:13 GMT" } ]
2023-03-01T00:00:00
[ [ "D'Amico", "Gianluca", "" ], [ "Marinoni", "Mauro", "" ], [ "Nesti", "Federico", "" ], [ "Rossolini", "Giulio", "" ], [ "Buttazzo", "Giorgio", "" ], [ "Sabina", "Salvatore", "" ], [ "Lauro", "Gianluigi", "" ] ]
new_dataset
0.999837
2302.14494
Elmurod Kuriyozov
Elmurod Kuriyozov, Ulugbek Salaev, Sanatbek Matlatipov, Gayrat Matlatipov
Text classification dataset and analysis for Uzbek language
Preprint of the paper accepted to The 10th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics. April 21-23, 2023, Poznan, Poland
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Text classification is an important task in Natural Language Processing (NLP), where the goal is to categorize text data into predefined classes. In this study, we analyse the dataset creation steps and evaluation techniques for a multi-label news categorisation task as part of text classification. We first present a newly obtained dataset for Uzbek text classification, which was collected from 10 different news and press websites and covers 15 categories of news, press and law texts. We also present a comprehensive evaluation of different models, ranging from traditional bag-of-words models to deep learning architectures, on this newly created dataset. Our experiments show that the Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) based models outperform the rule-based models. The best performance is achieved by the BERTbek model, which is a transformer-based BERT model trained on the Uzbek corpus. Our findings provide a good baseline for further research in Uzbek text classification.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 11:21:24 GMT" } ]
2023-03-01T00:00:00
[ [ "Kuriyozov", "Elmurod", "" ], [ "Salaev", "Ulugbek", "" ], [ "Matlatipov", "Sanatbek", "" ], [ "Matlatipov", "Gayrat", "" ] ]
new_dataset
0.999601
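As a flavor of the bag-of-words baselines mentioned above, here is a minimal, hedged sketch: the tiny Uzbek snippets are made up for illustration and are not from the released dataset, and the paper's best model (BERTbek, a transformer) is not reproduced here. It wires a TF-IDF vectorizer to a logistic-regression classifier.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up two-class toy corpus; real use would load the paper's dataset.
texts = ["yangi qonun qabul qilindi", "futbol jamoasi g'alaba qozondi",
         "sud qarori e'lon qilindi", "chempionat boshlandi"]
labels = ["law", "sport", "law", "sport"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["jamoasi chempionatda g'alaba qozondi"]))  # -> ['sport']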
2302.14522
Benjamin Sick
Benjamin Sick, Michael Walter, Jochen Abhau
AdaptiveShape: Solving Shape Variability for 3D Object Detection with Geometry Aware Anchor Distributions
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D object detection with point clouds and images plays an important role in perception tasks such as autonomous driving. Current methods show great performance on detection and pose estimation of standard-shaped vehicles but lag behind on more complex shapes, e.g., semi-trailer truck combinations. Determining the shape and motion of those special vehicles accurately is crucial in yard operation and maneuvering, as well as in industrial automation applications. This work introduces several new methods to improve and measure the performance for such classes. State-of-the-art methods are based on predefined anchor grids or heatmaps for ground truth targets. However, the underlying representations do not take the shape of different sized objects into account. Our main contribution, AdaptiveShape, uses shape-aware anchor distributions and heatmaps to improve the detection capabilities. For large vehicles we achieve +10.9% AP in comparison to current shape-agnostic methods. Furthermore we introduce a new fast LiDAR-camera fusion. It is based on 2D bounding box camera detections which are available in many processing pipelines. This fusion method does not rely on perfectly calibrated or temporally synchronized systems and is therefore applicable to a broad range of robotic applications. We extend a standard point pillar network to account for temporal data and improve learning of complex object movements. In addition, we extend the ground truth augmentation to use grouped object pairs, which further improves truck AP by +2.2% compared to conventional augmentation.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 12:31:31 GMT" } ]
2023-03-01T00:00:00
[ [ "Sick", "Benjamin", "" ], [ "Walter", "Michael", "" ], [ "Abhau", "Jochen", "" ] ]
new_dataset
0.956508
2302.14534
Christopher Akiki
Christopher Akiki, Odunayo Ogundepo, Aleksandra Piktus, Xinyu Zhang, Akintunde Oladipo, Jimmy Lin, Martin Potthast
Spacerini: Plug-and-play Search Engines with Pyserini and Hugging Face
null
null
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
We present Spacerini, a modular framework for the seamless building and deployment of interactive search applications, designed to facilitate the qualitative analysis of large-scale research datasets. Spacerini integrates features from both the Pyserini toolkit and the Hugging Face ecosystem to ease indexing text collections and deploying them as search engines for ad-hoc exploration, and to make the retrieval of relevant data points quick and efficient. The user-friendly interface enables searching through massive datasets in a no-code fashion, making Spacerini broadly accessible to anyone looking to qualitatively audit their text collections. This is useful both to IR researchers aiming to demonstrate the capabilities of their indexes in a simple and interactive way, and to NLP researchers looking to better understand and audit the failure modes of large language models. The framework is open source and available on GitHub: https://github.com/castorini/hf-spacerini, and includes utilities to load, pre-process, index, and deploy local and web search applications. A portfolio of applications created with Spacerini for a multitude of use cases can be found by visiting https://hf.co/spacerini.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 12:44:10 GMT" } ]
2023-03-01T00:00:00
[ [ "Akiki", "Christopher", "" ], [ "Ogundepo", "Odunayo", "" ], [ "Piktus", "Aleksandra", "" ], [ "Zhang", "Xinyu", "" ], [ "Oladipo", "Akintunde", "" ], [ "Lin", "Jimmy", "" ], [ "Potthast", "Martin", "" ] ]
new_dataset
0.994983
2302.14543
Himanshu .
Himanshu, Jinraj V Pushpangathan and Harikumar Kandath
RRT and Velocity Obstacles-based motion planning for Unmanned Aircraft Systems Traffic Management (UTM)
Currently under review in The 2023 International Conference On Unmanned Aircraft Systems
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In this paper, an algorithm for Unmanned Aircraft Systems Traffic Management (UTM) for a finite number of unmanned aerial vehicles (UAVs) is proposed. This algorithm is developed by combining the Rapidly-Exploring Random Trees (RRT) and Velocity Obstacle (VO) algorithms and is referred to as the RRT-VO UTM algorithm. Here, the RRT algorithm works offline to generate obstacle-free waypoints in a given environment with known static obstacles. The VO algorithm, on the other hand, operates online to avoid collisions with other UAVs and the known static obstacles. The boundaries of the static obstacles are approximated by small circles to facilitate the formulation of the VO algorithm. The proposed algorithm's performance is evaluated using numerical simulation and then compared to the well-known artificial potential field (APF) algorithm for collision avoidance. The advantages of the proposed method are clearly shown in terms of shorter path lengths and better collision avoidance capabilities in a challenging scenario.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 13:08:11 GMT" } ]
2023-03-01T00:00:00
[ [ "Himanshu", "", "" ], [ "Pushpangathan", "Jinraj V", "" ], [ "Kandath", "Harikumar", "" ] ]
new_dataset
0.997389
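The velocity-obstacle test that the abstract combines with RRT has a standard geometric form. The sketch below uses the circle approximation of obstacle boundaries mentioned in the abstract but is otherwise an illustrative assumption, not the paper's implementation: a candidate velocity is flagged as unsafe when the relative velocity falls inside the collision cone subtended by the enlarged obstacle.

import numpy as np

def in_velocity_obstacle(p_rel, v_own, v_other, r_combined):
    # True if v_own leads toward collision with the circle-approximated
    # obstacle at relative position p_rel moving with velocity v_other.
    v_rel = v_own - v_other
    d = np.linalg.norm(p_rel)
    if d <= r_combined:                      # already in collision
        return True
    half_angle = np.arcsin(r_combined / d)   # collision-cone half-angle
    speed = np.linalg.norm(v_rel)
    if speed == 0.0:
        return False
    angle_to_center = np.arccos(
        np.clip(np.dot(v_rel, p_rel) / (speed * d), -1.0, 1.0))
    return angle_to_center <= half_angle

p_rel = np.array([10.0, 0.0])                # obstacle 10 m ahead
print(in_velocity_obstacle(p_rel, np.array([2.0, 0.0]),
                           np.zeros(2), r_combined=1.0))   # True: head-on
print(in_velocity_obstacle(p_rel, np.array([0.0, 2.0]),
                           np.zeros(2), r_combined=1.0))   # False: sidestep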
2302.14574
Markus Eisenbach
Markus Eisenbach, Jannik L\"ubberstedt, Dustin Aganian, Horst-Michael Gross
A Little Bit Attention Is All You Need for Person Re-Identification
IEEE International Conference on Robotics and Automation (ICRA) 2023
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Person re-identification plays a key role in applications where a mobile robot needs to track its users over a long period of time, even if they are partially unobserved for some time, in order to follow them or be available on demand. In this context, deep-learning based real-time feature extraction on a mobile robot is often performed on special-purpose devices whose computational resources are shared for multiple tasks. Therefore, the inference speed has to be taken into account. In contrast, person re-identification is often improved by architectural changes that come at the cost of significantly slowing down inference. Attention blocks are one such example. We will show that some well-performing attention blocks used in the state of the art are subject to inference costs that are far too high to justify their use for mobile robotic applications. As a consequence, we propose an attention block that only slightly affects the inference speed while keeping up with much deeper networks or more complex attention blocks in terms of re-identification accuracy. We perform extensive neural architecture search to derive rules at which locations this attention block should be integrated into the architecture in order to achieve the best trade-off between speed and accuracy. Finally, we confirm that the best performing configuration on a re-identification benchmark also performs well on an indoor robotic dataset.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 13:54:31 GMT" } ]
2023-03-01T00:00:00
[ [ "Eisenbach", "Markus", "" ], [ "Lübberstedt", "Jannik", "" ], [ "Aganian", "Dustin", "" ], [ "Gross", "Horst-Michael", "" ] ]
new_dataset
0.996827
2302.14577
Damien Querlioz
Kamel-Eddine Harabi, Clement Turck, Marie Drouhin, Adrien Renaudineau, Thomas Bersani--Veroni, Damien Querlioz, Tifenn Hirtzlin, Elisa Vianello, Marc Bocquet, Jean-Michel Portal
A Multimode Hybrid Memristor-CMOS Prototyping Platform Supporting Digital and Analog Projects
null
null
10.1145/3566097.3567944
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an integrated circuit fabricated in a process co-integrating CMOS and hafnium-oxide memristor technology, which provides a prototyping platform for projects involving memristors. Our circuit includes the periphery circuitry for using memristors within digital circuits, as well as an analog mode with direct access to memristors. The platform allows optimizing the conditions for reading and writing memristors, as well as developing and testing innovative memristor-based neuromorphic concepts.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 13:55:42 GMT" } ]
2023-03-01T00:00:00
[ [ "Harabi", "Kamel-Eddine", "" ], [ "Turck", "Clement", "" ], [ "Drouhin", "Marie", "" ], [ "Renaudineau", "Adrien", "" ], [ "Bersani--Veroni", "Thomas", "" ], [ "Querlioz", "Damien", "" ], [ "Hirtzlin", "Tifenn", "" ], [ "Vianello", "Elisa", "" ], [ "Bocquet", "Marc", "" ], [ "Portal", "Jean-Michel", "" ] ]
new_dataset
0.975991
2302.14601
Sagar Pathrudkar
Sagar Pathrudkar, Saadhana Venkataraman, Deepika Kanade, Aswin Ajayan, Palash Gupta, Shehzaman Khatib, Vijaya Sarathi Indla and Saikat Mukherjee
SAFR-AV: Safety Analysis of Autonomous Vehicles using Real World Data -- An end-to-end solution for real world data driven scenario-based testing for pre-certification of AV stacks
null
null
null
null
cs.SE cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
One of the major impediments to the deployment of Autonomous Driving Systems (ADS) is assuring their safety and reliability. The primary reason testing ADS is complex is that they operate in an open world characterized by its non-deterministic, high-dimensional, and non-stationary nature, where the actions of other actors in the environment are uncontrollable from the ADS's perspective. This leads to a state-space explosion problem, and one way of mitigating it is to concretize the scope for the system under test (SUT) by testing for a set of behavioral competencies that an ADS must demonstrate. A popular approach to testing ADS is scenario-based testing, where the ADS is presented with driving scenarios from real-world (and synthetically generated) data and expected to meet defined safety criteria while navigating through each scenario. We present SAFR-AV, an end-to-end ADS testing platform that enables scenario-based ADS testing. Our work addresses key real-world challenges: building an efficient large-scale data ingestion pipeline and search capability to identify scenarios of interest in real-world data, creating digital twins of real-world scenarios to enable Software-in-the-Loop (SIL) testing in ADS simulators, and identifying key scenario parameter distributions to enable optimization of scenario coverage. These modules, along with others in SAFR-AV, allow the platform to support ADS pre-certification.
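As a toy illustration of the scenario-parameter-distribution idea -- hypothetical parameters and distributions, not SAFR-AV's actual pipeline -- one could estimate an empirical distribution from logged scenarios and then sample test scenarios from it so that coverage tracks real-world frequency:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged scenario parameters: cut-in distance (m), relative speed (m/s).
# In practice these would come from the real-world data ingestion pipeline.
observed = rng.normal(loc=[25.0, 5.0], scale=[8.0, 2.0], size=(10_000, 2))

# Estimate the empirical parameter distribution with a 2D histogram ...
hist, d_edges, v_edges = np.histogram2d(observed[:, 0], observed[:, 1], bins=20)
probs = hist.ravel() / hist.sum()

# ... and draw test scenarios proportionally, so frequent real-world
# conditions are well covered while rarer bins still receive tests.
cells = rng.choice(probs.size, size=100, p=probs)
di, vi = np.unravel_index(cells, hist.shape)
samples = np.column_stack([
    rng.uniform(d_edges[di], d_edges[di + 1]),  # jitter within each bin
    rng.uniform(v_edges[vi], v_edges[vi + 1]),
])
print(samples[:3])  # three sampled (distance, speed) test scenarios
```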
[ { "version": "v1", "created": "Mon, 27 Feb 2023 11:56:41 GMT" } ]
2023-03-01T00:00:00
[ [ "Pathrudkar", "Sagar", "" ], [ "Venkataraman", "Saadhana", "" ], [ "Kanade", "Deepika", "" ], [ "Ajayan", "Aswin", "" ], [ "Gupta", "Palash", "" ], [ "Khatib", "Shehzaman", "" ], [ "Indla", "Vijaya Sarathi", "" ], [ "Mukherjee", "Saikat", "" ] ]
new_dataset
0.985017
2302.14624
Yooyoung Lee
Yooyoung Lee, Craig Greenberg, Eliot Godard, Asad A. Butt, Elliot Singer, Trang Nguyen, Lisa Mason, Douglas Reynolds
The 2022 NIST Language Recognition Evaluation
5 pages, 10 figures
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
In 2022, the U.S. National Institute of Standards and Technology (NIST) conducted the latest Language Recognition Evaluation (LRE) in an ongoing series administered by NIST since 1996 to foster research in language recognition and to measure state-of-the-art technology. Like previous LREs, LRE22 focused on conversational telephone speech (CTS) and broadcast narrowband speech (BNBS) data. LRE22 also introduced new evaluation features, such as an emphasis on African languages, including low-resource languages, and a test set consisting of segments containing between 3 s and 35 s of speech randomly sampled and extracted from longer recordings. A total of 21 research organizations, forming 16 teams, participated in this three-month evaluation and made a total of 65 valid system submissions. This paper presents an overview of LRE22 and an analysis of system performance over different evaluation conditions. The evaluation results suggest that Oromo and Tigrinya are easier to detect, while Xhosa and Zulu are more challenging. Greater confusability is seen for some language pairs. As speech duration increased, system performance improved significantly up to a certain duration, after which diminishing returns were observed.
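A minimal sketch of the 3-35 s test-segment extraction described above (the exact NIST protocol is not reproduced here; the sample rate and all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def extract_segment(speech, sample_rate=8000, min_dur=3.0, max_dur=35.0):
    """Randomly cut a 3-35 s segment out of a longer speech recording,
    mimicking the LRE22 test-segment construction. Sketch only."""
    duration = rng.uniform(min_dur, max_dur)
    seg_len = int(duration * sample_rate)
    if seg_len >= len(speech):  # recording shorter than the requested cut
        return speech
    start = rng.integers(0, len(speech) - seg_len)
    return speech[start:start + seg_len]

recording = rng.standard_normal(8000 * 120)  # a synthetic 2-minute "recording"
segment = extract_segment(recording)
print(len(segment) / 8000, "seconds")
```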
[ { "version": "v1", "created": "Tue, 28 Feb 2023 15:05:33 GMT" } ]
2023-03-01T00:00:00
[ [ "Lee", "Yooyoung", "" ], [ "Greenberg", "Craig", "" ], [ "Godard", "Eliot", "" ], [ "Butt", "Asad A.", "" ], [ "Singer", "Elliot", "" ], [ "Nguyen", "Trang", "" ], [ "Mason", "Lisa", "" ], [ "Reynolds", "Douglas", "" ] ]
new_dataset
0.970035
2302.14625
Chaitanya Kaul
Kevin Mitchell, Khaled Kassem, Chaitanya Kaul, Valentin Kapitany, Philip Binner, Andrew Ramsay, Roderick Murray-Smith, Daniele Faccio
mmSense: Detecting Concealed Weapons with a Miniature Radar Sensor
Accepted by ICASSP 2023
null
null
null
cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
For widespread adoption, public security and surveillance systems must be accurate, portable, compact, and real-time, without impeding the privacy of the individuals being observed. Current systems broadly fall into two categories -- image-based systems, which are accurate but lack privacy, and RF signal-based systems, which preserve privacy but lack portability, compactness, and accuracy. Our paper proposes mmSense, an end-to-end, portable, miniaturized, real-time system that can accurately detect the presence of concealed metallic objects on persons in a discreet, privacy-preserving modality. mmSense features millimeter-wave radar technology, provided by Google's Soli sensor, for its data acquisition, and TransDope, our real-time neural network capable of processing a single radar data frame in 19 ms. mmSense achieves high recognition rates on a diverse set of challenging scenes while running on standard laptop hardware, demonstrating a significant advancement towards portable, cost-effective, real-time radar-based surveillance systems.
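TransDope itself is not described in the abstract, so the sketch below uses a small stand-in model purely to illustrate measuring per-frame inference latency against a real-time budget such as the quoted 19 ms; the frame shape and all names are assumptions:

```python
import time
import torch
import torch.nn as nn

# Stand-in radar-frame classifier (not TransDope): the point is only to
# show how a per-frame latency budget would be measured on CPU.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
    nn.Linear(128, 2),  # concealed object: present / absent
).eval()

frame = torch.randn(1, 3, 32, 32)  # hypothetical radar frame shape

with torch.no_grad():
    model(frame)  # warm-up run, excluded from timing
    t0 = time.perf_counter()
    for _ in range(100):
        logits = model(frame)
    per_frame_ms = (time.perf_counter() - t0) / 100 * 1000

print(f"mean latency: {per_frame_ms:.2f} ms, prediction: {logits.argmax(dim=1).item()}")
```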
[ { "version": "v1", "created": "Tue, 28 Feb 2023 15:06:03 GMT" } ]
2023-03-01T00:00:00
[ [ "Mitchell", "Kevin", "" ], [ "Kassem", "Khaled", "" ], [ "Kaul", "Chaitanya", "" ], [ "Kapitany", "Valentin", "" ], [ "Binner", "Philip", "" ], [ "Ramsay", "Andrew", "" ], [ "Murray-Smith", "Roderick", "" ], [ "Faccio", "Daniele", "" ] ]
new_dataset
0.999027
2302.14736
Yunpeng Bai
Yunpeng Bai, Cairong Wang, Shuzhao Xie, Chao Dong, Chun Yuan, Zhi Wang
TextIR: A Simple Framework for Text-based Editable Image Restoration
9 pages, 8 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing image restoration methods use neural networks to learn strong image-level priors from huge amounts of data in order to estimate the lost information. However, these works still struggle in cases where images have severe information deficits. Introducing external priors or using reference images to provide information also has limitations in the application domain. In contrast, text input is more readily available and provides information with higher flexibility. In this work, we design an effective framework that allows the user to control the restoration process of degraded images with text descriptions. We use the text-image feature compatibility of CLIP to alleviate the difficulty of fusing text and image features. Our framework can be used for various image restoration tasks, including image inpainting, image super-resolution, and image colorization. Extensive experiments demonstrate the effectiveness of our method.
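The abstract does not specify the fusion mechanism, so the following is one plausible sketch of text-conditioned feature modulation, in which a text embedding (e.g., a 512-d CLIP ViT-B/32 text feature) predicts per-channel scale and shift for the restoration backbone's features. This is illustrative, not TextIR's actual design; all names and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class TextConditionedFusion(nn.Module):
    """Modulate image features with a text embedding via predicted
    per-channel scale and shift. An illustrative fusion scheme only."""
    def __init__(self, text_dim: int = 512, channels: int = 64):
        super().__init__()
        self.to_scale_shift = nn.Linear(text_dim, 2 * channels)

    def forward(self, img_feats: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_scale_shift(text_emb).chunk(2, dim=-1)
        scale = scale.unsqueeze(-1).unsqueeze(-1)  # broadcast over H, W
        shift = shift.unsqueeze(-1).unsqueeze(-1)
        return img_feats * (1 + scale) + shift

fusion = TextConditionedFusion()
img_feats = torch.randn(1, 64, 64, 64)  # restoration backbone features
text_emb = torch.randn(1, 512)          # stand-in for a CLIP text embedding
print(fusion(img_feats, text_emb).shape)  # torch.Size([1, 64, 64, 64])
```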
[ { "version": "v1", "created": "Tue, 28 Feb 2023 16:39:36 GMT" } ]
2023-03-01T00:00:00
[ [ "Bai", "Yunpeng", "" ], [ "Wang", "Cairong", "" ], [ "Xie", "Shuzhao", "" ], [ "Dong", "Chao", "" ], [ "Yuan", "Chun", "" ], [ "Wang", "Zhi", "" ] ]
new_dataset
0.991942
2302.14746
Ji Hou
Ji Hou, Xiaoliang Dai, Zijian He, Angela Dai, Matthias Nie{\ss}ner
Mask3D: Pre-training 2D Vision Transformers by Learning Masked 3D Priors
accepted to CVPR2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current popular backbones in computer vision, such as Vision Transformers (ViT) and ResNets, are trained to perceive the world from 2D images. To embed 3D structural priors into 2D backbones more effectively, we propose Mask3D, which leverages existing large-scale RGB-D data in self-supervised pre-training to embed these 3D priors into learned 2D feature representations. In contrast to traditional 3D contrastive learning paradigms requiring 3D reconstructions or multi-view correspondences, our approach is simple: we formulate a pretext reconstruction task by masking RGB and depth patches in individual RGB-D frames. We demonstrate that Mask3D is particularly effective at embedding 3D priors into the powerful 2D ViT backbone, enabling improved representation learning for various scene understanding tasks, such as semantic segmentation, instance segmentation, and object detection. Experiments show that Mask3D notably outperforms existing self-supervised 3D pre-training approaches on ScanNet, NYUv2, and Cityscapes image understanding tasks, with an improvement of +6.5% mIoU over the state-of-the-art Pri3D on ScanNet image semantic segmentation.
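A minimal sketch of the patch-masking step behind such a pretext task (the patch size and mask ratio here are assumptions, not the paper's settings):

```python
import torch

def mask_patches(x, patch=16, mask_ratio=0.75):
    """Zero out a random fraction of non-overlapping patches in an image
    tensor (B, C, H, W); return the masked tensor and the boolean patch
    mask. Sketch of an RGB/depth patch-masking pretext step."""
    b, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    scores = torch.rand(b, gh * gw)
    masked = scores < mask_ratio  # True -> this patch is hidden
    grid = masked.view(b, 1, gh, gw)
    pixel_mask = grid.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * (~pixel_mask), masked

rgb = torch.randn(2, 3, 224, 224)
depth = torch.randn(2, 1, 224, 224)
rgb_masked, m_rgb = mask_patches(rgb)
depth_masked, m_depth = mask_patches(depth)
print(rgb_masked.shape, m_rgb.float().mean().item())  # ~0.75 of patches masked
```

A reconstruction loss on the hidden patches (e.g., mean squared error against the original pixels) would then drive the backbone to infer geometry-aware structure from the visible RGB-D context.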
[ { "version": "v1", "created": "Tue, 28 Feb 2023 16:45:21 GMT" } ]
2023-03-01T00:00:00
[ [ "Hou", "Ji", "" ], [ "Dai", "Xiaoliang", "" ], [ "He", "Zijian", "" ], [ "Dai", "Angela", "" ], [ "Nießner", "Matthias", "" ] ]
new_dataset
0.978232