Dataset schema (per-record fields, with observed value ranges):

  id              string, 9-10 chars
  submitter       string, 2-52 chars
  authors         string, 4-6.51k chars
  title           string, 4-246 chars
  comments        string, 1-523 chars
  journal-ref     string, 4-345 chars
  doi             string, 11-120 chars
  report-no       string, 2-243 chars
  categories      string, 5-98 chars
  license         string, 9 distinct values
  abstract        string, 33-3.33k chars
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      string, 1 distinct value
  probability     float64, range 0.95-1
2207.11605
Seyed Ehsan Marjani Bajestani
Seyed Ehsan Marjani Bajestani, Giovanni Beltrame
Event-based RGB-D sensing with structured light
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event-based cameras (ECs) are bio-inspired sensors that asynchronously report brightness changes for each pixel. Thanks to their high dynamic range, pixel bandwidth, and temporal resolution, low power consumption, and computational simplicity, they are well suited to vision applications in challenging lighting conditions, and they can detect fast movements with their microsecond response time. The first generation of ECs is monochrome, but color data is very useful and sometimes essential for certain vision applications. The latest technology enables manufacturers to build color ECs, at the cost of a larger sensor and substantially lower resolution than monochrome models with the same bandwidth. In addition, ECs only detect changes in light and do not show static or slowly moving objects. We introduce a method to detect full RGB events using a monochrome EC aided by a structured-light projector. The projector emits rapidly changing RGB light patterns onto the scene, whose reflections are captured by the EC. By combining the benefits of ECs and projection-based techniques, we enable depth and color detection of static or moving objects with a commercial TI LightCrafter 4500 projector and a monocular monochrome EC, paving the way for frameless RGB-D sensing applications.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 21:10:01 GMT" } ]
2022-07-26T00:00:00
[ [ "Bajestani", "Seyed Ehsan Marjani", "" ], [ "Beltrame", "Giovanni", "" ] ]
new_dataset
0.988087
2207.11689
Pengfei Qiu
Pengfei Qiu, Yongqiang Lyu, Haixia Wang, Dongsheng Wang, Chang Liu, Qiang Gao, Chunlu Wang, Rihui Sun, Gang Qu
PMUSpill: The Counters in Performance Monitor Unit that Leak SGX-Protected Secrets
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Performance Monitoring Unit (PMU) is a significant hardware module in current processors that counts events triggered by the processor in a set of PMU counters. Ideally, events triggered by instructions that are executed but whose results are never committed (transient execution) should not be recorded. However, in this study, we discover that some PMU events triggered by transiently executed instructions are in fact recorded by the PMU. Based on this observation, we propose the PMUSpill attack, which enables attackers to leak secret data loaded during transient execution. The main challenge is how to encode the secret data into PMU events. We solve it by constructing an instruction gadget whose execution path, identifiable through PMU counters, encodes the value of the secret data. We successfully implement the PMUSpill attack in real experiments to leak secret data stored in Intel Software Guard Extensions (SGX), a Trusted Execution Environment (TEE) in Intel processors. Furthermore, we locate the vulnerable PMU counters and their trigger instructions by iterating over all valid PMU counters and instructions. The experimental results demonstrate that up to 20 PMU counters are available to implement the PMUSpill attack. We also discuss possible hardware- and software-based countermeasures against the PMUSpill attack, which can be used to enhance the security of future processors.
[ { "version": "v1", "created": "Sun, 24 Jul 2022 08:18:46 GMT" } ]
2022-07-26T00:00:00
[ [ "Qiu", "Pengfei", "" ], [ "Lyu", "Yongqiang", "" ], [ "Wang", "Haixia", "" ], [ "Wang", "Dongsheng", "" ], [ "Liu", "Chang", "" ], [ "Gao", "Qiang", "" ], [ "Wang", "Chunlu", "" ], [ "Sun", "Rihui", "" ], [ "Qu", "Gang", "" ] ]
new_dataset
0.999528
2207.11730
Praveen Kumar
Praveen Kumar, Sudhan Majhi, Subhabrata Paul
A Direct Construction of Cross Z-Complementary Sets with Flexible Lengths and Large Zero Correlation Zone
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
This letter proposes a direct construction of cross Z-complementary sets (CZCSs) with flexible lengths and a large zero correlation zone (ZCZ). A CZCS is an extension of the cross Z-complementary pair (CZCP); the maximum possible ZCZ width of a CZCP is half of its sequence length. In this letter, for the first time, a generalized Boolean function based construction of CZCSs with a large number of constituent sequences and a ZCZ ratio of $2/3$ is presented. For integers $m$ and $\delta$, the proposed construction produces CZCSs of length $2^{m-1}+2^\delta$ ($0 \leq \delta < m-1$, $m \geq 4$), so both odd- and even-length CZCSs can be obtained. Additionally, the constructed CZCSs also feature a complementary set of the same length. Finally, the proposed construction is compared with existing works.
[ { "version": "v1", "created": "Sun, 24 Jul 2022 12:22:11 GMT" } ]
2022-07-26T00:00:00
[ [ "Kumar", "Praveen", "" ], [ "Majhi", "Sudhan", "" ], [ "Paul", "Subhabrata", "" ] ]
new_dataset
0.962025
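Editorial note on the record above: the admissible lengths follow directly from the stated formula. A minimal sketch (Python, not from the paper) that enumerates the lengths $2^{m-1}+2^\delta$ for $0 \leq \delta < m-1$, $m \geq 4$:

```python
def czcs_lengths(m_max):
    """Enumerate CZCS lengths 2**(m-1) + 2**delta for 4 <= m <= m_max,
    0 <= delta < m-1, as stated in the abstract above."""
    lengths = set()
    for m in range(4, m_max + 1):
        for delta in range(m - 1):
            lengths.add(2 ** (m - 1) + 2 ** delta)
    return sorted(lengths)

# m = 4 alone already yields both odd and even lengths: 9, 10, 12.
print(czcs_lengths(6))  # [9, 10, 12, 17, 18, 20, 24, 33, 34, 36, 40, 48]
```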
2207.11754
Daniel Eckhoff
Daniel Eckhoff, Royce Ng, Alvaro Cassinelli
Virtual Reality Therapy for the Psychological Well-being of Palliative Care Patients in Hong Kong
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper we introduce novel Virtual Reality (VR) and Augmented Reality (AR) treatments to improve the psychological well-being of patients in palliative care, based on interviews with a clinical psychologist who has successfully implemented VR-assisted interventions for palliative care patients in the Hong Kong hospital system. Our VR- and AR-assisted interventions are adaptations of traditional palliative care therapies that simultaneously facilitate patients' communication with family and friends while they are isolated in hospital due to physical weakness and COVID-19-related restrictions. The first system we propose is a networked metaverse platform where palliative care patients create customized virtual environments with therapists, family, and friends, functioning as immersive and collaborative versions of 'life review' and 'reminiscence therapy'. The second proposed system will investigate the use of Mixed Reality telepresence and haptic touch in an AR environment, allowing palliative care patients to physically feel friends and family in a virtual space, adding to the sense of presence and immersion in that environment.
[ { "version": "v1", "created": "Sun, 24 Jul 2022 14:31:52 GMT" } ]
2022-07-26T00:00:00
[ [ "Eckhoff", "Daniel", "" ], [ "Ng", "Royce", "" ], [ "Cassinelli", "Alvaro", "" ] ]
new_dataset
0.993775
2207.11765
Jose Cambronero Sanchez
Rohan Bavishi, Harshit Joshi, Jos\'e Pablo Cambronero S\'anchez, Anna Fariha, Sumit Gulwani, Vu Le, Ivan Radicek, Ashish Tiwari
Neurosymbolic Repair for Low-Code Formula Languages
null
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
Most users of low-code platforms, such as Excel and PowerApps, write programs in domain-specific formula languages to carry out nontrivial tasks. Often users can write most of the program they want, but introduce small mistakes that yield broken formulas. These mistakes, which can be both syntactic and semantic, are hard for low-code users to identify and fix, even though they can be resolved with just a few edits. We formalize the problem of producing such edits as the last-mile repair problem. To address this problem, we developed LaMirage, a LAst-MIle RepAir-engine GEnerator that combines symbolic and neural techniques to perform last-mile repair in low-code formula languages. LaMirage takes a grammar and a set of domain-specific constraints/rules, which jointly approximate the target language, and uses these to generate a repair engine that can fix formulas in that language. To tackle the challenges of localizing the errors and ranking the candidate repairs, LaMirage leverages neural techniques, whereas it relies on symbolic methods to generate candidate repairs. This combination allows LaMirage to find repairs that satisfy the provided grammar and constraints, and then pick the most natural repair. We compare LaMirage to state-of-the-art neural and symbolic approaches on 400 real Excel and PowerFx formulas, where LaMirage outperforms all baselines. We release these benchmarks to encourage subsequent work in low-code domains.
[ { "version": "v1", "created": "Sun, 24 Jul 2022 15:56:03 GMT" } ]
2022-07-26T00:00:00
[ [ "Bavishi", "Rohan", "" ], [ "Joshi", "Harshit", "" ], [ "Sánchez", "José Pablo Cambronero", "" ], [ "Fariha", "Anna", "" ], [ "Gulwani", "Sumit", "" ], [ "Le", "Vu", "" ], [ "Radicek", "Ivan", "" ], [ "Tiwari", "Ashish", "" ] ]
new_dataset
0.986755
2207.11795
Zezhou Cheng
Zezhou Cheng, Menglei Chai, Jian Ren, Hsin-Ying Lee, Kyle Olszewski, Zeng Huang, Subhransu Maji, Sergey Tulyakov
Cross-Modal 3D Shape Generation and Manipulation
ECCV 2022. Project page: https://people.cs.umass.edu/~zezhoucheng/edit3d/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Creating and editing the shape and color of 3D objects require tremendous human effort and expertise. Compared to direct manipulation in 3D interfaces, 2D interactions such as sketches and scribbles are usually much more natural and intuitive for the users. In this paper, we propose a generic multi-modal generative model that couples the 2D modalities and implicit 3D representations through shared latent spaces. With the proposed model, versatile 3D generation and manipulation are enabled by simply propagating the editing from a specific 2D controlling modality through the latent spaces. For example, editing the 3D shape by drawing a sketch, re-colorizing the 3D surface via painting color scribbles on the 2D rendering, or generating 3D shapes of a certain category given one or a few reference images. Unlike prior works, our model does not require re-training or fine-tuning per editing task and is also conceptually simple, easy to implement, robust to input domain shifts, and flexible to diverse reconstruction on partial 2D inputs. We evaluate our framework on two representative 2D modalities of grayscale line sketches and rendered color images, and demonstrate that our method enables various shape manipulation and generation tasks with these 2D modalities.
[ { "version": "v1", "created": "Sun, 24 Jul 2022 19:22:57 GMT" } ]
2022-07-26T00:00:00
[ [ "Cheng", "Zezhou", "" ], [ "Chai", "Menglei", "" ], [ "Ren", "Jian", "" ], [ "Lee", "Hsin-Ying", "" ], [ "Olszewski", "Kyle", "" ], [ "Huang", "Zeng", "" ], [ "Maji", "Subhransu", "" ], [ "Tulyakov", "Sergey", "" ] ]
new_dataset
0.980588
2207.11808
Hossein Mirzaee
Hossein Mirzaee (1), Javad Peymanfard (2), Hamid Habibzadeh Moshtaghin (3), Hossein Zeinali (1) ((1) Amirkabir University of Technology, (2) Iran University of Science and Technology, (3) Allameh Tabataba'i University)
ArmanEmo: A Persian Dataset for Text-based Emotion Detection
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
With the recent proliferation of open textual data on social media platforms, emotion detection (ED) from text has received increasing attention over the past years. It has many applications, especially for businesses and online service providers, where emotion detection techniques can help them make informed commercial decisions by analyzing customers'/users' feelings towards their products and services. In this study, we introduce ArmanEmo, a human-labeled emotion dataset of more than 7000 Persian sentences labeled across seven categories. The dataset has been collected from different resources, including Twitter, Instagram, and Digikala (an Iranian e-commerce company) comments. Labels are based on Ekman's six basic emotions (Anger, Fear, Happiness, Hatred, Sadness, Wonder) plus another category (Other) to cover any emotion not included in Ekman's model. Along with the dataset, we provide several baseline models for emotion classification, focusing on state-of-the-art transformer-based language models. Our best model achieves a macro-averaged F1 score of 75.39 percent on our test set. Moreover, we conduct transfer learning experiments to compare our proposed dataset's generalization against other Persian emotion datasets. The results of these experiments suggest that our dataset has superior generalizability among the existing Persian emotion datasets. ArmanEmo is publicly available for non-commercial use at https://github.com/Arman-Rayan-Sharif/arman-text-emotion.
[ { "version": "v1", "created": "Sun, 24 Jul 2022 20:35:23 GMT" } ]
2022-07-26T00:00:00
[ [ "Mirzaee", "Hossein", "" ], [ "Peymanfard", "Javad", "" ], [ "Moshtaghin", "Hamid Habibzadeh", "" ], [ "Zeinali", "Hossein", "" ] ]
new_dataset
0.999909
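Editorial note on the record above: the macro-averaged F1 reported there is the unweighted mean of per-class F1 scores. A hedged sketch of how such a score is typically computed with scikit-learn (toy labels, not the ArmanEmo evaluation code):

```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and model predictions over the seven categories.
y_true = ["Anger", "Fear", "Happiness", "Hatred", "Sadness", "Wonder", "Other"]
y_pred = ["Anger", "Fear", "Happiness", "Sadness", "Sadness", "Wonder", "Other"]

# Macro averaging weights every class equally, regardless of support.
print(f1_score(y_true, y_pred, average="macro"))
```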
2207.11810
Alexander Bell
Yu-Yun Tseng, Alexander Bell, and Danna Gurari
VizWiz-FewShot: Locating Objects in Images Taken by People With Visual Impairments
Accepted to ECCV 2022. The first two authors contributed equally
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce a few-shot localization dataset originating from photographers who authentically were trying to learn about the visual content in the images they took. It includes nearly 10,000 segmentations of 100 categories in over 4,500 images that were taken by people with visual impairments. Compared to existing few-shot object detection and instance segmentation datasets, our dataset is the first to locate holes in objects (e.g., found in 12.3\% of our segmentations), it shows objects that occupy a much larger range of sizes relative to the images, and text is over five times more common in our objects (e.g., found in 22.4\% of our segmentations). Analysis of three modern few-shot localization algorithms demonstrates that they generalize poorly to our new dataset. The algorithms commonly struggle to locate objects with holes, very small and very large objects, and objects lacking text. To encourage a larger community to work on these unsolved challenges, we publicly share our annotated few-shot dataset at https://vizwiz.org .
[ { "version": "v1", "created": "Sun, 24 Jul 2022 20:44:51 GMT" } ]
2022-07-26T00:00:00
[ [ "Tseng", "Yu-Yun", "" ], [ "Bell", "Alexander", "" ], [ "Gurari", "Danna", "" ] ]
new_dataset
0.999649
2207.11817
Tu Nguyen
Tu N. Nguyen, Kashyab J. Ambarani, Linh Le, Ivan Djordjevic, and Zhi-Li Zhang
A Multiple-Entanglement Routing Framework for Quantum Networks
11 pages
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Quantum networks are gaining momentum, finding applications in a wide range of domains. However, little research has investigated the potential of a quantum network framework to enable highly reliable communications. The goal of this work is to investigate and design a multiple-entanglement routing framework, namely $k$-entangled routing. In particular, $k$-entangled routing enables $k$ paths connecting all demands (source-destination pairs) in the network. To design the $k$-entangled routing, we propose two algorithms, the Sequential Multi-path Scheduling Algorithm (SMPSA) and the Min-Cut-based Multi-path Scheduling Algorithm (MCSA). In addition, we evaluate the performance of the proposed algorithms and models through a realistic quantum network simulator, NetSquid, which models the stochastic processes underlying quantum communications. The results show that the proposed algorithms largely enhance the network's traffic flexibility. The proposed paradigms lay the foundation for further research on entanglement routing.
[ { "version": "v1", "created": "Tue, 19 Jul 2022 12:09:03 GMT" } ]
2022-07-26T00:00:00
[ [ "Nguyen", "Tu N.", "" ], [ "Ambarani", "Kashyab J.", "" ], [ "Le", "Linh", "" ], [ "Djordjevic", "Ivan", "" ], [ "Zhang", "Zhi-Li", "" ] ]
new_dataset
0.991592
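Editorial note on the record above: the paper's algorithms are not reproduced in this record, but the idea of provisioning $k$ paths per demand can be illustrated with edge-disjoint paths on a classical graph. A rough Python/NetworkX stand-in (not SMPSA or MCSA):

```python
import networkx as nx

def k_paths_per_demand(G, demands, k):
    """For each (source, destination) demand, return up to k edge-disjoint
    paths -- a classical-graph stand-in for k-entangled routing."""
    routing = {}
    for src, dst in demands:
        paths = list(nx.edge_disjoint_paths(G, src, dst))
        routing[(src, dst)] = sorted(paths, key=len)[:k]  # prefer short paths
    return routing

G = nx.grid_2d_graph(4, 4)  # toy repeater topology
print(k_paths_per_demand(G, [((0, 0), (3, 3))], k=2))
```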
2207.11857
Devdeep Ray
Devdeep Ray (1 and 2), Connor Smith (1), Teng Wei (1), David Chu (1), Srinivasan Seshan (2) ((1) Google, (2) Carnegie Mellon University)
SQP: Congestion Control for Low-Latency Interactive Video Streaming
14 pages, 2 page appendix
null
null
null
cs.NI cs.MM
http://creativecommons.org/licenses/by/4.0/
This paper presents the design and evaluation of SQP, a congestion control algorithm (CCA) for interactive video streaming applications that need to stream high-bitrate compressed video with very low end-to-end frame delay (e.g., AR streaming, cloud gaming). SQP uses frame-coupled, paced packet trains to sample the network bandwidth, and an adaptive one-way delay measurement to recover from queuing, achieving low, bounded queuing delay. SQP rapidly adapts to changes in the link bandwidth, ensuring high utilization and low frame delay, and achieves competitive bandwidth shares when competing with queue-building flows within an acceptable delay envelope. SQP has good fairness properties and works well on links with shallow buffers. In real-world A/B testing of SQP against Copa in Google's AR streaming platform, SQP achieves 27% and 15% more sessions with high bitrate and low frame delay for LTE and Wi-Fi, respectively. When competing with queue-building traffic like Cubic and BBR, SQP achieves 2-3X higher bandwidth compared to GoogCC (WebRTC), Sprout, and PCC-Vivace, and comparable performance to Copa (with mode switching).
[ { "version": "v1", "created": "Mon, 25 Jul 2022 00:37:19 GMT" } ]
2022-07-26T00:00:00
[ [ "Ray", "Devdeep", "", "1 and 2" ], [ "Smith", "Connor", "", "Google" ], [ "Wei", "Teng", "", "Google" ], [ "Chu", "David", "", "Google" ], [ "Seshan", "Srinivasan", "", "Carnegie Mellon University" ] ]
new_dataset
0.997944
2207.11889
Songlin Fan
Songlin Fan, Wei Gao, and Ge Li
Salient Object Detection for Point Clouds
Accepted to ECCV 2022. Project Page: https://git.openi.org.cn/OpenPointCloud/PCSOD
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies an unexplored task: point cloud salient object detection (SOD). Differing from SOD for images, we find that the attention shift of point clouds may provoke saliency conflicts, i.e., an object paradoxically belongs to both the salient and non-salient categories. To eschew this issue, we present a novel view-dependent perspective on salient objects, reasonably reflecting the most eye-catching objects in point cloud scenarios. Following this formulation, we introduce PCSOD, the first dataset proposed for point cloud SOD, consisting of 2,872 indoor/outdoor 3D views. The samples in our dataset are labeled with hierarchical annotations, e.g., super-/sub-class, bounding box, and segmentation map, which endows our dataset with the generalizability and broad applicability needed to verify various conjectures. To demonstrate the feasibility of our solution, we further contribute a baseline model and benchmark five representative models for a comprehensive comparison. The proposed model can effectively analyze irregular and unordered points to detect salient objects. Thanks to its task-tailored designs, our method shows visible superiority over the other baselines, producing more satisfactory results. Extensive experiments and discussions reveal the promising potential of this research field, paving the way for further study.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 03:35:46 GMT" } ]
2022-07-26T00:00:00
[ [ "Fan", "Songlin", "" ], [ "Gao", "Wei", "" ], [ "Li", "Ge", "" ] ]
new_dataset
0.99972
2207.11897
Tosin Ige
Tosin Ige, Sikiru Adewale
AI Powered Anti-Cyber Bullying System using Machine Learning Algorithm of Multinomial Naive Bayes and Optimized Linear Support Vector Machine
5 pages
International Journal of Advanced Computer Science and Applications(IJACSA), Volume 13 Issue 5, 2022
10.14569/IJACSA.2022.0130502
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
"Unless and until our society recognizes cyber bullying for what it is, the suffering of thousands of silent victims will continue." ~ Anna Maria Chavez. There had been series of research on cyber bullying which are unable to provide reliable solution to cyber bullying. In this research work, we were able to provide a permanent solution to this by developing a model capable of detecting and intercepting bullying incoming and outgoing messages with 92% accuracy. We also developed a chatbot automation messaging system to test our model leading to the development of Artificial Intelligence powered anti-cyber bullying system using machine learning algorithm of Multinomial Naive Bayes (MNB) and optimized linear Support Vector Machine (SVM). Our model is able to detect and intercept bullying outgoing and incoming bullying messages and take immediate action.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 04:02:02 GMT" } ]
2022-07-26T00:00:00
[ [ "Ige", "Tosin", "" ], [ "Adewale", "Sikiru", "" ] ]
new_dataset
0.970923
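Editorial note on the record above: the abstract names two standard classifiers. A minimal, hedged sketch of that kind of text classification pipeline in scikit-learn, on hypothetical toy data (the paper's dataset and chatbot are not reproduced here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy data: 1 = bullying, 0 = benign.
messages = ["great job on the match", "nobody likes you",
            "see you tomorrow", "you are worthless"]
labels = [0, 1, 0, 1]

mnb = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(messages, labels)
svm = make_pipeline(TfidfVectorizer(), LinearSVC(C=1.0)).fit(messages, labels)

# An intercepting messenger could block any message either model flags.
msg = ["you are useless"]
block = bool(mnb.predict(msg)[0] or svm.predict(msg)[0])
print(block)
```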
2207.11911
Bangbang Yang
Bangbang Yang, Chong Bao, Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, Guofeng Zhang
NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing
Accepted to ECCV 2022 (Oral). Project Page: https://zju3dv.github.io/neumesh/
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
Very recently neural implicit rendering techniques have been rapidly evolved and shown great advantages in novel view synthesis and 3D scene reconstruction. However, existing neural rendering methods for editing purposes offer limited functionality, e.g., rigid transformation, or not applicable for fine-grained editing for general objects from daily lives. In this paper, we present a novel mesh-based representation by encoding the neural implicit field with disentangled geometry and texture codes on mesh vertices, which facilitates a set of editing functionalities, including mesh-guided geometry editing, designated texture editing with texture swapping, filling and painting operations. To this end, we develop several techniques including learnable sign indicators to magnify spatial distinguishability of mesh-based representation, distillation and fine-tuning mechanism to make a steady convergence, and the spatial-aware optimization strategy to realize precise texture editing. Extensive experiments and editing examples on both real and synthetic data demonstrate the superiority of our method on representation quality and editing ability. Code is available on the project webpage: https://zju3dv.github.io/neumesh/.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 05:30:50 GMT" } ]
2022-07-26T00:00:00
[ [ "Yang", "Bangbang", "" ], [ "Bao", "Chong", "" ], [ "Zeng", "Junyi", "" ], [ "Bao", "Hujun", "" ], [ "Zhang", "Yinda", "" ], [ "Cui", "Zhaopeng", "" ], [ "Zhang", "Guofeng", "" ] ]
new_dataset
0.99944
2207.11929
Irit Dinur
Irit Dinur, Shai Evra, Ron Livne, Alexander Lubotzky, Shahar Mozes
Good Locally Testable Codes
This is a revision of arxiv.org/2111.04808 that has been adapted to a mathematical audience
null
null
null
cs.IT math.CO math.GR math.IT
http://creativecommons.org/licenses/by/4.0/
An explicit construction of locally testable codes of constant rate, constant distance, and a constant number of queries is given, hence answering the $c^3$-problem affirmatively.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 06:45:45 GMT" } ]
2022-07-26T00:00:00
[ [ "Dinur", "Irit", "" ], [ "Evra", "Shai", "" ], [ "Livne", "Ron", "" ], [ "Lubotzky", "Alexander", "" ], [ "Mozes", "Shahar", "" ] ]
new_dataset
0.991289
2207.11936
Sergio Barrachina-Mu\~noz Dr
Sergio Barrachina-Mu\~noz, Miquel Payar\'o, Josep Mangues-Bafalluy
Cloud-native 5G experimental platform with over-the-air transmissions and end-to-end monitoring
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
5G represents a revolutionary shift with respect to previous generations given its design centered on network softwarization. Within such a change of paradigm, cloud-native solutions are widely regarded as the future of vertical application development because of their enhanced flexibility and adaptability to complex and dynamic scenarios. In this context, we present an experimental framework with over-the-air transmissions that tackles two critical aspects for enhancing the lifecycle management of 5G and beyond networks: cloud-native deployments of 5G core network functions (NFs) and end-to-end monitoring. First, we deploy Open5GS and Prometheus-based monitoring as containerized network functions (CNFs) in a Kubernetes cluster spanning a multi-tier network with a multi-access edge computing (MEC) host. We then demonstrate the end-to-end monitoring system by showcasing via Grafana dashboards both infrastructure resources and radio metrics of two scenarios; one devoted to user plane function (UPF) re-selection and the other to user mobility.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 07:01:05 GMT" } ]
2022-07-26T00:00:00
[ [ "Barrachina-Muñoz", "Sergio", "" ], [ "Payaró", "Miquel", "" ], [ "Mangues-Bafalluy", "Josep", "" ] ]
new_dataset
0.989519
2207.11972
Jian Zhang
Chong Mou, Jian Zhang
TransCL: Transformer Makes Strong and Flexible Compressive Learning
Accepted by TPAMI 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Compressive learning (CL) is an emerging framework that integrates signal acquisition via compressed sensing (CS) and machine learning for inference tasks performed directly on a small number of measurements. It is a promising alternative to classical image-domain methods and enjoys great advantages in memory saving and computational efficiency. However, previous attempts at CL are not only limited to a fixed CS ratio, which lacks flexibility, but are also limited to MNIST/CIFAR-like datasets, and do not scale to complex real-world high-resolution (HR) data or vision tasks. In this paper, a novel transformer-based compressive learning framework on large-scale images with arbitrary CS ratios, dubbed TransCL, is proposed. Specifically, TransCL first adopts a learnable block-based compressed sensing strategy and proposes a flexible linear projection strategy to enable CL on large-scale images in an efficient block-by-block manner with arbitrary CS ratios. Then, regarding the CS measurements from all blocks as a sequence, a pure transformer-based backbone is deployed to perform vision tasks with various task-oriented heads. Our extensive analysis shows that TransCL exhibits strong resistance to interference and robust adaptability to arbitrary CS ratios. Extensive experiments on complex HR data demonstrate that the proposed TransCL achieves state-of-the-art performance in image classification and semantic segmentation tasks. In particular, TransCL with a CS ratio of $10\%$ can obtain almost the same performance as when operating directly on the original data, and can still obtain satisfying performance even with an extremely low CS ratio of $1\%$. The source code of TransCL is available at \url{https://github.com/MC-E/TransCL/}.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 08:21:48 GMT" } ]
2022-07-26T00:00:00
[ [ "Mou", "Chong", "" ], [ "Zhang", "Jian", "" ] ]
new_dataset
0.999239
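Editorial note on the record above: the block-based sensing step it describes is easy to make concrete. A sketch (NumPy; the random Gaussian matrix, block size, and ratio are illustrative assumptions, not the paper's learned operator):

```python
import numpy as np

def block_cs_measure(img, block=32, cs_ratio=0.10, seed=0):
    """Apply the same random Gaussian sampling matrix to every
    non-overlapping block; the result is a sequence of measurement
    tokens, one per block (a stand-in for TransCL's learned projection)."""
    h, w = img.shape
    assert h % block == 0 and w % block == 0
    n = block * block
    m = max(1, round(cs_ratio * n))  # measurements per block
    A = np.random.default_rng(seed).standard_normal((m, n)) / np.sqrt(m)
    tokens = [
        A @ img[i:i + block, j:j + block].ravel()
        for i in range(0, h, block)
        for j in range(0, w, block)
    ]
    return np.stack(tokens)  # shape: (num_blocks, m)

print(block_cs_measure(np.random.rand(256, 256)).shape)  # (64, 102)
```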
2207.12039
Ciar\'an Dunne
Ciar\'an Dunne and J. B. Wells
Isabelle/HOL/GST: A Formal Proof Environment for Generalized Set Theories
null
null
null
null
cs.LO math.LO
http://creativecommons.org/licenses/by/4.0/
A generalized set theory (GST) is like a standard set theory but also can have non-set structured objects that can contain other structured objects including sets. This paper presents Isabelle/HOL support for GSTs, which are treated as type classes that combine features that specify kinds of mathematical objects, e.g., sets, ordinal numbers, functions, etc. GSTs can have an exception feature that eases representing partial functions and undefinedness. When assembling a GST, extra axioms are generated following a user-modifiable policy to fill specification gaps. Specialized type-like predicates called soft types are used extensively. Although a GST can be used without a model, for confidence in its consistency we build a model for each GST from components that specify each feature's contribution to each tier of a von-Neumann-style cumulative hierarchy defined via ordinal recursion, and we then connect the model to a separate type which the GST occupies.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 10:27:15 GMT" } ]
2022-07-26T00:00:00
[ [ "Dunne", "Ciarán", "" ], [ "Wells", "J. B.", "" ] ]
new_dataset
0.956365
2207.12063
Payam Zahadat
Payam Zahadat and Ada Diaconescu
Multi-Scale Asset Distribution Model for Dynamic Environments
null
null
null
null
cs.MA cs.AI cs.SI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many self-organising systems the ability to extract necessary resources from the external environment is essential to the system's growth and survival. Examples include the extraction of sunlight and nutrients in organic plants, of monetary income in business organisations and of mobile robots in swarm intelligence actions. When operating within competitive, ever-changing environments, such systems must distribute their internal assets wisely so as to improve and adapt their ability to extract available resources. As the system size increases, the asset-distribution process often gets organised around a multi-scale control topology. This topology may be static (fixed) or dynamic (enabling growth and structural adaptation) depending on the system's internal constraints and adaptive mechanisms. In this paper, we expand on a plant-inspired asset-distribution model and introduce a more general multi-scale model applicable across a wider range of natural and artificial system domains. We study the impact that the topology of the multi-scale control process has upon the system's ability to self-adapt asset distribution when resource availability changes within the environment. Results show how different topological characteristics and different competition levels between system branches impact overall system profitability, adaptation delays and disturbances when environmental changes occur. These findings provide a basis for system designers to select the most suitable topology and configuration for their particular application and execution environment.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 11:14:49 GMT" } ]
2022-07-26T00:00:00
[ [ "Zahadat", "Payam", "" ], [ "Diaconescu", "Ada", "" ] ]
new_dataset
0.975249
2207.12084
Joao P. A. Dantas
Joao P. A. Dantas, Andre N. Costa, Vitor C. F. Gomes, Andre R. Kuroswiski, Felipe L. L. Medeiros and Diego Geraldo
ASA: A Simulation Environment for Evaluating Military Operational Scenarios
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Aerospace Simulation Environment (Ambiente de Simula\c{c}\~ao Aeroespacial -- ASA in Portuguese) is a custom-made, object-oriented simulation framework, developed mainly in C++, that enables the modeling and simulation of military operational scenarios to support the development of tactics and procedures in the aerospace context for the Brazilian Air Force. This work describes the ASA framework, covering its distributed architecture for managing multiple simulation machines, a data analysis platform for post-processing simulation data, the capability of loading models at simulation runtime, and a batch-mode execution platform for performing multiple independent executions simultaneously. In addition, we present a list of recent works using the ASA framework as a simulation tool in the air combat context.
[ { "version": "v1", "created": "Thu, 23 Jun 2022 15:05:30 GMT" } ]
2022-07-26T00:00:00
[ [ "Dantas", "Joao P. A.", "" ], [ "Costa", "Andre N.", "" ], [ "Gomes", "Vitor C. F.", "" ], [ "Kuroswiski", "Andre R.", "" ], [ "Medeiros", "Felipe L. L.", "" ], [ "Geraldo", "Diego", "" ] ]
new_dataset
0.999145
2207.12162
Maxime Amblard
Maria Boritchev (SEMAGRAMME, LORIA), Maxime Amblard (SEMAGRAMME, LORIA)
A Multi-Party Dialogue Resource in French
null
13th Edition of Language Resources and Evaluation Conference (LREC 2022), Jun 2022, Marseille, France
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Dialogues in Games (DinG), a corpus of manual transcriptions of real-life, oral, spontaneous multi-party dialogues between French-speaking players of the board game Catan. Our objective is to make available a quality resource for French, composed of long dialogues, to facilitate their study in the style of (Asher et al., 2016). In a general dialogue setting, participants share personal information, which makes it impossible to disseminate the resource freely and openly. In DinG, the attention of the participants is focused on the game, which prevents them from talking about themselves. In addition, we are conducting a study on the nature of the questions in dialogue, through annotation (Cruz Blandon et al., 2019), in order to develop more natural automatic dialogue systems.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 13:02:54 GMT" } ]
2022-07-26T00:00:00
[ [ "Boritchev", "Maria", "", "SEMAGRAMME, LORIA" ], [ "Amblard", "Maxime", "", "SEMAGRAMME,\n LORIA" ] ]
new_dataset
0.999489
2207.12188
Che-Kai Liu
Che-Kai Liu, Haobang Chen, Mohsen Imani, Kai Ni, Arman Kazemi, Ann Franchesca Laguna, Michael Niemier, Xiaobo Sharon Hu, Liang Zhao, Cheng Zhuo, and Xunzhao Yin
COSIME: FeFET based Associative Memory for In-Memory Cosine Similarity Search
Accepted by the 41st International Conference on Computer Aided Design (ICCAD), San Diego, USA
null
null
null
cs.AR cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a number of machine learning models, an input query is searched across the trained class vectors to find the closest feature class vector under the cosine similarity metric. However, computing cosine similarities between vectors on Von Neumann machines involves a large number of multiplications, Euclidean normalizations, and division operations, thus incurring heavy hardware energy and latency overheads. Moreover, due to the memory wall problem present in conventional architectures, frequent cosine similarity-based searches (CSSs) over the class vectors require extensive data movement, limiting the throughput and efficiency of the system. To overcome these challenges, this paper introduces COSIME, a general in-memory associative memory (AM) engine based on the ferroelectric FET (FeFET) device for efficient CSS. By leveraging the one-transistor AND gate function of FeFET devices, a current-based translinear analog circuit, and winner-take-all (WTA) circuitry, COSIME can perform parallel in-memory CSS across all the entries in a memory block and output the word closest to the input query under the cosine similarity metric. Evaluation results at the array level suggest that the proposed COSIME design achieves 333X and 90.5X latency and energy improvements, respectively, and better classification accuracy compared with an AM design implementing approximate CSS. The proposed in-memory computing fabric is evaluated on an HDC problem, showing that COSIME achieves on average a 47.1X speedup and 98.5X energy efficiency improvement compared with a GPU implementation.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 13:24:40 GMT" } ]
2022-07-26T00:00:00
[ [ "Liu", "Che-Kai", "" ], [ "Chen", "Haobang", "" ], [ "Imani", "Mohsen", "" ], [ "Ni", "Kai", "" ], [ "Kazemi", "Arman", "" ], [ "Laguna", "Ann Franchesca", "" ], [ "Niemier", "Michael", "" ], [ "Hu", "Xiaobo Sharon", "" ], [ "Zhao", "Liang", "" ], [ "Zhuo", "Cheng", "" ], [ "Yin", "Xunzhao", "" ] ]
new_dataset
0.979137
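Editorial note on the record above: a software baseline makes clear what COSIME accelerates in hardware. A brute-force cosine-similarity search in NumPy (the digital equivalent of the in-memory WTA search, not the FeFET design itself):

```python
import numpy as np

def cosine_search(query, class_vectors):
    """Return the index of the stored class vector closest to `query`
    under cosine similarity -- the operation COSIME performs in-memory."""
    q = query / np.linalg.norm(query)
    M = class_vectors / np.linalg.norm(class_vectors, axis=1, keepdims=True)
    return int(np.argmax(M @ q))

rng = np.random.default_rng(0)
classes = rng.standard_normal((1000, 512))   # e.g. trained HDC class vectors
print(cosine_search(rng.standard_normal(512), classes))
```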
2207.12200
Miguel Lu\'is
Pedro Rito, Ana Almeida, Andreia Figueiredo, Christian Gomes, Pedro Teixeira, Rodrigo Rosmaninho, Rui Lopes, Duarte Dias, Gon\c{c}alo V\'itor, Gon\c{c}alo Perna, Miguel Silva, Carlos Senna, Duarte Raposo, Miguel Lu\'is, Susana Sargento, Arnaldo Oliveira, Nuno Borges de Carvalho
Aveiro Tech City Living Lab: A Communication, Sensing and Computing Platform for City Environments
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
This article presents the deployment and experimentation architecture of the Aveiro Tech City Living Lab (ATCLL) in Aveiro, Portugal. This platform comprises a large number of Internet-of-Things devices with communication, sensing, and computing capabilities. The communication infrastructure, built on fiber and Millimeter-wave (mmWave) links, integrates a multiprotocol communication network with radio terminals (WiFi, ITS-G5, C-V2X, 5G and LoRa(WAN)) spread throughout 44 connected access points in the city. Additionally, public transportation has also been equipped with communication and sensing units. All these points combine and interconnect a set of sensors, such as mobility sensors (radars, Lidars, video cameras) and environmental sensors. Combining edge computing and cloud management to deploy the services and manage the platform, and a data platform to gather and process the data, the living lab supports a wide range of services and applications: IoT, intelligent transportation systems and assisted driving, environmental monitoring, emergency and safety, among others. This article describes the architecture, implementation, and deployment that make the overall platform work and bring together researchers and citizens. Moreover, it showcases examples of the performance metrics achieved in the city infrastructure; the data that can be collected, visualized, and used to build services and applications for the cities; and, finally, different use cases in mobility and safety scenarios.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 13:42:09 GMT" } ]
2022-07-26T00:00:00
[ [ "Rito", "Pedro", "" ], [ "Almeida", "Ana", "" ], [ "Figueiredo", "Andreia", "" ], [ "Gomes", "Christian", "" ], [ "Teixeira", "Pedro", "" ], [ "Rosmaninho", "Rodrigo", "" ], [ "Lopes", "Rui", "" ], [ "Dias", "Duarte", "" ], [ "Vítor", "Gonçalo", "" ], [ "Perna", "Gonçalo", "" ], [ "Silva", "Miguel", "" ], [ "Senna", "Carlos", "" ], [ "Raposo", "Duarte", "" ], [ "Luís", "Miguel", "" ], [ "Sargento", "Susana", "" ], [ "Oliveira", "Arnaldo", "" ], [ "de Carvalho", "Nuno Borges", "" ] ]
new_dataset
0.999716
2207.12254
Alireza Ramezani
Adarsh Salagame, Shoghair Manjikian, Chenghao Wang, Kaushik Venkatesh Krishnamurthy, Shreyansh Pitroda, Bibek Gupta, Tobias Jacob, Benjamin Mottis, Eric Sihite, Milad Ramezani, Alireza Ramezani
A Letter on Progress Made on Husky Carbon: A Legged-Aerial, Multi-modal Platform
arXiv admin note: text overlap with arXiv:2104.05834, arXiv:2205.06392
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animals, such as birds, widely use multi-modal locomotion by combining legged and aerial mobility with dominant inertial effects. The robotic biomimicry of this multi-modal locomotion feat can yield ultra-flexible systems in terms of their ability to negotiate their task spaces. The main objective of this paper is to discuss the challenges in achieving multi-modal locomotion, and to report our progress in developing our quadrupedal robot capable of multi-modal locomotion (legged and aerial locomotion), the Husky Carbon. We report the mechanical and electrical components utilized in our robot, in addition to the simulation and experimentation done to achieve our goal in developing a versatile multi-modal robotic platform.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 15:18:21 GMT" } ]
2022-07-26T00:00:00
[ [ "Salagame", "Adarsh", "" ], [ "Manjikian", "Shoghair", "" ], [ "Wang", "Chenghao", "" ], [ "Krishnamurthy", "Kaushik Venkatesh", "" ], [ "Pitroda", "Shreyansh", "" ], [ "Gupta", "Bibek", "" ], [ "Jacob", "Tobias", "" ], [ "Mottis", "Benjamin", "" ], [ "Sihite", "Eric", "" ], [ "Ramezani", "Milad", "" ], [ "Ramezani", "Alireza", "" ] ]
new_dataset
0.99715
2207.12267
Su-Kyoung Kim
Su Kyoung Kim, Michael Maurus, Mathias Trampler, Marc Tabie, Elsa Andrea Kirchner
Continuous ErrP detections during multimodal human-robot interaction
null
null
null
null
cs.RO cs.HC cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Human-in-the-loop approaches are of great importance for robot applications. In the presented study, we implemented a multimodal human-robot interaction (HRI) scenario in which a simulated robot communicates with its human partner through speech and gestures. The robot announces its intention verbally and selects the appropriate action using pointing gestures. The human partner, in turn, evaluates whether the robot's verbal announcement (intention) matches the action (pointing gesture) chosen by the robot. For cases where the verbal announcement of the robot does not match its action choice, we expect error-related potentials (ErrPs) in the human electroencephalogram (EEG). These intrinsic evaluations of robot actions by humans, evident in the EEG, were recorded in real time, continuously segmented online, and classified asynchronously. For feature selection, we propose an approach that allows combinations of forward and backward sliding windows to be used for training a classifier. We achieved an average classification performance of 91% across 9 subjects. As expected, we also observed relatively high variability across subjects. In the future, the proposed feature selection approach will be extended to allow for customization of feature selection: the best combinations of forward and backward sliding windows will be selected automatically to account for inter-subject variability in classification performance. In addition, we plan to use the intrinsic human error evaluation, evident as the ErrP in the error case, in interactive reinforcement learning to improve multimodal human-robot interaction.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 15:39:32 GMT" } ]
2022-07-26T00:00:00
[ [ "Kim", "Su Kyoung", "" ], [ "Maurus", "Michael", "" ], [ "Trampler", "Mathias", "" ], [ "Tabie", "Marc", "" ], [ "Kirchner", "Elsa Andrea", "" ] ]
new_dataset
0.972914
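Editorial note on the record above: the continuous online segmentation it describes can be sketched as overlapping sliding windows over multichannel EEG. A minimal NumPy illustration (window and step sizes are arbitrary assumptions):

```python
import numpy as np

def sliding_windows(eeg, win, step):
    """eeg: (channels, samples) array of continuous EEG.
    Returns (num_windows, channels, win) overlapping segments, each of
    which would be classified asynchronously as ErrP / no-ErrP."""
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

eeg = np.random.randn(64, 5000)  # 64 channels, 10 s at 500 Hz
print(sliding_windows(eeg, win=500, step=50).shape)  # (91, 64, 500)
```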
2207.12310
Christian Mejia-Escobar
Javier Caicedo and Pamela Acosta and Romel Pozo and Henry Guilcapi and Christian Mejia-Escobar
Estimaci\'on de \'areas de cultivo mediante Deep Learning y programaci\'on convencional
21 pages, in Spanish, 17 figures, 3 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Artificial Intelligence has enabled the implementation of more accurate and efficient solutions to problems in various areas. In the agricultural sector, one of the main needs is to know, at all times, how much land is or is not occupied by crops, in order to improve production and profitability. Traditional calculation methods require manual, on-site data collection in the field, causing high labor costs, long execution times, and inaccurate results. The present work proposes a new method based on Deep Learning techniques, complemented with conventional programming, for determining the extent of populated and unpopulated crop areas. We consider as a case study one of the most recognized companies in the planting and harvesting of sugar cane in Ecuador. The strategy combines a Generative Adversarial Network (GAN), trained on a dataset of aerial photographs of natural and urban landscapes, to improve image resolution; a Convolutional Neural Network (CNN), trained on a dataset of aerial photographs of sugar cane plots, to distinguish populated from unpopulated crop areas; and a standard image processing module for calculating areas as percentages. The experiments performed demonstrate a significant improvement in the quality of the aerial photographs as well as a remarkable differentiation between populated and unpopulated crop areas and, consequently, a more accurate result for cultivated and uncultivated areas. The proposed method can be extended to the detection of possible pests, weed vegetation areas, dynamic crop development, and both qualitative and quantitative quality control.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 16:22:55 GMT" } ]
2022-07-26T00:00:00
[ [ "Caicedo", "Javier", "" ], [ "Acosta", "Pamela", "" ], [ "Pozo", "Romel", "" ], [ "Guilcapi", "Henry", "" ], [ "Mejia-Escobar", "Christian", "" ] ]
new_dataset
0.960424
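Editorial note on the record above: the final percentage-area step of the pipeline is conventional programming. A minimal sketch, assuming the CNN has already produced a binary mask (1 = populated crop pixel):

```python
import numpy as np

def crop_area_percentages(mask):
    """mask: 2D binary array from the classifier, 1 = populated, 0 = unpopulated.
    Returns (populated %, unpopulated %) of the photographed plot."""
    populated = mask.sum()
    total = mask.size
    return 100.0 * populated / total, 100.0 * (total - populated) / total

mask = (np.random.rand(512, 512) > 0.4).astype(np.uint8)  # toy mask
print(crop_area_percentages(mask))
```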
2207.12317
Peng Yin
Ivan Cisneros, Peng Yin, Ji Zhang, Howie Choset and Sebastian Scherer
ALTO: A Large-Scale Dataset for UAV Visual Place Recognition and Localization
UAV Localization dataset paper
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the ALTO dataset, a vision-focused dataset for the development and benchmarking of Visual Place Recognition and Localization methods for Unmanned Aerial Vehicles. The dataset is composed of two long (approximately 150km and 260km) trajectories flown by a helicopter over Ohio and Pennsylvania, and it includes high-precision GPS-INS ground truth location data, high-precision accelerometer readings, laser altimeter readings, and RGB downward-facing camera imagery. In addition, we provide reference imagery over the flight paths, which makes this dataset suitable for VPR benchmarking and other tasks common in Localization, such as image registration and visual odometry. To the authors' knowledge, this is the largest real-world aerial-vehicle dataset of this kind. Our dataset is available at https://github.com/MetaSLAM/ALTO.
[ { "version": "v1", "created": "Tue, 19 Jul 2022 21:13:44 GMT" } ]
2022-07-26T00:00:00
[ [ "Cisneros", "Ivan", "" ], [ "Yin", "Peng", "" ], [ "Zhang", "Ji", "" ], [ "Choset", "Howie", "" ], [ "Scherer", "Sebastian", "" ] ]
new_dataset
0.999821
2207.12326
Lorenzo Ceragioli
Lorenzo Ceragioli, Letterio Galletta, Pierpaolo Degano, Luca Vigan\`o
Automatic Fair Exchanges
null
null
null
null
cs.CR cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a decentralized environment, exchanging resources requires users to bargain until an agreement is found. Moreover, human agreements involve a combination of collaborative and selfish behavior and often induce circularity, complicating the evaluation of exchange requests. We introduce MuAC, a policy language that allows users to state in isolation under which conditions they are open to grant their resources and what they require in return. In MuAC, exchange requests are evaluated automatically with the guarantee that the only exchanges that will take place are those that mutually satisfy users' conditions. Moreover, MuAC can be used as an enforcement mechanism to prevent users from cheating. As a proof of concept, we implement a blockchain smart contract that allows users to exchange their non-fungible tokens.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 16:34:58 GMT" } ]
2022-07-26T00:00:00
[ [ "Ceragioli", "Lorenzo", "" ], [ "Galletta", "Letterio", "" ], [ "Degano", "Pierpaolo", "" ], [ "Viganò", "Luca", "" ] ]
new_dataset
0.989607
2207.12381
Khiem Le
Khiem H. Le, Hieu H. Pham, Thao BT. Nguyen, Tu A. Nguyen, Tien N. Thanh, Cuong D. Do
LightX3ECG: A Lightweight and eXplainable Deep Learning System for 3-lead Electrocardiogram Classification
Under review at Biomedical Signal Processing and Control
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Cardiovascular diseases (CVDs) are a group of heart and blood vessel disorders that pose one of the most serious dangers to human health, and the number of such patients is still growing. Early and accurate detection plays a key role in successful treatment and intervention. The electrocardiogram (ECG) is the gold standard for identifying a variety of cardiovascular abnormalities. In clinical practice and in most current research, standard 12-lead ECG is mainly used. However, using a lower number of leads can make ECG more prevalent, as it can be conveniently recorded by portable or wearable devices. In this research, we develop a novel deep learning system to accurately identify multiple cardiovascular abnormalities by using only three ECG leads.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 17:49:29 GMT" } ]
2022-07-26T00:00:00
[ [ "Le", "Khiem H.", "" ], [ "Pham", "Hieu H.", "" ], [ "Nguyen", "Thao BT.", "" ], [ "Nguyen", "Tu A.", "" ], [ "Thanh", "Tien N.", "" ], [ "Do", "Cuong D.", "" ] ]
new_dataset
0.998897
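Editorial note on the record above: the system itself is not included in this record, but a 3-lead ECG classifier of the general kind described can be sketched as a small 1D CNN in PyTorch (architecture and class count are illustrative assumptions, not LightX3ECG):

```python
import torch
import torch.nn as nn

class ThreeLeadECGNet(nn.Module):
    """Toy 1D CNN: (batch, 3 leads, samples) -> abnormality logits."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

logits = ThreeLeadECGNet()(torch.randn(2, 3, 5000))  # two 10 s traces @ 500 Hz
print(logits.shape)  # torch.Size([2, 5])
```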
2207.12393
Hao Zhu
Hao Zhu, Wayne Wu, Wentao Zhu, Liming Jiang, Siwei Tang, Li Zhang, Ziwei Liu, Chen Change Loy
CelebV-HQ: A Large-Scale Video Facial Attributes Dataset
ECCV 2022. Project Page: https://celebv-hq.github.io/ ; Dataset: https://github.com/CelebV-HQ/CelebV-HQ
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Large-scale datasets have played indispensable roles in the recent success of face generation/editing and have significantly facilitated the advances of emerging research fields. However, the academic community still lacks a video dataset with diverse facial attribute annotations, which is crucial for research on face-related videos. In this work, we propose a large-scale, high-quality, and diverse video dataset with rich facial attribute annotations, named the High-Quality Celebrity Video Dataset (CelebV-HQ). CelebV-HQ contains 35,666 video clips with a resolution of at least 512x512, involving 15,653 identities. All clips are labeled manually with 83 facial attributes, covering appearance, action, and emotion. We conduct a comprehensive analysis in terms of age, ethnicity, brightness stability, motion smoothness, head pose diversity, and data quality to demonstrate the diversity and temporal coherence of CelebV-HQ. Besides, its versatility and potential are validated on two representative tasks, i.e., unconditional video generation and video facial attribute editing. Furthermore, we envision the future potential of CelebV-HQ, as well as the new opportunities and challenges it will bring to related research directions. Data, code, and models are publicly available. Project page: https://celebv-hq.github.io.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 17:57:07 GMT" } ]
2022-07-26T00:00:00
[ [ "Zhu", "Hao", "" ], [ "Wu", "Wayne", "" ], [ "Zhu", "Wentao", "" ], [ "Jiang", "Liming", "" ], [ "Tang", "Siwei", "" ], [ "Zhang", "Li", "" ], [ "Liu", "Ziwei", "" ], [ "Loy", "Chen Change", "" ] ]
new_dataset
0.99986
2207.12394
Shengyu Huang
Shengyu Huang, Zan Gojcic, Jiahui Huang, Andreas Wieser, Konrad Schindler
Dynamic 3D Scene Analysis by Point Cloud Accumulation
ECCV 2022, camera ready
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Multi-beam LiDAR sensors, as used on autonomous vehicles and mobile robots, acquire sequences of 3D range scans ("frames"). Each frame covers the scene sparsely, due to limited angular scanning resolution and occlusion. The sparsity restricts the performance of downstream processes like semantic segmentation or surface reconstruction. Luckily, when the sensor moves, frames are captured from a sequence of different viewpoints. This provides complementary information and, when accumulated in a common scene coordinate frame, yields a denser sampling and a more complete coverage of the underlying 3D scene. However, often the scanned scenes contain moving objects. Points on those objects are not correctly aligned by just undoing the scanner's ego-motion. In the present paper, we explore multi-frame point cloud accumulation as a mid-level representation of 3D scan sequences, and develop a method that exploits inductive biases of outdoor street scenes, including their geometric layout and object-level rigidity. Compared to state-of-the-art scene flow estimators, our proposed approach aims to align all 3D points in a common reference frame correctly accumulating the points on the individual objects. Our approach greatly reduces the alignment errors on several benchmark datasets. Moreover, the accumulated point clouds benefit high-level tasks like surface reconstruction.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 17:57:46 GMT" } ]
2022-07-26T00:00:00
[ [ "Huang", "Shengyu", "" ], [ "Gojcic", "Zan", "" ], [ "Huang", "Jiahui", "" ], [ "Wieser", "Andreas", "" ], [ "Schindler", "Konrad", "" ] ]
new_dataset
0.983069
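Editorial note on the record above: the baseline it improves on -- undoing the scanner's ego-motion and stacking frames in one world frame -- is a few lines of NumPy. A sketch (correct only for static geometry; moving objects need the paper's object-level alignment):

```python
import numpy as np

def accumulate_frames(frames, poses):
    """frames: list of (N_i, 3) LiDAR point arrays in sensor coordinates.
    poses: matching list of (4, 4) sensor-to-world transforms (ego-motion).
    Returns all points in a common world frame -- correct for the static
    scene, wrong for points on moving objects."""
    world = []
    for pts, T in zip(frames, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        world.append((homo @ T.T)[:, :3])
    return np.vstack(world)

frames = [np.random.rand(100, 3), np.random.rand(120, 3)]
poses = [np.eye(4), np.eye(4)]
print(accumulate_frames(frames, poses).shape)  # (220, 3)
```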
2008.07898
Martin Kucera
Martin Ku\v{c}era, Ond\v{r}ej Such\'y
Minimum Eccentricity Shortest Path Problem with Respect to Structural Parameters
null
null
10.1007/978-3-030-79987-8_31
null
cs.DS cs.CC
http://creativecommons.org/licenses/by/4.0/
The Minimum Eccentricity Shortest Path Problem consists in finding a shortest path with minimum eccentricity in a given undirected graph. The problem is known to be NP-complete and W[2]-hard with respect to the desired eccentricity. We present fpt-algorithms for the problem parameterized by the modular width, distance to cluster graph, the combination of treewidth with the desired eccentricity, and maximum leaf number.
[ { "version": "v1", "created": "Tue, 18 Aug 2020 12:56:02 GMT" }, { "version": "v2", "created": "Sun, 27 Jun 2021 19:58:26 GMT" }, { "version": "v3", "created": "Thu, 21 Jul 2022 18:53:29 GMT" } ]
2022-07-25T00:00:00
[ [ "Kučera", "Martin", "" ], [ "Suchý", "Ondřej", "" ] ]
new_dataset
0.989907
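Editorial note on the record above: for small instances the problem can be solved exactly by brute force, which clarifies the definition that the fpt-algorithms speed up. A NetworkX sketch (exponential in the worst case; purely illustrative):

```python
import networkx as nx

def path_eccentricity(G, path):
    """Largest distance from any vertex of G to its nearest path vertex."""
    dist = nx.multi_source_dijkstra_path_length(G, set(path))
    return max(dist.values())

def min_eccentricity_shortest_path(G):
    """Brute force over all shortest paths between all vertex pairs."""
    best_path, best_ecc = None, float("inf")
    nodes = list(G)
    for i, s in enumerate(nodes):
        for t in nodes[i:]:
            for p in nx.all_shortest_paths(G, s, t):
                ecc = path_eccentricity(G, p)
                if ecc < best_ecc:
                    best_path, best_ecc = p, ecc
    return best_path, best_ecc

print(min_eccentricity_shortest_path(nx.path_graph(5)))  # ([0, 1, 2, 3, 4], 0)
```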
2112.00584
Kai Wang
Kai Wang, Paul Guerrero, Vladimir Kim, Siddhartha Chaudhuri, Minhyuk Sung, Daniel Ritchie
The Shape Part Slot Machine: Contact-based Reasoning for Generating 3D Shapes from Parts
European Conference on Computer Vision (ECCV) 2022
null
null
null
cs.GR cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the Shape Part Slot Machine, a new method for assembling novel 3D shapes from existing parts by performing contact-based reasoning. Our method represents each shape as a graph of ``slots,'' where each slot is a region of contact between two shape parts. Based on this representation, we design a graph-neural-network-based model for generating new slot graphs and retrieving compatible parts, as well as a gradient-descent-based optimization scheme for assembling the retrieved parts into a complete shape that respects the generated slot graph. This approach does not require any semantic part labels; interestingly, it also does not require complete part geometries -- reasoning about the slots proves sufficient to generate novel, high-quality 3D shapes. We demonstrate that our method generates shapes that outperform existing modeling-by-assembly approaches regarding quality, diversity, and structural complexity.
[ { "version": "v1", "created": "Wed, 1 Dec 2021 15:54:54 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 22:56:38 GMT" } ]
2022-07-25T00:00:00
[ [ "Wang", "Kai", "" ], [ "Guerrero", "Paul", "" ], [ "Kim", "Vladimir", "" ], [ "Chaudhuri", "Siddhartha", "" ], [ "Sung", "Minhyuk", "" ], [ "Ritchie", "Daniel", "" ] ]
new_dataset
0.999626
2112.01551
Dave Zhenyu Chen
Dave Zhenyu Chen, Qirui Wu, Matthias Nie{\ss}ner, Angel X. Chang
D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding
Project website: https://daveredrum.github.io/D3Net/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies on dense captioning and visual grounding in 3D have achieved impressive results. Despite developments in both areas, the limited amount of available 3D vision-language data causes overfitting issues for 3D visual grounding and 3D dense captioning methods. Also, how to discriminatively describe objects in complex 3D environments is not fully studied yet. To address these challenges, we present D3Net, an end-to-end neural speaker-listener architecture that can detect, describe and discriminate. Our D3Net unifies dense captioning and visual grounding in 3D in a self-critical manner. This self-critical property of D3Net also introduces discriminability during object caption generation and enables semi-supervised training on ScanNet data with partially annotated descriptions. Our method outperforms SOTA methods in both tasks on the ScanRefer dataset, surpassing the SOTA 3D dense captioning method by a significant margin.
[ { "version": "v1", "created": "Thu, 2 Dec 2021 19:00:06 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 11:49:32 GMT" } ]
2022-07-25T00:00:00
[ [ "Chen", "Dave Zhenyu", "" ], [ "Wu", "Qirui", "" ], [ "Nießner", "Matthias", "" ], [ "Chang", "Angel X.", "" ] ]
new_dataset
0.998746
2112.02308
Hao Zhu
Yiyu Zhuang, Hao Zhu, Xusen Sun, Xun Cao
MoFaNeRF: Morphable Facial Neural Radiance Field
accepted to ECCV2022; code available at http://github.com/zhuhao-nju/mofanerf
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a parametric model that maps free-view images into a vector space of coded facial shape, expression and appearance with a neural radiance field, namely Morphable Facial NeRF. Specifically, MoFaNeRF takes the coded facial shape, expression and appearance along with space coordinate and view direction as input to an MLP, and outputs the radiance of the space point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMM), MoFaNeRF is superior at directly synthesizing photo-realistic facial details, even for eyes, mouths, and beards. Also, continuous face morphing can be easily achieved by interpolating the input shape, expression and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability. Our model performs well in multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models, and achieves competitive performance in several applications. To the best of our knowledge, our work is the first facial parametric model built upon a neural radiance field that can be used in fitting, generation and manipulation. The code and data are available at https://github.com/zhuhao-nju/mofanerf.
[ { "version": "v1", "created": "Sat, 4 Dec 2021 11:25:28 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 17:16:26 GMT" } ]
2022-07-25T00:00:00
[ [ "Zhuang", "Yiyu", "" ], [ "Zhu", "Hao", "" ], [ "Sun", "Xusen", "" ], [ "Cao", "Xun", "" ] ]
new_dataset
0.979382
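The MoFaNeRF record above specifies its core mapping concretely (codes plus position and view direction fed to an MLP that outputs radiance), so a minimal PyTorch sketch of that interface follows. All dimensions, the plain concatenation, and the two-head output are illustrative assumptions; the paper's identity-specific modulation and texture encoder are omitted.

```python
import torch
import torch.nn as nn

class TinyMoFaNeRF(nn.Module):
    """Sketch of a MoFaNeRF-style mapping: (shape, expression, appearance
    codes, 3D point, view direction) -> (RGB radiance, density).
    Dimensions and layer counts are illustrative, not the paper's."""
    def __init__(self, d_shape=64, d_expr=32, d_app=32, d_hidden=256):
        super().__init__()
        d_in = d_shape + d_expr + d_app + 3 + 3  # codes + xyz + view dir
        self.trunk = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )
        self.rgb = nn.Sequential(nn.Linear(d_hidden, 3), nn.Sigmoid())
        self.sigma = nn.Sequential(nn.Linear(d_hidden, 1), nn.Softplus())

    def forward(self, shape, expr, app, xyz, viewdir):
        h = self.trunk(torch.cat([shape, expr, app, xyz, viewdir], dim=-1))
        return self.rgb(h), self.sigma(h)

model = TinyMoFaNeRF()
B = 4  # a batch of query points
rgb, sigma = model(torch.randn(B, 64), torch.randn(B, 32), torch.randn(B, 32),
                   torch.randn(B, 3), torch.randn(B, 3))
print(rgb.shape, sigma.shape)  # torch.Size([4, 3]) torch.Size([4, 1])
```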
2112.02990
Yujin Chen
Yujin Chen, Matthias Nie{\ss}ner, Angela Dai
4DContrast: Contrastive Learning with Dynamic Correspondences for 3D Scene Understanding
Accepted by ECCV 2022, Video: https://youtu.be/qhGhWZmJq3U
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new approach to instill 4D dynamic object priors into learned 3D representations by unsupervised pre-training. We observe that the dynamic movement of an object through an environment provides important cues about its objectness, and thus propose to imbue learned 3D representations with such dynamic understanding, which can then be effectively transferred to improve performance in downstream 3D semantic scene understanding tasks. We propose a new data augmentation scheme leveraging synthetic 3D shapes moving in static 3D environments, and employ contrastive learning under 3D-4D constraints that encode 4D invariances into the learned 3D representations. Experiments demonstrate that our unsupervised representation learning results in improvements in downstream 3D semantic segmentation, object detection, and instance segmentation tasks, and moreover, notably improves performance in data-scarce scenarios.
[ { "version": "v1", "created": "Mon, 6 Dec 2021 13:09:07 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 11:54:27 GMT" } ]
2022-07-25T00:00:00
[ [ "Chen", "Yujin", "" ], [ "Nießner", "Matthias", "" ], [ "Dai", "Angela", "" ] ]
new_dataset
0.997134
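The 4DContrast record describes contrastive learning over dynamic correspondences; the standard building block for such objectives is an InfoNCE loss over matched features, sketched generically below. The batch layout (row i of one view corresponds to row i of the other) is an assumption, and the paper's specific 3D-4D constraint terms are not reproduced.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.07):
    """Generic InfoNCE: row i of z_a should match row i of z_b and repel
    all other rows. Shapes: (N, D)."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature   # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Features of the same points seen in a static 3D view vs. a moving 4D view.
loss = info_nce(torch.randn(128, 96), torch.randn(128, 96))
print(loss.item())
```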
2112.06346
Liang Qiu
Liang Qiu, Yizhou Zhao, Jinchao Li, Pan Lu, Baolin Peng, Jianfeng Gao, Song-Chun Zhu
ValueNet: A New Dataset for Human Value Driven Dialogue System
Paper accepted by AAAI 2022
null
10.1609/aaai.v36i10.21368
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building a socially intelligent agent involves many challenges, one of which is to teach the agent to speak guided by its values, as humans do. However, value-driven chatbots are still understudied in the area of dialogue systems. Most existing datasets focus on commonsense reasoning or social norm modeling. In this work, we present a new large-scale human value dataset called ValueNet, which contains human attitudes on 21,374 text scenarios. The dataset is organized in ten dimensions that conform to the basic human value theory in intercultural research. We further develop a Transformer-based value regression model on ValueNet to learn the utility distribution. Comprehensive empirical results show that the learned value model can benefit a wide range of dialogue tasks. For example, by teaching a generative agent with reinforcement learning and the rewards from the value model, our method attains state-of-the-art performance on the personalized dialog generation dataset Persona-Chat. With values as additional features, existing emotion recognition models can capture rich human emotions in the context, which further improves empathetic response generation performance on the EmpatheticDialogues dataset. To the best of our knowledge, ValueNet is the first large-scale text dataset for human value modeling, and we are the first to incorporate a value model into emotionally intelligent dialogue systems. The dataset is available at https://liang-qiu.github.io/ValueNet/.
[ { "version": "v1", "created": "Sun, 12 Dec 2021 23:02:52 GMT" } ]
2022-07-25T00:00:00
[ [ "Qiu", "Liang", "" ], [ "Zhao", "Yizhou", "" ], [ "Li", "Jinchao", "" ], [ "Lu", "Pan", "" ], [ "Peng", "Baolin", "" ], [ "Gao", "Jianfeng", "" ], [ "Zhu", "Song-Chun", "" ] ]
new_dataset
0.999102
2202.11781
Moinak Bhattacharya
Moinak Bhattacharya, Shubham Jain, Prateek Prasanna
RadioTransformer: A Cascaded Global-Focal Transformer for Visual Attention-guided Disease Classification
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this work, we present RadioTransformer, a novel visual attention-driven transformer framework, that leverages radiologists' gaze patterns and models their visuo-cognitive behavior for disease diagnosis on chest radiographs. Domain experts, such as radiologists, rely on visual information for medical image interpretation. On the other hand, deep neural networks have demonstrated significant promise in similar tasks even where visual interpretation is challenging. Eye-gaze tracking has been used to capture the viewing behavior of domain experts, lending insights into the complexity of visual search. However, deep learning frameworks, even those that rely on attention mechanisms, do not leverage this rich domain information. RadioTransformer fills this critical gap by learning from radiologists' visual search patterns, encoded as 'human visual attention regions' in a cascaded global-focal transformer framework. The overall 'global' image characteristics and the more detailed 'local' features are captured by the proposed global and focal modules, respectively. We experimentally validate the efficacy of our student-teacher approach for 8 datasets involving different disease classification tasks where eye-gaze data is not available during the inference phase. Code: https://github.com/bmi-imaginelab/radiotransformer.
[ { "version": "v1", "created": "Wed, 23 Feb 2022 20:52:30 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 20:36:16 GMT" } ]
2022-07-25T00:00:00
[ [ "Bhattacharya", "Moinak", "" ], [ "Jain", "Shubham", "" ], [ "Prasanna", "Prateek", "" ] ]
new_dataset
0.975187
2203.00795
Gedaliah Knizhnik
Gedaliah Knizhnik and Mark Yim
Amplitude Control for Parallel Lattices of Docked Modboats
7 pages. Accepted to the 2022 International Conference on Robotics and Automation (ICRA)
null
10.1109/ICRA46639.2022.9812381
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Modboat is a low-cost, underactuated, modular robot capable of surface swimming. It is able to swim individually, dock to other Modboats, and undock from them using only a single motor and two passive flippers. Undocking without additional actuation is achieved by causing intentional self-collision between the tails of neighboring modules; this becomes a challenge when group swimming as one connected component is desirable. In this work, we develop a control strategy to allow parallel lattices of Modboats to swim as a single unit, which conventionally requires holonomic modules. We show that the control strategy is guaranteed to avoid unintentional undocking and minimizes internal forces within the lattice. Experimental verification shows that the controller performs well and is consistent for lattices of various sizes. Controllability is maintained while swimming, but pure yaw control causes lateral movement that cannot be counteracted by the presented framework.
[ { "version": "v1", "created": "Tue, 1 Mar 2022 23:48:53 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 01:01:25 GMT" } ]
2022-07-25T00:00:00
[ [ "Knizhnik", "Gedaliah", "" ], [ "Yim", "Mark", "" ] ]
new_dataset
0.999272
2203.00796
Gedaliah Knizhnik
Gedaliah Knizhnik, Peihan Li, Xi Yu, and M. Ani Hsieh
Flow-Based Control of Marine Robots in Gyre-Like Environments
7 pages. Published at 2022 International Conference on Robotics and Automation (ICRA)
null
10.1109/ICRA46639.2022.9812331
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a flow-based control strategy that enables resource-constrained marine robots to patrol gyre-like flow environments on an orbital trajectory with a periodicity in a given range. The controller does not require a detailed model of the flow field and relies only on the robot's location relative to the center of the gyre. Instead of precisely tracking a pre-defined trajectory, the robots are tasked to stay in between two bounding trajectories with known periodicity. Furthermore, the proposed strategy leverages the surrounding flow field to minimize control effort. We prove that the proposed strategy enables robots to cycle in the flow satisfying the desired periodicity requirements. Our method is tested and validated both in simulation and in experiments using a low-cost, underactuated, surface swimming robot, i.e. the Modboat.
[ { "version": "v1", "created": "Tue, 1 Mar 2022 23:53:29 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 00:59:51 GMT" } ]
2022-07-25T00:00:00
[ [ "Knizhnik", "Gedaliah", "" ], [ "Li", "Peihan", "" ], [ "Yu", "Xi", "" ], [ "Hsieh", "M. Ani", "" ] ]
new_dataset
0.965104
2203.03041
Xuebin Qin
Xuebin Qin and Hang Dai and Xiaobin Hu and Deng-Ping Fan and Ling Shao and Luc Van Gool
Highly Accurate Dichotomous Image Segmentation
29 pages, 18 figures, ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a systematic study on a new task called dichotomous image segmentation (DIS), which aims to segment highly accurate objects from natural images. To this end, we collected the first large-scale DIS dataset, called DIS5K, which contains 5,470 high-resolution (e.g., 2K, 4K or larger) images covering camouflaged, salient, or meticulous objects in various backgrounds. DIS5K is annotated with extremely fine-grained labels. Besides, we introduce a simple intermediate supervision baseline (IS-Net) using both feature-level and mask-level guidance for DIS model training. IS-Net outperforms various cutting-edge baselines on the proposed DIS5K, making it a general self-learned supervision network that can facilitate future research in DIS. Further, we design a new metric called human correction efforts (HCE), which approximates the number of mouse clicking operations required to correct the false positives and false negatives. HCE is utilized to measure the gap between models and real-world applications and thus can complement existing metrics. Finally, we conduct the largest-scale benchmark, evaluating 16 representative segmentation models, providing a more insightful discussion regarding object complexities, and showing several potential applications (e.g., background removal, art design, 3D reconstruction). We hope these efforts can open up promising directions for both academia and industry. Project page: https://xuebinqin.github.io/dis/index.html.
[ { "version": "v1", "created": "Sun, 6 Mar 2022 20:09:19 GMT" }, { "version": "v2", "created": "Tue, 8 Mar 2022 19:13:10 GMT" }, { "version": "v3", "created": "Tue, 12 Jul 2022 07:16:02 GMT" }, { "version": "v4", "created": "Fri, 15 Jul 2022 14:28:49 GMT" } ]
2022-07-25T00:00:00
[ [ "Qin", "Xuebin", "" ], [ "Dai", "Hang", "" ], [ "Hu", "Xiaobin", "" ], [ "Fan", "Deng-Ping", "" ], [ "Shao", "Ling", "" ], [ "Van Gool", "Luc", "" ] ]
new_dataset
0.993652
2204.02445
Xianghui Xie
Xianghui Xie, Bharat Lal Bhatnagar, Gerard Pons-Moll
CHORE: Contact, Human and Object REconstruction from a single RGB image
Accepted at ECCV 2022, Camera ready version
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Most prior works on perceiving 3D humans from images reason about humans in isolation, without their surroundings. However, humans are constantly interacting with the surrounding objects, thus calling for models that can reason about not only the human but also the object and their interaction. The problem is extremely challenging due to heavy occlusions between humans and objects, diverse interaction types and depth ambiguity. In this paper, we introduce CHORE, a novel method that learns to jointly reconstruct the human and the object from a single RGB image. CHORE takes inspiration from recent advances in implicit surface learning and classical model-based fitting. We compute a neural reconstruction of human and object represented implicitly with two unsigned distance fields, a correspondence field to a parametric body and an object pose field. This allows us to robustly fit a parametric body model and a 3D object template, while reasoning about interactions. Furthermore, prior pixel-aligned implicit learning methods use synthetic data and make assumptions that are not met in the real data. We propose an elegant depth-aware scaling that allows more efficient shape learning on real data. Experiments show that our joint reconstruction learned with the proposed strategy significantly outperforms the SOTA. Our code and models are available at https://virtualhumans.mpi-inf.mpg.de/chore
[ { "version": "v1", "created": "Tue, 5 Apr 2022 18:38:06 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 16:14:33 GMT" } ]
2022-07-25T00:00:00
[ [ "Xie", "Xianghui", "" ], [ "Bhatnagar", "Bharat Lal", "" ], [ "Pons-Moll", "Gerard", "" ] ]
new_dataset
0.993946
2204.04627
Fu-Jen Tsai
Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin
Stripformer: Strip Transformer for Fast Image Deblurring
ECCV 2022 Oral Presentation
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Images taken in dynamic scenes may contain unwanted motion blur, which significantly degrades visual quality. Such blur causes short- and long-range region-specific smoothing artifacts that are often directional and non-uniform, and thus difficult to remove. Inspired by the current success of transformers on computer vision and image processing tasks, we develop Stripformer, a transformer-based architecture that constructs intra- and inter-strip tokens to reweight image features in the horizontal and vertical directions to catch blurred patterns with different orientations. It stacks interlaced intra-strip and inter-strip attention layers to reveal blur magnitudes. In addition to detecting region-specific blurred patterns of various orientations and magnitudes, Stripformer is also a token-efficient and parameter-efficient transformer model, demanding much less memory usage and computation cost than the vanilla transformer, yet it works better without relying on tremendous training data. Experimental results show that Stripformer performs favorably against state-of-the-art models in dynamic scene deblurring.
[ { "version": "v1", "created": "Sun, 10 Apr 2022 08:01:00 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 10:01:04 GMT" } ]
2022-07-25T00:00:00
[ [ "Tsai", "Fu-Jen", "" ], [ "Peng", "Yan-Tsung", "" ], [ "Lin", "Yen-Yu", "" ], [ "Tsai", "Chung-Chi", "" ], [ "Lin", "Chia-Wen", "" ] ]
new_dataset
0.984036
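The Stripformer record explains its key operation, attention within horizontal and vertical strips of the feature map; a minimal PyTorch sketch follows, treating each pixel row (then each column) as a token sequence. The head count and residual wiring are assumptions, and the paper's intra-/inter-strip stacking and reweighting details are simplified away.

```python
import torch
import torch.nn as nn

class StripAttention(nn.Module):
    """Self-attention along rows (horizontal strips) and then columns
    (vertical strips) of a feature map. A simplified sketch."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Horizontal strips: each of the B*H rows is a sequence of W tokens.
        rows = x.permute(0, 2, 3, 1).reshape(B * H, W, C)
        rows = rows + self.row_attn(rows, rows, rows)[0]
        x = rows.reshape(B, H, W, C).permute(0, 3, 1, 2)
        # Vertical strips: each of the B*W columns is a sequence of H tokens.
        cols = x.permute(0, 3, 2, 1).reshape(B * W, H, C)
        cols = cols + self.col_attn(cols, cols, cols)[0]
        return cols.reshape(B, W, H, C).permute(0, 3, 2, 1)

out = StripAttention(channels=32)(torch.randn(2, 32, 16, 24))
print(out.shape)  # torch.Size([2, 32, 16, 24])
```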
2204.14109
Mathis Petrovich
Mathis Petrovich, Michael J. Black, G\"ul Varol
TEMOS: Generating diverse human motions from textual descriptions
ECCV 2022 Camera ready
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of generating diverse 3D human motions from textual descriptions. This challenging task requires joint modeling of both modalities: understanding and extracting useful human-centric information from the text, and then generating plausible and realistic sequences of human poses. In contrast to most previous work, which focuses on generating a single, deterministic motion from a textual description, we design a variational approach that can produce multiple diverse human motions. We propose TEMOS, a text-conditioned generative model leveraging variational autoencoder (VAE) training with human motion data, in combination with a text encoder that produces distribution parameters compatible with the VAE latent space. We show that the TEMOS framework can produce both skeleton-based animations, as in prior work, and more expressive SMPL body motions. We evaluate our approach on the KIT Motion-Language benchmark and, despite being relatively straightforward, demonstrate significant improvements over the state of the art. Code and models are available on our webpage.
[ { "version": "v1", "created": "Mon, 25 Apr 2022 14:53:06 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 09:07:31 GMT" } ]
2022-07-25T00:00:00
[ [ "Petrovich", "Mathis", "" ], [ "Black", "Michael J.", "" ], [ "Varol", "Gül", "" ] ]
new_dataset
0.995562
2206.09024
Keyu Chen
Keyu Chen, Marzieh Babaeianjelodar, Yiwen Shi, Kamila Janmohamed, Rupak Sarkar, Ingmar Weber, Thomas Davidson, Munmun De Choudhury, Jonathan Huang, Shweta Yadav, Ashique Khudabukhsh, Preslav Ivanov Nakov, Chris Bauch, Orestis Papakyriakopoulos, Kaveh Khoshnood, and Navin Kumar
Partisan US News Media Representations of Syrian Refugees
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
We investigate how representations of Syrian refugees (2011-2021) differ across US partisan news outlets. We analyze 47,388 articles from the online US media about Syrian refugees to detail differences in reporting between left- and right-leaning media. We use various NLP techniques to understand these differences. Our polarization and question answering results indicated that left-leaning media tended to represent refugees as child victims who are welcome in the US, while right-leaning media cast refugees as Islamic terrorists. We noted similar results with our sentiment and offensive speech scores over time, which detail possibly unfavorable representations of refugees in right-leaning media. A strength of our work is how the different techniques we have applied validate each other. Based on our results, we provide several recommendations. Stakeholders may utilize our findings to intervene around refugee representations, and design communications campaigns that improve the way society sees refugees and possibly aid refugee outcomes.
[ { "version": "v1", "created": "Fri, 17 Jun 2022 21:58:36 GMT" } ]
2022-07-25T00:00:00
[ [ "Chen", "Keyu", "" ], [ "Babaeianjelodar", "Marzieh", "" ], [ "Shi", "Yiwen", "" ], [ "Janmohamed", "Kamila", "" ], [ "Sarkar", "Rupak", "" ], [ "Weber", "Ingmar", "" ], [ "Davidson", "Thomas", "" ], [ "De Choudhury", "Munmun", "" ], [ "Huang", "Jonathan", "" ], [ "Yadav", "Shweta", "" ], [ "Khudabukhsh", "Ashique", "" ], [ "Nakov", "Preslav Ivanov", "" ], [ "Bauch", "Chris", "" ], [ "Papakyriakopoulos", "Orestis", "" ], [ "Khoshnood", "Kaveh", "" ], [ "Kumar", "Navin", "" ] ]
new_dataset
0.998238
2206.10883
Zejiang Shen
Zejiang Shen, Kyle Lo, Lauren Yu, Nathan Dahlberg, Margo Schlanger, Doug Downey
Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities
37 pages, 2 figures, 9 tables
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
With the advent of large language models, methods for abstractive summarization have made great strides, creating potential for use in applications to aid knowledge workers processing unwieldy document collections. One such setting is the Civil Rights Litigation Clearinghouse (CRLC) (https://clearinghouse.net), which posts information about large-scale civil rights lawsuits, serving lawyers, scholars, and the general public. Today, summarization in the CRLC requires extensive training of lawyers and law students who spend hours per case understanding multiple relevant documents in order to produce high-quality summaries of key events and outcomes. Motivated by this ongoing real-world summarization effort, we introduce Multi-LexSum, a collection of 9,280 expert-authored summaries drawn from ongoing CRLC writing. Multi-LexSum presents a challenging multi-document summarization task given the length of the source documents, often exceeding two hundred pages per case. Furthermore, Multi-LexSum is distinct from other datasets in its multiple target summaries, each at a different granularity (ranging from one-sentence "extreme" summaries to multi-paragraph narrations of over five hundred words). We present extensive analysis demonstrating that despite the high-quality summaries in the training data (adhering to strict content and style guidelines), state-of-the-art summarization models perform poorly on this task. We release Multi-LexSum for further research in summarization methods as well as to facilitate development of applications to assist in the CRLC's mission at https://multilexsum.github.io.
[ { "version": "v1", "created": "Wed, 22 Jun 2022 07:26:55 GMT" }, { "version": "v2", "created": "Thu, 23 Jun 2022 23:40:10 GMT" }, { "version": "v3", "created": "Fri, 22 Jul 2022 17:37:58 GMT" } ]
2022-07-25T00:00:00
[ [ "Shen", "Zejiang", "" ], [ "Lo", "Kyle", "" ], [ "Yu", "Lauren", "" ], [ "Dahlberg", "Nathan", "" ], [ "Schlanger", "Margo", "" ], [ "Downey", "Doug", "" ] ]
new_dataset
0.996997
2206.12931
Ritesh Kumar
Ritesh Kumar, Siddharth Singh, Shyam Ratan, Mohit Raj, Sonal Sinha, Bornini Lahiri, Vivek Seshadri, Kalika Bali and Atul Kr. Ojha
Annotated Speech Corpus for Low Resource Indian Languages: Awadhi, Bhojpuri, Braj and Magahi
Speech for Social Good Workshop, 2022, Interspeech 2022
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we discuss in-progress work on the development of a speech corpus for four low-resource Indo-Aryan languages -- Awadhi, Bhojpuri, Braj and Magahi -- using the field methods of linguistic data collection. The total size of the corpus currently stands at approximately 18 hours (approx. 4-5 hours per language) and it is transcribed and annotated with grammatical information such as part-of-speech tags, morphological features and Universal Dependencies relationships. We discuss our methodology for data collection in these languages, most of which was done in the middle of the COVID-19 pandemic, with one of the aims being to generate some additional income for low-income groups speaking these languages. In the paper, we also discuss the results of the baseline experiments for automatic speech recognition systems in these languages.
[ { "version": "v1", "created": "Sun, 26 Jun 2022 17:28:38 GMT" } ]
2022-07-25T00:00:00
[ [ "Kumar", "Ritesh", "" ], [ "Singh", "Siddharth", "" ], [ "Ratan", "Shyam", "" ], [ "Raj", "Mohit", "" ], [ "Sinha", "Sonal", "" ], [ "Lahiri", "Bornini", "" ], [ "Seshadri", "Vivek", "" ], [ "Bali", "Kalika", "" ], [ "Ojha", "Atul Kr.", "" ] ]
new_dataset
0.997885
2207.04789
Ilia Petrov
Bernhard M\"o{\ss}ner, Christian Riegger, Arthur Bernhardt, Ilia Petrov
bloomRF: On Performing Range-Queries in Bloom-Filters with Piecewise-Monotone Hash Functions and Prefix Hashing
Extended version. Original accepted at EDBT 2023
null
null
null
cs.DB
http://creativecommons.org/licenses/by-nc-nd/4.0/
We introduce bloomRF as a unified method for approximate membership testing that supports both point- and range-queries. As a first core idea, bloomRF introduces novel prefix hashing to efficiently encode range information in the hash-code of the key itself. As a second key concept, bloomRF proposes novel piecewise-monotone hash-functions that preserve local order and support fast range-lookups with fewer memory accesses. bloomRF has near-optimal space complexity and constant query complexity. Although bloomRF is designed for integer domains, it supports floating-point keys and can serve as a multi-attribute filter. The evaluation in RocksDB and in a standalone library shows that it is more efficient and outperforms existing point-range-filters by up to 4x across a range of settings and distributions, while keeping the false-positive rate low.
[ { "version": "v1", "created": "Mon, 11 Jul 2022 11:42:25 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 12:54:45 GMT" } ]
2022-07-25T00:00:00
[ [ "Mößner", "Bernhard", "" ], [ "Riegger", "Christian", "" ], [ "Bernhardt", "Arthur", "" ], [ "Petrov", "Ilia", "" ] ]
new_dataset
0.953172
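bloomRF's central idea, encoding range information via prefixes of the key so that a single filter answers both point and range queries, can be illustrated with a simplified dyadic-prefix Bloom filter: every prefix of an inserted key is stored, and a range query checks the canonical dyadic blocks covering the range. The sketch below uses plain salted hashing over a bit array; bloomRF's piecewise-monotone hash functions and tuning are not reproduced here.

```python
import hashlib

W = 16                      # key domain: 16-bit unsigned integers
M, K = 1 << 14, 4           # bit-array size and number of hash functions

bits = bytearray(M // 8)

def _hashes(item):
    for i in range(K):
        h = hashlib.sha256(f"{i}:{item}".encode()).digest()
        yield int.from_bytes(h[:8], "big") % M

def add(level, prefix):
    for h in _hashes((level, prefix)):
        bits[h // 8] |= 1 << (h % 8)

def maybe_contains(level, prefix):
    return all(bits[h // 8] & (1 << (h % 8)) for h in _hashes((level, prefix)))

def insert_key(key):
    # Store every prefix of the key, from the full key up to the root.
    for level in range(W + 1):
        add(level, key >> level)

def dyadic_cover(lo, hi, level=W, prefix=0):
    """Canonical dyadic blocks (level, prefix) exactly covering [lo, hi]."""
    node_lo = prefix << level
    node_hi = node_lo + (1 << level) - 1
    if node_hi < lo or node_lo > hi:
        return []
    if lo <= node_lo and node_hi <= hi:
        return [(level, prefix)]
    return (dyadic_cover(lo, hi, level - 1, prefix << 1)
            + dyadic_cover(lo, hi, level - 1, (prefix << 1) | 1))

def range_maybe_nonempty(lo, hi):
    return any(maybe_contains(l, p) for l, p in dyadic_cover(lo, hi))

insert_key(4242)
print(range_maybe_nonempty(4000, 4300))    # True: the key lies in the range
print(range_maybe_nonempty(10000, 10100))  # False with high probability
```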
2207.06067
Omid Nejati Manzari
Omid Nejati Manzari, Amin Boudesh, Shahriar B. Shokouhi
Pyramid Transformer for Traffic Sign Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic sign detection is a vital task in the visual system of self-driving cars and the automated driving system. Recently, novel Transformer-based models have achieved encouraging results for various computer vision tasks. However, we observed that the vanilla ViT cannot yield satisfactory results in traffic sign detection, because the overall size of the datasets is very small and the class distribution of traffic signs is extremely unbalanced. To overcome this problem, a novel Pyramid Transformer with locality mechanisms is proposed in this paper. Specifically, the Pyramid Transformer has several spatial pyramid reduction layers to shrink and embed the input image into tokens with rich multi-scale context by using atrous convolutions. Moreover, it inherits an intrinsic scale invariance inductive bias and is able to learn local feature representations for objects at various scales, thereby enhancing the network's robustness against the size discrepancy of traffic signs. The experiments are conducted on the German Traffic Sign Detection Benchmark (GTSDB). The results demonstrate the superiority of the proposed model in the traffic sign detection task. More specifically, the Pyramid Transformer achieves 77.8% mAP on GTSDB when applied to the Cascade RCNN as the backbone, which surpasses most well-known and widely-used state-of-the-art models.
[ { "version": "v1", "created": "Wed, 13 Jul 2022 09:21:19 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 07:17:55 GMT" } ]
2022-07-25T00:00:00
[ [ "Manzari", "Omid Nejati", "" ], [ "Boudesh", "Amin", "" ], [ "Shokouhi", "Shahriar B.", "" ] ]
new_dataset
0.988792
2207.06706
Ariel Caputo
Ariel Caputo, Marco Emporio, Andrea Giachetti, Marco Cristani, Guido Borghi, Andrea D'Eusanio, Minh-Quan Le, Hai-Dang Nguyen, Minh-Triet Tran, F. Ambellan, M. Hanik, E. Nava-Yazdani, C. von Tycowicz
SHREC 2022 Track on Online Detection of Heterogeneous Gestures
Accepted on Computer & Graphics journal
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents the outcomes of a contest organized to evaluate methods for the online recognition of heterogeneous gestures from sequences of 3D hand poses. The task is the detection of gestures belonging to a dictionary of 16 classes characterized by different pose and motion features. The dataset features continuous sequences of hand tracking data where the gestures are interleaved with non-significant motions. The data have been captured using the Hololens 2 finger tracking system in a realistic use-case of mixed reality interaction. The evaluation is based not only on the detection performances but also on the latency and the false positives, making it possible to understand the feasibility of practical interaction tools based on the algorithms proposed. The outcomes of the contest's evaluation demonstrate the necessity of further research to reduce recognition errors, while the computational cost of the algorithms proposed is sufficiently low.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 07:24:02 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 11:51:49 GMT" } ]
2022-07-25T00:00:00
[ [ "Caputo", "Ariel", "" ], [ "Emporio", "Marco", "" ], [ "Giachetti", "Andrea", "" ], [ "Cristani", "Marco", "" ], [ "Borghi", "Guido", "" ], [ "D'Eusanio", "Andrea", "" ], [ "Le", "Minh-Quan", "" ], [ "Nguyen", "Hai-Dang", "" ], [ "Tran", "Minh-Triet", "" ], [ "Ambellan", "F.", "" ], [ "Hanik", "M.", "" ], [ "Nava-Yazdani", "E.", "" ], [ "von Tycowicz", "C.", "" ] ]
new_dataset
0.995438
2207.09298
Houkun Zhu
Houkun Zhu, Dominik Scheinert, Lauritz Thamsen, Kordian Gontarska, and Odej Kao
Magpie: Automatically Tuning Static Parameters for Distributed File Systems using Deep Reinforcement Learning
Accepted at The IEEE International Conference on Cloud Engineering (IC2E) conference 2022
null
null
null
cs.DC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed file systems are widely used nowadays, yet using their default configurations is often not optimal. At the same time, tuning configuration parameters is typically challenging and time-consuming. It demands expertise, and tuning operations can also be expensive. This is especially the case for static parameters, where changes take effect only after a restart of the system or workloads. We propose a novel approach, Magpie, which utilizes deep reinforcement learning to tune static parameters by strategically exploring and exploiting configuration parameter spaces. To boost the tuning of the static parameters, our method employs both server and client metrics of distributed file systems to understand the relationship between static parameters and performance. Our empirical evaluation results show that Magpie can noticeably improve the performance of the distributed file system Lustre: our approach achieves on average 91.8% throughput gains over the default configuration when tuning towards single performance indicator optimization, and 39.7% more throughput gains than the baseline.
[ { "version": "v1", "created": "Tue, 19 Jul 2022 14:32:07 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 13:53:19 GMT" } ]
2022-07-25T00:00:00
[ [ "Zhu", "Houkun", "" ], [ "Scheinert", "Dominik", "" ], [ "Thamsen", "Lauritz", "" ], [ "Gontarska", "Kordian", "" ], [ "Kao", "Odej", "" ] ]
new_dataset
0.979879
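Magpie's full DRL agent is beyond the scope of an abstract, but the outer loop it automates (propose a static configuration, restart, measure throughput, learn) can be sketched with a simple epsilon-greedy search standing in for the reinforcement-learning policy. The parameter names and the `benchmark` function below are hypothetical placeholders, not Lustre's actual knobs or the paper's agent.

```python
import random

# Hypothetical static parameters and candidate values (not real Lustre knobs).
PARAM_SPACE = {
    "stripe_count": [1, 2, 4, 8],
    "stripe_size_mb": [1, 4, 16],
    "server_threads": [8, 16, 32],
}

def benchmark(config):
    """Placeholder for: apply `config`, restart the file system, run a
    workload, and return the measured throughput (MB/s)."""
    return 100 + 10 * config["stripe_count"] - abs(config["server_threads"] - 16)

def tune(episodes=50, eps=0.2, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(episodes):
        if best_cfg is None or rng.random() < eps:
            cfg = {k: rng.choice(v) for k, v in PARAM_SPACE.items()}  # explore
        else:
            cfg = dict(best_cfg)                       # exploit the best so far
            k = rng.choice(list(PARAM_SPACE))          # ...mutating one knob
            cfg[k] = rng.choice(PARAM_SPACE[k])
        score = benchmark(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

print(tune())
```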
2207.10106
Tatsuya Matsushima
Tatsuya Matsushima, Yuki Noguchi, Jumpei Arima, Toshiki Aoki, Yuki Okita, Yuya Ikeda, Koki Ishimoto, Shohei Taniguchi, Yuki Yamashita, Shoichi Seto, Shixiang Shane Gu, Yusuke Iwasawa, Yutaka Matsuo
World Robot Challenge 2020 -- Partner Robot: A Data-Driven Approach for Room Tidying with Mobile Manipulator
null
null
null
null
cs.RO cs.AI cs.CV cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tidying up a household environment using a mobile manipulator poses various challenges in robotics, such as adaptation to large real-world environmental variations, and safe and robust deployment in the presence of humans. The Partner Robot Challenge in the World Robot Challenge (WRC) 2020, a global competition held in September 2021, benchmarked tidying tasks in real home environments, and importantly, tested full system performance. For this challenge, we developed an entire household service robot system, which leverages a data-driven approach to adapt to the numerous edge cases that occur during execution, instead of classical manually pre-programmed solutions. In this paper, we describe the core ingredients of the proposed robot system, including visual recognition, object manipulation, and motion planning. Our robot system won the second prize, verifying the effectiveness and potential of data-driven robot systems for mobile manipulation in home environments.
[ { "version": "v1", "created": "Wed, 20 Jul 2022 18:00:20 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 01:44:49 GMT" } ]
2022-07-25T00:00:00
[ [ "Matsushima", "Tatsuya", "" ], [ "Noguchi", "Yuki", "" ], [ "Arima", "Jumpei", "" ], [ "Aoki", "Toshiki", "" ], [ "Okita", "Yuki", "" ], [ "Ikeda", "Yuya", "" ], [ "Ishimoto", "Koki", "" ], [ "Taniguchi", "Shohei", "" ], [ "Yamashita", "Yuki", "" ], [ "Seto", "Shoichi", "" ], [ "Gu", "Shixiang Shane", "" ], [ "Iwasawa", "Yusuke", "" ], [ "Matsuo", "Yutaka", "" ] ]
new_dataset
0.982247
2207.10120
Davide Moltisanti
Davide Moltisanti, Jinyi Wu, Bo Dai, Chen Change Loy
BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis
ECCV 2022. Dataset available at https://github.com/dmoltisanti/brace
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative models for audio-conditioned dance motion synthesis map music features to dance movements. Models are trained to associate motion patterns with audio patterns, usually without explicit knowledge of the human body. This approach relies on a few assumptions: strong music-dance correlation, controlled motion data and relatively simple poses and movements. These characteristics are found in all existing datasets for dance motion synthesis, and indeed recent methods can achieve good results. We introduce a new dataset aiming to challenge these common assumptions, compiling a set of dynamic dance sequences displaying complex human poses. We focus on breakdancing, which features acrobatic moves and tangled postures. We source our data from the Red Bull BC One competition videos. Estimating human keypoints from these videos is difficult due to the complexity of the dance, as well as the multiple moving cameras recording setup. We adopt a hybrid labelling pipeline leveraging deep estimation models as well as manual annotations to obtain good quality keypoint sequences at a reduced cost. Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses. We test state-of-the-art methods on BRACE, showing their limitations when evaluated on complex sequences. Our dataset can readily foster advances in dance motion synthesis. With intricate poses and swift movements, models are forced to go beyond learning a mapping between modalities and reason more effectively about body structure and movements.
[ { "version": "v1", "created": "Wed, 20 Jul 2022 18:03:54 GMT" }, { "version": "v2", "created": "Fri, 22 Jul 2022 13:02:35 GMT" } ]
2022-07-25T00:00:00
[ [ "Moltisanti", "Davide", "" ], [ "Wu", "Jinyi", "" ], [ "Dai", "Bo", "" ], [ "Loy", "Chen Change", "" ] ]
new_dataset
0.999836
2207.10479
Lukas Daniel Klausner
Angelika Adensamer and Rita Gsenger and Lukas Daniel Klausner
Wer ist schuld, wenn Algorithmen irren? Entscheidungsautomatisierung, Organisationen und Verantwortung
18 pages, 2 figures, in German
Publikation zur Wissenschaftskonferenz der Arbeiterkammer Vorarlberg im September 2021 (Forschung 1: Technikfolgenabschaetzung aus Arbeitnehmer:innenperspektive), 2022, 47-73
null
null
cs.CY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Algorithmic decision support (ADS) is increasingly used in a whole array of different contexts and structures in various areas of society, influencing many people's lives. Its use raises questions, among others, about accountability, transparency and responsibility. Our article aims to give a brief overview of the central issues connected to ADS, responsibility and decision-making in organisational contexts and identify open questions and research gaps. Furthermore, we describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS within their organisational context.
[ { "version": "v1", "created": "Thu, 21 Jul 2022 13:45:10 GMT" } ]
2022-07-25T00:00:00
[ [ "Adensamer", "Angelika", "" ], [ "Gsenger", "Rita", "" ], [ "Klausner", "Lukas Daniel", "" ] ]
new_dataset
0.998565
2207.10690
Yue Sun
Yue Sun, Honggang Zhang, Zhuoming Huang, and Benyuan Liu
R2P: A Deep Learning Model from mmWave Radar to Point Cloud
arXiv admin note: text overlap with arXiv:2109.09188
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent research has shown the effectiveness of mmWave radar sensing for object detection in low visibility environments, which makes it an ideal technique in autonomous navigation systems. In this paper, we introduce Radar to Point Cloud (R2P), a deep learning model that generates smooth, dense, and highly accurate point cloud representations of a 3D object with fine geometry details, based on rough and sparse point clouds with incorrect points obtained from mmWave radar. These input point clouds are converted from the 2D depth images that are generated from raw mmWave radar sensor data, and are characterized by inconsistency as well as orientation and shape errors. R2P utilizes an architecture of two sequential deep learning encoder-decoder blocks to extract the essential features of those radar-based input point clouds of an object when observed from multiple viewpoints, and to ensure the internal consistency of a generated output point cloud and its accurate and detailed shape reconstruction of the original object. We implement R2P to replace Stage 2 of our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system. Our experiments demonstrate the significant performance improvement of R2P over popular existing methods such as PointNet, PCN, and the original 3DRIMR design.
[ { "version": "v1", "created": "Thu, 21 Jul 2022 18:01:05 GMT" } ]
2022-07-25T00:00:00
[ [ "Sun", "Yue", "" ], [ "Zhang", "Honggang", "" ], [ "Huang", "Zhuoming", "" ], [ "Liu", "Benyuan", "" ] ]
new_dataset
0.997172
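R2P's two sequential encoder-decoder blocks map a rough radar point cloud to a refined one; the sketch below uses a PointNet-style encoder (shared per-point MLP plus max-pooling) and a fully connected decoder, a common choice for such blocks. All sizes and the exact layer design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """PointNet-style block: (B, N, 3) points -> global feature -> (B, M, 3)."""
    def __init__(self, n_out):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, 256), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(256, 512), nn.ReLU(),
                                     nn.Linear(512, n_out * 3))
        self.n_out = n_out

    def forward(self, pts):
        feat = self.point_mlp(pts).max(dim=1).values  # permutation-invariant pooling
        return self.decoder(feat).view(-1, self.n_out, 3)

class R2PSketch(nn.Module):
    """Two sequential blocks: coarse refinement, then denser reconstruction."""
    def __init__(self):
        super().__init__()
        self.block1 = EncoderDecoder(n_out=256)
        self.block2 = EncoderDecoder(n_out=1024)

    def forward(self, radar_pts):
        return self.block2(self.block1(radar_pts))

out = R2PSketch()(torch.randn(2, 128, 3))  # sparse radar points in, dense cloud out
print(out.shape)                           # torch.Size([2, 1024, 3])
```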
2207.10693
Anton Bredenbeck
Anton Bredenbeck, Shubham Vyas, Martin Zwick, Dorit Borrmann, Miguel Olivares-Mendez, Andreas N\"uchter
Trajectory Optimization and Following for a Three Degrees of Freedom Overactuated Floating Platform
Accepted to IROS2022, code at https://gitlab.com/anton.bredenbeck/ff-trajectories
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Space robotics applications, such as Active Space Debris Removal (ASDR), require representative testing before launch. A commonly used approach to emulate the microgravity environment in space is air-bearing based platforms on flat-floors, such as the European Space Agency's Orbital Robotics and GNC Lab (ORGL). This work proposes a control architecture for a floating platform at the ORGL, equipped with eight solenoid-valve-based thrusters and one reaction wheel. The control architecture consists of two main components: a trajectory planner that finds optimal trajectories connecting two states and a trajectory follower that follows any physically feasible trajectory. The controller is first evaluated within an introduced simulation, achieving a 100% success rate at finding and following trajectories to the origin within a Monte-Carlo test. Individual trajectories are also successfully followed by the physical system. In this work, we showcase the ability of the controller to reject disturbances and follow a straight-line trajectory within tens of centimeters.
[ { "version": "v1", "created": "Thu, 21 Jul 2022 18:06:20 GMT" } ]
2022-07-25T00:00:00
[ [ "Bredenbeck", "Anton", "" ], [ "Vyas", "Shubham", "" ], [ "Zwick", "Martin", "" ], [ "Borrmann", "Dorit", "" ], [ "Olivares-Mendez", "Miguel", "" ], [ "Nüchter", "Andreas", "" ] ]
new_dataset
0.970499
2207.10761
Gabriel Sarch
Gabriel Sarch, Zhaoyuan Fang, Adam W. Harley, Paul Schydlo, Michael J. Tarr, Saurabh Gupta, and Katerina Fragkiadaki
TIDEE: Tidying Up Novel Rooms using Visuo-Semantic Commonsense Priors
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce TIDEE, an embodied agent that tidies up a disordered scene based on learned commonsense object placement and room arrangement priors. TIDEE explores a home environment, detects objects that are out of their natural place, infers plausible object contexts for them, localizes such contexts in the current scene, and repositions the objects. Commonsense priors are encoded in three modules: i) visuo-semantic detectors that detect out-of-place objects, ii) an associative neural graph memory of objects and spatial relations that proposes plausible semantic receptacles and surfaces for object repositions, and iii) a visual search network that guides the agent's exploration for efficiently localizing the receptacle-of-interest in the current scene to reposition the object. We test TIDEE on tidying up disorganized scenes in the AI2THOR simulation environment. TIDEE carries out the task directly from pixel and raw depth input without ever having observed the same room beforehand, relying only on priors learned from a separate set of training houses. Human evaluations on the resulting room reorganizations show TIDEE outperforms ablative versions of the model that do not use one or more of the commonsense priors. On a related room rearrangement benchmark that allows the agent to view the goal state prior to rearrangement, a simplified version of our model significantly outperforms a top-performing method by a large margin. Code and data are available at the project website: https://tidee-agent.github.io/.
[ { "version": "v1", "created": "Thu, 21 Jul 2022 21:19:18 GMT" } ]
2022-07-25T00:00:00
[ [ "Sarch", "Gabriel", "" ], [ "Fang", "Zhaoyuan", "" ], [ "Harley", "Adam W.", "" ], [ "Schydlo", "Paul", "" ], [ "Tarr", "Michael J.", "" ], [ "Gupta", "Saurabh", "" ], [ "Fragkiadaki", "Katerina", "" ] ]
new_dataset
0.992732
2207.10789
Omer Aydin
Omer Aydin
Authentication and Billing Scheme for The Electric Vehicles: EVABS
null
null
10.33461/uybisbbd.1075481
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
The need for alternative energy sources has increased due to the dwindling supply of fossil fuels and the harm their use causes to the environment. Today, fossil fuels used as an energy source in land, sea or air vehicles are rapidly being replaced by different energy sources. The number and types of vehicles using energy sources other than fossil fuels are also increasing. Electricity stands out among the energy sources used. The possibility of generating electricity that is renewable, compatible with nature and lower in cost provides a great advantage. For all these reasons, the use of electric vehicles is increasing day by day. Various solutions continue to be developed for the charging systems and post-charge billing processes of these vehicles, but standards for them have not yet fully formed. In this study, an authentication and billing scheme is proposed for the charging and post-charging billing processes of electric land vehicles, keeping security and privacy in the foreground. This scheme is named EVABS, which derives from the phrase "Electric Vehicle Authentication and Billing Scheme". In the proposed scheme, data communication is encrypted, payment transactions are handled securely and parties can authenticate over wired or wireless channels. The security of the proposed scheme has been examined theoretically and it has been determined that it is secure against known attacks.
[ { "version": "v1", "created": "Sat, 2 Jul 2022 23:29:24 GMT" } ]
2022-07-25T00:00:00
[ [ "Aydin", "Omer", "" ] ]
new_dataset
0.959607
2207.10795
Conner Bender
Conner Bender
DJI drone IDs are not encrypted
13 pages, 15 figures, 5 tables, 10 algorithms
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Drones are widely used in the energy, construction, agriculture, transportation, warehousing, real estate and movie industries. Key applications include surveys, inspections, deliveries and cinematography. With approximately 70-80% of the global market share of commercial off-the-shelf drones, Da-Jiang Innovations (DJI), headquartered in Shenzhen, China, essentially monopolizes the drone market. As commercial off-the-shelf drone sales steadily rise, the Federal Aviation Administration has instituted regulations to protect the federal airspace. DJI has become a pioneer in developing remote identification technology in the form of drone ID (also known as AeroScope signals). The company touted its drone ID implementation as "encrypted", a claim that was later proved incorrect; even so, it has remained a mystery how one can grab and decode drone IDs over the air with low-cost radio frequency hardware in real time. This research paper discusses a methodology using radio software and hardware to detect both Enhanced Wi-Fi and OcuSync drone IDs, describes the three types of drone ID packet structures, and presents a functioning prototype of a DJI OcuSync detection system equipped with two HackRF Ones.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 18:15:27 GMT" } ]
2022-07-25T00:00:00
[ [ "Bender", "Conner", "" ] ]
new_dataset
0.987524
2207.10806
Andrew Critch PhD
Andrew Critch
WordSig: QR streams enabling platform-independent self-identification that's impossible to deepfake
null
null
null
null
cs.CR cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Deepfakes can degrade the fabric of society by limiting our ability to trust video content from leaders, authorities, and even friends. Cryptographically secure digital signatures may be used by video streaming platforms to endorse content, but these signatures are applied by the content distributor rather than the participants in the video. We introduce WordSig, a simple protocol allowing video participants to digitally sign the words they speak using a stream of QR codes, and allowing viewers to verify the consistency of signatures across videos. This allows establishing a trusted connection between the viewer and the participant that is not mediated by the content distributor. Given the widespread adoption of QR codes for distributing hyperlinks and vaccination records, and the increasing prevalence of celebrity deepfakes, 2022 or later may be a good time for public figures to begin using and promoting QR-based self-authentication tools.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 17:23:01 GMT" } ]
2022-07-25T00:00:00
[ [ "Critch", "Andrew", "" ] ]
new_dataset
0.991907
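WordSig's core mechanism, per the record above, is signing spoken words and publishing the signatures as a stream of QR codes; a sketch of the cryptographic half follows, using Ed25519 from the third-party `cryptography` package. The chunking by word groups, the payload format, and the omitted QR rendering step are all illustrative assumptions, not the protocol's specification.

```python
import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The speaker holds a long-term key; the public key is shared once out of band.
speaker_key = Ed25519PrivateKey.generate()
public_key = speaker_key.public_key()

def sign_chunk(words, index):
    """Sign a small group of spoken words; the returned string would be
    rendered as one QR code in the video frame (rendering omitted here)."""
    message = f"{index}|{words}".encode()
    sig = speaker_key.sign(message)
    return base64.b64encode(message + b"|" + sig).decode()

def verify_chunk(payload):
    raw = base64.b64decode(payload)
    message, sig = raw[:-65], raw[-64:]  # Ed25519 signatures are 64 bytes
    try:
        public_key.verify(sig, message)
        return message.decode()
    except InvalidSignature:
        return None

qr_payload = sign_chunk("we choose to go to the moon", index=0)
print(verify_chunk(qr_payload))  # "0|we choose to go to the moon"
```

A viewer who sees the same public key verify across many videos gains confidence the speaker is the same person, which is the cross-video consistency check the abstract describes.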
2207.10810
Hamed Farkhari
Joseanne Viana, Hamed Farkhari, Luis Miguel Campos, Pedro Sebastiao, Katerina Koutlia, Sandra Lagen, Luis Bernardo, Rui Dinis
A Convolutional Attention Based Deep Network Solution for UAV Network Attack Recognition over Fading Channels and Interference
6 pages, 6 figures
null
null
null
cs.CR cs.LG cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When users exchange data with Unmanned Aerial Vehicles (UAVs) over air-to-ground (A2G) wireless communication networks, they expose the link to attacks that could increase packet loss and might disrupt connectivity. For example, in emergency deliveries, losing control information (i.e., data related to the UAV control communication) might result in accidents that cause UAV destruction and damage to buildings or other elements in a city. These issues must be addressed in 5G and 6G scenarios. This research offers a deep learning (DL) approach for detecting attacks on UAVs equipped with orthogonal frequency division multiplexing (OFDM) receivers on Clustered Delay Line (CDL) channels in highly complex scenarios involving authenticated terrestrial users, as well as attackers in unknown locations. We use the two observable parameters available in 5G UAV connections: the Received Signal Strength Indicator (RSSI) and the Signal to Interference plus Noise Ratio (SINR). The proposed algorithm generalizes to attacks that do not occur during training. Further, it can identify all the attackers in an environment with 20 terrestrial users. A deeper investigation into the timing requirements for recognizing attacks shows that, after training, the minimum time necessary after the attack begins is 100 ms, and the minimum attack power is 2 dBm, which is the same power that the authenticated UAV uses. Our algorithm also detects moving attackers from a distance of 500 m.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 22:08:12 GMT" } ]
2022-07-25T00:00:00
[ [ "Viana", "Joseanne", "" ], [ "Farkhari", "Hamed", "" ], [ "Campos", "Luis Miguel", "" ], [ "Sebastiao", "Pedro", "" ], [ "Koutlia", "Katerina", "" ], [ "Lagen", "Sandra", "" ], [ "Bernardo", "Luis", "" ], [ "Dinis", "Rui", "" ] ]
new_dataset
0.992864
2207.10812
Ammar Haydari
Ammar Haydari, Yasin Yilmaz
RSU-Based Online Intrusion Detection and Mitigation for VANET
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Secure vehicular communication is a critical factor for secure traffic management. Effective security in intelligent transportation systems (ITS) requires effective and timely intrusion detection systems (IDS). In this paper, we consider false data injection attacks and distributed denial-of-service (DDoS) attacks, especially stealthy DDoS attacks, targeting the integrity and availability, respectively, of vehicular ad-hoc networks (VANET). Novel statistical intrusion detection and mitigation techniques based on centralized communications through roadside units (RSU) are proposed for the considered attacks. The performance of the proposed methods is evaluated using a traffic simulator and a real traffic dataset. Comparisons with the state-of-the-art solutions clearly demonstrate the superior performance of the proposed methods in terms of quick and accurate detection and localization of cyberattacks.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 19:26:46 GMT" } ]
2022-07-25T00:00:00
[ [ "Haydari", "Ammar", "" ], [ "Yilmaz", "Yasin", "" ] ]
new_dataset
0.972064
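The record above leaves the statistical detectors unspecified; a standard building block for catching stealthy, low-rate anomalies in a traffic stream is the CUSUM test, sketched below on a hypothetical per-RSU message-rate series. The thresholds and drift values are illustrative, and this is not necessarily the statistic the paper uses.

```python
def cusum(samples, mean, drift, threshold):
    """One-sided CUSUM: flag the first index where the cumulative positive
    deviation of `samples` from `mean` (minus a `drift` allowance) exceeds
    `threshold`. Returns the alarm index or None."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mean) - drift)
        if s > threshold:
            return i
    return None

# Hypothetical messages-per-second seen by one RSU: a stealthy attack adds a
# small but persistent bump from t=10 that a fixed threshold would miss.
rates = [50, 51, 49, 50, 52, 48, 50, 51, 49, 50] + [54] * 20
print(cusum(rates, mean=50.0, drift=1.0, threshold=20.0))  # alarm around t = 16
```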
2207.10817
Shakeel Ahmad Sheikh
Shakeel Ahmad Sheikh, Md Sahidullah, Fabrice Hirsch, Slim Ouni
End-to-End and Self-Supervised Learning for ComParE 2022 Stuttering Sub-Challenge
Accepted at the ACM MM 2022 Conference: Grand Challenges. © Owner/Author | ACM 2022. This is the author's version of the work. It is posted here for your personal use. Not for redistribution.
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/publicdomain/zero/1.0/
In this paper, we present end-to-end and speech embedding based systems trained in a self-supervised fashion to participate in the ACM Multimedia 2022 ComParE Challenge, specifically the stuttering sub-challenge. In particular, we exploit the embeddings from the pre-trained Wav2Vec2.0 model for stuttering detection (SD) on the KSoF dataset. After embedding extraction, we benchmark with several methods for SD. Our proposed self-supervised SD system achieves a UAR of 36.9% and 41.0% on the validation and test sets respectively, which is 31.32% (validation set) and 1.49% (test set) higher than the best (DeepSpectrum) challenge baseline (CBL). Moreover, we show that concatenating layer embeddings with Mel-frequency cepstral coefficient (MFCC) features further improves the UAR by 33.81% and 5.45% on the validation and test sets respectively over the CBL. Finally, we demonstrate that summing the information across all the layers of Wav2Vec2.0 surpasses the CBL by a relative margin of 45.91% and 5.69% on the validation and test sets respectively. Grand-challenge: Computational Paralinguistics ChallengE
[ { "version": "v1", "created": "Wed, 20 Jul 2022 11:57:31 GMT" } ]
2022-07-25T00:00:00
[ [ "Sheikh", "Shakeel Ahmad", "" ], [ "Sahidullah", "Md", "" ], [ "Hirsch", "Fabrice", "" ], [ "Ouni", "Slim", "" ] ]
new_dataset
0.991371
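The record reports that concatenating Wav2Vec2.0 layer embeddings with MFCCs helps; the sketch below shows one plausible feature-extraction path using the `transformers` and `torchaudio` packages with a small downstream classifier. The checkpoint name, mean-pooling, and SVM choice are assumptions; the challenge pipeline's details are not reproduced.

```python
import torch
import torchaudio
from sklearn.svm import SVC
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")  # assumed checkpoint
mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=13)

def embed(waveform):
    """Concatenate a mean-pooled Wav2Vec2 hidden state with mean-pooled MFCCs."""
    with torch.no_grad():
        hidden = model(waveform).last_hidden_state      # (1, T', 768)
    w2v_feat = hidden.mean(dim=1).squeeze(0)            # (768,)
    mfcc_feat = mfcc(waveform).mean(dim=-1).squeeze(0)  # (13,)
    return torch.cat([w2v_feat, mfcc_feat]).numpy()

# Toy stand-ins for 3-second, 16 kHz clips with binary stutter labels.
clips = [torch.randn(1, 48000) for _ in range(8)]
labels = [0, 1, 0, 1, 0, 1, 0, 1]
X = [embed(c) for c in clips]
clf = SVC().fit(X, labels)
print(clf.predict(X[:2]))
```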
2207.10823
Keita Emura
Kota Chin, Keita Emura, Kazumasa Omote, Shingo Sato
A Sealed-bid Auction with Fund Binding: Preventing Maximum Bidding Price Leakage
null
null
null
null
cs.CR cs.GT
http://creativecommons.org/licenses/by/4.0/
In an open-bid auction, a bidder can know the budgets of other bidders. Thus, a sealed-bid auction that hides bidding prices is desirable. However, previous sealed-bid auction protocols have found it difficult to provide a ``fund binding'' property, which would guarantee that a bidder has funds greater than or equal to the bidding price and that the funds are forcibly withdrawn when the bidder wins. Thus, such protocols are vulnerable to false bidding. As a solution, many protocols employ a simple deposit method in which each bidder sends a deposit to a smart contract, which is greater than or equal to the bidding price, before the bidding phase. However, this deposit reveals the maximum bidding price, and it is preferable to hide this information. In this paper, we propose a sealed-bid auction protocol that provides a fund binding property. Our protocol not only hides the bidding price and the maximum bidding price, but also provides fund binding, simultaneously. For hiding the maximum bidding price, we exploit the fact that usual Ethereum transactions and transactions for sending funds to a one-time address have the same transaction structure, and thus appear indistinguishable. We discuss the extent to which bidding transactions are hidden. We also employ DECO (Zhang et al., CCS 2020), which proves the validity of data to a verifier without showing the data itself. Finally, we present our implementation, report the transaction fees it requires, and compare it to a sealed-bid auction protocol employing the simple deposit method.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 00:15:02 GMT" } ]
2022-07-25T00:00:00
[ [ "Chin", "Kota", "" ], [ "Emura", "Keita", "" ], [ "Omote", "Kazumasa", "" ], [ "Sato", "Shingo", "" ] ]
new_dataset
0.950245
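For context on what the record improves upon, the classic way to hide bids without any fund binding is a hash-based commit-reveal sealed bid, sketched below in plain Python. This toy shows only bid hiding via commitments; the paper's fund binding, one-time-address technique, and DECO integration are deliberately out of scope.

```python
import hashlib
import os

def commit(bid_wei, nonce):
    """Commitment to a bid: H(bid || nonce), published during the bidding phase."""
    return hashlib.sha256(f"{bid_wei}:{nonce.hex()}".encode()).hexdigest()

# Bidding phase: each bidder publishes only a commitment.
bidders = {}
for name, bid in [("alice", 70), ("bob", 90), ("carol", 85)]:
    nonce = os.urandom(16)
    bidders[name] = {"bid": bid, "nonce": nonce, "com": commit(bid, nonce)}

# Reveal phase: bidders open their commitments; everyone re-checks them.
revealed = {}
for name, b in bidders.items():
    assert commit(b["bid"], b["nonce"]) == b["com"], f"{name} cheated"
    revealed[name] = b["bid"]

winner = max(revealed, key=revealed.get)
print(winner, revealed[winner])  # bob 90
```

Note that nothing here guarantees the winner actually holds the funds, which is exactly the gap the fund binding property addresses.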
2207.10931
Jonathan Bourne
Jonathan Bourne, Andrea Ingianni, Rex McKenzie
What's in the laundromat? Mapping and characterising offshore owned domestic property in London
27 pages, 7 figures, 7 tables
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The UK, particularly London, is a global hub for money laundering, a significant portion of which uses domestic property. However, understanding the distribution and characteristics of offshore owned domestic property in the UK is challenging due to data availability. This paper attempts to remedy that situation by enhancing a publicly available dataset of UK property owned by offshore companies. We create a data processing pipeline which draws on several datasets and machine learning techniques to create a parsed set of addresses classified into six use classes. The enhanced dataset contains 138,000 properties, 44,000 more than the original dataset. The majority are domestic (95k), with a disproportionate amount of those in London (42k). The average offshore domestic property in London is worth 1.33 million GBP; collectively, this amounts to approximately 56 billion GBP. We perform an in-depth analysis of the offshore domestic property in London, comparing the price, distribution and entropy/concentration with Airbnb property, low-use/empty property and conventional domestic property. We estimate that the total amount of offshore, low-use and Airbnb property in London is between 144,000 and 164,000 properties and that they are collectively worth between 145 and 174 billion GBP. Furthermore, offshore domestic property is more expensive and has higher entropy/concentration than all other property types. In addition, we identify two different types of offshore property, nested and individual, which have different price and distribution characteristics. Finally, we release the enhanced offshore property dataset, the complete low-use London dataset and the pipeline for creating the enhanced dataset to reduce the barriers to studying this topic.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 08:08:21 GMT" } ]
2022-07-25T00:00:00
[ [ "Bourne", "Jonathan", "" ], [ "Ingianni", "Andrea", "" ], [ "McKenzie", "Rex", "" ] ]
new_dataset
0.999488
2207.10950
Peter Naylor
Peter Naylor, Yao-Hung Hubert Tsai, Marick La\'e and Makoto Yamada
Scale dependant layer for self-supervised nuclei encoding
13 pages, 6 figures, 2 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent developments in self-supervised learning make it possible to further reduce human intervention in multi-step pipelines where the focus revolves around particular objects of interest. In the present paper, the focus lies on the nuclei in histopathology images. In particular, we aim at extracting cellular information in an unsupervised manner for a downstream task. As nuclei present themselves in a variety of sizes, we propose a new scale-dependent convolutional layer to bypass scaling issues when resizing nuclei. On three nuclei datasets, we benchmark the following methods: handcrafted features, pre-trained ResNet, supervised ResNet and self-supervised features. We show that the proposed convolutional layer boosts performance and that this layer combined with Barlow Twins allows for better nuclei encoding than the supervised paradigm in the low-sample setting, and outperforms all other proposed unsupervised methods. In addition, we extend the existing TNBC dataset to incorporate nuclei class annotations in order to enrich and publicly release a small-sample-setting dataset for nuclei segmentation and classification.
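One plausible reading of a scale-dependent convolution, sketched below in PyTorch, is a set of parallel branches with different dilation rates, where the branch is chosen per sample from a given nucleus-scale input. This is an illustrative interpretation, not the authors' exact layer; the thresholds and sizes are made up.

```python
# Hedged sketch: select a dilation branch per sample based on nucleus scale.
import torch
import torch.nn as nn

class ScaleDependentConv(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # illustrative thresholds mapping a nucleus diameter to a branch index
        self.thresholds = torch.tensor([16.0, 32.0])

    def forward(self, x, scale):
        # x: (B, C, H, W); scale: (B,) estimated nucleus diameter in pixels
        idx = torch.bucketize(scale, self.thresholds)          # branch per sample
        out = torch.stack([branch(x) for branch in self.branches])  # (K, B, C, H, W)
        return out[idx, torch.arange(x.size(0))]

layer = ScaleDependentConv(3, 8)
feats = layer(torch.randn(4, 3, 64, 64), scale=torch.tensor([10., 20., 40., 30.]))
print(feats.shape)  # torch.Size([4, 8, 64, 64])
```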
[ { "version": "v1", "created": "Fri, 22 Jul 2022 08:56:57 GMT" } ]
2022-07-25T00:00:00
[ [ "Naylor", "Peter", "" ], [ "Tsai", "Yao-Hung Hubert", "" ], [ "Laé", "Marick", "" ], [ "Yamada", "Makoto", "" ] ]
new_dataset
0.997104
2207.10953
Beichen Sun
Guanyu Zhang, Beichen Sun, Yuehan Qi, Yang Liu
Visible and Near Infrared Image Fusion Based on Texture Information
10 pages,11 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-sensor fusion is widely used in the environment perception systems of autonomous vehicles. It mitigates the interference caused by environmental changes and makes the whole driving system safer and more reliable. In this paper, a novel visible and near-infrared fusion method based on texture information is proposed to enhance unstructured environmental images. It addresses the problems of artifacts, information loss and noise in traditional visible and near-infrared image fusion methods. First, the structure information of the visible image (RGB) and the near-infrared image (NIR) after texture removal is obtained by relative total variation (RTV) calculation and used as the base layer of the fused image. Second, a Bayesian classification model is established to calculate the noise weight, and the noise in the visible image is adaptively filtered by a joint bilateral filter. Finally, the fused image is obtained by color space conversion. The experimental results demonstrate that the proposed algorithm can preserve the spectral characteristics and the unique information of visible and near-infrared images without artifacts or color distortion, and exhibits good robustness while preserving unique texture.
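A minimal base/detail fusion sketch follows, with Gaussian smoothing standing in for the paper's RTV texture removal and a fixed weight replacing the Bayesian noise model. It is illustrative only, not the authors' pipeline; all parameter values are assumptions.

```python
# Hedged sketch: inject NIR texture detail into the visible image's base layer.
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_rgb_nir(rgb, nir, sigma=3.0, detail_weight=0.6):
    # rgb: (H, W, 3) floats in [0, 1]; nir: (H, W) float in [0, 1]
    luma = rgb.mean(axis=2)
    base = gaussian_filter(luma, sigma)              # structure ("base") layer
    nir_detail = nir - gaussian_filter(nir, sigma)   # NIR texture detail
    fused_luma = np.clip(base + detail_weight * nir_detail, 0.0, 1.0)
    # reinject fused luminance while keeping the chromaticity of the RGB image
    ratio = fused_luma / np.maximum(luma, 1e-6)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)

rgb = np.random.rand(128, 128, 3)
nir = np.random.rand(128, 128)
print(fuse_rgb_nir(rgb, nir).shape)  # (128, 128, 3)
```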
[ { "version": "v1", "created": "Fri, 22 Jul 2022 09:02:17 GMT" } ]
2022-07-25T00:00:00
[ [ "Zhang", "Guanyu", "" ], [ "Sun", "Beichen", "" ], [ "Qi", "Yuehan", "" ], [ "Liu", "Yang", "" ] ]
new_dataset
0.971805
2207.10955
Hang Ye
Hang Ye, Wentao Zhu, Chunyu Wang, Rujie Wu, Yizhou Wang
Faster VoxelPose: Real-time 3D Human Pose Estimation by Orthographic Projection
22 pages, 7 figures, submitted to ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While voxel-based methods have achieved promising results for multi-person 3D pose estimation from multiple cameras, they suffer from heavy computation burdens, especially for large scenes. We present Faster VoxelPose to address this challenge by re-projecting the feature volume onto the three two-dimensional coordinate planes and estimating the X, Y and Z coordinates from them separately. To that end, we first localize each person with a 3D bounding box by estimating a 2D box and its height from the volume features projected onto the xy-plane and the z-axis, respectively. Then, for each person, we estimate partial joint coordinates from the three coordinate planes separately and fuse them to obtain the final 3D pose. The method is free from costly 3D CNNs, improves the speed of VoxelPose tenfold, and achieves accuracy competitive with state-of-the-art methods, proving its potential for real-time applications.
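The core re-projection idea can be sketched in a few lines of numpy: collapse the 3D feature volume onto the xy-plane and the z-axis with max-pooling, so that subsequent heads operate on 2D/1D features instead of a costly 3D volume. This is an illustrative stand-in, not the authors' network.

```python
# Hedged sketch of orthographic projection of a voxel feature volume.
import numpy as np

def project_volume(vol):
    # vol: (C, X, Y, Z) voxel feature volume
    xy_plane = vol.max(axis=3)        # (C, X, Y): used to estimate 2D boxes
    z_axis = vol.max(axis=(1, 2))     # (C, Z): used to estimate box height
    return xy_plane, z_axis

vol = np.random.rand(32, 80, 80, 20)
xy, z = project_volume(vol)
print(xy.shape, z.shape)  # (32, 80, 80) (32, 20)
```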
[ { "version": "v1", "created": "Fri, 22 Jul 2022 09:10:01 GMT" } ]
2022-07-25T00:00:00
[ [ "Ye", "Hang", "" ], [ "Zhu", "Wentao", "" ], [ "Wang", "Chunyu", "" ], [ "Wu", "Rujie", "" ], [ "Wang", "Yizhou", "" ] ]
new_dataset
0.997312
2207.11000
R\"udiger Ehlers
R\"udiger Ehlers and Sven Schewe
Natural Colors of Infinite Words
null
null
null
null
cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While finite automata have minimal DFAs as a simple and natural normal form, deterministic omega-automata do not currently have anything similar. One reason for this is that a normal form for omega-regular languages has to speak about more than acceptance - for example, to have a normal form for a parity language, it should relate every infinite word to some natural color for this language. This raises the question of whether a concept such as the natural color of an infinite word (for a given language) exists, and, if it does, how it relates back to automata. We define the natural color of a word purely based on an omega-regular language, and show how this natural color can be traced back from any deterministic parity automaton after two cheap and simple automaton transformations. The resulting streamlined automaton does not necessarily accept every word with its natural color, but it has a 'co-run', which is like a run but may once move to a language-equivalent state, whose color is the natural color, and no co-run with a higher color exists. The streamlined automaton defines, for every color c, a good-for-games co-B\"uchi automaton that recognizes the words whose natural colors w.r.t. the represented language are at least c. This provides a canonical representation for every omega-regular language, because good-for-games co-B\"uchi automata have a canonical minimal (and cheap to obtain) representation for every co-B\"uchi language.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 10:36:04 GMT" } ]
2022-07-25T00:00:00
[ [ "Ehlers", "Rüdiger", "" ], [ "Schewe", "Sven", "" ] ]
new_dataset
0.99857
2207.11012
Francisca Pessanha
Francisca Pessanha, Gizem Sogancioglu
Fact sheet: Automatic Self-Reported Personality Recognition Track
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose an informed baseline to help disentangle the various contextual factors of influence in this type of case study. For this purpose, we analysed the correlation between the given metadata and the self-assigned personality trait scores and developed a model based solely on this information. Further, we compared the performance of this informed baseline with models based on state-of-the-art visual, linguistic and audio features. For the present dataset, a model trained solely on simple metadata features (age, gender and number of sessions) proved to have superior or similar performance compared with systems based on simple audio, linguistic or visual features.
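The informed baseline amounts to regressing a trait score from the three metadata features. The sketch below uses synthetic data and an ordinary linear regression as one plausible instantiation; the paper does not specify this exact model or encoding.

```python
# Hedged sketch of a metadata-only personality baseline on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
age = rng.integers(18, 65, n)
gender = rng.integers(0, 2, n)        # illustrative 0/1 encoding
sessions = rng.integers(1, 10, n)
X = np.column_stack([age, gender, sessions])
# synthetic trait scores with a weak dependence on the metadata
trait = 0.02 * age + 0.3 * gender + 0.05 * sessions + rng.normal(0, 0.1, n)

model = LinearRegression().fit(X, trait)
print(model.predict([[30, 1, 4]]))    # predicted trait score for one speaker
```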
[ { "version": "v1", "created": "Fri, 22 Jul 2022 11:30:11 GMT" } ]
2022-07-25T00:00:00
[ [ "Pessanha", "Francisca", "" ], [ "Sogancioglu", "Gizem", "" ] ]
new_dataset
0.995429
2207.11031
Mohammad Hajizadeh Saffar
Mohammad Hajizadeh, Mohammad Sabokrou, Adel Rahmani
MobileDenseNet: A new approach to object detection on mobile devices
null
null
null
null
cs.CV cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Object detection has developed greatly within the past few years. There is a need for lighter models where hardware limitations exist, as well as a demand for models tailored to mobile devices. In this article, we assess the methods used when creating algorithms that address these issues. The main goal of this article is to increase accuracy in state-of-the-art algorithms while maintaining speed and real-time efficiency. The most significant issues in one-stage object detection pertain to small objects and inaccurate localization. As a solution, we created a new network named MobileDenseNet suitable for embedded systems. We also developed a light neck, FCPNLite, for mobile devices that aids the detection of small objects. Our research revealed that very few papers address necks in embedded systems. What differentiates our network from others is our use of concatenation features. A small yet significant change to the head of the network amplified accuracy without compromising speed or parameter count. In short, our results on the challenging COCO and Pascal VOC datasets are 24.8 and 76.8 in percentage terms respectively - rates higher than those recorded by other state-of-the-art systems thus far. Our network is able to increase accuracy while maintaining real-time efficiency on mobile devices. We measured an operational speed of 22.8 fps on a Pixel 3 (Snapdragon 845). The source code of this research is available at https://github.com/hajizadeh/MobileDenseNet.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 12:13:59 GMT" } ]
2022-07-25T00:00:00
[ [ "Hajizadeh", "Mohammad", "" ], [ "Sabokrou", "Mohammad", "" ], [ "Rahmani", "Adel", "" ] ]
new_dataset
0.993052
2207.11082
Matias Martinez
Matias Martinez, Maria Kechagia, Anjana Perera, Justyna Petke, Federica Sarro and Aldeida Aleti
Test-based Patch Clustering for Automatically-Generated Patches Assessment
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous studies have shown that Automated Program Repair (APR) techniques suffer from the overfitting problem. Overfitting happens when a patch is run and the test suite does not reveal any error, but the patch actually does not fix the underlying bug, or it introduces a new defect that is not covered by the test suite. Therefore, the patches generated by APR tools need to be validated by human programmers, which can be very costly and prevents the adoption of APR tools in practice. Our work aims at increasing developer trust in automated patch generation by minimizing the number of plausible patches that they have to review, thereby reducing the time required to find a correct patch. We introduce a novel lightweight test-based patch clustering approach called xTestCluster, which clusters patches based on their dynamic behavior. xTestCluster is applied after the patch generation phase in order to analyze the patches generated by one or more repair tools. The novelty of xTestCluster lies in using information from the execution of newly generated test cases to cluster patches generated by multiple APR approaches. A cluster is formed of patches that fail on the same generated test cases. The output of xTestCluster gives developers (a) a way of reducing the number of patches to analyze, as they can focus on analyzing a sample of patches from each cluster, and (b) additional information attached to each patch. After analyzing 1910 plausible patches from 25 Java APR tools, our results show that xTestCluster is able to reduce the number of patches to review and analyze by a median of 50%. xTestCluster can save a significant amount of time for developers who have to review the multitude of patches generated by APR tools, and provides them with new test cases that show the behavioral differences between generated patches.
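The clustering criterion itself is simple: patches that fail the same set of generated tests land in the same cluster, so a reviewer can sample one patch per cluster. The sketch below uses illustrative data structures, not the xTestCluster codebase; patch and test names are hypothetical.

```python
# Hedged sketch of test-based patch clustering by failing-test signature.
from collections import defaultdict

def cluster_patches(patch_failures):
    # patch_failures: {patch_id: set of generated tests that fail on it}
    clusters = defaultdict(list)
    for patch, failing in patch_failures.items():
        clusters[frozenset(failing)].append(patch)
    return list(clusters.values())

patches = {
    "patch_1": {"testGen3"},
    "patch_2": {"testGen3"},
    "patch_3": {"testGen1", "testGen7"},
    "patch_4": set(),  # passes all generated tests
}
for group in cluster_patches(patches):
    print(group)  # e.g. ['patch_1', 'patch_2'] share identical test behaviour
```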
[ { "version": "v1", "created": "Fri, 22 Jul 2022 13:39:27 GMT" } ]
2022-07-25T00:00:00
[ [ "Martinez", "Matias", "" ], [ "Kechagia", "Maria", "" ], [ "Perera", "Anjana", "" ], [ "Petke", "Justyna", "" ], [ "Sarro", "Federica", "" ], [ "Aleti", "Aldeida", "" ] ]
new_dataset
0.958018
2207.11146
Abdallah Chehade
Mayuresh Savargaonkar and Abdallah Chehade
VTrackIt: A Synthetic Self-Driving Dataset with Infrastructure and Pooled Vehicle Information
null
null
null
null
cs.CV cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial intelligence solutions for Autonomous Vehicles (AVs) have been developed using publicly available datasets such as Argoverse, ApolloScape, Level5, and NuScenes. One major limitation of these datasets is the absence of infrastructure and/or pooled vehicle information such as lane line type, vehicle speed, traffic signs, and intersections. Such information is necessary, not merely complementary, for eliminating high-risk edge cases. The rapid advancements in Vehicle-to-Infrastructure and Vehicle-to-Vehicle technologies show promise that infrastructure and pooled vehicle information will soon be accessible in near real-time. Taking a leap into the future, we introduce VTrackIt, the first comprehensive synthetic dataset with intelligent infrastructure and pooled vehicle information for advancing the next generation of AVs. We also introduce the first deep learning model (InfraGAN) for trajectory predictions that considers such information. Our experiments with InfraGAN show that the comprehensive information offered by VTrackIt reduces the number of high-risk edge cases. The VTrackIt dataset is available upon request under the Creative Commons CC BY-NC-SA 4.0 license at http://vtrackit.irda.club.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 16:00:33 GMT" } ]
2022-07-25T00:00:00
[ [ "Savargaonkar", "Mayuresh", "" ], [ "Chehade", "Abdallah", "" ] ]
new_dataset
0.999818
2207.11230
Thibault Clerice
Thibault Cl\'erice (ENC, CJM, HiSoMA, UJML)
You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Layout analysis (the identification and classification of zones) is, along with line segmentation, the first step in Optical Character Recognition and similar tasks. The ability to distinguish the main body of text from marginal text or running titles makes the difference between extracting the full text of a digitized work and noisy output. We show that most segmenters focus on pixel classification and that polygonization of this output has not been used as a target in the recent competitions on historical documents (ICDAR 2017 and onwards), despite being the focus in the early 2010s. We propose to shift the task, for efficiency, from pixel-classification-based polygonization to object detection using isothetic rectangles. We compare the outputs of Kraken and YOLOv5 in terms of segmentation and show that the latter significantly outperforms the former on small datasets (1110 samples and below). We release two datasets for training and evaluation on historical documents as well as a new package, YALTAi, which injects YOLOv5 into the segmentation pipeline of Kraken 4.1.
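For readers unfamiliar with the detection side, the following sketch shows the general YOLOv5-via-torch.hub workflow that this approach builds on. The generic COCO-pretrained 'yolov5s' checkpoint is used only to demonstrate the API; the actual tool uses a model trained on historical-document zones, and "page_scan.jpg" is a hypothetical input path.

```python
# Hedged sketch: detecting rectangular layout zones with a YOLOv5 model.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("page_scan.jpg")      # accepts a path, URL, or numpy image
boxes = results.xyxy[0]               # (N, 6): x1, y1, x2, y2, conf, class
for x1, y1, x2, y2, conf, cls in boxes.tolist():
    print(f"zone class={int(cls)} conf={conf:.2f} "
          f"box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f})")
```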
[ { "version": "v1", "created": "Tue, 19 Jul 2022 07:50:16 GMT" } ]
2022-07-25T00:00:00
[ [ "Clérice", "Thibault", "", "ENC, CJM, HiSoMA, UJML" ] ]
new_dataset
0.951512
2207.11236
Andreas Buchmueller
Andreas Buchm\"uller, Gillian Kant, Christoph Weisser, Benjamin S\"afken, Krisztina Kis-Katos, Thomas Kneib
Twitmo: A Twitter Data Topic Modeling and Visualization Package for R
16 pages, 4 figures
null
null
null
cs.IR cs.CL cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
We present Twitmo, a package that provides a broad range of methods to collect, pre-process, analyze and visualize geo-tagged Twitter data. Twitmo enables the user to collect geo-tagged Tweets from Twitter and provides a comprehensive and user-friendly toolbox to generate topic distributions from Latent Dirichlet Allocation (LDA), correlated topic models (CTM) and structural topic models (STM). Functions are included for pre-processing of text, model building and prediction. In addition, one of the innovations of the package is the automatic pooling of Tweets into longer pseudo-documents using hashtags and cosine similarities for better topic coherence. The package additionally comes with functionality to visualize collected datasets and fitted models in static as well as interactive ways, and offers built-in support for model visualization via LDAvis, providing great convenience for researchers in this area. The Twitmo package is an innovative toolbox that can be used to analyze the public discourse around various topics, political parties or persons of interest in space and time.
[ { "version": "v1", "created": "Fri, 8 Jul 2022 12:23:20 GMT" } ]
2022-07-25T00:00:00
[ [ "Buchmüller", "Andreas", "" ], [ "Kant", "Gillian", "" ], [ "Weisser", "Christoph", "" ], [ "Säfken", "Benjamin", "" ], [ "Kis-Katos", "Krisztina", "" ], [ "Kneib", "Thomas", "" ] ]
new_dataset
0.999425
2207.11247
Jingkang Yang
Jingkang Yang, Yi Zhe Ang, Zujin Guo, Kaiyang Zhou, Wayne Zhang, and Ziwei Liu
Panoptic Scene Graph Generation
Accepted to ECCV'22 (Paper ID #222, Final Score 2222). Project Page: https://psgdataset.org/. OpenPSG Codebase: https://github.com/Jingkang50/OpenPSG
null
null
null
cs.CV cs.AI cs.CL cs.LG cs.MM
http://creativecommons.org/licenses/by/4.0/
Existing research addresses scene graph generation (SGG) -- a critical technology for scene understanding in images -- from a detection perspective, i.e., objects are detected using bounding boxes, followed by prediction of their pairwise relationships. We argue that this paradigm causes several problems that impede the progress of the field. For instance, bounding-box-based labels in current datasets usually contain redundant classes like hair, and leave out background information that is crucial to the understanding of context. In this work, we introduce panoptic scene graph generation (PSG), a new task that requires the model to generate a more comprehensive scene graph representation based on panoptic segmentations rather than rigid bounding boxes. A high-quality PSG dataset, which contains 49k well-annotated overlapping images from COCO and Visual Genome, is created for the community to keep track of its progress. For benchmarking, we build four two-stage baselines, which are modified from classic methods in SGG, and two one-stage baselines called PSGTR and PSGFormer, which are based on the efficient Transformer-based detector DETR. While PSGTR uses a set of queries to directly learn triplets, PSGFormer separately models the objects and relations in the form of queries from two Transformer decoders, followed by a prompting-like relation-object matching mechanism. In the end, we share insights on open challenges and future directions.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 17:59:53 GMT" } ]
2022-07-25T00:00:00
[ [ "Yang", "Jingkang", "" ], [ "Ang", "Yi Zhe", "" ], [ "Guo", "Zujin", "" ], [ "Zhou", "Kaiyang", "" ], [ "Zhang", "Wayne", "" ], [ "Liu", "Ziwei", "" ] ]
new_dataset
0.974453
2101.08238
Ammarah Farooq
Ammarah Farooq, Muhammad Awais, Josef Kittler, Syed Safwan Khalid
AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification
AAAI-2022 (Oral Paper)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align cross-modal representations according to the semantic information present for a person while ignoring background information. This work presents a novel convolutional neural network (CNN) based architecture designed to learn semantically aligned cross-modal visual and textual representations. The underlying building block, named AXM-Block, is a unified multi-layer network that dynamically exploits multi-scale knowledge from both modalities and re-calibrates each modality according to shared semantics. To complement the convolutional design, contextual attention is applied in the text branch to capture long-term dependencies. Moreover, we propose a unique design to enhance visual part-based feature coherence and locality information. Our framework is novel in its ability to implicitly learn aligned semantics between modalities during the feature learning stage. The unified feature learning effectively utilizes textual data as a super-annotation signal for visual representation learning and automatically rejects irrelevant information. The entire AXM-Net is trained end-to-end on the CUHK-PEDES data. We report results on two tasks, person search and cross-modal Re-ID. The AXM-Net outperforms the current state-of-the-art (SOTA) methods and achieves 64.44\% Rank@1 on the CUHK-PEDES test set. It also outperforms its competitors by $>$10\% in cross-viewpoint text-to-image Re-ID scenarios on the CrossRe-ID and CUHK-SYSU datasets.
[ { "version": "v1", "created": "Tue, 19 Jan 2021 16:06:39 GMT" }, { "version": "v2", "created": "Fri, 19 Mar 2021 15:28:49 GMT" }, { "version": "v3", "created": "Wed, 20 Jul 2022 23:20:12 GMT" } ]
2022-07-22T00:00:00
[ [ "Farooq", "Ammarah", "" ], [ "Awais", "Muhammad", "" ], [ "Kittler", "Josef", "" ], [ "Khalid", "Syed Safwan", "" ] ]
new_dataset
0.973104
2107.11020
Junyi Jessy Li
Tiberiu Sosea, Chau Pham, Alexander Tekle, Cornelia Caragea, Junyi Jessy Li
Emotion analysis and detection during COVID-19
LREC 2022
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
Crises such as natural disasters, global pandemics, and social unrest continuously threaten our world and emotionally affect millions of people worldwide in distinct ways. Understanding the emotions that people express during large-scale crises helps inform policy makers and first responders about the emotional states of the population, as well as provide emotional support to those who need it. We present CovidEmo, a dataset of ~3K English tweets labeled with emotions and temporally distributed across 18 months. Our analyses reveal the emotional toll caused by COVID-19, and changes in the social narrative and associated emotions over time. Motivated by the time-sensitive nature of crises and the cost of large-scale annotation efforts, we examine how well large pre-trained language models generalize across domains and time in the task of perceived emotion prediction in the context of COVID-19. Our analyses suggest that cross-domain information transfer occurs, yet there are still significant gaps. We propose semi-supervised learning as a way to bridge this gap, obtaining significantly better performance using unlabeled data from the target domain.
[ { "version": "v1", "created": "Fri, 23 Jul 2021 04:07:14 GMT" }, { "version": "v2", "created": "Thu, 14 Oct 2021 22:08:59 GMT" }, { "version": "v3", "created": "Thu, 21 Jul 2022 02:16:07 GMT" } ]
2022-07-22T00:00:00
[ [ "Sosea", "Tiberiu", "" ], [ "Pham", "Chau", "" ], [ "Tekle", "Alexander", "" ], [ "Caragea", "Cornelia", "" ], [ "Li", "Junyi Jessy", "" ] ]
new_dataset
0.996614
2108.13327
Zhen Wang
Zhen Wang, Xu Shan, Xiangxie Zhang, Jie Yang
N24News: A New Dataset for Multimodal News Classification
null
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), pages 6768-6775
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current news datasets focus mainly on textual features of the news and rarely leverage image features, excluding numerous features essential for news classification. In this paper, we propose a new dataset, N24News, which is generated from the New York Times, spans 24 categories, and contains both text and image information for each news article. We use a multitask multimodal method, and the experimental results show that multimodal news classification performs better than text-only news classification. Depending on the length of the text, the classification accuracy can be increased by up to 8.11%. Our research reveals the relationship between the performance of a multimodal classifier and its sub-classifiers, as well as the possible improvements from applying multimodal methods to news classification. N24News has great potential to promote multimodal news studies.
[ { "version": "v1", "created": "Mon, 30 Aug 2021 15:46:09 GMT" }, { "version": "v2", "created": "Tue, 16 Nov 2021 15:14:14 GMT" }, { "version": "v3", "created": "Fri, 17 Dec 2021 15:20:11 GMT" }, { "version": "v4", "created": "Mon, 6 Jun 2022 06:51:28 GMT" } ]
2022-07-22T00:00:00
[ [ "Wang", "Zhen", "" ], [ "Shan", "Xu", "" ], [ "Zhang", "Xiangxie", "" ], [ "Yang", "Jie", "" ] ]
new_dataset
0.999883
2109.07775
Marcell Missura
Marcell Missura, Arindam Roychoudhury and Maren Bennewitz
Fast-Replanning Motion Control for Non-Holonomic Vehicles with Aborting A*
Accepted to IROS 22
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Autonomously driving vehicles must be able to navigate dynamic and unpredictable environments in a collision-free manner. So far, this has only been partially achieved in driverless cars and warehouse installations, where marked structures such as roads, lanes, and traffic signs simplify the motion planning and collision avoidance problem. We present a new control approach for car-like vehicles that is based on an unprecedentedly fast-paced A* implementation that allows the control cycle to run at a frequency of 30 Hz. This frequency enables us to place our A* algorithm as a low-level replanning controller that is well suited for navigation and collision avoidance in virtually any dynamic environment. Due to an efficient heuristic consisting of rotate-translate-rotate motions laid out along the shortest path to the target, our Short-Term Aborting A* (STAA*) converges quickly and can be aborted early in order to guarantee a high and steady control rate. While STAA* expands states along the shortest path, it performs collision checking with the environment, including predicted states of moving obstacles, and returns the best solution found when the computation time runs out. Despite the bounded computation time, STAA* does not get trapped in corners, because it follows the shortest path. In simulated and real-robot experiments, we demonstrate that our control approach eliminates collisions almost entirely and is superior to an improved version of the Dynamic Window Approach with predictive collision avoidance capabilities.
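The anytime flavour of an aborting A* can be sketched on a plain 4-connected grid: expand nodes best-first and, when the time budget expires, return the path to the best node reached so far. This mirrors only the abort-and-return-best behaviour, not STAA*'s rotate-translate-rotate heuristic or its collision model; the grid, budget, and obstacle set are made up.

```python
# Hedged sketch of a time-bounded, aborting A* on a grid.
import heapq
import time

def aborting_astar(start, goal, blocked, budget_s=0.005):
    def h(p):  # Manhattan distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    came_from, g = {start: None}, {start: 0}
    best = start  # node with the lowest heuristic seen so far
    deadline = time.monotonic() + budget_s
    while open_heap and time.monotonic() < deadline:
        _, cost, node = heapq.heappop(open_heap)
        if h(node) < h(best):
            best = node
        if node == goal:
            best = goal
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if nxt in blocked or g.get(nxt, float("inf")) <= cost + 1:
                continue
            g[nxt] = cost + 1
            came_from[nxt] = node
            heapq.heappush(open_heap, (cost + 1 + h(nxt), cost + 1, nxt))
    path = [best]  # reconstruct the path to the best node reached in time
    while came_from[path[-1]] is not None:
        path.append(came_from[path[-1]])
    return path[::-1]

print(aborting_astar((0, 0), (6, 6), blocked={(3, y) for y in range(6)}))
```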
[ { "version": "v1", "created": "Thu, 16 Sep 2021 07:51:26 GMT" }, { "version": "v2", "created": "Thu, 17 Mar 2022 10:22:54 GMT" }, { "version": "v3", "created": "Thu, 21 Jul 2022 12:47:46 GMT" } ]
2022-07-22T00:00:00
[ [ "Missura", "Marcell", "" ], [ "Roychoudhury", "Arindam", "" ], [ "Bennewitz", "Maren", "" ] ]
new_dataset
0.999516
2110.00058
Erik Demaine
Erik D. Demaine and Maarten L\"offler and Christiane Schmidt
Rectangular Spiral Galaxies are Still Hard
24 pages, 24 figures. Thorough revision including new Section 2 proof which handles the promise problem
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiral Galaxies is a pencil-and-paper puzzle played on a grid of unit squares: given a set of points called centers, the goal is to partition the grid into polyominoes such that each polyomino contains exactly one center and is 180{\deg} rotationally symmetric about its center. We show that this puzzle is NP-complete, ASP-complete, and #P-complete even if (a) all solutions to the puzzle have rectangles for polyominoes; or (b) the polyominoes are required to be rectangles and all solutions to the puzzle have just 1$\times$1, 1$\times$3, and 3$\times$1 rectangles. The proof for the latter variant also implies NP/ASP/#P-completeness of finding a noncrossing perfect matching in distance-2 grid graphs, where edges connect vertices of Euclidean distance 2. Moreover, we prove NP-completeness of the design problem of minimizing the number of centers such that there exists a set of galaxies that exactly covers a given shape.
[ { "version": "v1", "created": "Thu, 30 Sep 2021 19:33:32 GMT" }, { "version": "v2", "created": "Wed, 20 Jul 2022 18:00:16 GMT" } ]
2022-07-22T00:00:00
[ [ "Demaine", "Erik D.", "" ], [ "Löffler", "Maarten", "" ], [ "Schmidt", "Christiane", "" ] ]
new_dataset
0.998998
2110.14284
Christian Frey
Christian M.M. Frey, Yunpu Ma, Matthias Schubert
APPTeK: Agent-Based Predicate Prediction in Temporal Knowledge Graphs
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In temporal Knowledge Graphs (tKGs), the temporal dimension is attached to facts in a knowledge base, resulting in quadruples between entities such as (Nintendo, released, Super Mario, Sep-13-1985), where the predicate holds within a time interval or at a timestamp. We propose a reinforcement learning agent that simultaneously gathers temporally relevant information about the neighborhoods of both query entities. We refer to the encodings of the explored graph structures as fingerprints, which are used as input to a Q-network. Our agent sequentially decides which relation type to explore next in order to expand the local subgraphs of the query entities. Our evaluation shows that the proposed method yields results competitive with state-of-the-art embedding algorithms for tKGs, and we additionally gain information about the relevant structures between subjects and objects.
[ { "version": "v1", "created": "Wed, 27 Oct 2021 09:05:23 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 07:58:21 GMT" } ]
2022-07-22T00:00:00
[ [ "Frey", "Christian M. M.", "" ], [ "Ma", "Yunpu", "" ], [ "Schubert", "Matthias", "" ] ]
new_dataset
0.998209
2111.05610
Zijian Gao
Zijian Gao, Jingyu Liu, Weiqi Sun, Sheng Chen, Dedan Chang, Lili Zhao
CLIP2TV: Align, Match and Distill for Video-Text Retrieval
null
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Modern video-text retrieval frameworks basically consist of three parts: a video encoder, a text encoder and a similarity head. With the success of transformers in both visual and textual representation learning, transformer-based encoders and fusion methods have also been adopted in the field of video-text retrieval. In this report, we present CLIP2TV, aiming to explore where the critical elements lie in transformer-based methods. To achieve this, we first revisit some recent works on multi-modal learning, then introduce some techniques into video-text retrieval, and finally evaluate them through extensive experiments in different configurations. Notably, CLIP2TV achieves 52.9@R1 on the MSR-VTT dataset, outperforming the previous SOTA result by 4.1%.
[ { "version": "v1", "created": "Wed, 10 Nov 2021 10:05:11 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 17:19:19 GMT" } ]
2022-07-22T00:00:00
[ [ "Gao", "Zijian", "" ], [ "Liu", "Jingyu", "" ], [ "Sun", "Weiqi", "" ], [ "Chen", "Sheng", "" ], [ "Chang", "Dedan", "" ], [ "Zhao", "Lili", "" ] ]
new_dataset
0.99403
2111.07640
Sunghyun Park
Kangyeol Kim, Sunghyun Park, Jaeseong Lee, Sunghyo Chung, Junsoo Lee, Jaegul Choo
AnimeCeleb: Large-Scale Animation CelebHeads Dataset for Head Reenactment
40 pages; Accepted to ECCV 2022; code and dataset URL added
null
null
null
cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a novel Animation CelebHeads dataset (AnimeCeleb) to address animation head reenactment. Unlike previous animation head datasets, we utilize 3D animation models as controllable image samplers, which can provide a large number of head images with corresponding detailed pose annotations. To facilitate the data creation process, we build a semi-automatic pipeline leveraging an open 3D computer graphics software with a purpose-built annotation system. After training on AnimeCeleb, recent head reenactment models produce high-quality animation head reenactment results, which are not achievable with existing datasets. Furthermore, motivated by metaverse applications, we propose a novel pose mapping method and architecture to tackle the cross-domain head reenactment task. During inference, a user can easily transfer their motion to an arbitrary animation head. Experiments demonstrate the usefulness of AnimeCeleb for training animation head reenactment models, and the superiority of our cross-domain head reenactment model compared to state-of-the-art methods. Our dataset and code are available at https://github.com/kangyeolk/AnimeCeleb.
[ { "version": "v1", "created": "Mon, 15 Nov 2021 10:00:06 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 07:49:29 GMT" } ]
2022-07-22T00:00:00
[ [ "Kim", "Kangyeol", "" ], [ "Park", "Sunghyun", "" ], [ "Lee", "Jaeseong", "" ], [ "Chung", "Sunghyo", "" ], [ "Lee", "Junsoo", "" ], [ "Choo", "Jaegul", "" ] ]
new_dataset
0.999864
2112.08775
Jaewoo Park
Jaewoo Park, Nam Ik Cho
DProST: Dynamic Projective Spatial Transformer Network for 6D Pose Estimation
Accepted to ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting an object's 6D pose from a single RGB image is a fundamental computer vision task. Generally, the distance between transformed object vertices is employed as an objective function in pose estimation methods. However, these methods do not consider projective geometry in the camera space, which causes performance degradation. In this regard, we propose a new pose estimation system based on a projective grid instead of object vertices. Our pose estimation method, the dynamic projective spatial transformer network (DProST), localizes a region-of-interest grid on the rays in camera space and transforms the grid to object space by the estimated pose. The transformed grid is used both as a sampling grid and as a new criterion for the estimated pose. Additionally, because DProST does not require object vertices, our method can be used in a mesh-less setting by replacing the mesh with a reconstructed feature. Experimental results show that mesh-less DProST outperforms the state-of-the-art mesh-based methods on the LINEMOD and LINEMOD-OCCLUSION datasets, and shows competitive performance on the YCBV dataset with mesh data. The source code is available at https://github.com/parkjaewoo0611/DProST
[ { "version": "v1", "created": "Thu, 16 Dec 2021 10:39:09 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 10:48:49 GMT" } ]
2022-07-22T00:00:00
[ [ "Park", "Jaewoo", "" ], [ "Cho", "Nam Ik", "" ] ]
new_dataset
0.997958
2112.13715
Ailing Zeng
Ailing Zeng, Lei Yang, Xuan Ju, Jiefeng Li, Jianyi Wang, Qiang Xu
SmoothNet: A Plug-and-Play Network for Refining Human Poses in Videos
Accepted by ECCV 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
When analyzing human motion videos, the output jitters from existing pose estimators are highly unbalanced, with varied estimation errors across frames. Most frames in a video are relatively easy to estimate and suffer only from slight jitters. In contrast, for rarely seen or occluded actions, the estimated positions of multiple joints largely deviate from the ground-truth values for a consecutive sequence of frames, rendering significant jitters on them. To tackle this problem, we propose to attach a dedicated temporal-only refinement network, named SmoothNet, to existing pose estimators for jitter mitigation. Unlike existing learning-based solutions that employ spatio-temporal models to co-optimize per-frame precision and temporal smoothness at all the joints, SmoothNet models the natural smoothness characteristics of body movements by learning the long-range temporal relations of every joint without considering the noisy correlations among joints. With a simple yet effective motion-aware fully-connected network, SmoothNet significantly improves the temporal smoothness of existing pose estimators and enhances the estimation accuracy of challenging frames as a side effect. Moreover, as a temporal-only model, a unique advantage of SmoothNet is its strong transferability across various types of estimators and datasets. Comprehensive experiments on five datasets with eleven popular backbone networks across 2D and 3D pose estimation and body recovery tasks demonstrate the efficacy of the proposed solution. Code is available at https://github.com/cure-lab/SmoothNet.
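The temporal-only design can be sketched as a small fully-connected network that maps a noisy window of T frames to a smoothed window, applied independently to every joint coordinate. This captures the "no spatial joint correlations" idea; the layer sizes, residual connection, and window length below are illustrative, not the published architecture.

```python
# Hedged sketch of a temporal-only pose refiner, one trajectory at a time.
import torch
import torch.nn as nn

class TemporalRefiner(nn.Module):
    def __init__(self, window=32, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, window),
        )

    def forward(self, poses):
        # poses: (B, T, J, D) noisy per-frame estimates
        b, t, j, d = poses.shape
        x = poses.permute(0, 2, 3, 1).reshape(b * j * d, t)  # one row per trajectory
        x = x + self.net(x)  # residual smoothing along time only
        return x.reshape(b, j, d, t).permute(0, 3, 1, 2)

refiner = TemporalRefiner(window=32)
noisy = torch.randn(2, 32, 17, 3)  # batch of 2, 32 frames, 17 joints, 3D
print(refiner(noisy).shape)        # torch.Size([2, 32, 17, 3])
```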
[ { "version": "v1", "created": "Mon, 27 Dec 2021 14:53:30 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 17:15:06 GMT" } ]
2022-07-22T00:00:00
[ [ "Zeng", "Ailing", "" ], [ "Yang", "Lei", "" ], [ "Ju", "Xuan", "" ], [ "Li", "Jiefeng", "" ], [ "Wang", "Jianyi", "" ], [ "Xu", "Qiang", "" ] ]
new_dataset
0.995487
2202.08771
Jaesung Rim
Jaesung Rim, Geonung Kim, Jungeon Kim, Junyong Lee, Seungyong Lee, Sunghyun Cho
Realistic Blur Synthesis for Learning Image Deblurring
ECCV 2022,Project page: http://cg.postech.ac.kr/research/rsblur/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training learning-based deblurring methods demands a tremendous number of blurred and sharp image pairs. Unfortunately, existing synthetic datasets are not realistic enough, and deblurring models trained on them cannot handle real blurred images effectively. While real datasets have recently been proposed, they provide limited diversity of scenes and camera settings, and capturing real datasets for diverse settings remains challenging. To resolve this, this paper analyzes the various factors that introduce differences between real and synthetic blurred images. To this end, we present RSBlur, a novel dataset with real blurred images and the corresponding sharp image sequences that enables a detailed analysis of the difference between real and synthetic blur. With the dataset, we reveal the effects of different factors in the blur generation process. Based on the analysis, we also present a novel blur synthesis pipeline to synthesize more realistic blur. We show that our synthesis pipeline can improve the deblurring performance on real blurred images.
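The common starting point for blur synthesis is frame averaging: a blurred image is approximated by averaging consecutive sharp frames in linear (inverse-camera-response) space. The sketch below shows only this basic model with a gamma curve as a stand-in camera response; the paper's pipeline accounts for further factors (e.g. noise and saturation) that this sketch omits.

```python
# Hedged sketch of frame-averaging blur synthesis in linearized space.
import numpy as np

def synthesize_blur(sharp_frames, gamma=2.2):
    # sharp_frames: (N, H, W, 3) sRGB-like frames in [0, 1]
    linear = np.power(sharp_frames, gamma)        # undo the camera response
    blurred_linear = linear.mean(axis=0)          # integrate over the exposure
    return np.power(blurred_linear, 1.0 / gamma)  # reapply the camera response

frames = np.random.rand(9, 64, 64, 3)
print(synthesize_blur(frames).shape)  # (64, 64, 3)
```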
[ { "version": "v1", "created": "Thu, 17 Feb 2022 17:14:48 GMT" }, { "version": "v2", "created": "Wed, 6 Apr 2022 22:23:59 GMT" }, { "version": "v3", "created": "Thu, 21 Jul 2022 06:05:08 GMT" } ]
2022-07-22T00:00:00
[ [ "Rim", "Jaesung", "" ], [ "Kim", "Geonung", "" ], [ "Kim", "Jungeon", "" ], [ "Lee", "Junyong", "" ], [ "Lee", "Seungyong", "" ], [ "Cho", "Sunghyun", "" ] ]
new_dataset
0.996176
2203.02113
Pinaki Nath Chowdhury
Pinaki Nath Chowdhury and Aneeshan Sain and Ayan Kumar Bhunia and Tao Xiang and Yulia Gryaditskaya and Yi-Zhe Song
FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context
Accepted in ECCV 2022. Project Page: https://fscoco.github.io
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We advance sketch research to scenes with FS-COCO, the first dataset of freehand scene sketches. With practical applications in mind, we collect sketches that convey scene content well but can be sketched within a few minutes by a person with any level of sketching skill. Our dataset comprises 10,000 freehand scene vector sketches with per-point space-time information, drawn by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description. Using our dataset, we study for the first time the problem of fine-grained image retrieval from freehand scene sketches and sketch captions. We draw insights on: (i) scene salience encoded in sketches through the strokes' temporal order; (ii) performance comparison of image retrieval from a scene sketch versus an image caption; (iii) complementarity of information in sketches and image captions, as well as the potential benefit of combining the two modalities. In addition, we extend a popular vector-sketch LSTM-based encoder to handle sketches of greater complexity than supported by previous work. Namely, we propose a hierarchical sketch decoder, which we leverage in a sketch-specific pretext task. Our dataset enables, for the first time, research on freehand scene sketch understanding and its practical applications.
[ { "version": "v1", "created": "Fri, 4 Mar 2022 03:00:51 GMT" }, { "version": "v2", "created": "Tue, 15 Mar 2022 20:59:28 GMT" }, { "version": "v3", "created": "Thu, 21 Jul 2022 02:46:15 GMT" } ]
2022-07-22T00:00:00
[ [ "Chowdhury", "Pinaki Nath", "" ], [ "Sain", "Aneeshan", "" ], [ "Bhunia", "Ayan Kumar", "" ], [ "Xiang", "Tao", "" ], [ "Gryaditskaya", "Yulia", "" ], [ "Song", "Yi-Zhe", "" ] ]
new_dataset
0.999328
2203.03890
Xiaotong Chen
Xiaotong Chen, Huijie Zhang, Zeren Yu, Anthony Opipari, Odest Chadwicke Jenkins
ClearPose: Large-scale Transparent Object Dataset and Benchmark
ECCV 2022 accepted paper
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
Transparent objects are ubiquitous in household settings and pose distinct challenges for visual sensing and perception systems. The optical properties of transparent objects make conventional 3D sensors alone unreliable for object depth and pose estimation. These challenges are highlighted by the shortage of large-scale RGB-Depth datasets focusing on transparent objects in real-world settings. In this work, we contribute a large-scale real-world RGB-Depth transparent object dataset named ClearPose to serve as a benchmark for segmentation, scene-level depth completion and object-centric pose estimation tasks. The ClearPose dataset contains over 350K labeled real-world RGB-Depth frames and 5M instance annotations covering 63 household objects. The dataset includes object categories commonly used in daily life under various lighting and occluding conditions, as well as challenging test scenarios such as occlusion by opaque or translucent objects, non-planar orientations, the presence of liquids, etc. We benchmark several state-of-the-art depth completion and object pose estimation deep neural networks on ClearPose. The dataset and benchmarking source code are available at https://github.com/opipari/ClearPose.
[ { "version": "v1", "created": "Tue, 8 Mar 2022 07:29:31 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 02:33:01 GMT" } ]
2022-07-22T00:00:00
[ [ "Chen", "Xiaotong", "" ], [ "Zhang", "Huijie", "" ], [ "Yu", "Zeren", "" ], [ "Opipari", "Anthony", "" ], [ "Jenkins", "Odest Chadwicke", "" ] ]
new_dataset
0.999752
2203.08713
Ailing Zeng
Ailing Zeng, Xuan Ju, Lei Yang, Ruiyuan Gao, Xizhou Zhu, Bo Dai, Qiang Xu
DeciWatch: A Simple Baseline for 10x Efficient 2D and 3D Pose Estimation
Accepted by ECCV 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper proposes a simple baseline framework for video-based 2D/3D human pose estimation, named DeciWatch, that achieves a tenfold efficiency improvement over existing works without any performance degradation. Unlike current solutions that estimate each frame in a video, DeciWatch introduces a simple yet effective sample-denoise-recover framework that only watches sparsely sampled frames, taking advantage of the continuity of human motions and the lightweight pose representation. Specifically, DeciWatch uniformly samples fewer than 10% of the video frames for detailed estimation, denoises the estimated 2D/3D poses with an efficient Transformer architecture, and then accurately recovers the remaining frames using another Transformer-based network. Comprehensive experimental results on three video-based human pose estimation and body mesh recovery tasks with four datasets validate the efficiency and effectiveness of DeciWatch. Code is available at https://github.com/cure-lab/DeciWatch.
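The efficiency gain comes from the sample-then-recover structure: run the expensive estimator on a small fraction of frames and fill in the rest. The sketch below uses plain linear interpolation as a stand-in for DeciWatch's Transformer denoise/recover networks; the estimator and sampling ratio are illustrative.

```python
# Hedged sketch of the sample-and-recover idea with linear interpolation.
import numpy as np

def sample_and_recover(num_frames, estimator, ratio=0.1):
    keyframes = np.linspace(0, num_frames - 1,
                            int(num_frames * ratio) + 1).astype(int)
    key_poses = np.stack([estimator(t) for t in keyframes])  # (K, J*D)
    recovered = np.empty((num_frames, key_poses.shape[1]))
    for dim in range(key_poses.shape[1]):  # interpolate each coordinate
        recovered[:, dim] = np.interp(np.arange(num_frames),
                                      keyframes, key_poses[:, dim])
    return recovered

fake_estimator = lambda t: np.sin(0.1 * t) * np.ones(17 * 2)  # stand-in pose net
print(sample_and_recover(100, fake_estimator).shape)  # (100, 34)
```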
[ { "version": "v1", "created": "Wed, 16 Mar 2022 16:03:37 GMT" }, { "version": "v2", "created": "Wed, 20 Jul 2022 18:02:53 GMT" } ]
2022-07-22T00:00:00
[ [ "Zeng", "Ailing", "" ], [ "Ju", "Xuan", "" ], [ "Yang", "Lei", "" ], [ "Gao", "Ruiyuan", "" ], [ "Zhu", "Xizhou", "" ], [ "Dai", "Bo", "" ], [ "Xu", "Qiang", "" ] ]
new_dataset
0.99564
2203.10157
Jon\'a\v{s} Kulh\'anek
Jon\'a\v{s} Kulh\'anek and Erik Derner and Torsten Sattler and Robert Babu\v{s}ka
ViewFormer: NeRF-free Neural Rendering from Few Images Using Transformers
ECCV 2022 poster
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem where we are given only a few context views sparsely covering a scene or an object. The goal is to predict novel viewpoints in the scene, which requires learning priors. The current state of the art is based on Neural Radiance Fields (NeRF), and while these methods achieve impressive results, they suffer from long training times as they require evaluating millions of 3D point samples via a neural network for each image. We propose a 2D-only method that maps multiple context views and a query pose to a new image in a single pass of a neural network. Our model uses a two-stage architecture consisting of a codebook and a transformer model. The codebook is used to embed individual images into a smaller latent space, and the transformer solves the view synthesis task in this more compact space. To train our model efficiently, we introduce a novel branching attention mechanism that allows us to use the same model not only for neural rendering but also for camera pose estimation. Experimental results on real-world scenes show that our approach is competitive with NeRF-based methods while not reasoning explicitly in 3D, and it is faster to train.
[ { "version": "v1", "created": "Fri, 18 Mar 2022 21:08:23 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 06:03:51 GMT" } ]
2022-07-22T00:00:00
[ [ "Kulhánek", "Jonáš", "" ], [ "Derner", "Erik", "" ], [ "Sattler", "Torsten", "" ], [ "Babuška", "Robert", "" ] ]
new_dataset
0.967527
2204.03039
Yilun Chen
Yilun Chen, Shijia Huang, Shu Liu, Bei Yu, Jiaya Jia
DSGN++: Exploiting Visual-Spatial Relation for Stereo-based 3D Detectors
13 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Camera-based 3D object detectors are attractive due to their wider deployability and lower price compared with LiDAR sensors. We first revisit the prior stereo detector DSGN and the way it constructs stereo volumes to represent both 3D geometry and semantics. We polish the stereo modeling and propose an advanced version, DSGN++, aiming to enhance effective information flow throughout the 2D-to-3D pipeline in three main aspects. First, to effectively lift 2D information into the stereo volume, we propose depth-wise plane sweeping (DPS), which allows denser connections and extracts depth-guided features. Second, to grasp features at different spacings, we present a novel stereo volume -- the Dual-view Stereo Volume (DSV) -- that integrates front-view and top-view features and reconstructs sub-voxel depth in the camera frustum. Third, as the foreground region becomes less dominant in 3D space, we propose a multi-modal data editing strategy -- Stereo-LiDAR Copy-Paste -- which ensures cross-modal alignment and improves data efficiency. Without bells and whistles, extensive experiments in various modality setups on the popular KITTI benchmark show that our method consistently outperforms other camera-based 3D detectors across all categories. Code is available at https://github.com/chenyilun95/DSGN2.
[ { "version": "v1", "created": "Wed, 6 Apr 2022 18:43:54 GMT" }, { "version": "v2", "created": "Sat, 9 Apr 2022 16:58:18 GMT" }, { "version": "v3", "created": "Thu, 21 Jul 2022 12:08:06 GMT" } ]
2022-07-22T00:00:00
[ [ "Chen", "Yilun", "" ], [ "Huang", "Shijia", "" ], [ "Liu", "Shu", "" ], [ "Yu", "Bei", "" ], [ "Jia", "Jiaya", "" ] ]
new_dataset
0.987085
2204.12103
Junjie Zhang
Junjie Zhang, Amir Khodabandeh, Kourosh Khoshelham
Centimeter-level Positioning by Instantaneous Lidar-aided GNSS Ambiguity Resolution
14 pages, 12 figures. Submitted to Measurement Science and Technology
null
10.1088/1361-6501/ac82dd
null
cs.RO eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-precision vehicle positioning is key to the implementation of modern driving systems in urban environments. Global Navigation Satellite System (GNSS) carrier phase measurements can provide millimeter- to centimeter-level positioning, provided that the integer ambiguities are correctly resolved. Abundant code measurements are often used to facilitate integer ambiguity resolution (IAR), however, they suffer from signal blockage and multipath in urban canyons. In this contribution, a lidar-aided instantaneous ambiguity resolution method is proposed. Lidar measurements, in the form of 3D keypoints, are generated by a learning-based point cloud registration method using a pre-built HD map and integrated with GNSS observations in a mixed measurement model to produce precise float solutions, which in turn increase the ambiguity success rate. Closed-form expressions of the ambiguity variance matrix and the associated Ambiguity Dilution of Precision (ADOP) are developed to provide a priori evaluation of such lidar-aided ambiguity resolution performance. Both analytical and experimental results show that the proposed method enables successful instantaneous IAR with limited GNSS satellites and frequencies, leading to centimeter-level vehicle positioning.
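For context, the standard ADOP definition (Teunissen) and the usual ADOP-based success-rate approximation are reproduced below. These are reference formulas from the GNSS literature that the paper's closed-form analysis builds on, not the paper's own derived expressions.

```latex
% Standard ADOP definition and ADOP-based success-rate approximation.
\[
  \mathrm{ADOP} = \left(\det Q_{\hat a}\right)^{\frac{1}{2n}}
  \quad [\text{cycles}],
\]
% where $Q_{\hat a}$ is the variance matrix of the $n$ float ambiguities.
% The integer ambiguity resolution success rate is then commonly
% approximated by
\[
  P_s \approx \left( 2\,\Phi\!\left(\frac{1}{2\,\mathrm{ADOP}}\right) - 1 \right)^{n},
\]
% with $\Phi$ the standard normal cumulative distribution function.
```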
[ { "version": "v1", "created": "Tue, 26 Apr 2022 06:36:45 GMT" }, { "version": "v2", "created": "Fri, 3 Jun 2022 02:00:51 GMT" } ]
2022-07-22T00:00:00
[ [ "Zhang", "Junjie", "" ], [ "Khodabandeh", "Amir", "" ], [ "Khoshelham", "Kourosh", "" ] ]
new_dataset
0.998437
2206.04382
Youwang Kim
Kim Youwang, Kim Ji-Yeon, Tae-Hyun Oh
CLIP-Actor: Text-Driven Recommendation and Stylization for Animating Human Meshes
Accepted at ECCV 2022. [Project page] https://clip-actor.github.io [Code] https://github.com/postech-ami/CLIP-Actor
null
null
null
cs.CV cs.AI cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose CLIP-Actor, a text-driven motion recommendation and neural mesh stylization system for human mesh animation. CLIP-Actor animates a 3D human mesh to conform to a text prompt by recommending a motion sequence and optimizing mesh style attributes. We build a text-driven human motion recommendation system by leveraging a large-scale human motion dataset with language labels. Given a natural language prompt, CLIP-Actor suggests a text-conforming human motion in a coarse-to-fine manner. Then, our novel zero-shot neural style optimization detailizes and texturizes the recommended mesh sequence to conform to the prompt in a temporally-consistent and pose-agnostic manner. This is distinctive in that prior work fails to generate plausible results when the pose of an artist-designed mesh does not conform to the text from the beginning. We further propose spatio-temporal view augmentation and mask-weighted embedding attention, which stabilize the optimization process by leveraging multi-frame human motion and rejecting poorly rendered views. We demonstrate that CLIP-Actor produces plausible, human-recognizable stylized 3D human meshes in motion, with detailed geometry and texture, solely from a natural language prompt.
[ { "version": "v1", "created": "Thu, 9 Jun 2022 09:50:39 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 07:43:04 GMT" } ]
2022-07-22T00:00:00
[ [ "Youwang", "Kim", "" ], [ "Ji-Yeon", "Kim", "" ], [ "Oh", "Tae-Hyun", "" ] ]
new_dataset
0.999727
2206.08194
Romain Loiseau
Romain Loiseau and Mathieu Aubry and Lo\"ic Landrieu
Online Segmentation of LiDAR Sequences: Dataset and Algorithm
Code and data are available at: https://romainloiseau.fr/helixnet
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Roof-mounted spinning LiDAR sensors are widely used by autonomous vehicles. However, most semantic datasets and algorithms used for LiDAR sequence segmentation operate on $360^\circ$ frames, causing an acquisition latency incompatible with real-time applications. To address this issue, we first introduce HelixNet, a $10$ billion point dataset with fine-grained labels, timestamps, and sensor rotation information necessary to accurately assess the real-time readiness of segmentation algorithms. Second, we propose Helix4D, a compact and efficient spatio-temporal transformer architecture specifically designed for rotating LiDAR sequences. Helix4D operates on acquisition slices corresponding to a fraction of a full sensor rotation, significantly reducing the total latency. Helix4D reaches accuracy on par with the best segmentation algorithms on HelixNet and SemanticKITTI with a reduction of over $5\times$ in terms of latency and $50\times$ in model size. The code and data are available at: https://romainloiseau.fr/helixnet
[ { "version": "v1", "created": "Thu, 16 Jun 2022 14:08:58 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 08:40:56 GMT" } ]
2022-07-22T00:00:00
[ [ "Loiseau", "Romain", "" ], [ "Aubry", "Mathieu", "" ], [ "Landrieu", "Loïc", "" ] ]
new_dataset
0.999858
2206.13179
Yiyang Hao
Yiyang Hao (1), Ge Li (2), Yongqiang Liu (1), Xiaowei Miao (1), He Zong (1), Siyuan Jiang (1), Yang Liu (1), He Wei (1) ((1) aiXcoder, (2) Peking University)
AixBench: A Code Generation Benchmark Dataset
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
We present a benchmark dataset for evaluating the method-level code generation task. The benchmark contains a dataset of 175 samples for automated evaluation and a dataset of 161 samples for manual evaluation. We also present a new metric for automatically evaluating the correctness of the generated code, and a set of criteria for manually evaluating the overall quality of the generated code.
[ { "version": "v1", "created": "Mon, 27 Jun 2022 10:44:48 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 02:55:15 GMT" } ]
2022-07-22T00:00:00
[ [ "Hao", "Yiyang", "" ], [ "Li", "Ge", "" ], [ "Liu", "Yongqiang", "" ], [ "Miao", "Xiaowei", "" ], [ "Zong", "He", "" ], [ "Jiang", "Siyuan", "" ], [ "Liu", "Yang", "" ], [ "Wei", "He", "" ] ]
new_dataset
0.999837
2207.01700
Edward Kim
Edward Kim, Tobias Andersen, Marventus, A.E., Pedro Borges, David Schmidt, Matthew Western
Emergency Management and Recovery of Luna Classic
22 pages
null
null
null
cs.CR cs.DC
http://creativecommons.org/licenses/by/4.0/
In early May 2022, the Terra ecosystem collapsed after its algorithmic stablecoin failed to maintain its peg. Emergency measures were taken by Terraform Labs (TFL) in an attempt to protect Luna and UST, but were then abruptly abandoned by TFL in favor of Luna 2.0 several days later. At this time, the Luna Classic blockchain has been left crippled and in limbo for the last two months. In the face of impossible odds, the Luna Classic community has self-organized and rallied to rebuild and restore the blockchain. This technical document outlines the steps we, the community, have taken towards the emergency management of the Luna Classic blockchain in the weeks after the UST depeg. We outline precisely what would be implemented on-chain to mitigate the concerns of affected stakeholders and build trust with external partners, exchanges, and third-party developers. For the Luna Classic community, validators, and developers, this outlines concrete steps for how passed governance can and will be achieved. We openly audit our own code and welcome any feedback for improvement. Let us move forward together as the true community blockchain.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 19:54:59 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 06:36:21 GMT" } ]
2022-07-22T00:00:00
[ [ "Kim", "Edward", "" ], [ "Andersen", "Tobias", "" ], [ "Marventus", "", "" ], [ "E.", "A.", "" ], [ "Borges", "Pedro", "" ], [ "Schmidt", "David", "" ], [ "Western", "Matthew", "" ] ]
new_dataset
0.988132
2207.05223
Shijie Chen
Shijie Chen, Ziru Chen, Xiang Deng, Ashley Lewis, Lingbo Mo, Samuel Stevens, Zhen Wang, Xiang Yue, Tianshu Zhang, Yu Su, Huan Sun
Bootstrapping a User-Centered Task-Oriented Dialogue System
Published in 1st Proceedings of Alexa Prize TaskBot (Alexa Prize 2021). TacoBot won 3rd place in the challenge. See project website https://sunlab-osu.github.io/tacobot/ for details
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We present TacoBot, a task-oriented dialogue system built for the inaugural Alexa Prize TaskBot Challenge, which assists users in completing multi-step cooking and home improvement tasks. TacoBot is designed with a user-centered principle and aspires to deliver a collaborative and accessible dialogue experience. Towards that end, it is equipped with accurate language understanding, flexible dialogue management, and engaging response generation. Furthermore, TacoBot is backed by a strong search engine and an automated end-to-end test suite. In bootstrapping the development of TacoBot, we explore a series of data augmentation strategies to train advanced neural language processing models and continuously improve the dialogue experience with collected real conversations. At the end of the semifinals, TacoBot achieved an average rating of 3.55/5.0.
[ { "version": "v1", "created": "Mon, 11 Jul 2022 23:32:54 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 04:57:18 GMT" } ]
2022-07-22T00:00:00
[ [ "Chen", "Shijie", "" ], [ "Chen", "Ziru", "" ], [ "Deng", "Xiang", "" ], [ "Lewis", "Ashley", "" ], [ "Mo", "Lingbo", "" ], [ "Stevens", "Samuel", "" ], [ "Wang", "Zhen", "" ], [ "Yue", "Xiang", "" ], [ "Zhang", "Tianshu", "" ], [ "Su", "Yu", "" ], [ "Sun", "Huan", "" ] ]
new_dataset
0.990429
2207.09812
Dawit Mureja Argaw
Dawit Mureja Argaw, Fabian Caba Heilbron, Joon-Young Lee, Markus Woodson, In So Kweon
The Anatomy of Video Editing: A Dataset and Benchmark Suite for AI-Assisted Video Editing
Code is available at: https://github.com/dawitmureja/AVE.git
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Machine learning is transforming the video editing industry. Recent advances in computer vision have leveled up video editing tasks such as intelligent reframing, rotoscoping, color grading, and applying digital makeup. However, most of these solutions have focused on video manipulation and VFX. This work introduces the Anatomy of Video Editing, a dataset and benchmark to foster research in AI-assisted video editing. Our benchmark suite focuses on video editing tasks beyond visual effects, such as automatic footage organization and assisted video assembly. To enable research on these fronts, we annotate more than 1.5M tags, covering concepts relevant to cinematography, from 196,176 shots sampled from movie scenes. We establish competitive baseline methods and detailed analyses for each of the tasks. We hope our work sparks innovative research towards underexplored areas of AI-assisted video editing.
[ { "version": "v1", "created": "Wed, 20 Jul 2022 10:53:48 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 06:53:02 GMT" } ]
2022-07-22T00:00:00
[ [ "Argaw", "Dawit Mureja", "" ], [ "Heilbron", "Fabian Caba", "" ], [ "Lee", "Joon-Young", "" ], [ "Woodson", "Markus", "" ], [ "Kweon", "In So", "" ] ]
new_dataset
0.999802
2207.10143
Sarra Habchi
Sarra Habchi, Guillaume Haben, Jeongju Sohn, Adriano Franci, Mike Papadakis, Maxime Cordy, Yves Le Traon
What Made This Test Flake? Pinpointing Classes Responsible for Test Flakiness
Accepted at the 38th IEEE International Conference on Software Maintenance and Evolution (ICSME)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flaky tests are defined as tests that manifest non-deterministic behaviour by passing and failing intermittently for the same version of the code. These tests cripple continuous integration with false alerts that waste developers' time and break their trust in regression testing. To mitigate the effects of flakiness, both researchers and industrial experts have proposed strategies and tools to detect and isolate flaky tests. However, flaky tests are rarely fixed, as developers struggle to localise and understand their causes. Additionally, developers working with large codebases often need to know the sources of non-determinism to preserve code quality, i.e., to avoid introducing technical debt linked to non-deterministic behaviour, and to avoid introducing new flaky tests. To aid with these tasks, we propose re-targeting Fault Localisation techniques to the flaky component localisation problem, i.e., pinpointing program classes that cause the non-deterministic behaviour of flaky tests. In particular, we employ Spectrum-Based Fault Localisation (SBFL), a coverage-based fault localisation technique commonly adopted for its simplicity and effectiveness. We also utilise other data sources, such as change history and static code metrics, to further improve the localisation. Our results show that augmenting SBFL with change and code metrics ranks flaky classes among the top-1 and top-5 suggestions in 26% and 47% of the cases, respectively. Overall, we successfully reduced the average number of classes inspected to locate the first flaky class to 19% of the total number of classes covered by flaky tests. Our results also show that localisation methods are effective across major flakiness categories, such as concurrency and asynchronous waits, indicating their general ability to identify flaky components.
[ { "version": "v1", "created": "Wed, 20 Jul 2022 18:46:22 GMT" } ]
2022-07-22T00:00:00
[ [ "Habchi", "Sarra", "" ], [ "Haben", "Guillaume", "" ], [ "Sohn", "Jeongju", "" ], [ "Franci", "Adriano", "" ], [ "Papadakis", "Mike", "" ], [ "Cordy", "Maxime", "" ], [ "Traon", "Yves Le", "" ] ]
new_dataset
0.996508
2207.10225
Elijah Cole
Elijah Cole, Kimberly Wilber, Grant Van Horn, Xuan Yang, Marco Fornoni, Pietro Perona, Serge Belongie, Andrew Howard, Oisin Mac Aodha
On Label Granularity and Object Localization
ECCV 2022
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly supervised object localization (WSOL) aims to learn representations that encode object location using only image-level category labels. However, many objects can be labeled at different levels of granularity. Is it an animal, a bird, or a great horned owl? Which image-level labels should we use? In this paper we study the role of label granularity in WSOL. To facilitate this investigation we introduce iNatLoc500, a new large-scale fine-grained benchmark dataset for WSOL. Surprisingly, we find that choosing the right training label granularity provides a much larger performance boost than choosing the best WSOL algorithm. We also show that changing the label granularity can significantly improve data efficiency.
[ { "version": "v1", "created": "Wed, 20 Jul 2022 22:51:32 GMT" } ]
2022-07-22T00:00:00
[ [ "Cole", "Elijah", "" ], [ "Wilber", "Kimberly", "" ], [ "Van Horn", "Grant", "" ], [ "Yang", "Xuan", "" ], [ "Fornoni", "Marco", "" ], [ "Perona", "Pietro", "" ], [ "Belongie", "Serge", "" ], [ "Howard", "Andrew", "" ], [ "Mac Aodha", "Oisin", "" ] ]
new_dataset
0.999461