Dataset schema (field: type, observed range):
  id              string (9-10 chars)
  submitter       string (2-52 chars)
  authors         string (4-6.51k chars)
  title           string (4-246 chars)
  comments        string (1-523 chars)
  journal-ref     string (4-345 chars)
  doi             string (11-120 chars)
  report-no       string (2-243 chars)
  categories      string (5-98 chars)
  license         string (9 classes)
  abstract        string (33-3.33k chars)
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      string (1 class)
  probability     float64 (0.95-1)

Each record below lists these fields in this order; "null" marks a missing value.
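As a hedged illustration of how records with this schema might be consumed, the sketch below reads a JSON-lines export and filters on the prediction fields; the file name and the assumption of one JSON object per line are illustrative, not part of the dataset release:

```python
# Minimal sketch: read JSON-lines records matching the schema above and
# keep high-confidence predictions. The file name is an assumption.
import json

def load_records(path, min_probability=0.95):
    """Yield records whose classifier probability meets the threshold."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("prediction") == "new_dataset" and \
               float(rec.get("probability", 0.0)) >= min_probability:
                yield rec

if __name__ == "__main__":
    for rec in load_records("arxiv_metadata.jsonl"):  # hypothetical path
        print(rec["id"], rec["title"])
```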
2210.10483
Tiago Matos Santos
Tiago Matos Santos
O Problema do Roteamento de Interligações Elétricas em Circuitos Integrados (The Problem of Routing Electrical Interconnections in Integrated Circuits)
in Portuguese
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integrated circuit design automation tools are essential for the feasibility of complex designs with millions of transistors. One of the steps performed in this process is the routing of interconnections between the components of a circuit. This problem, which also aims to optimize the utilization of connection resources, has been shown to be NP-complete and requires heuristic algorithms to search for the best achievable solutions. In this work, we present a contextualized definition of this problem together with a brief review of existing solutions in the literature. Then, we propose a methodology for the development of an original algorithm, which aims to differentiate itself, in certain domains, from the solutions already proposed.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 12:41:35 GMT" } ]
2022-10-20T00:00:00
[ [ "Santos", "Tiago Matos", "" ] ]
new_dataset
0.990336
2210.10515
Pouria Mehrabi
Pouria Mehrabi, Hamid D. Taghirad
A Segment-Wise Gaussian Process-Based Ground Segmentation With Local Smoothness Estimation
null
null
null
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Both in terrestrial and extraterrestrial environments, a precise and informative model of the ground and the surface ahead is crucial for navigation and obstacle avoidance. The ground surface is not always flat; it may be sloped, bumpy and rough, especially in off-road terrestrial scenes. In bumpy and rough scenes, the functional relationship of the surface-related features may vary in different areas of the ground, as the structure of the ground surface may change suddenly and, further, the measured point cloud of the ground is not smooth. Thus, the ground-related features must be obtained from local estimates or even point estimates. To tackle this problem, a segment-wise GP-based ground segmentation method with local smoothness estimation is proposed. This method extends our previous work, in which realistic length-scale values were provided for the covariance kernel in each line segment to give a precise estimate of the ground for sloped terrains. In this extension, the length-scale value is estimated locally for each data point, which makes the method much more precise for rough scenes while remaining computationally tractable and more robust to under-segmentation, sparsity and under-representability. The segment-wise task estimates a partial continuous model of the ground for each radial range segment. Simulation results show that the proposed method gives a continuous and precise estimate of the ground surface in rough and bumpy scenes while being fast enough for real-world applications.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 12:42:21 GMT" } ]
2022-10-20T00:00:00
[ [ "Mehrabi", "Pouria", "" ], [ "Taghirad", "Hamid D.", "" ] ]
new_dataset
0.997275
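The record above models the ground with segment-wise Gaussian processes. As a rough illustration only, the sketch below fits an independent GP per radial segment with scikit-learn; the kernel, noise level and segment count are assumptions, and a fixed per-segment length-scale corresponds to the authors' earlier method rather than the local, per-point estimation this paper adds:

```python
# Illustrative per-segment GP regression of ground height from LiDAR returns.
# Kernel and segment count are assumptions, not the paper's settings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_segment_models(points, n_segments=8):
    """points: (N, 3) array of x, y, z ground returns."""
    r = np.hypot(points[:, 0], points[:, 1])        # radial distance
    theta = np.arctan2(points[:, 1], points[:, 0])  # angle selects the segment
    bins = np.linspace(-np.pi, np.pi, n_segments + 1)
    models = []
    for i in range(n_segments):
        mask = (theta >= bins[i]) & (theta < bins[i + 1])
        if mask.sum() < 5:                          # too few points to fit
            models.append(None)
            continue
        gp = GaussianProcessRegressor(
            kernel=RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05),
            normalize_y=True)
        gp.fit(r[mask].reshape(-1, 1), points[mask, 2])  # z as a function of range
        models.append(gp)
    return models
```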
2210.10523
Theodor Schnitzler
Theodor Schnitzler, Katharina Kohls, Evangelos Bitsikas, Christina P\"opper
Hope of Delivery: Extracting User Locations From Mobile Instant Messengers
33 pages, 23 figures, 9 tables, NDSS 2023
null
10.14722/ndss.2023.23188
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Mobile instant messengers such as WhatsApp use delivery status notifications to inform users whether a sent message has successfully reached its destination. This is useful and important information for the sender due to the often asynchronous use of the messenger service. However, as we demonstrate in this paper, this standard feature opens up a timing side channel with unexpected consequences for user location privacy. We investigate this threat conceptually and experimentally for three widely used instant messengers. We validate that this information leak exists even in privacy-friendly messengers such as Signal and Threema. Our results show that, after a training phase, a messenger user can distinguish different locations of the message receiver. Our analyses involving multiple rounds of measurements and evaluations show that the timing side channel persists independent of distances between receiver locations -- the attack works both for receivers in different countries and at small scale within one city. For instance, out of three locations within the same city, the sender can determine the correct one with more than 80% accuracy. Thus, messenger users can secretly spy on each other's whereabouts when sending instant messages. As our countermeasure evaluation shows, messenger providers could effectively disable the timing side channel by randomly delaying delivery confirmations within the range of a few seconds. For users themselves, the threat is harder to prevent since there is no option to turn off delivery confirmations.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 12:57:47 GMT" } ]
2022-10-20T00:00:00
[ [ "Schnitzler", "Theodor", "" ], [ "Kohls", "Katharina", "" ], [ "Bitsikas", "Evangelos", "" ], [ "Pöpper", "Christina", "" ] ]
new_dataset
0.994527
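The countermeasure suggested in the record above is to randomly delay delivery confirmations within a few seconds. A minimal sketch of that idea (the jitter bound and the surrounding transport API are assumptions):

```python
# Minimal sketch of the suggested countermeasure: jitter the delivery
# confirmation by a random delay so round-trip timing no longer encodes
# the receiver's location. Delay bound and transport API are assumptions.
import asyncio
import random

MAX_JITTER_S = 3.0  # "within the range of a few seconds"

async def send_delivery_confirmation(send_fn, message_id):
    """Delay the confirmation by a uniform random jitter before sending."""
    await asyncio.sleep(random.uniform(0.0, MAX_JITTER_S))
    await send_fn({"type": "delivered", "id": message_id})

async def demo():
    async def fake_send(payload):          # stub transport for illustration
        print("sent", payload)
    await send_delivery_confirmation(fake_send, "msg-42")

asyncio.run(demo())
```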
2210.10542
Thomas Lucas
Thomas Lucas, Fabien Baradel, Philippe Weinzaepfel, Gr\'egory Rogez
PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting
ECCV'22 Conference paper
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of action-conditioned generation of human motion sequences. Existing work falls into two categories: forecast models conditioned on observed past motions, or generative models conditioned on action labels and duration only. In contrast, we generate motion conditioned on observations of arbitrary length, including none. To solve this generalized problem, we propose PoseGPT, an auto-regressive transformer-based approach which internally compresses human motion into quantized latent sequences. An auto-encoder first maps human motion to latent index sequences in a discrete space, and vice-versa. Inspired by the Generative Pretrained Transformer (GPT), we propose to train a GPT-like model for next-index prediction in that space; this allows PoseGPT to output distributions on possible futures, with or without conditioning on past motion. The discrete and compressed nature of the latent space allows the GPT-like model to focus on long-range signal, as it removes low-level redundancy in the input signal. Predicting discrete indices also alleviates the common pitfall of predicting averaged poses, a typical failure case when regressing continuous values, as the average of discrete targets is not a target itself. Our experimental results show that our proposed approach achieves state-of-the-art results on HumanAct12, a standard but small scale dataset, as well as on BABEL, a recent large scale MoCap dataset, and on GRAB, a human-object interactions dataset.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 13:30:39 GMT" } ]
2022-10-20T00:00:00
[ [ "Lucas", "Thomas", "" ], [ "Baradel", "Fabien", "" ], [ "Weinzaepfel", "Philippe", "" ], [ "Rogez", "Grégory", "" ] ]
new_dataset
0.969041
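The PoseGPT record above generates motion by autoregressively predicting the next index in a quantized latent space. A minimal sketch of that sampling loop follows; the prior model is a random stub, and the vocabulary size and start token are assumptions:

```python
# Sketch of autoregressive next-index sampling over a quantized motion
# codebook. The "model" below is a random stub standing in for the GPT-like
# prior; a real system would decode the indices with the VQ decoder.
import torch

@torch.no_grad()
def sample_motion_indices(model, prefix, max_len=64, temperature=1.0):
    """prefix: (1, t>=1) LongTensor of observed codebook indices."""
    seq = prefix
    while seq.size(1) < max_len:
        logits = model(seq)[:, -1] / temperature     # logits for next index
        nxt = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        seq = torch.cat([seq, nxt], dim=1)
    return seq

vocab = 512
model = lambda s: torch.randn(1, s.size(1), vocab)   # stub prior
start = torch.zeros(1, 1, dtype=torch.long)          # assumed start token
print(sample_motion_indices(model, start).shape)     # torch.Size([1, 64])
```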
2210.10561
Philipp Richter
Philipp Richter and Oliver Gasser and Arthur Berger
Illuminating Large-Scale IPv6 Scanning in the Internet
null
in Proceedings of the ACM Internet Measurement Conference (IMC), 2022
10.1145/3517745.3561452
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While scans of the IPv4 space are ubiquitous, today little is known about scanning activity in the IPv6 Internet. In this work, we present a longitudinal and detailed empirical study on large-scale IPv6 scanning behavior in the Internet, based on firewall logs captured at some 230,000 hosts of a major Content Distribution Network (CDN). We develop methods to identify IPv6 scans, assess current and past levels of IPv6 scanning activity, and study dominant characteristics of scans, including scanner origins, targeted services, and insights on how scanners find target IPv6 addresses. Where possible, we compare our findings to what can be assessed from publicly available traces. Our work identifies and highlights new challenges to detect scanning activity in the IPv6 Internet, and uncovers that today's scans of the IPv6 space show widely different characteristics when compared to the more well-known IPv4 scans.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 14:00:59 GMT" } ]
2022-10-20T00:00:00
[ [ "Richter", "Philipp", "" ], [ "Gasser", "Oliver", "" ], [ "Berger", "Arthur", "" ] ]
new_dataset
0.994871
2210.10565
Margherita Ronchini
Margherita Ronchini, Yasser Rezaeiyan, Milad Zamani, Gabriella Panuccio, Farshad Moradi
NET-TEN: a silicon neuromorphic network for low-latency detection of seizures in local field potentials
14 pages, 6 figures
null
null
null
cs.HC cs.AR eess.SP q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Therapeutic intervention in neurological disorders still relies heavily on pharmacological solutions, while the treatment of patients with drug resistance remains an open challenge. This is particularly true for patients with epilepsy, 30% of whom are refractory to medications. Implantable devices for chronic recording and electrical modulation of brain activity have proved a viable alternative in such cases. To operate, the device should detect the relevant electrographic biomarkers from Local Field Potentials (LFPs) and determine the right time for stimulation. To enable timely interventions, the ideal device should attain biomarker detection with low latency while operating under low power consumption to prolong battery life. Neuromorphic networks have progressively gained a reputation as low-latency, low-power computing systems, which makes them a promising candidate as the processing core of next-generation implantable neural interfaces. Here we introduce a fully-analog neuromorphic device implemented in CMOS technology for analyzing LFP signals in an in vitro model of acute ictogenesis. We show that the system can detect ictal and interictal events with millisecond latency and high precision, consuming on average 3.50 nW during the task. Our work paves the way to a new generation of brain-implantable devices for personalized closed-loop stimulation for epilepsy treatment.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 14:07:07 GMT" } ]
2022-10-20T00:00:00
[ [ "Ronchini", "Margherita", "" ], [ "Rezaeiyan", "Yasser", "" ], [ "Zamani", "Milad", "" ], [ "Panuccio", "Gabriella", "" ], [ "Moradi", "Farshad", "" ] ]
new_dataset
0.991353
2210.10581
Peipei Liu
Peipei Liu, Hong Li, Zhiyu Wang, Yimo Ren, Jie Liu, Fei Lyu, Hongsong Zhu, Limin Sun
CEntRE: A paragraph-level Chinese dataset for Relation Extraction among Enterprises
null
null
null
null
cs.CL cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Enterprise relation extraction aims to detect pairs of enterprise entities and identify the business relations between them from unstructured or semi-structured text data; it is crucial for several real-world applications such as risk analysis, rating research and supply chain security. However, previous work mainly focuses on obtaining attribute information about enterprises, such as personnel and corporate business, and pays little attention to enterprise relation extraction. To encourage further progress in this research, we introduce CEntRE, a new dataset constructed from publicly available business news data with careful human annotation and intelligent data processing. Extensive experiments on CEntRE with six strong models demonstrate the challenges of our proposed dataset.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 14:22:10 GMT" } ]
2022-10-20T00:00:00
[ [ "Liu", "Peipei", "" ], [ "Li", "Hong", "" ], [ "Wang", "Zhiyu", "" ], [ "Ren", "Yimo", "" ], [ "Liu", "Jie", "" ], [ "Lyu", "Fei", "" ], [ "Zhu", "Hongsong", "" ], [ "Sun", "Limin", "" ] ]
new_dataset
0.999844
2210.10606
Royi Rassin
Royi Rassin, Shauli Ravfogel, Yoav Goldberg
DALLE-2 is Seeing Double: Flaws in Word-to-Concept Mapping in Text2Image Models
5 pages, BlackboxNLP @ EMNLP 2022
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
We study the way DALLE-2 maps symbols (words) in the prompt to their references (entities or properties of entities in the generated image). We show that, in stark contrast to the way humans process language, DALLE-2 does not follow the constraint that each word has a single role in the interpretation, and sometimes re-uses the same symbol for different purposes. We collect a set of stimuli that reflect the phenomenon: we show that DALLE-2 depicts both senses of nouns with multiple senses at once; and that a given word can modify the properties of two distinct entities in the image, or can be depicted as one object while also modifying the properties of another object, creating a semantic leakage of properties between entities. Taken together, our study highlights the differences between DALLE-2 and human language processing and opens an avenue for future study of the inductive biases of text-to-image models.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 14:52:40 GMT" } ]
2022-10-20T00:00:00
[ [ "Rassin", "Royi", "" ], [ "Ravfogel", "Shauli", "" ], [ "Goldberg", "Yoav", "" ] ]
new_dataset
0.996091
2210.10732
Clifford Broni-Bediako
Junshi Xia, Naoto Yokoya, Bruno Adriano, Clifford Broni-Bediako
OpenEarthMap: A Benchmark Dataset for Global High-Resolution Land Cover Mapping
Accepted by WACV 2023
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We introduce OpenEarthMap, a benchmark dataset for global high-resolution land cover mapping. OpenEarthMap consists of 2.2 million segments of 5000 aerial and satellite images covering 97 regions from 44 countries across 6 continents, with manually annotated 8-class land cover labels at a 0.25--0.5m ground sampling distance. Semantic segmentation models trained on OpenEarthMap generalize worldwide and can be used as off-the-shelf models in a variety of applications. We evaluate the performance of state-of-the-art methods for unsupervised domain adaptation and present challenging problem settings suitable for further technical development. We also investigate lightweight models using automated neural architecture search for limited computational resources and fast mapping. The dataset is available at https://open-earth-map.org.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 17:20:16 GMT" } ]
2022-10-20T00:00:00
[ [ "Xia", "Junshi", "" ], [ "Yokoya", "Naoto", "" ], [ "Adriano", "Bruno", "" ], [ "Broni-Bediako", "Clifford", "" ] ]
new_dataset
0.999748
2210.10770
Paul-Edouard Sarlin
Paul-Edouard Sarlin, Mihai Dusmanu, Johannes L. Sch\"onberger, Pablo Speciale, Lukas Gruber, Viktor Larsson, Ondrej Miksik, Marc Pollefeys
LaMAR: Benchmarking Localization and Mapping for Augmented Reality
Accepted at ECCV 2022, website at https://lamar.ethz.ch/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Localization and mapping is the foundational technology for augmented reality (AR) that enables sharing and persistence of digital content in the real world. While significant progress has been made, researchers are still mostly driven by unrealistic benchmarks not representative of real-world AR scenarios. These benchmarks are often based on small-scale datasets with low scene diversity, captured from stationary cameras, and lack other sensor inputs like inertial, radio, or depth data. Furthermore, their ground-truth (GT) accuracy is mostly insufficient to satisfy AR requirements. To close this gap, we introduce LaMAR, a new benchmark with a comprehensive capture and GT pipeline that co-registers realistic trajectories and sensor streams captured by heterogeneous AR devices in large, unconstrained scenes. To establish an accurate GT, our pipeline robustly aligns the trajectories against laser scans in a fully automated manner. As a result, we publish a benchmark dataset of diverse and large-scale scenes recorded with head-mounted and hand-held AR devices. We extend several state-of-the-art methods to take advantage of the AR-specific setup and evaluate them on our benchmark. The results offer new insights on current research and reveal promising avenues for future work in the field of localization and mapping for AR.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 17:58:17 GMT" } ]
2022-10-20T00:00:00
[ [ "Sarlin", "Paul-Edouard", "" ], [ "Dusmanu", "Mihai", "" ], [ "Schönberger", "Johannes L.", "" ], [ "Speciale", "Pablo", "" ], [ "Gruber", "Lukas", "" ], [ "Larsson", "Viktor", "" ], [ "Miksik", "Ondrej", "" ], [ "Pollefeys", "Marc", "" ] ]
new_dataset
0.992532
2109.00356
Evan Calabrese
Evan Calabrese, Javier E. Villanueva-Meyer, Jeffrey D. Rudie, Andreas M. Rauschecker, Ujjwal Baid, Spyridon Bakas, Soonmee Cha, John T. Mongan, Christopher P. Hess
The University of California San Francisco Preoperative Diffuse Glioma MRI (UCSF-PDGM) Dataset
7 pages, 2 figures, 2 tables
Radiology: Artificial Intelligence 4.6 (2022): e220058
10.1148/ryai.220058
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Here we present the University of California San Francisco Preoperative Diffuse Glioma MRI (UCSF-PDGM) dataset. The UCSF-PDGM dataset includes 500 subjects with histopathologically-proven diffuse gliomas who were imaged with a standardized 3 Tesla preoperative brain tumor MRI protocol featuring predominantly 3D imaging, as well as advanced diffusion and perfusion imaging techniques. The dataset also includes isocitrate dehydrogenase (IDH) mutation status for all cases and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status for World Health Organization (WHO) grade III and IV gliomas. The UCSF-PDGM has been made publicly available in the hopes that researchers around the world will use these data to continue to push the boundaries of AI applications for diffuse gliomas.
[ { "version": "v1", "created": "Mon, 30 Aug 2021 22:54:12 GMT" }, { "version": "v2", "created": "Wed, 16 Mar 2022 00:35:58 GMT" } ]
2022-10-19T00:00:00
[ [ "Calabrese", "Evan", "" ], [ "Villanueva-Meyer", "Javier E.", "" ], [ "Rudie", "Jeffrey D.", "" ], [ "Rauschecker", "Andreas M.", "" ], [ "Baid", "Ujjwal", "" ], [ "Bakas", "Spyridon", "" ], [ "Cha", "Soonmee", "" ], [ "Mongan", "John T.", "" ], [ "Hess", "Christopher P.", "" ] ]
new_dataset
0.999738
2109.03564
Yi Sun
Yi Sun, Yu Zheng, Chao Hao, Hangping Qiu
NSP-BERT: A Prompt-based Few-Shot Learner Through an Original Pre-training Task--Next Sentence Prediction
Published at COLING2022, long paper
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Using prompts to apply language models to various downstream tasks, also known as prompt-based learning or prompt-learning, has lately achieved significant success in comparison to the pre-train-then-fine-tune paradigm. Nonetheless, virtually all prompt-based methods are token-level, meaning they all utilize GPT's left-to-right language model or BERT's masked language model to perform cloze-style tasks. In this paper, we attempt to accomplish several NLP tasks in the zero-shot scenario using an original BERT pre-training task abandoned by RoBERTa and other models: Next Sentence Prediction (NSP). Unlike token-level techniques, our sentence-level prompt-based method NSP-BERT does not need to fix the length of the prompt or the position to be predicted, allowing it to handle tasks such as entity linking with ease. Based on the characteristics of NSP-BERT, we offer several quick building templates for various downstream tasks. We suggest a two-stage prompt method for word sense disambiguation tasks in particular. Our strategies for mapping the labels significantly enhance the model's performance on sentence pair tasks. On the FewCLUE benchmark, our NSP-BERT outperforms other zero-shot methods on most of these tasks and comes close to the few-shot methods.
[ { "version": "v1", "created": "Wed, 8 Sep 2021 11:57:08 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 09:40:35 GMT" } ]
2022-10-19T00:00:00
[ [ "Sun", "Yi", "" ], [ "Zheng", "Yu", "" ], [ "Hao", "Chao", "" ], [ "Qiu", "Hangping", "" ] ]
new_dataset
0.983655
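The NSP-BERT record above scores label prompts with BERT's next-sentence head. A minimal zero-shot classification sketch using the standard Hugging Face transformers API (the checkpoint and prompt wording are assumptions):

```python
# Sketch of NSP-based zero-shot classification: score each candidate label
# prompt as the "next sentence" and pick the highest isNext probability.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tok = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForNextSentencePrediction.from_pretrained("bert-base-chinese")

@torch.no_grad()
def nsp_classify(text, candidate_prompts):
    scores = []
    for prompt in candidate_prompts:
        enc = tok(text, prompt, return_tensors="pt")
        logits = model(**enc).logits              # [isNext, notNext]
        scores.append(torch.softmax(logits, -1)[0, 0].item())
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical label prompts for a sentiment task:
idx = nsp_classify("这部电影太精彩了", ["这是一条正面评论。", "这是一条负面评论。"])
print(idx)
```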
2110.13638
George Onoufriou
George Onoufriou, Marc Hanheide, Georgios Leontidis
EDLaaS: Fully Homomorphic Encryption Over Neural Network Graphs for Vision and Private Strawberry Yield Forecasting
13 pages, 6 figures, journal
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
We present automatically parameterised Fully Homomorphic Encryption (FHE) for encrypted neural network inference and exemplify our inference over FHE-compatible neural networks with our own open-source framework and reproducible examples. We use the 4th-generation Cheon, Kim, Kim and Song (CKKS) FHE scheme over fixed points provided by the Microsoft Simple Encrypted Arithmetic Library (MS-SEAL). We significantly enhance the usability and applicability of FHE in deep learning contexts, with a focus on the constituent graphs, traversal, and optimisation. We find that FHE is not a panacea for all privacy-preserving machine learning (PPML) problems and that certain limitations still remain, such as model training. However, we also find that in certain contexts FHE is well suited for computing completely private predictions with neural networks. The ability to compute sensitive problems privately and more easily, while lowering the barriers to entry, can allow otherwise too-sensitive fields to begin taking advantage of performant third-party neural networks. Lastly, we show how encrypted deep learning can be applied to a sensitive real-world problem in agri-food, i.e. strawberry yield forecasting, demonstrating competitive performance. We argue that the adoption of encrypted deep learning methods at scale could allow for a greater adoption of deep learning methodologies where privacy concerns exist, hence having a large positive potential impact within the agri-food sector and its journey to net zero.
[ { "version": "v1", "created": "Tue, 26 Oct 2021 12:43:35 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 10:07:07 GMT" } ]
2022-10-19T00:00:00
[ [ "Onoufriou", "George", "" ], [ "Hanheide", "Marc", "" ], [ "Leontidis", "Georgios", "" ] ]
new_dataset
0.954015
2203.02882
Ran Long
Ran Long, Christian Rauch, Tianwei Zhang, Vladimir Ivan, Tin Lun Lam and Sethu Vijayakumar
RGB-D SLAM in Indoor Planar Environments with Multiple Large Dynamic Objects
8 pages, 9 figures
IEEE Robotics and Automation Letters 2022
10.1109/LRA.2022.3186091
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a novel dense RGB-D SLAM approach for dynamic planar environments that enables simultaneous multi-object tracking, camera localisation and background reconstruction. Previous dynamic SLAM methods either rely on semantic segmentation to directly detect dynamic objects; or assume that dynamic objects occupy a smaller proportion of the camera view than the static background and can, therefore, be removed as outliers. Our approach, however, enables dense SLAM when the camera view is largely occluded by multiple dynamic objects with the aid of a camera motion prior. The dynamic planar objects are separated by their different rigid motions and tracked independently. The remaining dynamic non-planar areas are removed as outliers and not mapped into the background. The evaluation demonstrates that our approach outperforms the state-of-the-art methods in terms of localisation, mapping, dynamic segmentation and object tracking. We also demonstrate its robustness to large drift in the camera motion prior.
[ { "version": "v1", "created": "Sun, 6 Mar 2022 05:54:25 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 16:44:27 GMT" } ]
2022-10-19T00:00:00
[ [ "Long", "Ran", "" ], [ "Rauch", "Christian", "" ], [ "Zhang", "Tianwei", "" ], [ "Ivan", "Vladimir", "" ], [ "Lam", "Tin Lun", "" ], [ "Vijayakumar", "Sethu", "" ] ]
new_dataset
0.984866
2203.12602
Zhan Tong
Zhan Tong, Yibing Song, Jue Wang, Limin Wang
VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
NeurIPS 2022 camera-ready version
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking with an extremely high ratio. This simple design makes video reconstruction a more challenging self-supervision task, thus encouraging the extraction of more effective video representations during pre-training. We obtain three important findings on SSVP: (1) An extremely high masking ratio (i.e., 90% to 95%) still yields favorable VideoMAE performance. The temporally redundant video content enables a higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets is an important issue. Notably, our VideoMAE with the vanilla ViT can achieve 87.4% on Kinetics-400, 75.4% on Something-Something V2, 91.3% on UCF101, and 62.6% on HMDB51, without using any extra data. Code is available at https://github.com/MCG-NJU/VideoMAE.
[ { "version": "v1", "created": "Wed, 23 Mar 2022 17:55:10 GMT" }, { "version": "v2", "created": "Thu, 7 Jul 2022 14:38:38 GMT" }, { "version": "v3", "created": "Tue, 18 Oct 2022 09:15:42 GMT" } ]
2022-10-19T00:00:00
[ [ "Tong", "Zhan", "" ], [ "Song", "Yibing", "" ], [ "Wang", "Jue", "" ], [ "Wang", "Limin", "" ] ]
new_dataset
0.999622
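The VideoMAE record above relies on tube masking: one spatial mask shared across all frames, at a 90-95% ratio. A rough sketch of such a mask generator (grid sizes are assumed for illustration):

```python
# Rough sketch of video "tube" masking: a single spatial mask repeated over
# time at a very high mask ratio. Grid sizes are assumed for illustration.
import numpy as np

def tube_mask(n_frames, h_patches, w_patches, mask_ratio=0.9, rng=None):
    """Return a boolean mask of shape (n_frames, h_patches, w_patches);
    True marks patches hidden from the encoder."""
    rng = rng or np.random.default_rng()
    n_spatial = h_patches * w_patches
    n_masked = int(round(mask_ratio * n_spatial))
    flat = np.zeros(n_spatial, dtype=bool)
    flat[rng.choice(n_spatial, size=n_masked, replace=False)] = True
    # The same spatial pattern repeats over time ("tube" masking), so a
    # patch cannot be reconstructed by copying it from a neighboring frame.
    return np.broadcast_to(flat.reshape(1, h_patches, w_patches),
                           (n_frames, h_patches, w_patches)).copy()

mask = tube_mask(16, 14, 14)   # e.g., 16 frames of 14x14 patches
print(mask.mean())             # ~0.9 of patches are masked
```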
2205.08121
Francis Lau C.M.
Jia Zhan and Francis C.M. Lau
Design of Joint Source-Channel Codes Based on a Generic Protograph
26 pages, 15 figures, 5 tables
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we propose using a generic protograph to design joint source-channel codes (JSCCs). We present a generalized algorithm, called protograph extrinsic information transfer for JSCC algorithm (PEXIT-JSCC algorithm), for analyzing the channel threshold of the proposed JSCC. We also propose a source generic protograph EXIT (SGP-EXIT) algorithm, which is more appropriate than the generalized source protograph extrinsic information transfer (GSP-EXIT) algorithm, for evaluating the source threshold of a generic protograph. Moreover, a collaborative optimization method based on the SGP-EXIT and PEXIT-JSCC algorithms is proposed to construct generic-protograph JSCCs with good source and channel thresholds. Finally, we construct generic-protograph JSCCs, analyze their decoding thresholds, and compare their theoretical and error performance with JSCC systems based on optimized double-protographs. Results show that our proposed codes can attain channel thresholds within 1 dB from the Shannon limit and outperform double-protograph-based JSCCs.
[ { "version": "v1", "created": "Tue, 17 May 2022 06:42:13 GMT" }, { "version": "v2", "created": "Sat, 15 Oct 2022 14:38:21 GMT" }, { "version": "v3", "created": "Tue, 18 Oct 2022 08:00:59 GMT" } ]
2022-10-19T00:00:00
[ [ "Zhan", "Jia", "" ], [ "Lau", "Francis C. M.", "" ] ]
new_dataset
0.99794
2205.14292
Dian Wang
Dian Wang, Colin Kohler, Xupeng Zhu, Mingxi Jia, Robert Platt
BulletArm: An Open-Source Robotic Manipulation Benchmark and Learning Framework
Published at ISRR 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present BulletArm, a novel benchmark and learning environment for robotic manipulation. BulletArm is designed around two key principles: reproducibility and extensibility. We aim to encourage more direct comparisons between robotic learning methods by providing a set of standardized benchmark tasks in simulation alongside a collection of baseline algorithms. The framework consists of 31 different manipulation tasks of varying difficulty, ranging from simple reaching and picking tasks to more realistic tasks such as bin packing and pallet stacking. In addition to the provided tasks, BulletArm has been built to facilitate easy expansion and provides a suite of tools to assist users when adding new tasks to the framework. Moreover, we introduce a set of five benchmarks and evaluate them using a series of state-of-the-art baseline algorithms. By including these algorithms as part of our framework, we hope to encourage users to benchmark their work on any new tasks against these baselines.
[ { "version": "v1", "created": "Sat, 28 May 2022 01:19:50 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 19:25:45 GMT" } ]
2022-10-19T00:00:00
[ [ "Wang", "Dian", "" ], [ "Kohler", "Colin", "" ], [ "Zhu", "Xupeng", "" ], [ "Jia", "Mingxi", "" ], [ "Platt", "Robert", "" ] ]
new_dataset
0.999558
2206.00629
Zixin Guo
Zixin Guo, Tzu-Jui Julius Wang, Jorma Laaksonen
CLIP4IDC: CLIP for Image Difference Captioning
Accepted to AACL-IJCNLP 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image Difference Captioning (IDC) aims at generating sentences that describe the differences between two similar-looking images. Conventional approaches learn an IDC model with a pre-trained and usually frozen visual feature extractor. Accordingly, two major issues may arise: (1) a large domain gap usually exists between the pre-training datasets used for training such a visual encoder and those of the downstream IDC task, and (2) the visual feature extractor, when separately encoding the two images, often does not effectively encode the visual changes between them. Motivated by the excellent zero-shot performance of the recently proposed CLIP, we propose CLIP4IDC, which transfers a CLIP model to the IDC task to address these issues. Different from directly fine-tuning CLIP to generate sentences, we introduce an adaptation training process to adapt CLIP's visual encoder to capture and align differences in image pairs based on the textual descriptions. Experiments on three IDC benchmark datasets, CLEVR-Change, Spot-the-Diff, and Image-Editing-Request, demonstrate the effectiveness of CLIP4IDC.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 17:02:08 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 12:30:01 GMT" } ]
2022-10-19T00:00:00
[ [ "Guo", "Zixin", "" ], [ "Wang", "Tzu-Jui Julius", "" ], [ "Laaksonen", "Jorma", "" ] ]
new_dataset
0.988865
2206.12469
Atijit Anuchitanukul
Atijit Anuchitanukul and Lucia Specia
Burst2Vec: An Adversarial Multi-Task Approach for Predicting Emotion, Age, and Origin from Vocal Bursts
null
null
null
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Burst2Vec, our multi-task learning approach to predict emotion, age, and origin (i.e., native country/language) from vocal bursts. Burst2Vec utilises pre-trained speech representations to capture acoustic information from raw waveforms and incorporates the concept of model debiasing via adversarial training. Our models achieve a relative 30% performance gain over baselines using pre-extracted features and score the highest amongst all participants in the ICML ExVo 2022 Multi-Task Challenge.
[ { "version": "v1", "created": "Fri, 24 Jun 2022 18:57:41 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 05:48:23 GMT" } ]
2022-10-19T00:00:00
[ [ "Anuchitanukul", "Atijit", "" ], [ "Specia", "Lucia", "" ] ]
new_dataset
0.96643
2207.08980
Siamul Karim Khan
Siamul Karim Khan, Patrick Tinsley and Adam Czajka
DeformIrisNet: An Identity-Preserving Model of Iris Texture Deformation
Accepted to the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Nonlinear iris texture deformations due to pupil size variations are one of the main factors responsible for within-class variance of genuine comparison scores in iris recognition. In dominant approaches to iris recognition, the size of a ring-shaped iris region is linearly scaled to a canonical rectangle, used further in encoding and matching. However, the biological complexity of the iris sphincter and dilator muscles causes the movements of iris features to be nonlinear as a function of pupil size, and not solely organized along radial paths. As an alternative to the existing theoretical models based on the biomechanics of iris musculature, in this paper we propose a novel deep autoencoder-based model that can effectively learn complex movements of iris texture features directly from the data. The proposed model takes two inputs, (a) an ISO-compliant near-infrared iris image with the initial pupil size, and (b) a binary mask defining the target shape of the iris. The model makes all the necessary nonlinear deformations to the iris texture to match the shape of the iris in image (a) with the shape provided by the target mask (b). The identity-preservation component of the loss function helps the model find deformations that preserve identity, and not only the visual realism of the generated samples. We also demonstrate two immediate applications of this model: better compensation for iris texture deformations in iris recognition algorithms, compared to linear models, and the creation of a generative algorithm that can aid human forensic examiners who may need to compare iris images with a large difference in pupil dilation. We make the source code and model weights available along with this paper.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 23:23:23 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 07:32:34 GMT" } ]
2022-10-19T00:00:00
[ [ "Khan", "Siamul Karim", "" ], [ "Tinsley", "Patrick", "" ], [ "Czajka", "Adam", "" ] ]
new_dataset
0.999628
2208.03430
Anjul Tyagi
Anjul Tyagi, Tyler Estro, Geoff Kuenning, Erez Zadok, Klaus Mueller
PC-Expo: A Metrics-Based Interactive Axes Reordering Method for Parallel Coordinate Displays
Pre-print of the accepted paper at IEEE Transactions on Visualization and Computer Graphics (TVCG), 2022
IEEE Transactions on Visualization and Computer Graphics, 2022
10.1109/TVCG.2022.3209392
null
cs.GR
http://creativecommons.org/licenses/by/4.0/
The ordering of axes in a parallel coordinates plot (PCP) presents a particular story from the data, based on the user's perception of the PCP polylines. Existing works focus on directly optimizing the PCP axes ordering for some common analysis tasks like clustering, neighborhood, and correlation. However, direct optimization of PCP axes based on these common properties is restrictive, because it does not account for multiple properties occurring between the axes, or for local properties that occur in small regions of the data. Also, many of these techniques do not support the human-in-the-loop (HIL) paradigm, which is crucial (i) for explainability and (ii) in cases where no single reordering scheme fits the user's goals. To alleviate these problems, we present PC-Expo, a real-time visual analytics framework for all-in-one PCP line-pattern detection and axes reordering. We studied the connection of line patterns in PCPs with different data analysis tasks and datasets. PC-Expo expands prior work on PCP axes reordering by developing real-time, local detection schemes for the 12 most common analysis tasks (properties). Users can choose the story they want to present with PCPs by optimizing directly over their choice of properties. These properties can be ranked, or combined using individual weights, creating a custom optimization scheme for axes reordering. Users can control the granularity at which their detection scheme operates on the data, allowing exploration of local regions. PC-Expo also supports HIL axes reordering via local-property visualization, which shows the regions of granular activity for every axis pair. Local-property visualization is helpful for PCP axes reordering based on multiple properties, when no single reordering scheme fits the user's goals.
[ { "version": "v1", "created": "Sat, 6 Aug 2022 02:36:30 GMT" } ]
2022-10-19T00:00:00
[ [ "Tyagi", "Anjul", "" ], [ "Estro", "Tyler", "" ], [ "Kuenning", "Geoff", "" ], [ "Zadok", "Erez", "" ], [ "Mueller", "Klaus", "" ] ]
new_dataset
0.966798
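The PC-Expo record above combines weighted per-pair property scores into an axes ordering. A much-simplified sketch of that idea, with a single correlation property and a greedy chain (the real system supports 12 properties, rankings and local detection):

```python
# Simplified sketch of property-weighted axes reordering: score every axis
# pair (here only |correlation|), then greedily chain axes by the best next
# pair. The property set and weighting are illustrative assumptions.
import numpy as np

def greedy_axis_order(data, corr_weight=1.0):
    """data: (n_rows, n_axes) array; returns an ordering of axis indices."""
    d = data.shape[1]
    score = corr_weight * np.abs(np.corrcoef(data, rowvar=False))
    np.fill_diagonal(score, -np.inf)           # an axis cannot pair with itself
    order = [int(score.max(axis=1).argmax())]  # start at the strongest pair
    while len(order) < d:
        cand = score[order[-1]].copy()
        cand[order] = -np.inf                  # do not revisit placed axes
        order.append(int(cand.argmax()))
    return order

rng = np.random.default_rng(0)
print(greedy_axis_order(rng.normal(size=(200, 5))))
```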
2208.06063
Nantheera Anantrasirichai
Nantheera Anantrasirichai and Thanarat H. Chalidabhongse and Duangdao Palasuwan and Korranat Naruenatthanaset and Thananop Kobchaisawat and Nuntiporn Nunthanasup and Kanyarat Boonpeng and Xudong Ma and Alin Achim
ICIP 2022 Challenge on Parasitic Egg Detection and Classification in Microscopic Images: Dataset, Methods and Results
The 29th IEEE International Conference on Image Processing
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Manual examination of faecal smear samples to identify the existence of parasitic eggs is very time-consuming and can only be done by specialists. Therefore, an automated system is required to tackle this problem, since it can relate to serious intestinal parasitic infections. This paper reviews the ICIP 2022 Challenge on parasitic egg detection and classification in microscopic images. We describe a new dataset for this application, which is the largest dataset of its kind. The methods used by participants in the challenge are summarised and discussed along with their results.
[ { "version": "v1", "created": "Thu, 11 Aug 2022 22:50:51 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 21:55:42 GMT" } ]
2022-10-19T00:00:00
[ [ "Anantrasirichai", "Nantheera", "" ], [ "Chalidabhongse", "Thanarat H.", "" ], [ "Palasuwan", "Duangdao", "" ], [ "Naruenatthanaset", "Korranat", "" ], [ "Kobchaisawat", "Thananop", "" ], [ "Nunthanasup", "Nuntiporn", "" ], [ "Boonpeng", "Kanyarat", "" ], [ "Ma", "Xudong", "" ], [ "Achim", "Alin", "" ] ]
new_dataset
0.999581
2208.06305
Ejup Hoxha
Ejup Hoxha, Jinglun Feng, Diar Sanakov, Ardian Gjinofci, Jizhong Xiao
Robotic Inspection and Characterization of Subsurface Defects on Concrete Structures Using Impact Sounding
null
Structural Health Monitoring 2021
10.12783/shm2021/36339
null
cs.RO eess.SP
http://creativecommons.org/licenses/by/4.0/
Impact-sounding (IS) and impact-echo (IE) are well-developed non-destructive evaluation (NDE) methods that are widely used for inspections of concrete structures to ensure safety and sustainability. However, it is tedious work to collect IS and IE data along grid lines covering a large target area for characterization of subsurface defects. Moreover, data processing is complicated and requires domain experts to interpret the results. To address these problems, we present a novel robotic inspection system named Impact-Rover to automate the data collection process, and introduce data analytics software that visualizes the inspection results so that non-specialists can understand them. The system consists of three modules: 1) a robotic platform with vertical mobility to collect IS and IE data in hard-to-reach locations, 2) a vision-based positioning module that fuses an RGB-D camera, IMU and wheel encoder to estimate the 6-DOF pose of the robot, and 3) a data analytics software module for processing the IS data to generate defect maps. The Impact-Rover hosts both IE and IS devices on a sliding mechanism and can perform move-stop-sample operations to collect multiple IS and IE data points at adjustable spacing. The robot takes samples much faster than manual data collection because it automatically takes multiple measurements along a straight line and records the locations. This paper focuses on reporting experimental results for IS. We compute features and use unsupervised learning methods to analyze the data. By combining the pose generated by our vision-based localization module with the position of the head of the sliding mechanism, we can generate maps of possible defects. The results on concrete slabs demonstrate that our impact-sounding system can effectively reveal shallow defects.
[ { "version": "v1", "created": "Fri, 12 Aug 2022 14:43:52 GMT" } ]
2022-10-19T00:00:00
[ [ "Hoxha", "Ejup", "" ], [ "Feng", "Jinglun", "" ], [ "Sanakov", "Diar", "" ], [ "Gjinofci", "Ardian", "" ], [ "Xiao", "Jizhong", "" ] ]
new_dataset
0.995751
2208.10859
Colin Groth
Colin Groth, Sascha Fricke, Susana Castillo, Marcus Magnor
Wavelet-Based Fast Decoding of 360-Degree Videos
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a wavelet-based video codec specifically designed for VR displays that enables real-time playback of high-resolution 360° videos. Our codec exploits the fact that only a fraction of the full 360° video frame is visible on the display at any time. To load and decode the video viewport-dependently in real time, we make use of the wavelet transform for intra- as well as inter-frame coding. Thereby, the relevant content is directly streamed from the drive, without the need to hold the entire frames in memory. With an average of 193 frames per second at 8192x8192-pixel full-frame resolution, the conducted evaluation demonstrates that our codec's decoding performance is up to 272% higher than that of the state-of-the-art video codecs H.265 and AV1 for typical VR displays. By means of a perceptual study, we further illustrate the necessity of high frame rates for a better VR experience. Finally, we demonstrate how our wavelet-based codec can also directly be used in conjunction with foveation for further performance increase.
[ { "version": "v1", "created": "Tue, 23 Aug 2022 10:35:26 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 14:41:43 GMT" } ]
2022-10-19T00:00:00
[ [ "Groth", "Colin", "" ], [ "Fricke", "Sascha", "" ], [ "Castillo", "Susana", "" ], [ "Magnor", "Marcus", "" ] ]
new_dataset
0.99892
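The codec in the record above decodes wavelet subbands viewport-dependently. For orientation only, a one-level 2D Haar transform, the simplest wavelet decomposition, is sketched below; the paper's actual filters and streaming layout are not specified here:

```python
# One level of a 2D Haar transform, illustrating the kind of subband
# decomposition a viewport-dependent codec can decode selectively.
import numpy as np

def haar2d(x):
    """x: 2D array with even dimensions; returns (LL, LH, HL, HH) subbands."""
    a, b = x[0::2, :], x[1::2, :]
    lo, hi = (a + b) / 2, (a - b) / 2          # vertical average/difference
    a, b = lo[:, 0::2], lo[:, 1::2]
    LL, LH = (a + b) / 2, (a - b) / 2          # horizontal pass on lowpass
    a, b = hi[:, 0::2], hi[:, 1::2]
    HL, HH = (a + b) / 2, (a - b) / 2          # horizontal pass on highpass
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)
print(LL)   # coarse approximation; the other subbands hold detail
```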
2209.00213
Moseli Mots'oehli
Moseli Mots'oehli and Yao Chao Yang
Public Parking Spot Detection And Geo-localization Using Transfer Learning
Accepted for presentation at SACAIR 2022. 11 pages, 5 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In cities around the world, locating public parking lots with vacant parking spots is a major problem, costing commuters time and adding to traffic congestion. This work illustrates how a dataset of geo-tagged images from a mobile phone camera can be used to navigate to the most convenient public parking lot in Johannesburg with an available parking space, detected by a neural-network-powered public camera. The images are used to fine-tune a Detectron2 model pre-trained on the ImageNet dataset to demonstrate detection and segmentation of vacant parking spots; we then add the parking lot's corresponding longitude and latitude coordinates to recommend the most convenient parking lot to the driver based on the Haversine distance and the number of available parking spots. Using the VGG Image Annotator (VIA), we annotate images from an expanding dataset with polygon outlines of four types of objects of interest: cars, open parking spots, people, and car number plates. We use the segmentation model to ensure number plates can be occluded in production for car-registration anonymity. We obtain intersection-over-union scores of 89% and 82% on cars and parking spaces, respectively. This work has the potential to help reduce the amount of time commuters spend searching for free public parking, hence easing traffic congestion in and around shopping complexes and other public places, and to maximize people's utility with respect to driving on public roads.
[ { "version": "v1", "created": "Thu, 1 Sep 2022 04:09:51 GMT" }, { "version": "v2", "created": "Sun, 4 Sep 2022 04:29:17 GMT" }, { "version": "v3", "created": "Tue, 18 Oct 2022 03:59:29 GMT" } ]
2022-10-19T00:00:00
[ [ "Mots'oehli", "Moseli", "" ], [ "Yang", "Yao Chao", "" ] ]
new_dataset
0.998564
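The record above ranks parking lots by Haversine distance and spot availability. A small self-contained sketch of that recommendation step (the lot data is made up for illustration):

```python
# Sketch of the recommendation step: rank lots by great-circle (Haversine)
# distance from the driver, keeping only lots with detected free spots.
# Lot coordinates and counts below are invented for illustration.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def recommend(driver, lots):
    """lots: list of dicts with 'lat', 'lon', 'free_spots' keys."""
    open_lots = [l for l in lots if l["free_spots"] > 0]
    return min(open_lots,
               key=lambda l: haversine_km(driver[0], driver[1], l["lat"], l["lon"]))

lots = [{"lat": -26.195, "lon": 28.034, "free_spots": 3},
        {"lat": -26.204, "lon": 28.040, "free_spots": 0}]
print(recommend((-26.20, 28.03), lots))
```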
2209.13464
Zhijian Ou
Hong Liu, Hao Peng, Zhijian Ou, Juanzi Li, Yi Huang and Junlan Feng
Information Extraction and Human-Robot Dialogue towards Real-life Tasks: A Baseline Study with the MobileCS Dataset
Accepted by EMNLP 2022 SereTOD Workshop
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, a class of task-oriented dialogue (TOD) datasets collected through Wizard-of-Oz simulated games has emerged. However, Wizard-of-Oz data are in fact simulated data and thus fundamentally different from real-life conversations, which are noisier and more casual. Recently, the SereTOD challenge was organized and released the MobileCS dataset, which consists of real-world dialog transcripts between real users and customer-service staff from China Mobile. Based on the MobileCS dataset, the SereTOD challenge has two tasks, not only evaluating the construction of the dialogue system itself, but also examining information extraction from dialog transcripts, which is crucial for building the knowledge base for TOD. This paper mainly presents a baseline study of the two tasks with the MobileCS dataset. We introduce how the two baselines are constructed, the problems encountered, and the results. We anticipate that the baselines can facilitate exciting future research to build human-robot dialogue systems for real-life tasks.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 15:30:43 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 06:15:28 GMT" } ]
2022-10-19T00:00:00
[ [ "Liu", "Hong", "" ], [ "Peng", "Hao", "" ], [ "Ou", "Zhijian", "" ], [ "Li", "Juanzi", "" ], [ "Huang", "Yi", "" ], [ "Feng", "Junlan", "" ] ]
new_dataset
0.993393
2210.07873
Amir Zeldes
Amir Zeldes, Nick Howell, Noam Ordan and Yifat Ben Moshe
A Second Wave of UD Hebrew Treebanking and Cross-Domain Parsing
Proceedings of EMNLP 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Foundational Hebrew NLP tasks such as segmentation, tagging and parsing have relied to date on various versions of the Hebrew Treebank (HTB, Sima'an et al. 2001). However, the data in HTB, a single-source newswire corpus, is now over 30 years old and does not cover many aspects of contemporary Hebrew on the web. This paper presents a new, freely available UD treebank of Hebrew stratified from a range of topics selected from Hebrew Wikipedia. In addition to introducing the corpus and evaluating the quality of its annotations, we deploy automatic validation tools based on grew (Guillaume, 2021) and conduct the first cross-domain parsing experiments in Hebrew. We obtain new state-of-the-art (SOTA) results on UD NLP tasks, using a combination of the latest language modelling and some incremental improvements to existing transformer-based approaches. We also release a new version of the UD HTB matching the annotation scheme updates from our new corpus.
[ { "version": "v1", "created": "Fri, 14 Oct 2022 14:52:07 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 14:53:07 GMT" } ]
2022-10-19T00:00:00
[ [ "Zeldes", "Amir", "" ], [ "Howell", "Nick", "" ], [ "Ordan", "Noam", "" ], [ "Moshe", "Yifat Ben", "" ] ]
new_dataset
0.998302
2210.08305
Runkai Zhao
Runkai Zhao, Heng Wang, Chaoyi Zhang, Weidong Cai
PointNeuron: 3D Neuron Reconstruction via Geometry and Topology Learning of Point Clouds
WACV 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Digital neuron reconstruction from 3D microscopy images is an essential technique for investigating brain connectomics and neuron morphology. Existing reconstruction frameworks use convolution-based segmentation networks to partition the neuron from noisy backgrounds before applying the tracing algorithm. The tracing results are sensitive to the raw image quality and segmentation accuracy. In this paper, we propose a novel framework for 3D neuron reconstruction. Our key idea is to use the geometric representation power of the point cloud to better explore the intrinsic structural information of neurons. Our proposed framework adopts one graph convolutional network to predict the neural skeleton points and another one to produce the connectivity of these points. We finally generate the target SWC file through the interpretation of the predicted point coordinates, radius, and connections. Evaluated on the Janelia-Fly dataset from the BigNeuron project, we show that our framework achieves competitive neuron reconstruction performance. Our geometry and topology learning of point clouds could further benefit 3D medical image analysis, such as cardiac surface reconstruction. Our code is available at https://github.com/RunkaiZhao/PointNeuron.
[ { "version": "v1", "created": "Sat, 15 Oct 2022 14:11:56 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 01:59:13 GMT" } ]
2022-10-19T00:00:00
[ [ "Zhao", "Runkai", "" ], [ "Wang", "Heng", "" ], [ "Zhang", "Chaoyi", "" ], [ "Cai", "Weidong", "" ] ]
new_dataset
0.996496
2210.09267
Jyh-Jing Hwang
Jyh-Jing Hwang and Henrik Kretzschmar and Joshua Manela and Sean Rafferty and Nicholas Armstrong-Crews and Tiffany Chen and Dragomir Anguelov
CramNet: Camera-Radar Fusion with Ray-Constrained Cross-Attention for Robust 3D Object Detection
ECCV 2022
null
null
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robust 3D object detection is critical for safe autonomous driving. Camera and radar sensors are synergistic as they capture complementary information and work well under different environmental conditions. Fusing camera and radar data is challenging, however, as each of the sensors lacks information along a perpendicular axis: depth is unknown to the camera and elevation is unknown to the radar. We propose the camera-radar matching network CramNet, an efficient approach to fuse the sensor readings from camera and radar in a joint 3D space. To leverage radar range measurements for better camera depth predictions, we propose a novel ray-constrained cross-attention mechanism that resolves the ambiguity in the geometric correspondences between camera features and radar features. Our method supports training with sensor modality dropout, which leads to robust 3D object detection even when a camera or radar sensor suddenly malfunctions on a vehicle. We demonstrate the effectiveness of our fusion approach through extensive experiments on the RADIATE dataset, one of the few large-scale datasets that provide radar radio frequency imagery. A camera-only variant of our method achieves competitive performance in monocular 3D object detection on the Waymo Open Dataset.
[ { "version": "v1", "created": "Mon, 17 Oct 2022 17:18:47 GMT" }, { "version": "v2", "created": "Tue, 18 Oct 2022 01:46:28 GMT" } ]
2022-10-19T00:00:00
[ [ "Hwang", "Jyh-Jing", "" ], [ "Kretzschmar", "Henrik", "" ], [ "Manela", "Joshua", "" ], [ "Rafferty", "Sean", "" ], [ "Armstrong-Crews", "Nicholas", "" ], [ "Chen", "Tiffany", "" ], [ "Anguelov", "Dragomir", "" ] ]
new_dataset
0.999481
2210.09345
Elisa Bassignana
Elisa Bassignana and Barbara Plank
CrossRE: A Cross-Domain Dataset for Relation Extraction
Accepted in Findings of the Association for Computational Linguistics: EMNLP 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Relation Extraction (RE) has attracted increasing attention, but current RE evaluation is limited to in-domain evaluation setups. Little is known about how well an RE system fares in challenging, but realistic, out-of-distribution evaluation setups. To address this gap, we propose CrossRE, a new, freely available cross-domain benchmark for RE, which comprises six distinct text domains and includes multi-label annotations. An additional innovation is that we release the meta-data collected during annotation, including explanations and flags of difficult instances. We provide an empirical evaluation with a state-of-the-art model for relation classification. As the meta-data enables us to shed new light on the state-of-the-art model, we provide a comprehensive analysis of the impact of difficult cases and find correlations between model and human annotations. Overall, our empirical investigation highlights the difficulty of cross-domain RE. We release our dataset to spur more research in this direction.
[ { "version": "v1", "created": "Mon, 17 Oct 2022 18:33:14 GMT" } ]
2022-10-19T00:00:00
[ [ "Bassignana", "Elisa", "" ], [ "Plank", "Barbara", "" ] ]
new_dataset
0.999443
2210.09389
Rashid Mehmood PhD
Istiak Ahmad, Fahad AlQurashi, Rashid Mehmood
Potrika: Raw and Balanced Newspaper Datasets in the Bangla Language with Eight Topics and Five Attributes
10 pages, 5 figures
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Knowledge is central to human and scientific development. Natural Language Processing (NLP) allows automated analysis and creation of knowledge. Data is a crucial ingredient of NLP and machine learning. The scarcity of open datasets is a well-known problem in machine and deep learning research. This is very much the case for textual NLP datasets in English and other major world languages. For the Bangla language, the situation is even more challenging, and the number of large datasets for NLP research is practically nil. We hereby present Potrika, a large single-label Bangla news article textual dataset curated for NLP research from six popular online news portals in Bangladesh (Jugantor, Jaijaidin, Ittefaq, Kaler Kontho, Inqilab, and Somoyer Alo) for the period 2014-2020. The articles are classified into eight distinct categories (National, Sports, International, Entertainment, Economy, Education, Politics, and Science & Technology), providing five attributes (News Article, Category, Headline, Publication Date, and Newspaper Source). The raw dataset contains 185.51 million words and 12.57 million sentences contained in 664,880 news articles. Moreover, using NLP augmentation techniques, we create from the raw (unbalanced) dataset another (balanced) dataset comprising 320,000 news articles with 40,000 articles in each of the eight news categories. Potrika contains both datasets (raw and balanced) to suit a wide range of NLP research. To the best of our knowledge, Potrika is by far the largest and most extensive dataset for news classification.
[ { "version": "v1", "created": "Mon, 17 Oct 2022 19:37:42 GMT" } ]
2022-10-19T00:00:00
[ [ "Ahmad", "Istiak", "" ], [ "AlQurashi", "Fahad", "" ], [ "Mehmood", "Rashid", "" ] ]
new_dataset
0.999179
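The Potrika record above balances the corpus to 40,000 articles per category using NLP augmentation. The sketch below shows the balancing skeleton only, with oversampling standing in for augmentation (column names are assumptions):

```python
# Sketch of class balancing to a fixed 40,000 articles per category.
# The paper uses NLP augmentation for minority classes; here
# sampling-with-replacement stands in for that augmentation step.
import pandas as pd

TARGET = 40_000

def balance(df, label_col="Category", seed=0):
    parts = []
    for _, group in df.groupby(label_col):
        parts.append(group.sample(n=TARGET,
                                  replace=len(group) < TARGET,  # oversample small classes
                                  random_state=seed))
    return pd.concat(parts).sample(frac=1, random_state=seed)   # shuffle rows
```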
2210.09396
Sky CH-Wang
Sky CH-Wang, Evan Li, Oliver Li, Smaranda Muresan, Zhou Yu
Affective Idiosyncratic Responses to Music
EMNLP 2022 Main Conference; see Github https://github.com/skychwang/music-emotions
null
null
null
cs.CL cs.AI cs.CY cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Affective responses to music are highly personal. Despite consensus that idiosyncratic factors play a key role in regulating how listeners emotionally respond to music, precisely measuring the marginal effects of these variables has proved challenging. To address this gap, we develop computational methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform. Building on studies from music psychology in systematic and quasi-causal analyses, we test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses. Finally, motivated by the social phenomenon known as wǎng-yì-yún, we identify influencing factors of platform user self-disclosures, the social support they receive, and notable differences in discloser user activity.
[ { "version": "v1", "created": "Mon, 17 Oct 2022 19:57:46 GMT" } ]
2022-10-19T00:00:00
[ [ "CH-Wang", "Sky", "" ], [ "Li", "Evan", "" ], [ "Li", "Oliver", "" ], [ "Muresan", "Smaranda", "" ], [ "Yu", "Zhou", "" ] ]
new_dataset
0.997082
2210.09411
Kenechukwu Mbanisi
Kenechukwu C. Mbanisi and Michael A. Gennert
Multimodal Shared Autonomy for Social Navigation Assistance of Telepresence Robots
10 pages, 4 figures
null
null
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile telepresence robots (MTRs) have become increasingly popular in the expanding world of remote work, providing new avenues for people to actively participate in activities at a distance. However, humans operating MTRs often have difficulty navigating in densely populated environments due to limited situation awareness and narrow field-of-view, which reduces user acceptance and satisfaction. Shared autonomy in navigation has been studied primarily in static environments or in situations where only one pedestrian interacts with the robot. We present a multimodal shared autonomy approach, leveraging visual and haptic guidance, to provide navigation assistance for remote operators in densely-populated environments. It uses a modified form of reciprocal velocity obstacles for generating safe control inputs while taking social proxemics constraints into account. Two different visual guidance designs, as well as haptic force rendering, were proposed to convey safe control input. We conducted a user study to compare the merits and limitations of multimodal navigation assistance to haptic or visual assistance alone on a shared navigation task. The study involved 15 participants operating a virtual telepresence robot in a virtual hall with moving pedestrians, using the different assistance modalities. We evaluated navigation performance, transparency and cooperation, as well as user preferences. Our results showed that participants preferred multimodal assistance with a visual guidance trajectory over haptic or visual modalities alone, although it had no impact on navigation performance. Additionally, we found that visual guidance trajectories conveyed a higher degree of understanding and cooperation than equivalent haptic cues in a navigation task.
[ { "version": "v1", "created": "Mon, 17 Oct 2022 20:23:32 GMT" } ]
2022-10-19T00:00:00
[ [ "Mbanisi", "Kenechukwu C.", "" ], [ "Gennert", "Michael A.", "" ] ]
new_dataset
0.966532
2210.09460
Matthew Sotoudeh
Matthew Sotoudeh
System-Specific Interpreters Make Megasystems Friendlier
To appear at the Eighth Workshop on Live Programming (LIVE 2022)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern operating systems, browsers, and office suites have become megasystems built on millions of lines of code. Their sheer size can intimidate even experienced users and programmers away from attempting to understand and modify the software running on their machines. This paper introduces system-specific interpreters (SSIs) as a tool to help users regain knowledge of and control over megasystems. SSIs directly execute individual modules of a megasystem in a gdb-like environment without forcing the user to build, run, and trace the entire system. A prototype framework to help write SSIs is described in this paper and available for download at https://github.com/matthewsot/ssi-live22.
[ { "version": "v1", "created": "Mon, 17 Oct 2022 22:19:22 GMT" } ]
2022-10-19T00:00:00
[ [ "Sotoudeh", "Matthew", "" ] ]
new_dataset
0.972882
2210.09495
Noriaki Ota
Noriaki Ota, Shingo Yokoi, Shinsuke Yamaoka
5th Place Solution to Kaggle Google Universal Image Embedding Competition
3 pages, 1 figure
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present our solution, which placed 5th in the Kaggle Google Universal Image Embedding Competition in 2022. We use the ViT-H visual encoder of CLIP from the openclip repository as a backbone and train a head model composed of BatchNormalization and Linear layers using ArcFace. The dataset used was a subset of Products10K, GLDv2, GPR1200, and Food101. Applying test-time augmentation (TTA) to part of the images also improves the score. With this method, we achieve a score of 0.684 on the public and 0.688 on the private leaderboard. Our code is available at https://github.com/riron1206/kaggle-Google-Universal-Image-Embedding-Competition-5th-Place-Solution
[ { "version": "v1", "created": "Tue, 18 Oct 2022 00:34:09 GMT" } ]
2022-10-19T00:00:00
[ [ "Ota", "Noriaki", "" ], [ "Yokoi", "Shingo", "" ], [ "Yamaoka", "Shinsuke", "" ] ]
new_dataset
0.998506
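The head described in the record above (2210.09495) is a standard metric-learning pattern: a BatchNorm + Linear projection on frozen CLIP features, trained with an ArcFace margin loss. A minimal PyTorch sketch follows; the embedding dimensions, class count, margin, and scale are illustrative assumptions, not the team's exact settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingHead(nn.Module):
    def __init__(self, in_dim=1024, out_dim=64):  # in_dim assumed for CLIP ViT-H features
        super().__init__()
        self.bn = nn.BatchNorm1d(in_dim)
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):                 # x: (B, in_dim) backbone features
        return F.normalize(self.fc(self.bn(x)), dim=-1)

class ArcFaceLoss(nn.Module):
    def __init__(self, num_classes, emb_dim=64, s=30.0, m=0.30):  # s, m assumed
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, emb_dim))
        self.s, self.m = s, m

    def forward(self, emb, labels):
        # Cosine similarity between normalized embeddings and class centers.
        cos = F.linear(emb, F.normalize(self.weight, dim=-1)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        target = torch.cos(theta + self.m)               # add angular margin
        onehot = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(onehot, target, cos) * self.s
        return F.cross_entropy(logits, labels)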
2210.09729
Zan Wang
Zan Wang, Yixin Chen, Tengyu Liu, Yixin Zhu, Wei Liang, Siyuan Huang
HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes
Accepted by NeurIPS 2022
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning to generate diverse scene-aware and goal-oriented human motions in 3D scenes remains challenging due to the mediocre characteristics of the existing datasets on Human-Scene Interaction (HSI); they only have limited scale/quality and lack semantics. To fill in the gap, we propose a large-scale and semantic-rich synthetic HSI dataset, denoted as HUMANISE, by aligning the captured human motion sequences with various 3D indoor scenes. We automatically annotate the aligned motions with language descriptions that depict the action and the unique interacting objects in the scene; e.g., sit on the armchair near the desk. HUMANISE thus enables a new generation task, language-conditioned human motion generation in 3D scenes. The proposed task is challenging as it requires joint modeling of the 3D scene, human motion, and natural language. To tackle this task, we present a novel scene-and-language conditioned generative model that can produce 3D human motions of the desirable action interacting with the specified objects. Our experiments demonstrate that our model generates diverse and semantically consistent human motions in 3D scenes.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 10:14:11 GMT" } ]
2022-10-19T00:00:00
[ [ "Wang", "Zan", "" ], [ "Chen", "Yixin", "" ], [ "Liu", "Tengyu", "" ], [ "Zhu", "Yixin", "" ], [ "Liang", "Wei", "" ], [ "Huang", "Siyuan", "" ] ]
new_dataset
0.999698
2210.09765
Fernando Alonso-Fernandez
Fernando Alonso-Fernandez, Reuben A. Farrugia, Josef Bigun
Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion
Published at Intl Conf on Biometrics: Theory, Apps and Systems, BTAS 2016
null
10.1109/BTAS.2016.7791208
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER to below 5% for down-sampling factors up to an image size of only 13x13.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 11:25:19 GMT" } ]
2022-10-19T00:00:00
[ [ "Alonso-Fernandez", "Fernando", "" ], [ "Farrugia", "Reuben A.", "" ], [ "Bigun", "Josef", "" ] ]
new_dataset
0.998417
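The eigen-patch reconstruction in 2210.09765 follows the classic eigen-transformation idea: express a low-resolution patch as a combination of training LR patches, then transfer the same combination weights to the paired high-resolution patches. Below is a simplified numpy sketch, with a least-squares solver standing in for the exact eigen-decomposition; variable names and patch handling are assumptions, not the paper's formulation.

import numpy as np

def eigen_patch_sr(x_lr, train_lr, train_hr):
    """x_lr: (d_l,) LR test patch; train_lr: (d_l, n); train_hr: (d_h, n)."""
    m_lr = train_lr.mean(axis=1)
    m_hr = train_hr.mean(axis=1)
    L = train_lr - m_lr[:, None]          # centered LR training patches
    H = train_hr - m_hr[:, None]          # centered HR training patches
    # Combination weights that best reconstruct the LR patch from training LR patches.
    w, *_ = np.linalg.lstsq(L, x_lr - m_lr, rcond=None)
    return m_hr + H @ w                   # transfer the same weights to HR space

# Each patch of the LR iris image would be reconstructed independently and the
# overlapping results averaged, which is what preserves local information.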
2210.09778
Fernando Alonso-Fernandez
Fernando Alonso-Fernandez, Anna Mikaelyan, Josef Bigun
Compact multi-scale periocular recognition using SAFE features
Published at IEEE/IAPR Intl Conf on Pattern Recognition, ICPR 2016
null
10.1109/ICPR.2016.7899842
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor, which encodes the presence of various symmetric curve families around image key points. We use the sclera center as a single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye. As demonstrated, such discriminative properties can be encoded with a reduced set of symmetric curves. Experiments are done with a database of periocular images captured with a digital camera. We test our system against reference periocular features, achieving top performance with a considerably smaller feature vector (given by the use of a single key point). All the systems tested also show a nearly steady correlation between acquisition distance and performance, and they are also able to cope well when enrolment and test images are not captured at the same distance. Fusion experiments among the available systems are also provided.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 11:46:38 GMT" } ]
2022-10-19T00:00:00
[ [ "Alonso-Fernandez", "Fernando", "" ], [ "Mikaelyan", "Anna", "" ], [ "Bigun", "Josef", "" ] ]
new_dataset
0.996313
2210.09790
Kazuya Tsubokura
Kazuya Tsubokura, Fumiya Kishi, Kotomi Narita, Takuya Takeda, Yurie Iribe
Hospitable Travel Agent Dialogue Robot: Team Irisapu Project Description for DRC2022
5 pages, 5 figures. This paper is part of the proceedings of the Dialogue Robot Competition 2022
null
null
null
cs.RO cs.HC
http://creativecommons.org/licenses/by/4.0/
This paper describes the dialog robot system designed by Team Irisapu for the preliminary round of the Dialogue Robot Competition 2022 (DRC2022). Our objective was to design a hospitable travel agent robot. The system we developed was ranked 8th out of 13 systems in the preliminary round of the competition, but our robot received high marks for its naturalness and likeability. Our next challenge is to create a system that can provide more useful information to users.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 12:04:59 GMT" } ]
2022-10-19T00:00:00
[ [ "Tsubokura", "Kazuya", "" ], [ "Kishi", "Fumiya", "" ], [ "Narita", "Kotomi", "" ], [ "Takeda", "Takuya", "" ], [ "Iribe", "Yurie", "" ] ]
new_dataset
0.998164
2210.09843
Emanuele Maiorana
Emanuele Maiorana, Chiara Romano, Emiliano Schena, and Carlo Massaroni
BIOWISH: Biometric Recognition using Wearable Inertial Sensors detecting Heart Activity
null
null
null
null
cs.CV eess.SP
http://creativecommons.org/licenses/by-sa/4.0/
Wearable devices are increasingly used, thanks to the wide set of applications that can be deployed exploiting their ability to monitor physical activity and health-related parameters. Their usage has been recently proposed to perform biometric recognition, leveraging the uniqueness of the recorded traits to generate discriminative identifiers. Most of the studies conducted on this topic have considered signals derived from cardiac activity, detecting it mainly using electrical measurements through electrocardiography, or optical recordings employing photoplethysmography. In this paper we instead propose a BIOmetric recognition approach using Wearable Inertial Sensors detecting Heart activity (BIOWISH). In more detail, we investigate the feasibility of exploiting mechanical measurements obtained through seismocardiography and gyrocardiography to recognize a person. Several feature extractors and classifiers, including deep learning techniques relying on transfer learning and siamese training, are employed to derive distinctive characteristics from the considered signals, and differentiate between legitimate and impostor subjects. A multi-session database, comprising acquisitions taken from subjects performing different activities, is employed to perform experimental tests simulating a verification system. The obtained results testify that identifiers derived from measurements of chest vibrations, collected by wearable inertial sensors, could be employed to guarantee high recognition performance, even when considering short-time recordings.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 13:26:49 GMT" } ]
2022-10-19T00:00:00
[ [ "Maiorana", "Emanuele", "" ], [ "Romano", "Chiara", "" ], [ "Schena", "Emiliano", "" ], [ "Massaroni", "Carlo", "" ] ]
new_dataset
0.954899
2210.09873
Yong Niu
Lei Wang, Bo Ai, Yong Niu, Zhangdui Zhong, Shiwen Mao, Ning Wang, and Zhu Han
Energy Efficient Train-Ground mmWave Mobile Relay System for High Speed Railways
13 pages, 12 figures, IEEE TGCN
null
10.1109/TGCN.2022.3194036
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid development of high-speed railways (HSRs) puts forward high requirements on the corresponding communication system. Millimeter wave (mmWave) can be a promising solution due to its wide bandwidth, narrow beams, and rich spectrum resources. However, with the large number of antenna elements employed, energy-efficient solutions at mmWave frequencies are in great demand. Based on a mmWave HSR communication system with multiple mobile relays (MRs) on top of the train, a dynamic power-control scheme for train-ground communications is proposed. The scheme follows the regular movement characteristics of high-speed trains and considers three phases of train movement: the train enters the cell, all MRs are covered in the cell, and the train leaves the cell. The transmit power is further refined according to the number of MRs in the cell and the distance between the train and the remote radio head. By minimizing energy consumption under the constraints of the transmitted data and transmit power budget, the transmit power is allocated to multiple MRs through the multiplier punitive function-based algorithm. Comprehensive simulation results, where the velocity estimation error is taken into account, are provided to demonstrate the effectiveness of the proposed scheme over several baseline schemes.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 14:07:02 GMT" } ]
2022-10-19T00:00:00
[ [ "Wang", "Lei", "" ], [ "Ai", "Bo", "" ], [ "Niu", "Yong", "" ], [ "Zhong", "Zhangdui", "" ], [ "Mao", "Shiwen", "" ], [ "Wang", "Ning", "" ], [ "Han", "Zhu", "" ] ]
new_dataset
0.998588
2210.09940
Tarun Kumar Yadav
Tarun Kumar Yadav, Devashish Gosain, Amir Herzberg, Daniel Zappala and Kent Seamons
Automatic Detection of Fake Key Attacks in Secure Messaging
An extended version of our paper published at ACM CCS 2022
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Popular instant messaging applications such as WhatsApp and Signal provide end-to-end encryption for billions of users. They rely on a centralized, application-specific server to distribute public keys and relay encrypted messages between the users. Therefore, they prevent passive attacks but are vulnerable to some active attacks. A malicious or hacked server can distribute fake keys to users to perform man-in-the-middle or impersonation attacks. While typical secure messaging applications provide a manual method for users to detect these attacks, this burdens users, and studies show it is ineffective in practice. This paper presents KTACA, a completely automated approach for key verification that is oblivious to users and easy to deploy. We motivate KTACA by designing two approaches to automatic key verification. One approach uses client auditing (KTCA) and the second uses anonymous key monitoring (AKM). Both have relatively inferior security properties, leading to KTACA, which combines these approaches to provide the best of both worlds. We provide a security analysis of each defense, identifying which attacks they can automatically detect. We implement the active attacks to demonstrate they are possible, and we also create a prototype implementation of all the defenses to measure their performance and confirm their feasibility. Finally, we discuss the strengths and weaknesses of each defense, the overhead on clients and service providers, and deployment considerations.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 15:44:09 GMT" } ]
2022-10-19T00:00:00
[ [ "Yadav", "Tarun Kumar", "" ], [ "Gosain", "Devashish", "" ], [ "Herzberg", "Amir", "" ], [ "Zappala", "Daniel", "" ], [ "Seamons", "Kent", "" ] ]
new_dataset
0.989652
2210.09956
Selvarajah Thuseethan Dr.
Sivasubramaniam Janarthan, Selvarajah Thuseethan, Sutharshan Rajasegarar and John Yearwood
Double Attention-based Lightweight Network for Plant Pest Recognition
14 pages
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Timely recognition of plant pests from field images is significant to avoid potential losses of crop yields. Traditional convolutional neural network-based deep learning models demand high computational capability and require large labelled samples for each pest type for training. On the other hand, the existing lightweight network-based approaches suffer in correctly classifying the pests because of common characteristics and high similarity between multiple plant pests. In this work, a novel double attention-based lightweight deep learning architecture is proposed to automatically recognize different plant pests. The lightweight network facilitates faster training on small data while the double attention module increases performance by focusing on the most pertinent information. The proposed approach achieves 96.61%, 99.08% and 91.60% on three variants of two publicly available datasets with 5869, 545 and 500 samples, respectively. Moreover, the comparison results reveal that the proposed approach outperforms existing approaches on both small and large datasets consistently.
[ { "version": "v1", "created": "Tue, 4 Oct 2022 09:25:09 GMT" } ]
2022-10-19T00:00:00
[ [ "Janarthan", "Sivasubramaniam", "" ], [ "Thuseethan", "Selvarajah", "" ], [ "Rajasegarar", "Sutharshan", "" ], [ "Yearwood", "John", "" ] ]
new_dataset
0.956037
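The abstract of 2210.09956 does not spell out the attention design; one common way to realize "double attention" is sequential channel and spatial attention in the style of CBAM. The PyTorch sketch below is such an assumed realization, not the paper's code; the reduction ratio and kernel size are illustrative.

import torch
import torch.nn as nn

class DoubleAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = x.mean(dim=(2, 3))                 # global average pooling -> (B, C)
        ca = torch.sigmoid(self.mlp(avg)).view(b, c, 1, 1)
        x = x * ca                               # channel re-weighting
        sa_in = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        sa = torch.sigmoid(self.spatial(sa_in))  # (B, 1, H, W)
        return x * sa                            # spatial re-weighting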
2210.09962
Anirudh Chakravarthy
Harshan Baskar, Anirudh S Chakravarthy, Prateek Garg, Divyam Goel, Abhijith S Raj, Kshitij Kumar, Lakshya, Ravichandra Parvatham, V Sushant, Bijay Kumar Rout
Nighttime Dehaze-Enhancement
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we introduce a new computer vision task called nighttime dehaze-enhancement. This task aims to jointly perform dehazing and lightness enhancement. Our task fundamentally differs from nighttime dehazing -- our goal is to jointly dehaze and enhance scenes, while nighttime dehazing aims to dehaze scenes under a nighttime setting. In order to facilitate further research on this task, we release a new benchmark dataset called Reside-$\beta$ Night dataset, consisting of 4122 hazy nighttime images from 2061 scenes and 2061 ground truth images. Moreover, we also propose a new network called NDENet (Nighttime Dehaze-Enhancement Network), which jointly performs dehazing and low-light enhancement in an end-to-end manner. We evaluate our method on the proposed benchmark and achieve SSIM of 0.8962 and PSNR of 26.25. We also compare our network with other baseline networks on our benchmark to demonstrate the effectiveness of our approach. We believe that nighttime dehaze-enhancement is an essential task, particularly for autonomous navigation applications, and hope that our work will open up new frontiers in research. Our dataset and code will be made publicly available upon acceptance of our paper.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 16:19:25 GMT" } ]
2022-10-19T00:00:00
[ [ "Baskar", "Harshan", "" ], [ "Chakravarthy", "Anirudh S", "" ], [ "Garg", "Prateek", "" ], [ "Goel", "Divyam", "" ], [ "Raj", "Abhijith S", "" ], [ "Kumar", "Kshitij", "" ], [ "Lakshya", "", "" ], [ "Parvatham", "Ravichandra", "" ], [ "Sushant", "V", "" ], [ "Rout", "Bijay Kumar", "" ] ]
new_dataset
0.99973
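The SSIM/PSNR figures reported in 2210.09962 can be computed for any restored/ground-truth pairs with scikit-image (>= 0.19 for channel_axis). The sketch below assumes images arrive as float arrays in [0, 1]; the file layout and data range are assumptions, not the benchmark's actual protocol.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred_imgs, gt_imgs):
    """pred_imgs, gt_imgs: lists of HxWx3 float arrays in [0, 1]."""
    psnrs, ssims = [], []
    for pred, gt in zip(pred_imgs, gt_imgs):
        psnrs.append(peak_signal_noise_ratio(gt, pred, data_range=1.0))
        ssims.append(structural_similarity(gt, pred, channel_axis=-1,
                                           data_range=1.0))
    # Benchmark scores are the means over all image pairs.
    return float(np.mean(psnrs)), float(np.mean(ssims))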
2210.09984
Jimmy Lin
Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, Jimmy Lin
Making a MIRACL: Multilingual Information Retrieval Across a Continuum of Languages
null
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MIRACL (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual dataset we have built for the WSDM 2023 Cup challenge that focuses on ad hoc retrieval across 18 different languages, which collectively encompass over three billion native speakers around the world. These languages have diverse typologies, originate from many different language families, and are associated with varying amounts of available resources -- including what researchers typically characterize as high-resource as well as low-resource languages. Our dataset is designed to support the creation and evaluation of models for monolingual retrieval, where the queries and the corpora are in the same language. In total, we have gathered over 700k high-quality relevance judgments for around 77k queries over Wikipedia in these 18 languages, where all assessments have been performed by native speakers hired by our team. Our goal is to spur research that will improve retrieval across a continuum of languages, thus enhancing information access capabilities for diverse populations around the world, particularly those that have been traditionally underserved. This overview paper describes the dataset and baselines that we share with the community. The MIRACL website is live at http://miracl.ai/.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 16:47:18 GMT" } ]
2022-10-19T00:00:00
[ [ "Zhang", "Xinyu", "" ], [ "Thakur", "Nandan", "" ], [ "Ogundepo", "Odunayo", "" ], [ "Kamalloo", "Ehsan", "" ], [ "Alfonso-Hermelo", "David", "" ], [ "Li", "Xiaoguang", "" ], [ "Liu", "Qun", "" ], [ "Rezagholizadeh", "Mehdi", "" ], [ "Lin", "Jimmy", "" ] ]
new_dataset
0.999789
2210.10036
Shaofei Wang
Shaofei Wang and Katja Schwarz and Andreas Geiger and Siyu Tang
ARAH: Animatable Volume Rendering of Articulated Human SDFs
Accepted to ECCV 2022. Project page: https://neuralbodies.github.io/arah/
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Combining human body models with differentiable rendering has recently enabled animatable avatars of clothed humans from sparse sets of multi-view RGB videos. While state-of-the-art approaches achieve realistic appearance with neural radiance fields (NeRF), the inferred geometry often lacks detail due to missing geometric constraints. Further, animating avatars in out-of-distribution poses is not yet possible because the mapping from observation space to canonical space does not generalize faithfully to unseen poses. In this work, we address these shortcomings and propose a model to create animatable clothed human avatars with detailed geometry that generalize well to out-of-distribution poses. To achieve detailed geometry, we combine an articulated implicit surface representation with volume rendering. For generalization, we propose a novel joint root-finding algorithm for simultaneous ray-surface intersection search and correspondence search. Our algorithm enables efficient point sampling and accurate point canonicalization while generalizing well to unseen poses. We demonstrate that our proposed pipeline can generate clothed avatars with high-quality pose-dependent geometry and appearance from a sparse set of multi-view RGB videos. Our method achieves state-of-the-art performance on geometry and appearance reconstruction while creating animatable avatars that generalize well to out-of-distribution poses beyond the small number of training poses.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 17:56:59 GMT" } ]
2022-10-19T00:00:00
[ [ "Wang", "Shaofei", "" ], [ "Schwarz", "Katja", "" ], [ "Geiger", "Andreas", "" ], [ "Tang", "Siyu", "" ] ]
new_dataset
0.990263
2210.10045
Sharon Levy
Sharon Levy, Emily Allaway, Melanie Subbiah, Lydia Chilton, Desmond Patton, Kathleen McKeown, William Yang Wang
SafeText: A Benchmark for Exploring Physical Safety in Language Models
Accepted to EMNLP 2022
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding what constitutes safe text is an important issue in natural language processing and can often prevent the deployment of models deemed harmful and unsafe. One such type of safety that has been scarcely studied is commonsense physical safety, i.e. text that is not explicitly violent and requires additional commonsense knowledge to comprehend that it leads to physical harm. We create the first benchmark dataset, SafeText, comprising real-life scenarios with paired safe and physically unsafe pieces of advice. We utilize SafeText to empirically study commonsense physical safety across various models designed for text generation and commonsense reasoning tasks. We find that state-of-the-art large language models are susceptible to the generation of unsafe text and have difficulty rejecting unsafe advice. As a result, we argue for further studies of safety and the assessment of commonsense physical safety in models before release.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 17:59:31 GMT" } ]
2022-10-19T00:00:00
[ [ "Levy", "Sharon", "" ], [ "Allaway", "Emily", "" ], [ "Subbiah", "Melanie", "" ], [ "Chilton", "Lydia", "" ], [ "Patton", "Desmond", "" ], [ "McKeown", "Kathleen", "" ], [ "Wang", "William Yang", "" ] ]
new_dataset
0.999878
2210.10046
Guanqi Zhan
Guanqi Zhan, Weidi Xie, Andrew Zisserman
A Tri-Layer Plugin to Improve Occluded Detection
BMVC 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting occluded objects still remains a challenge for state-of-the-art object detectors. The objective of this work is to improve the detection for such objects, and thereby improve the overall performance of a modern object detector. To this end we make the following four contributions: (1) We propose a simple 'plugin' module for the detection head of two-stage object detectors to improve the recall of partially occluded objects. The module predicts a tri-layer of segmentation masks for the target object, the occluder and the occludee, and by doing so is able to better predict the mask of the target object. (2) We propose a scalable pipeline for generating training data for the module by using amodal completion of existing object detection and instance segmentation training datasets to establish occlusion relationships. (3) We also establish a COCO evaluation dataset to measure the recall performance of partially occluded and separated objects. (4) We show that the plugin module inserted into a two-stage detector can boost the performance significantly, by only fine-tuning the detection head, and with additional improvements if the entire architecture is fine-tuned. COCO results are reported for Mask R-CNN with Swin-T or Swin-S backbones, and Cascade Mask R-CNN with a Swin-B backbone.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 17:59:51 GMT" } ]
2022-10-19T00:00:00
[ [ "Zhan", "Guanqi", "" ], [ "Xie", "Weidi", "" ], [ "Zisserman", "Andrew", "" ] ]
new_dataset
0.99482
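A plausible form of the tri-layer plugin from 2210.10046 is a small convolutional mask head that emits three mask channels per region of interest -- occluder, target object, and occludee -- so the target mask is predicted jointly with its occlusion context. Only the three-mask output structure comes from the abstract; the layer sizes in this PyTorch sketch are assumptions.

import torch
import torch.nn as nn

class TriLayerMaskHead(nn.Module):
    """Predicts occluder / target / occludee mask logits for each RoI."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU())
        self.deconv = nn.ConvTranspose2d(256, 256, 2, stride=2)
        self.predict = nn.Conv2d(256, 3, 1)      # 3 layers of masks

    def forward(self, roi_feats):                # (N, 256, 14, 14) RoI features
        x = torch.relu(self.deconv(self.convs(roi_feats)))
        return self.predict(x)                   # (N, 3, 28, 28) mask logits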
2007.08224
Enrico Meloni
Enrico Meloni, Luca Pasqualini, Matteo Tiezzi, Marco Gori, Stefano Melacci
SAILenv: Learning in Virtual Visual Environments Made Simple
8 pages, 7 figures, submitted to ICPR 2020
null
10.1109/ICPR48806.2021.9412909
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, researchers in Machine Learning algorithms, Computer Vision scientists, engineers and others, showed a growing interest in 3D simulators as a means to artificially create experimental settings that are very close to those in the real world. However, most of the existing platforms to interface algorithms with 3D environments are often designed to set up navigation-related experiments, to study physical interactions, or to handle ad-hoc cases that are not thought to be customized, sometimes lacking a strong photorealistic appearance and an easy-to-use software interface. In this paper, we present a novel platform, SAILenv, that is specifically designed to be simple and customizable, and that allows researchers to experiment with visual recognition in virtual 3D scenes. A few lines of code are needed to interface every algorithm with the virtual world, and non-3D-graphics experts can easily customize the 3D environment itself, exploiting a collection of photorealistic objects. Our framework yields pixel-level semantic and instance labeling, depth, and, to the best of our knowledge, it is the only one that provides motion-related information directly inherited from the 3D engine. The client-server communication operates at a low level, avoiding the overhead of HTTP-based data exchanges. We perform experiments using a state-of-the-art object detector trained on real-world images, showing that it is able to recognize the photorealistic 3D objects of our environment. The computational burden of the optical flow compares favourably with the estimation performed using modern GPU-based convolutional networks or more classic implementations. We believe that the scientific community will benefit from the ease of use and high quality of our framework to evaluate newly proposed algorithms in their own customized realistic conditions.
[ { "version": "v1", "created": "Thu, 16 Jul 2020 09:50:23 GMT" }, { "version": "v2", "created": "Mon, 20 Jul 2020 15:42:02 GMT" } ]
2022-10-18T00:00:00
[ [ "Meloni", "Enrico", "" ], [ "Pasqualini", "Luca", "" ], [ "Tiezzi", "Matteo", "" ], [ "Gori", "Marco", "" ], [ "Melacci", "Stefano", "" ] ]
new_dataset
0.993278
2012.09700
Xuanhong Chen
Xuanhong Chen, Kairui Feng, Naiyuan Liu, Bingbing Ni, Yifan Lu, Zhengyan Tong, Ziang Liu
RainNet: A Large-Scale Imagery Dataset and Benchmark for Spatial Precipitation Downscaling
Accepted at NeurIPS 2022. Project page: https://neuralchen.github.io/RainNet/
Conference on Neural Information Processing Systems (NeurIPS) 2022
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
AI-for-science approaches have been applied to solve scientific problems (e.g., nuclear fusion, ecology, genomics, meteorology) and have achieved highly promising results. Spatial precipitation downscaling is one of the most important meteorological problems and urgently requires the participation of AI. However, the lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and advanced deep-learning models for precipitation downscaling. To alleviate these obstacles, we present the first large-scale spatial precipitation downscaling dataset named RainNet, which contains more than $62,400$ pairs of high-quality low/high-resolution precipitation maps for over $17$ years, ready to help the evolution of deep learning models in precipitation downscaling. Specifically, the precipitation maps carefully collected in RainNet cover various meteorological phenomena (e.g., hurricane, squall), which is of great help to improve the model generalization ability. In addition, the map pairs in RainNet are organized in the form of image sequences ($720$ maps per month or 1 map/hour), showing complex physical properties, e.g., temporal misalignment, temporal sparsity, and fluid properties. Furthermore, two deep-learning-oriented metrics are specifically introduced to evaluate or verify the comprehensive performance of the trained model (e.g., prediction maps reconstruction accuracy). To illustrate the applications of RainNet, 14 state-of-the-art models, including deep models and traditional approaches, are evaluated. To fully explore potential downscaling solutions, we propose an implicit physical estimation benchmark framework to learn the above characteristics. Extensive experiments demonstrate the value of RainNet in training and evaluating downscaling models. Our dataset is available at https://neuralchen.github.io/RainNet/.
[ { "version": "v1", "created": "Thu, 17 Dec 2020 16:12:17 GMT" }, { "version": "v2", "created": "Fri, 18 Dec 2020 03:22:57 GMT" }, { "version": "v3", "created": "Fri, 14 Oct 2022 19:17:05 GMT" } ]
2022-10-18T00:00:00
[ [ "Chen", "Xuanhong", "" ], [ "Feng", "Kairui", "" ], [ "Liu", "Naiyuan", "" ], [ "Ni", "Bingbing", "" ], [ "Lu", "Yifan", "" ], [ "Tong", "Zhengyan", "" ], [ "Liu", "Ziang", "" ] ]
new_dataset
0.999573
2107.08146
Peter Jansen
Peter Jansen and Jordan Boyd-Graber
Picard understanding Darmok: A Dataset and Model for Metaphor-Rich Translation in a Constructed Language
Accepted to the the 2022 Workshop on Figurative Language Processing (at EMNLP 2022)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Tamarian, a fictional language introduced in the Star Trek episode Darmok, communicates meaning through utterances of metaphorical references, such as "Darmok and Jalad at Tanagra" instead of "We should work together." This work assembles a Tamarian-English dictionary of utterances from the original episode and several follow-on novels, and uses this to construct a parallel corpus of 456 English-Tamarian utterances. A machine translation system based on a large language model (T5) is trained using this parallel corpus, and is shown to produce an accuracy of 76% when translating from English to Tamarian on known utterances.
[ { "version": "v1", "created": "Fri, 16 Jul 2021 23:35:45 GMT" }, { "version": "v2", "created": "Fri, 14 Oct 2022 20:35:02 GMT" } ]
2022-10-18T00:00:00
[ [ "Jansen", "Peter", "" ], [ "Boyd-Graber", "Jordan", "" ] ]
new_dataset
0.999829
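The training setup in 2107.08146 (a T5 model fine-tuned on 456 parallel English-Tamarian utterances) maps onto the standard Hugging Face seq2seq recipe. A hedged sketch follows; the task prefix, checkpoint, and generation settings are illustrative assumptions, not the authors' configuration.

from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")   # checkpoint assumed
model = T5ForConditionalGeneration.from_pretrained("t5-base")

src = "translate English to Tamarian: We should work together."  # prefix assumed
tgt = "Darmok and Jalad at Tanagra."

inputs = tok(src, return_tensors="pt")
labels = tok(tgt, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss     # one training step's loss
loss.backward()                                # an optimizer step would follow

# At inference time, generate a Tamarian utterance for an English sentence.
pred = model.generate(**tok(src, return_tensors="pt"), max_length=32)
print(tok.decode(pred[0], skip_special_tokens=True))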
2108.07622
Cunhua Pan
Kangda Zhi, Cunhua Pan, Hong Ren, Kezhi Wang, Maged Elkashlan, Marco Di Renzo, Robert Schober, H. Vincent Poor, Jiangzhou Wang, and Lajos Hanzo
Two-Timescale Design for Reconfigurable Intelligent Surface-Aided Massive MIMO Systems with Imperfect CSI
Revision in IEEE TIT. Keywords: Reconfigurable Intelligent Surface, Intelligent Reflecting Surface, Massive MIMO, Channel estimation, etc
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
This paper investigates the two-timescale transmission design for reconfigurable intelligent surface (RIS)-aided massive multiple-input multiple-output (MIMO) systems, where the beamforming at the base station (BS) is adapted to the rapidly-changing instantaneous channel state information (CSI), while the passive beamforming at the RIS is adapted to the slowly-changing statistical CSI. Specifically, we first propose a linear minimum mean square error (LMMSE) estimator to obtain the aggregated channel from the users to the BS in each channel coherence interval. Based on the estimated channel, we apply the low-complexity maximal ratio combining (MRC) beamforming at the BS, and then derive the ergodic achievable rate in a closed form expression. To draw design insights, we perform a detailed theoretical analysis departing from the derived ergodic achievable rate. If the BS-RIS channel is Rician distributed, we prove that the transmit power can be scaled proportionally to $1/M$, as the number of BS antennas, $M$, grows to infinity while maintaining a non-zero rate. If the BS-RIS channel is Rayleigh distributed, the transmit power can be scaled either proportionally to $1/\sqrt{M}$ as $M$ grows large, or proportionally to $1/N$ as the number of reflecting elements, $N$, grows large, while still maintaining a non-zero rate. By capitalizing on the derived expression of the data rate under the statistical knowledge of the CSI, we maximize the minimum user rate by designing the passive beamforming at the RIS. Numerical results confirm that, even in the presence of imperfect CSI, the integration of an RIS in massive MIMO systems results in promising performance gains. In addition, the obtained results reveal that it is favorable to place the RIS close to the users rather than close to the BS.
[ { "version": "v1", "created": "Tue, 17 Aug 2021 13:51:12 GMT" }, { "version": "v2", "created": "Wed, 18 Aug 2021 00:45:01 GMT" }, { "version": "v3", "created": "Sat, 28 May 2022 15:09:47 GMT" }, { "version": "v4", "created": "Sat, 15 Oct 2022 07:45:57 GMT" } ]
2022-10-18T00:00:00
[ [ "Zhi", "Kangda", "" ], [ "Pan", "Cunhua", "" ], [ "Ren", "Hong", "" ], [ "Wang", "Kezhi", "" ], [ "Elkashlan", "Maged", "" ], [ "Di Renzo", "Marco", "" ], [ "Schober", "Robert", "" ], [ "Poor", "H. Vincent", "" ], [ "Wang", "Jiangzhou", "" ], [ "Hanzo", "Lajos", "" ] ]
new_dataset
0.994245
2109.03631
Mohammad Ridwan Kabir
Mohammad Ridwan Kabir (1), Mohammad Anas Jawad (1), Mohaimin Ehsan (1), Hasan Mahmud (1), Md. Kamrul Hasan (1) ((1) Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Gazipur, Bangladesh.)
Renovo: Prototype of a Low-Cost Sensor-Based Therapeutic System for Upper Limb Rehabilitation
27 pages, 10 figures, 5 tables
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Stroke patients with Upper Limb Disability (ULD) are re-acclimated to their lost motor capability through therapeutic interventions, following assessment by Physiotherapists (PTs) using various qualitative assessment protocols. However, the assessments are often biased and prone to errors. Real-time visualization and quantitative analysis of various Performance Metrics (PMs) of patient's motion data, such as - Range of Motion (RoM), Repetition Rate (RR), Velocity (V), etc., may be vital for proper assessment. In this study, we present Renovo, a wearable inertial sensor-based therapeutic system, which assists PTs with real-time visualization and quantitative patient assessment, while providing patients with progress feedback. We showcase the results of a three-week pilot study on the rehabilitation of ULD patients (N=16), in 3 successive sessions at one-week interval, following evaluation both by Renovo and PTs (N=5). Results suggest that sensor-based quantitative assessment reduces the possibility of human error and bias, enhancing efficiency of rehabilitation.
[ { "version": "v1", "created": "Wed, 8 Sep 2021 13:23:25 GMT" }, { "version": "v2", "created": "Sun, 12 Sep 2021 16:28:11 GMT" }, { "version": "v3", "created": "Mon, 17 Oct 2022 05:14:59 GMT" } ]
2022-10-18T00:00:00
[ [ "Kabir", "Mohammad Ridwan", "" ], [ "Jawad", "Mohammad Anas", "" ], [ "Ehsan", "Mohaimin", "" ], [ "Mahmud", "Hasan", "" ], [ "Hasan", "Md. Kamrul", "" ] ]
new_dataset
0.999244
2109.07846
Md. Mohi Uddin Khan
Abdullah Bin Shams, Md. Mohsin Sarker Raihan, Md. Mohi Uddin Khan, Ocean Monjur and Rahat Bin Preo
Telehealthcare and Telepathology in Pandemic: A Noninvasive, Low-Cost Micro-Invasive and Multimodal Real-Time Online Application for Early Diagnosis of COVID-19 Infection
32 pages. This article has been submitted for review to a prestigious journal
null
null
null
cs.LG cs.SD eess.AS q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The coronavirus pandemic crippled healthcare facilities; to contain the spread of the virus and stop the overcrowding of hospitalized patients, lockdowns were mandated and remote work was promoted. As a result, telehealth has become increasingly popular for offering low-risk care to patients. However, the difficulty of preventing the next potential waves of infection has been increased by constant virus mutation into new forms and a general lack of test kits, particularly in developing nations. In this research, a unique cloud-based application for the early identification of individuals who may have COVID-19 infection is proposed. The application provides five modes of diagnosis from possible symptoms (f1), cough sound (f2), specific blood biomarkers (f3), Raman spectral data of blood specimens (f4), and ECG signal paper-based image (f5). When a user selects an option and enters the information, the data is sent to the cloud server. The deployed machine learning (ML) and deep learning (DL) models classify the data in real time and inform the user of the likelihood of COVID-19 infection. Our deployed models can classify with an accuracy of 100%, 99.80%, 99.55%, 95.65%, and 77.59% from f3, f4, f5, f2, and f1 respectively. Moreover, the sensitivity for f2, f3, and f4 is 100%, which indicates the correct identification of COVID positive patients. This is significant in limiting the spread of the virus. Additionally, another ML model, which offers 92% accuracy, serves to identify patients who, out of a large group of patients admitted to the hospital cohort, need immediate critical care support by estimating the mortality risk of patients from blood parameters. The instantaneous multimodal nature of our technique offers multiplex and accurate diagnostic methods, highlighting the effectiveness of telehealth as a simple, widely available, and low-cost diagnostic solution, even for future pandemics.
[ { "version": "v1", "created": "Thu, 16 Sep 2021 10:22:31 GMT" }, { "version": "v2", "created": "Sat, 15 Oct 2022 20:10:21 GMT" } ]
2022-10-18T00:00:00
[ [ "Shams", "Abdullah Bin", "" ], [ "Raihan", "Md. Mohsin Sarker", "" ], [ "Khan", "Md. Mohi Uddin", "" ], [ "Monjur", "Ocean", "" ], [ "Preo", "Rahat Bin", "" ] ]
new_dataset
0.998632
2109.07989
Lalli Myllyaho
Lalli Myllyaho, Mikko Raatikainen, Tomi M\"annist\"o, Jukka K. Nurminen, Tommi Mikkonen
On Misbehaviour and Fault Tolerance in Machine Learning Systems
15 pages, 1 figure, 2 tables. The manuscript has been accepted to the Journal of Systems and Software
null
10.1016/j.jss.2021.111096
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine learning (ML) provides us with numerous opportunities, allowing ML systems to adapt to new situations and contexts. At the same time, this adaptability raises uncertainties concerning the run-time product quality or dependability, such as reliability and security, of these systems. Systems can be tested and monitored, but this does not provide protection against faults and failures in adapted ML systems themselves. We studied software designs that aim at introducing fault tolerance in ML systems so that possible problems in ML components of the systems can be avoided. The research was conducted as a case study, and its data was collected through five semi-structured interviews with experienced software architects. We present a conceptualisation of the misbehaviour of ML systems, the perceived role of fault tolerance, and the designs used. Common patterns to incorporating ML components in design in a fault tolerant fashion have started to emerge. ML models are, for example, guarded by monitoring the inputs and their distribution, and enforcing business rules on acceptable outputs. Multiple, specialised ML models are used to adapt to the variations and changes in the surrounding world, and simpler fail-over techniques like default outputs are put in place to have systems up and running in the face of problems. However, the general role of these patterns is not widely acknowledged. This is mainly due to the relative immaturity of using ML as part of a complete software system: the field still lacks established frameworks and practices beyond training to implement, operate, and maintain the software that utilises ML. ML software engineering needs further analysis and development on all fronts.
[ { "version": "v1", "created": "Thu, 16 Sep 2021 13:58:18 GMT" } ]
2022-10-18T00:00:00
[ [ "Myllyaho", "Lalli", "" ], [ "Raatikainen", "Mikko", "" ], [ "Männistö", "Tomi", "" ], [ "Nurminen", "Jukka K.", "" ], [ "Mikkonen", "Tommi", "" ] ]
new_dataset
0.993484
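The guarding pattern reported by the interviewees in 2109.07989 -- monitor inputs against the training distribution, enforce business rules on outputs, and fall back to a safe default -- can be captured in a few lines. This sketch is a minimal illustration; the range-based input monitor and the rule hook are assumptions, not designs from the study.

def guarded_predict(model, x, train_min, train_max,
                    is_acceptable, default_output):
    """Wrap an ML model with input monitoring, output rules, and a fallback."""
    # Input monitor: reject feature values outside the training distribution.
    if not all(lo <= v <= hi for v, lo, hi in zip(x, train_min, train_max)):
        return default_output
    y = model(x)
    # Business rule on the output: fail over to a default if violated.
    return y if is_acceptable(y) else default_output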
2110.08057
Zihan Zhang
Zihan Zhang, Xiangyang Ji, Yuan Zhou
Almost Optimal Batch-Regret Tradeoff for Batch Linear Contextual Bandits
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the optimal batch-regret tradeoff for batch linear contextual bandits. For any batch number $M$, number of actions $K$, time horizon $T$, and dimension $d$, we provide an algorithm and prove its regret guarantee, which, due to technical reasons, features a two-phase expression as the time horizon $T$ grows. We also prove a lower bound theorem that surprisingly shows the optimality of our two-phase regret upper bound (up to logarithmic factors) in the \emph{full range} of the problem parameters, therefore establishing the exact batch-regret tradeoff. Compared to the recent work \citep{ruan2020linear} which showed that $M = O(\log \log T)$ batches suffice to achieve the asymptotically minimax-optimal regret without the batch constraints, our algorithm is simpler and easier for practical implementation. Furthermore, our algorithm achieves the optimal regret for all $T \geq d$, while \citep{ruan2020linear} requires that $T$ be greater than an unrealistically large polynomial of $d$. Along our analysis, we also prove a new matrix concentration inequality with dependence on their dynamic upper bounds, which, to the best of our knowledge, is the first of its kind in the literature and may be of independent interest.
[ { "version": "v1", "created": "Fri, 15 Oct 2021 12:32:33 GMT" }, { "version": "v2", "created": "Tue, 9 Nov 2021 09:14:19 GMT" }, { "version": "v3", "created": "Sat, 15 Oct 2022 04:21:31 GMT" } ]
2022-10-18T00:00:00
[ [ "Zhang", "Zihan", "" ], [ "Ji", "Xiangyang", "" ], [ "Zhou", "Yuan", "" ] ]
new_dataset
0.992288
2112.01047
Taolin Zhang
Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang
DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding
Accepted by AAAI22
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities. To guarantee effective knowledge injection, previous studies integrate models with knowledge encoders for representing knowledge retrieved from knowledge graphs. The operations for knowledge retrieval and encoding bring significant computational burdens, restricting the usage of such models in real-world applications that require high inference speed. In this paper, we propose a novel KEPLM named DKPLM that Decomposes the Knowledge injection process of the Pre-trained Language Models in pre-training, fine-tuning and inference stages, which facilitates the applications of KEPLMs in real-world scenarios. Specifically, we first detect knowledge-aware long-tail entities as the target for knowledge injection, enhancing the KEPLMs' semantic understanding abilities and avoiding injecting redundant information. The embeddings of long-tail entities are replaced by "pseudo token representations" formed by relevant knowledge triples. We further design the relational knowledge decoding task for pre-training to force the models to truly understand the injected knowledge by relation triple reconstruction. Experiments show that our model outperforms other KEPLMs significantly over zero-shot knowledge probing tasks and multiple knowledge-aware language understanding tasks. We further show that DKPLM has a higher inference speed than other competing models due to the decomposing mechanism.
[ { "version": "v1", "created": "Thu, 2 Dec 2021 08:19:42 GMT" }, { "version": "v2", "created": "Sun, 16 Oct 2022 02:51:32 GMT" } ]
2022-10-18T00:00:00
[ [ "Zhang", "Taolin", "" ], [ "Wang", "Chengyu", "" ], [ "Hu", "Nan", "" ], [ "Qiu", "Minghui", "" ], [ "Tang", "Chengguang", "" ], [ "He", "Xiaofeng", "" ], [ "Huang", "Jun", "" ] ]
new_dataset
0.990947
2201.13230
\'Ad\'am Kov\'acs
\'Ad\'am Kov\'acs, Kinga G\'emes, Eszter Ikl\'odi, G\'abor Recski
POTATO: exPlainable infOrmation exTrAcTion framewOrk
4 pages
null
10.1145/3511808.3557196
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
We present POTATO, a task- and language-independent framework for human-in-the-loop (HITL) learning of rule-based text classifiers using graph-based features. POTATO handles any type of directed graph and supports parsing text into Abstract Meaning Representations (AMR), Universal Dependencies (UD), and 4lang semantic graphs. A Streamlit-based user interface allows users to build rule systems from graph patterns, provides real-time evaluation based on ground truth data, and suggests rules by ranking graph features using interpretable machine learning models. Users can also provide patterns over graphs using regular expressions, and POTATO can recommend refinements of such rules. POTATO is applied in projects across domains and languages, including classification tasks on German legal text and English social media data. All components of our system are written in Python, can be installed via pip, and are released under an MIT License on GitHub.
[ { "version": "v1", "created": "Mon, 31 Jan 2022 13:43:02 GMT" }, { "version": "v2", "created": "Sun, 16 Oct 2022 22:57:26 GMT" } ]
2022-10-18T00:00:00
[ [ "Kovács", "Ádám", "" ], [ "Gémes", "Kinga", "" ], [ "Iklódi", "Eszter", "" ], [ "Recski", "Gábor", "" ] ]
new_dataset
0.997208
2202.02394
Yash Jakhotiya
Yash Jakhotiya, Vaibhav Kumar, Ashwin Pathak, Raj Shah
JARVix at SemEval-2022 Task 2: It Takes One to Know One? Idiomaticity Detection using Zero and One-Shot Learning
Accepted at the 16th International Workshop on Semantic Evaluation (SemEval-2022), NAACL. Best Project Award for Georgia Tech CS 7650. Code available at https://github.com/ashwinpathak20/Idiomaticity_Detection_Using_Few_Shot_Learning
null
10.18653/v1/2022.semeval-1.19
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models have been successful in a wide variety of Natural Language Processing tasks by capturing the compositionality of the text representations. In spite of their great success, these vector representations fail to capture the meaning of idiomatic multi-word expressions (MWEs). In this paper, we focus on the detection of idiomatic expressions by using binary classification. We use a dataset consisting of the literal and idiomatic usage of MWEs in English and Portuguese. Thereafter, we perform the classification in two different settings: zero-shot and one-shot, to determine if a given sentence contains an idiom or not. N-shot classification for this task is defined by the number N of common idioms between the training and testing sets. In this paper, we train multiple Large Language Models in both settings and achieve an F1 score (macro) of 0.73 for the zero-shot setting and an F1 score (macro) of 0.85 for the one-shot setting. An implementation of our work can be found at https://github.com/ashwinpathak20/Idiomaticity_Detection_Using_Few_Shot_Learning.
[ { "version": "v1", "created": "Fri, 4 Feb 2022 21:17:41 GMT" }, { "version": "v2", "created": "Wed, 2 Mar 2022 17:28:34 GMT" }, { "version": "v3", "created": "Fri, 13 May 2022 18:20:16 GMT" }, { "version": "v4", "created": "Thu, 2 Jun 2022 23:40:33 GMT" }, { "version": "v5", "created": "Tue, 21 Jun 2022 22:20:54 GMT" }, { "version": "v6", "created": "Thu, 23 Jun 2022 05:15:17 GMT" } ]
2022-10-18T00:00:00
[ [ "Jakhotiya", "Yash", "" ], [ "Kumar", "Vaibhav", "" ], [ "Pathak", "Ashwin", "" ], [ "Shah", "Raj", "" ] ]
new_dataset
0.999817
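The N-shot definition in 2202.02394 (N common idioms between the training and testing sets) suggests a split routine like the one below. The data layout, test fraction, and 50/50 allocation of shared-idiom samples are assumptions; n=0 yields the zero-shot setting.

import random

def n_shot_split(samples, n, test_frac=0.3, seed=0):
    """samples: list of dicts with keys 'mwe', 'sentence', 'label'."""
    rng = random.Random(seed)
    idioms = sorted({s["mwe"] for s in samples})
    rng.shuffle(idioms)
    n_test = max(1, int(len(idioms) * test_frac))
    test_only = set(idioms[:n_test])              # idioms unseen at training time
    shared = set(idioms[n_test:n_test + n])       # exactly n idioms seen in both splits
    train, test = [], []
    for s in samples:
        if s["mwe"] in test_only:
            test.append(s)
        elif s["mwe"] in shared:
            # Split shared-idiom samples between train and test.
            (train if rng.random() < 0.5 else test).append(s)
        else:
            train.append(s)
    return train, test                            # n=0 gives the zero-shot split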
2202.05599
Jiaan Wang
Jiaan Wang, Fandong Meng, Ziyao Lu, Duo Zheng, Zhixu Li, Jianfeng Qu, Jie Zhou
ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization
Accepted to EMNLP 2022 (main conference)
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ClidSum, a benchmark dataset for building cross-lingual summarization systems on dialogue documents. It consists of 67k+ dialogue documents from two subsets (i.e., SAMSum and MediaSum) and 112k+ annotated summaries in different target languages. Based on the proposed ClidSum, we introduce two benchmark settings for supervised and semi-supervised scenarios, respectively. We then build various baseline systems in different paradigms (pipeline and end-to-end) and conduct extensive experiments on ClidSum to provide deeper analyses. Furthermore, we propose mDialBART which extends mBART-50 (a multi-lingual BART) via further pre-training. The multiple objectives used in the further pre-training stage help the pre-trained model capture the structural characteristics as well as important content in dialogues and the transformation from the source to the target language. Experimental results show the superiority of mDialBART: as an end-to-end model, it outperforms strong pipeline models on ClidSum. Finally, we discuss specific challenges that current approaches face on this task and give multiple promising directions for future research. We have released the dataset and code at https://github.com/krystalan/ClidSum.
[ { "version": "v1", "created": "Fri, 11 Feb 2022 13:32:14 GMT" }, { "version": "v2", "created": "Sun, 16 Oct 2022 09:29:30 GMT" } ]
2022-10-18T00:00:00
[ [ "Wang", "Jiaan", "" ], [ "Meng", "Fandong", "" ], [ "Lu", "Ziyao", "" ], [ "Zheng", "Duo", "" ], [ "Li", "Zhixu", "" ], [ "Qu", "Jianfeng", "" ], [ "Zhou", "Jie", "" ] ]
new_dataset
0.999821
2204.02915
Lo\"ic Bidoux
Lo\"ic Bidoux, Philippe Gaborit
Compact Post-Quantum Signatures from Proofs of Knowledge leveraging Structure for the PKP, SD and RSD Problems
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The MPC-in-the-head paradigm introduced in [IKOS07] has established itself as an important paradigm to design efficient digital signatures. It has been leveraged in the Picnic scheme [CDG+ 20] that reached the third round of the NIST PQC Standardization process. It has also been used in [Beu20] to introduce the Proof of Knowledge (PoK) with Helper paradigm. This construction permits the design of shorter signatures but induces a non-negligible performance overhead as it uses cut-and-choose. In this paper, we introduce the PoK leveraging structure paradigm along with its associated challenge space amplification technique. Our new approach to design PoK brings some improvements over the PoK with Helper one. Indeed, we show how one can substitute the Helper in these constructions by leveraging the underlying structure of the considered problem. This approach does not suffer from the performance overhead inherent to the PoK with Helper paradigm and hence offers different trade-offs between security, signature sizes and performance. We also present four new post-quantum signature schemes. The first one is based on a new PoK with Helper for the Syndrome Decoding problem. It relies on ideas from [BGKM22] and [FJR21] and improves the latter using a new technique that can be seen as performing some cut-and-choose with a meet-in-the-middle approach. The three other signatures are based on our new PoK leveraging structure approach and as such illustrate its versatility. We provide new PoK related to the Permuted Kernel Problem (PKP), Syndrome Decoding (SD) problem and Rank Syndrome Decoding (RSD) problem. In practice, these PoK lead to comparable or shorter signatures than existing ones. Indeed, considering (public key + signature), we get sizes below 9kB for our signature related to the PKP problem, below 15kB for our signature related to the SD problem and below 7kB for our signature related to the RSD problem.
[ { "version": "v1", "created": "Wed, 6 Apr 2022 16:09:26 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 14:07:31 GMT" } ]
2022-10-18T00:00:00
[ [ "Bidoux", "Loïc", "" ], [ "Gaborit", "Philippe", "" ] ]
new_dataset
0.979362
2204.10321
Adam Tonderski
Adam Tonderski, Joakim Johnander, Christoffer Petersson, and Kalle {\AA}str\"om
Future Object Detection with Spatiotemporal Transformers
14 pages, 6 figures
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose the task Future Object Detection, in which the goal is to predict the bounding boxes for all visible objects in a future video frame. While this task involves recognizing temporal and kinematic patterns, in addition to the semantic and geometric ones, it only requires annotations in the standard form for individual, single (future) frames, in contrast to expensive full sequence annotations. We propose to tackle this task with an end-to-end method, in which a detection transformer is trained to directly output the future objects. In order to make accurate predictions about the future, it is necessary to capture the dynamics in the scene, both object motion and the movement of the ego-camera. To this end, we extend existing detection transformers in two ways. First, we experiment with three different mechanisms that enable the network to spatiotemporally process multiple frames. Second, we provide ego-motion information to the model in a learnable manner. We show that both of these extensions improve the future object detection performance substantially. Our final approach learns to capture the dynamics and makes predictions on par with an oracle for prediction horizons up to 100 ms, and outperforms all baselines for longer prediction horizons. By visualizing the attention maps, we observe that a form of tracking emerges within the network. Code is available at github.com/atonderski/future-object-detection.
[ { "version": "v1", "created": "Thu, 21 Apr 2022 17:58:36 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 13:45:29 GMT" } ]
2022-10-18T00:00:00
[ [ "Tonderski", "Adam", "" ], [ "Johnander", "Joakim", "" ], [ "Petersson", "Christoffer", "" ], [ "Åström", "Kalle", "" ] ]
new_dataset
0.980153
2205.12404
Tuhin Chakrabarty Mr
Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh and Smaranda Muresan
FLUTE: Figurative Language Understanding through Textual Explanations
EMNLP 2022 Main Conference (Long Paper)
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Figurative language understanding has been recently framed as a recognizing textual entailment (RTE) task (a.k.a. natural language inference, or NLI). However, similar to classical RTE/NLI datasets, the current benchmarks suffer from spurious correlations and annotation artifacts. To tackle this problem, work on NLI has built explanation-based datasets such as e-SNLI, allowing us to probe whether language models are right for the right reasons. Yet no such data exists for figurative language, making it harder to assess genuine understanding of such expressions. To address this issue, we release FLUTE, a dataset of 9,000 figurative NLI instances with explanations, spanning four categories: Sarcasm, Simile, Metaphor, and Idioms. We collect the data through a model-in-the-loop framework based on GPT-3, crowd workers, and expert annotators. We show how utilizing GPT-3 in conjunction with human annotators (novices and experts) can aid in scaling up the creation of datasets even for such complex linguistic phenomena as figurative language. The baseline performance of the T5 model fine-tuned on FLUTE shows that our dataset can bring us a step closer to developing models that understand figurative language through textual explanations.
[ { "version": "v1", "created": "Tue, 24 May 2022 23:25:02 GMT" }, { "version": "v2", "created": "Fri, 7 Oct 2022 19:43:36 GMT" }, { "version": "v3", "created": "Fri, 14 Oct 2022 18:40:00 GMT" } ]
2022-10-18T00:00:00
[ [ "Chakrabarty", "Tuhin", "" ], [ "Saakyan", "Arkadiy", "" ], [ "Ghosh", "Debanjan", "" ], [ "Muresan", "Smaranda", "" ] ]
new_dataset
0.999598
2205.13011
Dario Tscholl
Dario Tscholl, Stephan-Daniel Gravert, Aurel X. Appius and Robert K. Katzschmann
Flying Hydraulically Amplified Electrostatic Gripper System for Aerial Object Manipulation
16 pages, 12 figures, accepted and presented at the International Symposium on Robotics Research (ISRR) 2022. Video: youtube.com/watch?v=7PmZ8C0Ji08
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rapid and versatile object manipulation in air is an open challenge. An energy-efficient and adaptive soft gripper combined with an agile aerial vehicle could revolutionize aerial robotic manipulation in areas such as warehousing. This paper presents a bio-inspired gripper powered by hydraulically amplified electrostatic actuators mounted to a quadcopter that can interact safely and naturally with its environment. Our gripping concept is motivated by an eagle's foot. Our custom multi-actuator concept is inspired by a scorpion tail design (consisting of a base electrode with pouches stacked adjacently) and spider-inspired joints (classic pouch motors with a flexible hinge layer). A hybrid of these two designs realizes a higher force output under moderate deflections of up to 25{\deg} compared to single-hinge concepts. In addition, sandwiching the hinge layer improves the robustness of the gripper. For the first time, we show that soft manipulation in air is possible using electrostatic actuation. This study demonstrates the potential of untethered hydraulically amplified actuators in aerial robotic manipulation. Our proof of concept opens up the use of hydraulic electrostatic actuators in mobile aerial systems.
[ { "version": "v1", "created": "Wed, 25 May 2022 18:44:28 GMT" }, { "version": "v2", "created": "Wed, 21 Sep 2022 07:51:09 GMT" }, { "version": "v3", "created": "Sat, 15 Oct 2022 21:08:29 GMT" } ]
2022-10-18T00:00:00
[ [ "Tscholl", "Dario", "" ], [ "Gravert", "Stephan-Daniel", "" ], [ "Appius", "Aurel X.", "" ], [ "Katzschmann", "Robert K.", "" ] ]
new_dataset
0.997074
2205.13634
Yuhao Zhang
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
BagFlip: A Certified Defense against Data Poisoning
NeurIPS 2022
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine learning models are vulnerable to data-poisoning attacks, in which an attacker maliciously modifies the training set to change the prediction of a learned model. In a trigger-less attack, the attacker can modify the training set but not the test inputs, while in a backdoor attack the attacker can also modify test inputs. Existing model-agnostic defense approaches either cannot handle backdoor attacks or do not provide effective certificates (i.e., a proof of a defense). We present BagFlip, a model-agnostic certified approach that can effectively defend against both trigger-less and backdoor attacks. We evaluate BagFlip on image classification and malware detection datasets. BagFlip is equal to or more effective than the state-of-the-art approaches for trigger-less attacks and more effective than the state-of-the-art approaches for backdoor attacks.
[ { "version": "v1", "created": "Thu, 26 May 2022 21:09:24 GMT" }, { "version": "v2", "created": "Sun, 16 Oct 2022 15:48:46 GMT" } ]
2022-10-18T00:00:00
[ [ "Zhang", "Yuhao", "" ], [ "Albarghouthi", "Aws", "" ], [ "D'Antoni", "Loris", "" ] ]
new_dataset
0.987431
2206.01724
Chengliang Zhong
Chengliang Zhong, Peixing You, Xiaoxue Chen, Hao Zhao, Fuchun Sun, Guyue Zhou, Xiaodong Mu, Chuang Gan, Wenbing Huang
SNAKE: Shape-aware Neural 3D Keypoint Field
Accepted by NeurIPS 2022. Codes are available at https://github.com/zhongcl-thu/SNAKE
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting 3D keypoints from point clouds is important for shape reconstruction, while this work investigates the dual question: can shape reconstruction benefit 3D keypoint detection? Existing methods either seek salient features according to statistics of different orders or learn to predict keypoints that are invariant to transformation. Nevertheless, the idea of incorporating shape reconstruction into 3D keypoint detection is under-explored. We argue that this is restricted by former problem formulations. To this end, a novel unsupervised paradigm named SNAKE is proposed, which is short for shape-aware neural 3D keypoint field. Similar to recent coordinate-based radiance or distance fields, our network takes 3D coordinates as inputs and predicts implicit shape indicators and keypoint saliency simultaneously, thus naturally entangling 3D keypoint detection and shape reconstruction. We achieve superior performance on various public benchmarks, including the standalone object datasets ModelNet40, KeypointNet, and SMPL meshes, and the scene-level datasets 3DMatch and Redwood. Intrinsic shape awareness brings several advantages. (1) SNAKE generates 3D keypoints consistent with human semantic annotation, even without such supervision. (2) SNAKE outperforms counterparts in terms of repeatability, especially when the input point clouds are down-sampled. (3) The generated keypoints allow accurate geometric registration, notably in a zero-shot setting. Codes are available at https://github.com/zhongcl-thu/SNAKE
[ { "version": "v1", "created": "Fri, 3 Jun 2022 17:58:43 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 07:45:17 GMT" } ]
2022-10-18T00:00:00
[ [ "Zhong", "Chengliang", "" ], [ "You", "Peixing", "" ], [ "Chen", "Xiaoxue", "" ], [ "Zhao", "Hao", "" ], [ "Sun", "Fuchun", "" ], [ "Zhou", "Guyue", "" ], [ "Mu", "Xiaodong", "" ], [ "Gan", "Chuang", "" ], [ "Huang", "Wenbing", "" ] ]
new_dataset
0.978998
2206.10071
Kay Liu
Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu
BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs
NeurIPS 2022. Benchmark available at https://github.com/pygod-team/pygod/tree/main/benchmark
null
null
null
cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Detecting which nodes in graphs are outliers is a relatively new machine learning task with numerous applications. Despite the proliferation of algorithms developed in recent years for this task, there has been no standard comprehensive setting for performance evaluation. Consequently, it has been difficult to understand which methods work well and when under a broad range of settings. To bridge this gap, we present--to the best of our knowledge--the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs called BOND, with the following highlights. (1) We benchmark the outlier detection performance of 14 methods ranging from classical matrix factorization to the latest graph neural networks. (2) Using nine real datasets, our benchmark assesses how the different detection methods respond to two major types of synthetic outliers and separately to "organic" (real non-synthetic) outliers. (3) Using an existing random graph generation technique, we produce a family of synthetically generated datasets of different graph sizes that enable us to compare the running time and memory usage of the different outlier detection algorithms. Based on our experimental results, we discuss the pros and cons of existing graph outlier detection algorithms, and we highlight opportunities for future research. Importantly, our code is freely available and meant to be easily extendable: https://github.com/pygod-team/pygod/tree/main/benchmark
[ { "version": "v1", "created": "Tue, 21 Jun 2022 01:46:38 GMT" }, { "version": "v2", "created": "Sun, 16 Oct 2022 01:18:45 GMT" } ]
2022-10-18T00:00:00
[ [ "Liu", "Kay", "" ], [ "Dou", "Yingtong", "" ], [ "Zhao", "Yue", "" ], [ "Ding", "Xueying", "" ], [ "Hu", "Xiyang", "" ], [ "Zhang", "Ruitong", "" ], [ "Ding", "Kaize", "" ], [ "Chen", "Canyu", "" ], [ "Peng", "Hao", "" ], [ "Shu", "Kai", "" ], [ "Sun", "Lichao", "" ], [ "Li", "Jundong", "" ], [ "Chen", "George H.", "" ], [ "Jia", "Zhihao", "" ], [ "Yu", "Philip S.", "" ] ]
new_dataset
0.995685
2206.10910
Xiao-Feng Zhang
Xiao Feng Zhang and Chao Chen Gu and Shan Ying Zhu
SpA-Former: Transformer image shadow detection and removal via spatial attention
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose an end-to-end SpA-Former to recover a shadow-free image from a single shaded image. Unlike traditional methods that require two steps for shadow detection and then shadow removal, SpA-Former unifies these steps into a one-stage network that directly learns the mapping function between shadowed and shadow-free images, without requiring a separate shadow detection stage. Thus, SpA-Former is adaptable to real image de-shadowing for shadows projected on different semantic regions. SpA-Former consists of a transformer layer, a series of joint Fourier transform residual blocks, and two-wheel joint spatial attention. The network handles the task while achieving very fast processing efficiency. Our code is released at https://github.com/zhangbaijin/SpA-Former-shadow-removal
[ { "version": "v1", "created": "Wed, 22 Jun 2022 08:30:22 GMT" }, { "version": "v2", "created": "Wed, 29 Jun 2022 04:36:52 GMT" }, { "version": "v3", "created": "Mon, 17 Oct 2022 03:27:55 GMT" } ]
2022-10-18T00:00:00
[ [ "Zhang", "Xiao Feng", "" ], [ "Gu", "Chao Chen", "" ], [ "Zhu", "Shan Ying", "" ] ]
new_dataset
0.980004
2206.13597
Jiepeng Wang
Jiepeng Wang, Peng Wang, Xiaoxiao Long, Christian Theobalt, Taku Komura, Lingjie Liu, Wenping Wang
NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Reconstructing 3D indoor scenes from 2D images is an important task in many computer vision and graphics applications. A main challenge in this task is that large texture-less areas in typical indoor scenes make existing methods struggle to produce satisfactory reconstruction results. We propose a new method, named NeuRIS, for high-quality reconstruction of indoor scenes. The key idea of NeuRIS is to integrate estimated normals of indoor scenes as a prior in a neural rendering framework for reconstructing large texture-less shapes and, importantly, to do this in an adaptive manner to also enable the reconstruction of irregular shapes with fine details. Specifically, we evaluate the faithfulness of the normal priors on-the-fly by checking the multi-view consistency of the reconstruction during the optimization process. Only the normal priors accepted as faithful will be utilized for 3D reconstruction, which typically happens in regions of smooth shapes, possibly with weak texture. However, for those regions with small objects or thin structures, for which the normal priors are usually unreliable, we will rely only on the visual features of the input images, since such regions typically contain relatively rich visual features (e.g., shade changes and boundary contours). Extensive experiments show that NeuRIS significantly outperforms the state-of-the-art methods in terms of reconstruction quality.
[ { "version": "v1", "created": "Mon, 27 Jun 2022 19:22:03 GMT" }, { "version": "v2", "created": "Sun, 16 Oct 2022 14:30:57 GMT" } ]
2022-10-18T00:00:00
[ [ "Wang", "Jiepeng", "" ], [ "Wang", "Peng", "" ], [ "Long", "Xiaoxiao", "" ], [ "Theobalt", "Christian", "" ], [ "Komura", "Taku", "" ], [ "Liu", "Lingjie", "" ], [ "Wang", "Wenping", "" ] ]
new_dataset
0.99711
2207.06025
Domenico Lof\`u
Domenico Lof\`u, Pietro Tedeschi, Tommaso Di Noia and Eugenio Di Sciascio
URANUS: Radio Frequency Tracking, Classification and Identification of Unmanned Aircraft Vehicles
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Safety and security issues for Critical Infrastructures (CI) are growing as attackers increasingly adopt drones as an attack vector flying in sensitive airspace, such as airports, military bases, city centres, and crowded places. The rapid proliferation of drones for merchandise shipping, recreational activities, and other commercial applications poses severe concerns for CI operators due to the violations and invasions of restricted airspaces. A cost-effective framework is needed to detect, classify and identify the presence of drones in such cases. In this paper, we demonstrate that CI operators can timely and efficiently detect, classify and identify drones (multi-copter and fixed-wing) invading no-drone zones, with an inexpensive RF-based detection framework named URANUS. Our experiments show that by using a Random Forest classifier, we achieved an accuracy of 93.4% in the classification of one or multiple specific drones. The tracking performance achieves an average MAE of 0.3650, MSE of 0.9254, and R2 of 0.7502. Our framework has been released as open source, to enable the community to verify our findings and use URANUS as a ready-to-use basis for further analysis.
[ { "version": "v1", "created": "Wed, 13 Jul 2022 08:14:18 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 08:27:44 GMT" } ]
2022-10-18T00:00:00
[ [ "Lofù", "Domenico", "" ], [ "Tedeschi", "Pietro", "" ], [ "Di Noia", "Tommaso", "" ], [ "Di Sciascio", "Eugenio", "" ] ]
new_dataset
0.999657
2207.08192
Zeyi Liu
Zeyi Liu, Zhenjia Xu, Shuran Song
BusyBot: Learning to Interact, Reason, and Plan in a BusyBoard Environment
CoRL 2022 camera-ready; Website: https://busybot.cs.columbia.edu/
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce BusyBoard, a toy-inspired robot learning environment that leverages a diverse set of articulated objects and inter-object functional relations to provide rich visual feedback for robot interactions. Based on this environment, we introduce a learning framework, BusyBot, which allows an agent to jointly acquire three fundamental capabilities (interaction, reasoning, and planning) in an integrated and self-supervised manner. With the rich sensory feedback provided by BusyBoard, BusyBot first learns a policy to efficiently interact with the environment; then, with data collected using the policy, BusyBot reasons about the inter-object functional relations through a causal discovery network; and finally, by combining the learned interaction policy and relation reasoning skill, the agent is able to perform goal-conditioned manipulation tasks. We evaluate BusyBot in both simulated and real-world environments, and validate its generalizability to unseen objects and relations. Video is available at https://youtu.be/EJ98xBJZ9ek.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 14:43:06 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 03:23:15 GMT" } ]
2022-10-18T00:00:00
[ [ "Liu", "Zeyi", "" ], [ "Xu", "Zhenjia", "" ], [ "Song", "Shuran", "" ] ]
new_dataset
0.999045
2207.09639
Justin Cui
Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh
DC-BENCH: Dataset Condensation Benchmark
null
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Dataset Condensation is a newly emerging technique aiming at learning a tiny dataset that captures the rich information encoded in the original dataset. As the size of the datasets contemporary machine learning models rely on becomes increasingly large, condensation methods have become a prominent direction for accelerating network training and reducing data storage. Although numerous methods have been proposed in this rapidly growing field, evaluating and comparing different condensation methods is non-trivial and still remains an open issue. The quality of a condensed dataset is often obscured by the many critical factors that contribute to end performance, such as data augmentation and model architectures. The lack of a systematic way to evaluate and compare condensation methods not only hinders our understanding of existing techniques, but also discourages practical usage of the synthesized datasets. This work provides the first large-scale standardized benchmark on Dataset Condensation. It consists of a suite of evaluations to comprehensively reflect the generalizability and effectiveness of condensation methods through the lens of their generated datasets. Leveraging this benchmark, we conduct a large-scale study of current condensation methods, and report many insightful findings that open up new possibilities for future development. The benchmark library, including evaluators, baseline methods, and generated datasets, is open-sourced to facilitate future research and application.
[ { "version": "v1", "created": "Wed, 20 Jul 2022 03:54:05 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 07:47:01 GMT" } ]
2022-10-18T00:00:00
[ [ "Cui", "Justin", "" ], [ "Wang", "Ruochen", "" ], [ "Si", "Si", "" ], [ "Hsieh", "Cho-Jui", "" ] ]
new_dataset
0.999824
2207.10894
Julie Jiang
Julie Jiang, Emily Chen, Luca Luceri, Goran Muri\'c, Francesco Pierri, Ho-Chun Herbert Chang, Emilio Ferrara
What are Your Pronouns? Examining Gender Pronoun Usage on Twitter
13 pages, 10 figures, 2 tables
null
null
null
cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Stating your gender pronouns, along with your name, is becoming the new norm of self-introductions at school, at the workplace, and online. The increasing prevalence and awareness of nonconforming gender identities put discussions of developing gender-inclusive language at the forefront. This work presents the first empirical research on gender pronoun usage on large-scale social media. Leveraging a Twitter dataset of over 2 billion tweets collected continuously over two years, we find that the public declaration of gender pronouns is on the rise, with most people declaring she-series pronouns, followed by he-series pronouns, and a smaller but considerable number of non-binary pronouns. From analyzing Twitter posts and sharing activities, we can discern users who use gender pronouns from those who do not, and also distinguish users of various gender identities. We further illustrate the relationship between explicit forms of social network exposure to gender pronouns and users' eventual gender pronoun adoption. This work carries crucial implications for gender-identity studies and initiates new research directions in gender-related fairness and inclusion, as well as support against online harassment and discrimination on social media.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 06:13:45 GMT" }, { "version": "v2", "created": "Fri, 14 Oct 2022 21:14:46 GMT" } ]
2022-10-18T00:00:00
[ [ "Jiang", "Julie", "" ], [ "Chen", "Emily", "" ], [ "Luceri", "Luca", "" ], [ "Murić", "Goran", "" ], [ "Pierri", "Francesco", "" ], [ "Chang", "Ho-Chun Herbert", "" ], [ "Ferrara", "Emilio", "" ] ]
new_dataset
0.998733
2207.12126
Mathilde Papillon
Mathilde Papillon, Mariel Pettee, Nina Miolane
PirouNet: Creating Dance through Artist-Centric Deep Learning
18 pages, 6 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Using Artificial Intelligence (AI) to create dance choreography with intention is still at an early stage. Methods that conditionally generate dance sequences remain limited in their ability to follow choreographer-specific creative direction, often relying on external prompts or supervised learning. In the same vein, fully annotated dance datasets are rare and labor-intensive. To fill this gap and help leverage deep learning as a meaningful tool for choreographers, we propose "PirouNet", a semi-supervised conditional recurrent variational autoencoder together with a dance labeling web application. PirouNet allows dance professionals to annotate data with their own subjective creative labels and subsequently generate new bouts of choreography based on their aesthetic criteria. Thanks to the proposed semi-supervised approach, PirouNet only requires a small portion of the dataset to be labeled, typically on the order of 1%. We demonstrate PirouNet's capabilities as it generates original choreography based on the "Laban Time Effort", an established dance notion describing intention for a movement's time dynamics. We extensively evaluate PirouNet's dance creations through a series of qualitative and quantitative metrics, validating its applicability as a tool for choreographers.
[ { "version": "v1", "created": "Thu, 21 Jul 2022 18:04:59 GMT" }, { "version": "v2", "created": "Fri, 14 Oct 2022 23:49:17 GMT" } ]
2022-10-18T00:00:00
[ [ "Papillon", "Mathilde", "" ], [ "Pettee", "Mariel", "" ], [ "Miolane", "Nina", "" ] ]
new_dataset
0.999061
2208.08738
Chang Xu
Chang Xu, Jinwang Wang, Wen Yang, Huai Yu, Lei Yu, Gui-Song Xia
RFLA: Gaussian Receptive Field based Label Assignment for Tiny Object Detection
ECCV2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Detecting tiny objects is one of the main obstacles hindering the development of object detection. The performance of generic object detectors tends to drastically deteriorate on tiny object detection tasks. In this paper, we point out that either box prior in the anchor-based detector or point prior in the anchor-free detector is sub-optimal for tiny objects. Our key observation is that the current anchor-based or anchor-free label assignment paradigms will incur many outlier tiny-sized ground truth samples, leading to detectors imposing less focus on the tiny objects. To this end, we propose a Gaussian Receptive Field based Label Assignment (RFLA) strategy for tiny object detection. Specifically, RFLA first utilizes the prior information that the feature receptive field follows Gaussian distribution. Then, instead of assigning samples with IoU or center sampling strategy, a new Receptive Field Distance (RFD) is proposed to directly measure the similarity between the Gaussian receptive field and ground truth. Considering that the IoU-threshold based and center sampling strategy are skewed to large objects, we further design a Hierarchical Label Assignment (HLA) module based on RFD to achieve balanced learning for tiny objects. Extensive experiments on four datasets demonstrate the effectiveness of the proposed methods. Especially, our approach outperforms the state-of-the-art competitors with 4.0 AP points on the AI-TOD dataset. Codes are available at https://github.com/Chasel-Tsui/mmdet-rfla
[ { "version": "v1", "created": "Thu, 18 Aug 2022 09:35:56 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 06:25:59 GMT" } ]
2022-10-18T00:00:00
[ [ "Xu", "Chang", "" ], [ "Wang", "Jinwang", "" ], [ "Yang", "Wen", "" ], [ "Yu", "Huai", "" ], [ "Yu", "Lei", "" ], [ "Xia", "Gui-Song", "" ] ]
new_dataset
0.994689
2208.10004
Muying Luo
Muying Luo, Shunping Ji, Shiqing Wei
A diverse large-scale building dataset and a novel plug-and-play domain generalization method for building extraction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a new building dataset and propose a novel domain generalization method to facilitate the development of building extraction from high-resolution remote sensing images. The problems with current building datasets are that they lack diversity, their label quality is unsatisfactory, and they can hardly be used to train a building extraction model with good generalization ability or to properly evaluate the real performance of a model in practical scenes. To address these issues, we built a diverse, large-scale, and high-quality building dataset named the WHU-Mix building dataset, which is more practice-oriented. The WHU-Mix building dataset consists of a training/validation set containing 43,727 diverse images collected from all over the world, and a test set containing 8402 images from five other cities on five continents. In addition, to further improve the generalization ability of a building extraction model, we propose a domain generalization method named batch style mixing (BSM), which can be embedded as an efficient plug-and-play module in the front-end of a building extraction model, providing the model with a progressively larger data distribution to learn data-invariant knowledge. The experiments conducted in this study confirmed the potential of the WHU-Mix building dataset to improve the performance of a building extraction model, resulting in a 6-36% improvement in mIoU compared to the other existing datasets. The adverse impact of the inaccurate labels in the other datasets can cause an IoU decrease of about 20%. The experiments also confirmed the high performance of the proposed BSM module in enhancing the generalization ability and robustness of a model, exceeding the baseline model without domain generalization by 13% and the recent domain generalization methods by 4-15% in mIoU.
[ { "version": "v1", "created": "Mon, 22 Aug 2022 01:43:13 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 13:32:55 GMT" } ]
2022-10-18T00:00:00
[ [ "Luo", "Muying", "" ], [ "Ji", "Shunping", "" ], [ "Wei", "Shiqing", "" ] ]
new_dataset
0.966063
2209.05434
Junshu Tang
Junshu Tang, Bo Zhang, Binxin Yang, Ting Zhang, Dong Chen, Lizhuang Ma, Fang Wen
3DFaceShop: Explicitly Controllable 3D-Aware Portrait Generation
Project webpage: https://junshutang.github.io/control/index.html
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In contrast to the traditional avatar creation pipeline which is a costly process, contemporary generative approaches directly learn the data distribution from photographs. While plenty of works extend unconditional generative models and achieve some levels of controllability, it is still challenging to ensure multi-view consistency, especially in large poses. In this work, we propose a network that generates 3D-aware portraits while being controllable according to semantic parameters regarding pose, identity, expression and illumination. Our network uses neural scene representation to model 3D-aware portraits, whose generation is guided by a parametric face model that supports explicit control. While the latent disentanglement can be further enhanced by contrasting images with partially different attributes, there still exists noticeable inconsistency in non-face areas, e.g., hair and background, when animating expressions. We solve this by proposing a volume blending strategy in which we form a composite output by blending dynamic and static areas, with two parts segmented from the jointly learned semantic field. Our method outperforms prior arts in extensive experiments, producing realistic portraits with vivid expression in natural lighting when viewed from free viewpoints. It also demonstrates generalization ability to real images as well as out-of-domain data, showing great promise in real applications.
[ { "version": "v1", "created": "Mon, 12 Sep 2022 17:40:08 GMT" }, { "version": "v2", "created": "Tue, 20 Sep 2022 07:35:50 GMT" }, { "version": "v3", "created": "Mon, 17 Oct 2022 07:02:29 GMT" } ]
2022-10-18T00:00:00
[ [ "Tang", "Junshu", "" ], [ "Zhang", "Bo", "" ], [ "Yang", "Binxin", "" ], [ "Zhang", "Ting", "" ], [ "Chen", "Dong", "" ], [ "Ma", "Lizhuang", "" ], [ "Wen", "Fang", "" ] ]
new_dataset
0.997465
2209.09874
Fei Xia
Boyuan Chen and Fei Xia and Brian Ichter and Kanishka Rao and Keerthana Gopalakrishnan and Michael S. Ryoo and Austin Stone and Daniel Kappler
Open-vocabulary Queryable Scene Representations for Real World Planning
v2, added references to concurrent work and acknowledgments
null
null
null
cs.RO cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have unlocked new capabilities of task planning from human instructions. However, prior attempts to apply LLMs to real-world robotic tasks are limited by the lack of grounding in the surrounding scene. In this paper, we develop NLMap, an open-vocabulary and queryable scene representation to address this problem. NLMap serves as a framework to gather and integrate contextual information into LLM planners, allowing them to see and query available objects in the scene before generating a context-conditioned plan. NLMap first establishes a natural language queryable scene representation with visual language models (VLMs). An LLM-based object proposal module parses instructions and proposes involved objects to query the scene representation for object availability and location. An LLM planner then plans with such information about the scene. NLMap allows robots to operate without a fixed list of objects or executable options, enabling real robot operation unachievable by previous methods. Project website: https://nlmap-saycan.github.io
[ { "version": "v1", "created": "Tue, 20 Sep 2022 17:29:56 GMT" }, { "version": "v2", "created": "Sat, 15 Oct 2022 07:05:36 GMT" } ]
2022-10-18T00:00:00
[ [ "Chen", "Boyuan", "" ], [ "Xia", "Fei", "" ], [ "Ichter", "Brian", "" ], [ "Rao", "Kanishka", "" ], [ "Gopalakrishnan", "Keerthana", "" ], [ "Ryoo", "Michael S.", "" ], [ "Stone", "Austin", "" ], [ "Kappler", "Daniel", "" ] ]
new_dataset
0.996954
2210.05236
Lin Ma
Lin Ma, Jiangtao Gong, Hao Xu, Hao Chen, Hao Zhao, Wenbing Huang and Guyue Zhou
Planning Assembly Sequence with Graph Transformer
Submitted to ICRA2023
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assembly sequence planning (ASP) is an essential process for modern manufacturing; it has been proven to be NP-complete, so finding effective and efficient solutions has been a challenge for researchers in the field. In this paper, we present a graph-transformer-based framework for the ASP problem, which is trained and demonstrated on a self-collected ASP database containing a set of LEGO models. Each LEGO model is abstracted to a heterogeneous graph structure after a thorough analysis of the original structure and feature extraction. The ground-truth assembly sequence is first generated by brute-force search and then adjusted manually to be in line with human rational habits. Based on this self-collected ASP dataset, we propose a heterogeneous graph-transformer framework to learn the latent rules for assembly planning. We evaluated the proposed framework in a series of experiments. The results show that the similarity of the predicted and ground-truth sequences can reach 0.44, a medium correlation measured by Kendall's $\tau$. Meanwhile, we compared the different effects of node features and edge features and generated a feasible and reasonable assembly sequence as a benchmark for further research. Our dataset and code are available at https://github.com/AIR-DISCOVER/ICRA\_ASP.
[ { "version": "v1", "created": "Tue, 11 Oct 2022 08:06:16 GMT" }, { "version": "v2", "created": "Wed, 12 Oct 2022 15:00:34 GMT" }, { "version": "v3", "created": "Sat, 15 Oct 2022 08:26:28 GMT" } ]
2022-10-18T00:00:00
[ [ "Ma", "Lin", "" ], [ "Gong", "Jiangtao", "" ], [ "Xu", "Hao", "" ], [ "Chen", "Hao", "" ], [ "Zhao", "Hao", "" ], [ "Huang", "Wenbing", "" ], [ "Zhou", "Guyue", "" ] ]
new_dataset
0.985104
2210.06570
Yuekun Dai
Yuekun Dai, Chongyi Li, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
Flare7K: A Phenomenological Nighttime Flare Removal Dataset
Camera-ready version for NeurIPS 2022 Track Datasets and Benchmarks
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial lights commonly leave strong lens flare artifacts on images captured at night. Nighttime flare not only affects the visual quality but also degrades the performance of vision algorithms. Existing flare removal methods mainly focus on removing daytime flares and fail at nighttime. Nighttime flare removal is challenging because of the unique luminance and spectrum of artificial lights and the diverse patterns and image degradation of the flares captured at night. The scarcity of nighttime flare removal datasets limits the research on this crucial task. In this paper, we introduce Flare7K, the first nighttime flare removal dataset, which is generated based on the observation and statistics of real-world nighttime lens flares. It offers 5,000 scattering and 2,000 reflective flare images, consisting of 25 types of scattering flares and 10 types of reflective flares. The 7,000 flare patterns can be randomly added to flare-free images, forming flare-corrupted and flare-free image pairs. With the paired data, we can train deep models to effectively restore flare-corrupted images taken in the real world. Apart from abundant flare patterns, we also provide rich annotations, including the labeling of the light source, glare with shimmer, reflective flare, and streak, which are commonly absent from existing datasets. Hence, our dataset can facilitate new work in nighttime flare removal and more fine-grained analysis of flare patterns. Extensive experiments show that our dataset adds diversity to existing flare datasets and pushes the frontier of nighttime flare removal.
[ { "version": "v1", "created": "Wed, 12 Oct 2022 20:17:24 GMT" } ]
2022-10-18T00:00:00
[ [ "Dai", "Yuekun", "" ], [ "Li", "Chongyi", "" ], [ "Zhou", "Shangchen", "" ], [ "Feng", "Ruicheng", "" ], [ "Loy", "Chen Change", "" ] ]
new_dataset
0.999791
2210.06909
Georg W\"olflein
Georg W\"olflein, In Hwa Um, David J Harrison, Ognjen Arandjelovi\'c
HoechstGAN: Virtual Lymphocyte Staining Using Generative Adversarial Networks
Accepted at IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023
null
null
null
cs.CV cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The presence and density of specific types of immune cells are important to understand a patient's immune response to cancer. However, immunofluorescence staining required to identify T cell subtypes is expensive, time-consuming, and rarely performed in clinical settings. We present a framework to virtually stain Hoechst images (which are cheap and widespread) with both CD3 and CD8 to identify T cell subtypes in clear cell renal cell carcinoma using generative adversarial networks. Our proposed method jointly learns both staining tasks, incentivising the network to incorporate mutually beneficial information from each task. We devise a novel metric to quantify the virtual staining quality, and use it to evaluate our method.
[ { "version": "v1", "created": "Thu, 13 Oct 2022 11:23:19 GMT" }, { "version": "v2", "created": "Mon, 17 Oct 2022 12:21:42 GMT" } ]
2022-10-18T00:00:00
[ [ "Wölflein", "Georg", "" ], [ "Um", "In Hwa", "" ], [ "Harrison", "David J", "" ], [ "Arandjelović", "Ognjen", "" ] ]
new_dataset
0.996652
2210.08015
Juan Heredia
Juan Heredia, Christian Schlette, and Mikkel Baun Kj{\ae}rgaard
AR Training App for Energy Optimal Programming of Cobots
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Worldwide, most factories aim for low-cost and fast production, ignoring resource and energy consumption. High revenues have thus been accompanied by environmental degradation. The United Nations reacted to this ecological problem and proposed the Sustainable Development Goals, one of which is Sustainable Production (Goal 12). In addition, the participation of lightweight robots, such as collaborative robots, in modern industrial production is increasing. The energy consumption of a single collaborative robot is not significant; however, the aggregate consumption of the growing number of cobots worldwide is considerable. Consequently, our research focuses on strategies to reduce the energy consumption of lightweight robots, aiming for sustainable production. Firstly, the energy consumption of the lightweight robot UR10e is assessed by a set of experiments. We analyzed the results of the experiments to describe the relationship between the energy consumption and the evaluation parameters, thus paving the way to optimization strategies. Next, we propose four strategies to reduce energy consumption: 1) optimal standby position, 2) optimal robot instruction, 3) optimal motion time, and 4) reduction of dissipative energy. The results show that cobots can potentially reduce their energy consumption by 3\% up to 37\%, depending on the optimization technique. To disseminate the results of our research, we developed an AR game in which users learn how to program cobots energy-efficiently.
[ { "version": "v1", "created": "Fri, 14 Oct 2022 15:10:43 GMT" } ]
2022-10-18T00:00:00
[ [ "Heredia", "Juan", "" ], [ "Schlette", "Christian", "" ], [ "Kjærgaard", "Mikkel Baun", "" ] ]
new_dataset
0.983048
2210.08041
Song Gao
Yunlei Liang, Jiawei Zhu, Wen Ye, Song Gao
Region2Vec: Community Detection on Spatial Networks Using Graph Embedding with Node Attributes and Spatial Interactions
4 pages, 1 figure
The 30th International Conference on Advances in Geographic Information Systems (SIGSPATIAL'22), November 1-4, 2022, Seattle, WA, USA
10.1145/3557915.3560974
null
cs.SI cs.AI
http://creativecommons.org/licenses/by/4.0/
Community Detection algorithms are used to detect densely connected components in complex networks and reveal underlying relationships among components. As a special type of networks, spatial networks are usually generated by the connections among geographic regions. Identifying the spatial network communities can help reveal the spatial interaction patterns, understand the hidden regional structures and support regional development decision-making. Given the recent development of Graph Convolutional Networks (GCN) and its powerful performance in identifying multi-scale spatial interactions, we proposed an unsupervised GCN-based community detection method "region2vec" on spatial networks. Our method first generates node embeddings for regions that share common attributes and have intense spatial interactions, and then applies clustering algorithms to detect communities based on their embedding similarity and spatial adjacency. Experimental results show that while existing methods trade off either attribute similarities or spatial interactions for one another, "region2vec" maintains a great balance between both and performs the best when one wants to maximize both attribute similarities and spatial interactions within communities.
[ { "version": "v1", "created": "Mon, 10 Oct 2022 02:32:55 GMT" } ]
2022-10-18T00:00:00
[ [ "Liang", "Yunlei", "" ], [ "Zhu", "Jiawei", "" ], [ "Ye", "Wen", "" ], [ "Gao", "Song", "" ] ]
new_dataset
0.979923
2210.08061
Mahyar Najibi
Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, Dragomir Anguelov
Motion Inspired Unsupervised Perception and Prediction in Autonomous Driving
ECCV 2022
null
null
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning-based perception and prediction modules in modern autonomous driving systems typically rely on expensive human annotation and are designed to perceive only a handful of predefined object categories. This closed-set paradigm is insufficient for the safety-critical autonomous driving task, where the autonomous vehicle needs to process arbitrarily many types of traffic participants and their motion behaviors in a highly dynamic world. To address this difficulty, this paper pioneers a novel and challenging direction, i.e., training perception and prediction models to understand open-set moving objects, with no human supervision. Our proposed framework uses self-learned flow to trigger an automated meta labeling pipeline to achieve automatic supervision. 3D detection experiments on the Waymo Open Dataset show that our method significantly outperforms classical unsupervised approaches and is even competitive to the counterpart with supervised scene flow. We further show that our approach generates highly promising results in open-set 3D detection and trajectory prediction, confirming its potential in closing the safety gap of fully supervised systems.
[ { "version": "v1", "created": "Fri, 14 Oct 2022 18:55:44 GMT" } ]
2022-10-18T00:00:00
[ [ "Najibi", "Mahyar", "" ], [ "Ji", "Jingwei", "" ], [ "Zhou", "Yin", "" ], [ "Qi", "Charles R.", "" ], [ "Yan", "Xinchen", "" ], [ "Ettinger", "Scott", "" ], [ "Anguelov", "Dragomir", "" ] ]
new_dataset
0.977721
2210.08116
Md. Nayem Hasan Muntasir
Md. Nayem Hasan Muntasir, Tariqul Islam Siam, Md. Kamruzzaman Sarker
A Low-cost Humanoid Prototype Intended to assist people with disability using Raspberry Pi
8 pages, 3 figures, 2 tables
null
null
null
cs.RO cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper delineates the making of a humanoid prototype intended to assist people with disability (PWD). The assistance that this prototype offers is rather rudimentary. However, our key focus is to make the prototype cost-friendly while retaining its humanoid-like functionalities. Considering the growing needs of robots, provisions for the further installation of features have been made in this project. The prototype is of humanoid shape, harnessing the power of an Artificial Neural Network (ANN) to converse with users. The prototype uses a Raspberry Pi, and as the computational capability of a Raspberry Pi is minimal, we cut corners to squeeze out the last drop of performance and make it as efficient as possible.
[ { "version": "v1", "created": "Tue, 4 Oct 2022 20:05:03 GMT" } ]
2022-10-18T00:00:00
[ [ "Muntasir", "Md. Nayem Hasan", "" ], [ "Siam", "Tariqul Islam", "" ], [ "Sarker", "Md. Kamruzzaman", "" ] ]
new_dataset
0.999439
2210.08129
Shubhanshu Mishra
Shubhanshu Mishra, Aman Saini, Raheleh Makki, Sneha Mehta, Aria Haghighi, Ali Mollahosseini
TweetNERD -- End to End Entity Linking Benchmark for Tweets
19 pages, 2 figures. Accepted to Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track 2022. Data available at: https://doi.org/10.5281/zenodo.6617192 under Creative Commons Attribution 4.0 International (CC BY 4.0) license. Check out more details at https://github.com/twitter-research/TweetNERD
null
null
null
cs.CL cs.AI cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Named Entity Recognition and Disambiguation (NERD) systems are foundational for information retrieval, question answering, event detection, and other natural language processing (NLP) applications. We introduce TweetNERD, a dataset of 340K+ Tweets across 2010-2021, for benchmarking NERD systems on Tweets. This is the largest and most temporally diverse open-sourced benchmark dataset for NERD on Tweets and can be used to facilitate research in this area. We describe the evaluation setup with TweetNERD for three NERD tasks: Named Entity Recognition (NER), Entity Linking with True Spans (EL), and End to End Entity Linking (End2End); and provide the performance of existing publicly available methods on specific TweetNERD splits. TweetNERD is available at: https://doi.org/10.5281/zenodo.6617192 under Creative Commons Attribution 4.0 International (CC BY 4.0) license. Check out more details at https://github.com/twitter-research/TweetNERD.
[ { "version": "v1", "created": "Fri, 14 Oct 2022 21:55:07 GMT" } ]
2022-10-18T00:00:00
[ [ "Mishra", "Shubhanshu", "" ], [ "Saini", "Aman", "" ], [ "Makki", "Raheleh", "" ], [ "Mehta", "Sneha", "" ], [ "Haghighi", "Aria", "" ], [ "Mollahosseini", "Ali", "" ] ]
new_dataset
0.99938
2210.08132
Weili Wang
Weili Wang, Omid Abbasi, Halim Yanikomeroglu, Chengchao Liang, Lun Tang, and Qianbin Chen
VHetNets for AI and AI for VHetNets: An Anomaly Detection Case Study for Ubiquitous IoT
null
null
null
null
cs.NI cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vertical heterogeneous networks (VHetNets) and artificial intelligence (AI) play critical roles in 6G and beyond networks. This article presents an AI-native VHetNets architecture to enable the synergy of VHetNets and AI, thereby supporting a variety of AI services while facilitating automatic and intelligent network management. Anomaly detection in the Internet of Things (IoT) is a major AI service required by many fields, including intrusion detection, state monitoring, device-activity analysis, security supervision, and so on. Conventional anomaly detection technologies mainly consider anomaly detection as a standalone service that is independent of any other network management functionalities, which cannot be used directly in ubiquitous IoT due to the resource-constrained end nodes and decentralized data distribution. In this article, we develop an AI-native VHetNets-enabled framework to provide the anomaly detection service for ubiquitous IoT, whose implementation is assisted by intelligent network management functionalities. We first discuss the possibilities of VHetNets used for distributed AI model training to provide the anomaly detection service for ubiquitous IoT, i.e., VHetNets for AI. After that, we study the application of AI approaches in helping provide automatic and intelligent network management functionalities for VHetNets, i.e., AI for VHetNets, whose aim is to facilitate the efficient implementation of the anomaly detection service. Finally, a case study is presented to demonstrate the efficiency and effectiveness of the proposed AI-native VHetNets-enabled anomaly detection framework.
[ { "version": "v1", "created": "Fri, 14 Oct 2022 21:55:57 GMT" } ]
2022-10-18T00:00:00
[ [ "Wang", "Weili", "" ], [ "Abbasi", "Omid", "" ], [ "Yanikomeroglu", "Halim", "" ], [ "Liang", "Chengchao", "" ], [ "Tang", "Lun", "" ], [ "Chen", "Qianbin", "" ] ]
new_dataset
0.967973
2210.08137
Ahalya Prabhakar
Ahalya Prabhakar and Aude Billard
User-specific, Adaptable Safety Controllers Facilitate User Adoption in Human-Robot Collaboration
Presented at the AI-HRI Symposium at AAAI Fall Symposium Series (FSS) 2022 (arXiv:2209.14292)
null
null
AIHRI/2022/5084
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As assistive and collaborative robots become more ubiquitous in the real world, we need to develop interfaces and controllers that are safe for users, to build trust and encourage adoption. In this Blue Sky paper, we discuss the need for co-evolving task- and user-specific safety controllers that can accommodate people's safety preferences. We argue that while most adaptive controllers focus on behavioral adaptation, safety adaptation is also a major consideration for building trust in collaborative systems. Furthermore, we highlight the need for adaptation over time, to account for users' changes in preferences as experience and trust build. We provide a general formulation for what these interfaces should look like and what features are necessary for making them feasible and successful. In this formulation, users provide demonstrations and labelled safety ratings from which a safety value function is learned. These value functions can be updated by updating the safety labels on demonstrations to learn an updated function. We discuss how this can be implemented at a high level, as well as some promising approaches and techniques for enabling this.
[ { "version": "v1", "created": "Fri, 14 Oct 2022 22:05:39 GMT" } ]
2022-10-18T00:00:00
[ [ "Prabhakar", "Ahalya", "" ], [ "Billard", "Aude", "" ] ]
new_dataset
0.996917
2210.08249
Yongwei Zhou
Yongwei Zhou, Junwei Bao, Chaoqun Duan, Youzheng Wu, Xiaodong He, Tiejun Zhao
UniRPG: Unified Discrete Reasoning over Table and Text as Program Generation
Accepted to EMNLP 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Question answering that requires discrete reasoning over knowledge, e.g., arithmetic computation, comparison, and counting, is a challenging task. In this paper, we propose UniRPG, a semantic-parsing-based approach with advantages in interpretability and scalability, to perform unified discrete reasoning over heterogeneous knowledge resources, i.e., table and text, as program generation. Concretely, UniRPG consists of a neural programmer and a symbolic program executor, where a program is the composition of a set of pre-defined general atomic and higher-order operations and arguments extracted from table and text. First, the programmer parses a question into a program by generating operations and copying arguments, and then the executor derives answers from table and text based on the program. To alleviate the costly program annotation issue, we design a distant supervision approach for programmer learning, where pseudo programs are automatically constructed without annotated derivations. Extensive experiments on the TAT-QA dataset show that UniRPG achieves tremendous improvements and enhances interpretability and scalability compared with state-of-the-art methods, even without derivation annotation. Moreover, it achieves promising performance on the textual dataset DROP without derivations.
[ { "version": "v1", "created": "Sat, 15 Oct 2022 10:17:52 GMT" } ]
2022-10-18T00:00:00
[ [ "Zhou", "Yongwei", "" ], [ "Bao", "Junwei", "" ], [ "Duan", "Chaoqun", "" ], [ "Wu", "Youzheng", "" ], [ "He", "Xiaodong", "" ], [ "Zhao", "Tiejun", "" ] ]
new_dataset
0.996141
2210.08274
Xixi Wu
Xixi Wu, Yun Xiong, Yao Zhang, Yizhu Jiao, Caihua Shan, Yiheng Sun, Yangyong Zhu, and Philip S. Yu
CLARE: A Semi-supervised Community Detection Algorithm
Accepted by KDD'2022
null
10.1145/3534678.3539370
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Community detection refers to the task of discovering closely related subgraphs to understand the networks. However, traditional community detection algorithms fail to pinpoint a particular kind of community. This limits its applicability in real-world networks, e.g., distinguishing fraud groups from normal ones in transaction networks. Recently, semi-supervised community detection emerges as a solution. It aims to seek other similar communities in the network with few labeled communities as training data. Existing works can be regarded as seed-based: locate seed nodes and then develop communities around seeds. However, these methods are quite sensitive to the quality of selected seeds since communities generated around a mis-detected seed may be irrelevant. Besides, they have individual issues, e.g., inflexibility and high computational overhead. To address these issues, we propose CLARE, which consists of two key components, Community Locator and Community Rewriter. Our idea is that we can locate potential communities and then refine them. Therefore, the community locator is proposed for quickly locating potential communities by seeking subgraphs that are similar to training ones in the network. To further adjust these located communities, we devise the community rewriter. Enhanced by deep reinforcement learning, it suggests intelligent decisions, such as adding or dropping nodes, to refine community structures flexibly. Extensive experiments verify both the effectiveness and efficiency of our work compared with prior state-of-the-art approaches on multiple real-world datasets.
[ { "version": "v1", "created": "Sat, 15 Oct 2022 12:37:46 GMT" } ]
2022-10-18T00:00:00
[ [ "Wu", "Xixi", "" ], [ "Xiong", "Yun", "" ], [ "Zhang", "Yao", "" ], [ "Jiao", "Yizhu", "" ], [ "Shan", "Caihua", "" ], [ "Sun", "Yiheng", "" ], [ "Zhu", "Yangyong", "" ], [ "Yu", "Philip S.", "" ] ]
new_dataset
0.993162
2210.08281
Felix Klement
Felix Klement, Henrich C. P\"ohls, Stefan Katzenbeisser
Man-in-the-OBD: A modular, protocol agnostic firewall for automotive dongles to enhance privacy and security
22 pages
null
null
null
cs.CR cs.NI
http://creativecommons.org/licenses/by/4.0/
Third-party dongles for cars, e.g. from insurance companies, can extract sensitive data and even send commands to the car via the standardized OBD-II interface. Due to the lack of message authentication mechanisms, this leads to major security vulnerabilities, for example regarding the connection of malicious devices. Therefore, we apply a modular, protocol-independent firewall approach by placing a man-in-the-middle between the third-party dongle and the car's OBD-II interface. With this privileged network position, we demonstrate how the data flow accessible through the OBD-II interface can be modified or restricted. We can modify the message contents or delay the arrival of messages by using our fine-granular configurable rewriting rules, specifically designed to work protocol-agnostically. We have implemented our modular approach for a configurable firewall at the OBD-II interface and successfully tested it against third-party dongles available on the market. Thus, our approach enables a security layer to enhance the automotive privacy and security of dongle users, which is of high relevance due to the missing message authentication at the level of the electronic control units.
[ { "version": "v1", "created": "Sat, 15 Oct 2022 13:07:23 GMT" } ]
2022-10-18T00:00:00
[ [ "Klement", "Felix", "" ], [ "Pöhls", "Henrich C.", "" ], [ "Katzenbeisser", "Stefan", "" ] ]
new_dataset
0.980705
2210.08284
Muhammad Al-Qurishi Dr
Muhammad AL-Qurishi and Sarah AlQaseemi and Riad Soussi
AraLegal-BERT: A pretrained language model for Arabic Legal text
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The effectiveness of the BERT model on multiple linguistic tasks has been well documented. On the other hand, its potential for narrow and specific domains, such as the legal domain, has not been fully explored. In this paper, we examine how BERT can be used in the Arabic legal domain and customize this language model for several downstream tasks, using several different domain-relevant training and testing datasets to train BERT from scratch. We introduce AraLegal-BERT, a bidirectional encoder Transformer-based model that has been thoroughly tested and carefully optimized with the goal of amplifying the impact of NLP-driven solutions concerning jurisprudence, legal documents, and legal practice. We fine-tuned AraLegal-BERT and evaluated it against three BERT variants for the Arabic language on three natural language understanding (NLU) tasks. The results show that the base version of AraLegal-BERT achieves better accuracy than the general and original BERT on legal text.
[ { "version": "v1", "created": "Sat, 15 Oct 2022 13:08:40 GMT" } ]
2022-10-18T00:00:00
[ [ "AL-Qurishi", "Muhammad", "" ], [ "AlQaseemi", "Sarah", "" ], [ "Soussi", "Riad", "" ] ]
new_dataset
0.964188
2210.08307
Panagiotis Kasnesis
Panagiotis Kasnesis, Christos Chatzigeorgiou, Dimitrios G. Kogias, Charalampos Z. Patrikakis, Harris V. Georgiou and Aspasia Tzeletopoulou
MoRSE: Deep Learning-based Arm Gesture Recognition for Search and Rescue Operations
Accepted for presentation in the IEEE 8th World Forum on Internet of Things
null
null
null
cs.LG cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
Efficient and quick remote communication in search and rescue operations can be life-saving for first responders. However, while operating in the field, means of communication based on text, image and audio are not suitable for several disaster scenarios. In this paper, we present a smartwatch-based application which utilizes a Deep Learning (DL) model to recognize a set of predefined arm gestures and map them into Morse code via vibrations, enabling remote communication amongst first responders. The model performance was evaluated by training it using 4,200 gestures performed by 7 subjects (cross-validation) wearing a smartwatch on their dominant arm. Our DL model relies on convolutional pooling and surpasses the performance of existing DL approaches and common machine learning classifiers, obtaining gesture recognition accuracy above 95%. We conclude by discussing the results and providing future directions.
[ { "version": "v1", "created": "Sat, 15 Oct 2022 14:23:54 GMT" } ]
2022-10-18T00:00:00
[ [ "Kasnesis", "Panagiotis", "" ], [ "Chatzigeorgiou", "Christos", "" ], [ "Kogias", "Dimitrios G.", "" ], [ "Patrikakis", "Charalampos Z.", "" ], [ "Georgiou", "Harris V.", "" ], [ "Tzeletopoulou", "Aspasia", "" ] ]
new_dataset
0.962328
2210.08353
Juncheng Liu
Juncheng Liu, Bryan Hooi, Kenji Kawaguchi, Xiaokui Xiao
MGNNI: Multiscale Graph Neural Networks with Implicit Layers
NeurIPS 2022
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Recently, implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs. In this paper, we introduce and justify two weaknesses of implicit GNNs: the constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their lack of ability to capture multiscale information on graphs at multiple resolutions. To show the limited effective range of previous implicit GNNs, we first provide a theoretical analysis and point out the intrinsic relationship between the effective range and the convergence of the iterative equations used in these models. To mitigate the mentioned weaknesses, we propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies. We conduct comprehensive experiments for both node classification and graph classification to show that MGNNI outperforms representative baselines and has a better ability for multiscale modeling and capturing of long-range dependencies.
[ { "version": "v1", "created": "Sat, 15 Oct 2022 18:18:55 GMT" } ]
2022-10-18T00:00:00
[ [ "Liu", "Juncheng", "" ], [ "Hooi", "Bryan", "" ], [ "Kawaguchi", "Kenji", "" ], [ "Xiao", "Xiaokui", "" ] ]
new_dataset
0.998951
2210.08394
Sizhe An
Sizhe An, Yin Li, Umit Ogras
mRI: Multi-modal 3D Human Pose Estimation Dataset using mmWave, RGB-D, and Inertial Sensors
Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022). Project page: https://sizhean.github.io/mri
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The ability to estimate 3D human body pose and movement, also known as human pose estimation (HPE), enables many applications for home-based health monitoring, such as remote rehabilitation training. Several possible solutions have emerged using sensors ranging from RGB cameras and depth sensors to millimeter-wave (mmWave) radars and wearable inertial sensors. Despite previous efforts on datasets and benchmarks for HPE, few datasets exploit multiple modalities or focus on home-based health monitoring. To bridge this gap, we present mRI, a multi-modal 3D human pose estimation dataset with mmWave, RGB-D, and inertial sensors. Our dataset consists of over 160k synchronized frames from 20 subjects performing rehabilitation exercises and supports benchmarks for HPE and action detection. We perform extensive experiments using our dataset and delineate the strengths of each modality. We hope that the release of mRI can catalyze research in pose estimation, multi-modal learning, and action understanding, and, more importantly, facilitate applications of home-based health monitoring.
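A hedged sketch of nearest-timestamp alignment, the kind of synchronization a multi-sensor dataset like this requires, follows; the sampling rates and stream names are assumptions, not the released mRI layout.

```python
# Hypothetical sketch: aligning multi-rate sensor streams by nearest timestamp.
import numpy as np

def align_to_reference(ref_ts, other_ts):
    """For each reference timestamp, index of the nearest sample in another stream."""
    idx = np.searchsorted(other_ts, ref_ts)
    idx = np.clip(idx, 1, len(other_ts) - 1)
    left, right = other_ts[idx - 1], other_ts[idx]
    return np.where(ref_ts - left < right - ref_ts, idx - 1, idx)

rgb_ts    = np.arange(0.0, 10.0, 1 / 30)   # 30 FPS camera (assumed rates)
mmwave_ts = np.arange(0.0, 10.0, 1 / 20)   # 20 Hz radar
imu_ts    = np.arange(0.0, 10.0, 1 / 100)  # 100 Hz inertial sensor

radar_idx = align_to_reference(rgb_ts, mmwave_ts)
imu_idx   = align_to_reference(rgb_ts, imu_ts)
```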
[ { "version": "v1", "created": "Sat, 15 Oct 2022 23:08:44 GMT" } ]
2022-10-18T00:00:00
[ [ "An", "Sizhe", "" ], [ "Li", "Yin", "" ], [ "Ogras", "Umit", "" ] ]
new_dataset
0.999817
2210.08402
Jenia Jitsev
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk and Jenia Jitsev
LAION-5B: An open large-scale dataset for training next generation image-text models
36th Conference on Neural Information Processing Systems (NeurIPS 2022), Track on Datasets and Benchmarks. OpenReview: https://openreview.net/forum?id=M3Y74vmsMcY
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Groundbreaking language-vision architectures like CLIP and DALL-E proved the utility of training on large amounts of noisy image-text data, without relying on the expensive, accurate labels used in standard unimodal supervised vision learning. The resulting models showed capabilities of strong text-guided image generation and transfer to downstream tasks, while performing remarkably well at zero-shot classification with noteworthy out-of-distribution robustness. Since then, large-scale language-vision models like ALIGN, BASIC, GLIDE, Flamingo, and Imagen have made further improvements. Studying the training and capabilities of such models requires datasets containing billions of image-text pairs. Until now, no datasets of this size have been made openly available for the broader research community. To address this problem and democratize research on large-scale multi-modal models, we present LAION-5B, a dataset consisting of 5.85 billion CLIP-filtered image-text pairs, of which 2.32B are in English. We show successful replication and fine-tuning of foundational models like CLIP, GLIDE, and Stable Diffusion using the dataset, and discuss further experiments enabled by an openly available dataset of this scale. Additionally, we provide several nearest-neighbor indices, an improved web interface for dataset exploration and subset generation, and scores for watermark, NSFW, and toxic content detection. Announcement page: https://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/
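A hedged sketch of CLIP-based pair filtering, the mechanism implied by "CLIP-filtered", is shown below; the open_clip checkpoint and similarity threshold are illustrative assumptions, not the exact LAION pipeline.

```python
# Hedged sketch: keep an image-text pair only if the cosine similarity of its
# CLIP embeddings exceeds a threshold. Checkpoint and threshold are assumptions.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def keep_pair(image_path, caption, threshold=0.28):
    image = preprocess(Image.open(image_path)).unsqueeze(0)
    text = tokenizer([caption])
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(text)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
        similarity = (img_emb @ txt_emb.T).item()
    return similarity >= threshold
```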
[ { "version": "v1", "created": "Sun, 16 Oct 2022 00:08:18 GMT" } ]
2022-10-18T00:00:00
[ [ "Schuhmann", "Christoph", "" ], [ "Beaumont", "Romain", "" ], [ "Vencu", "Richard", "" ], [ "Gordon", "Cade", "" ], [ "Wightman", "Ross", "" ], [ "Cherti", "Mehdi", "" ], [ "Coombes", "Theo", "" ], [ "Katta", "Aarush", "" ], [ "Mullis", "Clayton", "" ], [ "Wortsman", "Mitchell", "" ], [ "Schramowski", "Patrick", "" ], [ "Kundurthy", "Srivatsa", "" ], [ "Crowson", "Katherine", "" ], [ "Schmidt", "Ludwig", "" ], [ "Kaczmarczyk", "Robert", "" ], [ "Jitsev", "Jenia", "" ] ]
new_dataset
0.999579
2210.08414
Alan Wagner
Alan R. Wagner, Colin Holbrook, Daniel Holman, Brett Sheeran, Vidullan Surendran, Jared Armagost, Savanna Spazak, Yinxuan Yin
Using Virtual Reality to Simulate Human-Robot Emergency Evacuation Scenarios
Accepted at AAAI Fall Symposium AI-HRI Workshop
null
null
AIHRI/2022/8997
cs.RO cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
This paper describes our recent effort to use virtual reality to simulate threatening emergency evacuation scenarios in which a robot guides a person to an exit. Our prior work has demonstrated that people will follow a robot's guidance, even when the robot is faulty, during an emergency evacuation. Yet, because physical in-person emergency evacuation experiments are difficult and costly to conduct and because we would like to evaluate many different factors, we are motivated to develop a system that immerses people in the simulation environment to encourage genuine subject reactions. We are working to complete experiments verifying the validity of our approach.
[ { "version": "v1", "created": "Sun, 16 Oct 2022 02:29:30 GMT" } ]
2022-10-18T00:00:00
[ [ "Wagner", "Alan R.", "" ], [ "Holbrook", "Colin", "" ], [ "Holman", "Daniel", "" ], [ "Sheeran", "Brett", "" ], [ "Surendran", "Vidullan", "" ], [ "Armagost", "Jared", "" ], [ "Spazak", "Savanna", "" ], [ "Yin", "Yinxuan", "" ] ]
new_dataset
0.978799
2210.08455
Luiza Labazanova Miss
Luiza Labazanova, Shuang Peng, Liuming Qiu, Hoi-Yin Lee, Thrishantha Nanayakkara and David Navarro-Alarcon
Self-Reconfigurable Soft-Rigid Mobile Agent with Variable Stiffness and Adaptive Morphology
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a novel design of a hybrid mobile robot with controllable stiffness and deformable shape. Compared to conventional mobile agents, our system can switch between rigid and compliant phases by solidifying or melting Field's metal in its structure and, thus, alter its shape through the motion of its active components. In the soft state, the robot's main body can bend into circular arcs, which enables it to conform to surrounding curved objects. This variable geometry of the robot creates new motion modes which cannot be described by standard (i.e., fixed geometry) models. To this end, we develop a unified mathematical model that captures the differential kinematics of both rigid and soft states. An optimised control strategy is further proposed to select the most appropriate phase states and motion modes needed to reach a target pose-shape configuration. The performance of our new method is validated with numerical simulations and experiments conducted on a prototype system. The simulation source code is available in the GitHub repository: https://github.com/Louashka/2sr-agent-simulation.git
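A minimal sketch of constant-curvature planar kinematics, the geometry behind a body segment that bends into a circular arc, follows; it illustrates the soft-state motion only and is not the authors' unified rigid/soft model.

```python
# Illustrative sketch only: endpoint pose of a segment of given length bent
# into a circular arc of curvature kappa (constant-curvature assumption).
import numpy as np

def arc_endpoint(length, kappa):
    """Planar pose (x, y, heading) of an arc of given length and curvature."""
    if abs(kappa) < 1e-9:                 # rigid (straight) limit
        return np.array([length, 0.0, 0.0])
    theta = kappa * length                # total bending angle
    x = np.sin(theta) / kappa
    y = (1.0 - np.cos(theta)) / kappa
    return np.array([x, y, theta])

print(arc_endpoint(0.2, 0.0))             # straight segment
print(arc_endpoint(0.2, np.pi / 0.4))     # same segment bent into a half circle
```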
[ { "version": "v1", "created": "Sun, 16 Oct 2022 06:07:50 GMT" } ]
2022-10-18T00:00:00
[ [ "Labazanova", "Luiza", "" ], [ "Peng", "Shuang", "" ], [ "Qiu", "Liuming", "" ], [ "Lee", "Hoi-Yin", "" ], [ "Nanayakkara", "Thrishantha", "" ], [ "Navarro-Alarcon", "David", "" ] ]
new_dataset
0.999352
2210.08463
Xiaoqiang Wang
Xiaoqiang Wang, Jiaojiao Wang, Chengju Li, Yansheng Wu
Two classes of narrow-sense BCH codes and their duals
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
BCH codes and their dual codes are two special subclasses of cyclic codes and are among the best linear codes in many cases. Much progress has been made in the study of BCH codes, but little is known about the minimum distances of the duals of BCH codes. Recently, the concept of a dually-BCH code was introduced in \cite{GDL21} to investigate the duals of BCH codes and the lower bounds on their minimum distances. For a prime power $q$ and an integer $m \ge 4$, let $n=\frac{q^m-1}{q+1}$ ($m$ even) or $n=\frac{q^m-1}{q-1}$ ($q>2$). In this paper, sufficient and necessary conditions in terms of the designed distance are given to ensure that the narrow-sense BCH codes of length $n$ are dually-BCH codes, which extends the results in \cite{GDL21}. Lower bounds on the minimum distances of their dual codes are developed for $n=\frac{q^m-1}{q+1}$ ($m$ even). As byproducts, we determine the largest coset leader $\delta_1$ modulo $n$ for both types of length, which proves a conjecture in \cite{WLP19} and partially solves an open problem in \cite{Li2017}. We also investigate the parameters of the narrow-sense BCH codes of length $n$ with designed distance $\delta_1$. The BCH codes presented in this paper have good parameters in general.
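As a hedged illustration of the objects discussed above, the following sketch computes the $q$-cyclotomic coset leaders modulo $n$ and their maximum, the largest coset leader $\delta_1$; the parameter choices are illustrative only.

```python
# Worked sketch: q-cyclotomic coset leaders modulo n. A coset leader is the
# smallest element of its coset {s, sq, sq^2, ...} mod n; delta_1 is the largest.
def coset_leaders(q, n):
    """Return the list of q-cyclotomic coset leaders modulo n."""
    seen, leaders = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = set(), s
        while x not in coset:
            coset.add(x)
            x = (x * q) % n
        seen |= coset
        leaders.append(min(coset))
    return leaders

q, m = 3, 4
n = (q**m - 1) // (q + 1)          # n = 20 for q = 3, m = 4 (m even)
print(max(coset_leaders(q, n)))    # the largest coset leader delta_1 modulo n
```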
[ { "version": "v1", "created": "Sun, 16 Oct 2022 06:41:57 GMT" } ]
2022-10-18T00:00:00
[ [ "Wang", "Xiaoqiang", "" ], [ "Wang", "Jiaojiao", "" ], [ "Li", "Chengju", "" ], [ "Wu", "Yansheng", "" ] ]
new_dataset
0.992738
2210.08472
Yuan-Gen Wang
Chao Zhou, Yuan-Gen Wang, Guopu Zhu
Object-Attentional Untargeted Adversarial Attack
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks are facing severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in adversarial examples. Compared with the smooth regions of an image, the object region generally has more edges and a more complex texture, so small perturbations in it are more imperceptible. On the other hand, the object region is undoubtedly the decisive part of an image for classification tasks. Motivated by these two facts, we propose an object-attentional adversarial attack method for untargeted attacks. Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection (SOD) region from HVPNet. Furthermore, we design an activation strategy to avoid the reaction caused by incomplete SOD. Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA). To verify the proposed method, we create a unique dataset by extracting all the images containing the objects defined by COCO from ImageNet-1K, named COCO-Reduced-ImageNet in this paper. Experimental results on ImageNet-1K and COCO-Reduced-ImageNet show that, under various system settings, our method yields adversarial examples with better perceptual quality while saving up to 24.16\% of the query budget compared to state-of-the-art approaches, including SimBA.
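A hedged sketch of the core idea follows: SimBA-style coordinate queries restricted to an object mask (in the paper, the mask comes from intersecting YOLOv4 and HVPNet regions); the model interface, step size, and query budget here are placeholders.

```python
# Hedged sketch: an untargeted SimBA-style attack confined to mask == 1 pixels.
# `model` is assumed to map a (1, C, H, W) image to class logits; `mask` has
# the same shape as `x`. Not the authors' exact implementation.
import torch

def masked_simba(model, x, label, mask, epsilon=0.2, max_queries=1000):
    coords = mask.flatten().nonzero().squeeze(1)     # attackable coordinates
    perm = coords[torch.randperm(len(coords))]       # random query order
    prob = torch.softmax(model(x.unsqueeze(0)), dim=1)[0, label]
    for q in perm[:max_queries]:
        for sign in (epsilon, -epsilon):
            x_adv = x.clone()
            x_adv.view(-1)[q] = (x_adv.view(-1)[q] + sign).clamp(0, 1)
            new_prob = torch.softmax(model(x_adv.unsqueeze(0)), dim=1)[0, label]
            if new_prob < prob:        # keep steps that lower true-class confidence
                x, prob = x_adv, new_prob
                break
    return x
```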
[ { "version": "v1", "created": "Sun, 16 Oct 2022 07:45:13 GMT" } ]
2022-10-18T00:00:00
[ [ "Zhou", "Chao", "" ], [ "Wang", "Yuan-Gen", "" ], [ "Zhu", "Guopu", "" ] ]
new_dataset
0.987098