Dataset schema (column name, type, and observed range):

  id              stringlengths  (9 to 10)
  submitter       stringlengths  (2 to 52)
  authors         stringlengths  (4 to 6.51k)
  title           stringlengths  (4 to 246)
  comments        stringlengths  (1 to 523)
  journal-ref     stringlengths  (4 to 345)
  doi             stringlengths  (11 to 120)
  report-no       stringlengths  (2 to 243)
  categories      stringlengths  (5 to 98)
  license         stringclasses  (9 values)
  abstract        stringlengths  (33 to 3.33k)
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      stringclasses  (1 value)
  probability     float64        (0.95 to 1)
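The records below follow this schema, one field per line per paper. As a minimal sketch of how such records could be loaded and inspected (the file name and JSON-lines layout are assumptions for illustration, not part of this dump):

```python
import json

# Hypothetical export of the dataset: one JSON object per line,
# with the fields listed in the schema above.
with open("arxiv_predictions.jsonl") as f:
    for line in f:
        record = json.loads(line)
        # Each record pairs arXiv metadata with a model prediction.
        print(record["id"], "-", record["title"])
        print("  categories: ", record["categories"])
        print("  prediction: ", record["prediction"],
              " probability:", record["probability"])
        break  # inspect only the first record
```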
2210.00008
Yash Jakhotiya
Yash Jakhotiya, Heramb Patil, Jugal Rawlani, Dr. Sunil B. Mane
Adversarial Attacks on Transformers-Based Malware Detectors
Accepted to the 2022 NeurIPS ML Safety Workshop. Code available at https://github.com/yashjakhotiya/Adversarial-Attacks-On-Transformers
null
null
null
cs.CR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Signature-based malware detectors have proven to be insufficient, as even a small change in malignant executable code can bypass these detectors. Many machine learning-based models have been proposed to efficiently detect a wide variety of malware. Many of these models are found to be susceptible to adversarial attacks - attacks that work by generating intentionally designed inputs that can force these models to misclassify. Our work aims to explore vulnerabilities of current state-of-the-art malware detectors to adversarial attacks. We train a Transformers-based malware detector, carry out adversarial attacks resulting in a misclassification rate of 23.9%, and propose defenses that cut this misclassification rate in half. An implementation of our work can be found at https://github.com/yashjakhotiya/Adversarial-Attacks-On-Transformers.
[ { "version": "v1", "created": "Sat, 1 Oct 2022 22:23:03 GMT" }, { "version": "v2", "created": "Sat, 5 Nov 2022 17:27:59 GMT" } ]
2022-11-08T00:00:00
[ [ "Jakhotiya", "Yash", "" ], [ "Patil", "Heramb", "" ], [ "Rawlani", "Jugal", "" ], [ "Mane", "Dr. Sunil B.", "" ] ]
new_dataset
0.966931
2210.04573
Navid Rekabsaz
Selim Fekih, Nicol\`o Tamagnone, Benjamin Minixhofer, Ranjan Shrestha, Ximena Contla, Ewan Oglethorpe, Navid Rekabsaz
HumSet: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crisis Response
Published at Findings of EMNLP 2022
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Timely and effective response to humanitarian crises requires quick and accurate analysis of large amounts of text data - a process that can highly benefit from expert-assisted NLP systems trained on validated and annotated data in the humanitarian response domain. To enable the creation of such NLP systems, we introduce and release HumSet, a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. The dataset provides documents in three languages (English, French, Spanish) and covers a variety of humanitarian crises from 2018 to 2021 across the globe. For each document, HumSet provides selected snippets (entries) as well as classes assigned to each entry, annotated using common humanitarian information analysis frameworks. HumSet also provides novel and challenging entry extraction and multi-label entry classification tasks. In this paper, we take a first step towards approaching these tasks and conduct a set of experiments on Pre-trained Language Models (PLMs) to establish strong baselines for future research in this domain. The dataset is available at https://blog.thedeep.io/humset/.
[ { "version": "v1", "created": "Mon, 10 Oct 2022 11:28:07 GMT" }, { "version": "v2", "created": "Fri, 21 Oct 2022 12:10:49 GMT" }, { "version": "v3", "created": "Sun, 6 Nov 2022 10:37:03 GMT" } ]
2022-11-08T00:00:00
[ [ "Fekih", "Selim", "" ], [ "Tamagnone", "Nicolò", "" ], [ "Minixhofer", "Benjamin", "" ], [ "Shrestha", "Ranjan", "" ], [ "Contla", "Ximena", "" ], [ "Oglethorpe", "Ewan", "" ], [ "Rekabsaz", "Navid", "" ] ]
new_dataset
0.999828
2210.05050
Omar Costilla Reyes
Jennifer J. Sun, Megan Tjandrasuwita, Atharva Sehgal, Armando Solar-Lezama, Swarat Chaudhuri, Yisong Yue, Omar Costilla-Reyes
Neurosymbolic Programming for Science
Neural Information Processing Systems 2022 - AI for science workshop
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Neurosymbolic Programming (NP) techniques have the potential to accelerate scientific discovery. These models combine neural and symbolic components to learn complex patterns and representations from data, using high-level concepts or known constraints. NP techniques can interface with symbolic domain knowledge from scientists, such as prior knowledge and experimental context, to produce interpretable outputs. We identify opportunities and challenges in connecting current NP models with scientific workflows, with real-world examples from behavior analysis in science, with the aim of enabling the broad use of NP in workflows across the natural and social sciences.
[ { "version": "v1", "created": "Mon, 10 Oct 2022 23:46:41 GMT" }, { "version": "v2", "created": "Mon, 7 Nov 2022 15:21:32 GMT" } ]
2022-11-08T00:00:00
[ [ "Sun", "Jennifer J.", "" ], [ "Tjandrasuwita", "Megan", "" ], [ "Sehgal", "Atharva", "" ], [ "Solar-Lezama", "Armando", "" ], [ "Chaudhuri", "Swarat", "" ], [ "Yue", "Yisong", "" ], [ "Costilla-Reyes", "Omar", "" ] ]
new_dataset
0.982773
2210.15834
Kunhong Liu Dr
Jia-Xin Ye, Xin-Cheng Wen, Xuan-Ze Wang, Yong Xu, Yan Luo, Chang-Li Wu, Li-Yan Chen, Kun-Hong Liu
GM-TCNet: Gated Multi-scale Temporal Convolutional Network using Emotion Causality for Speech Emotion Recognition
The source code is available at: https://github.com/Jiaxin-Ye/GM-TCNet
Speech Communication, 145, November 2022, 21-35
10.1016/j.specom.2022.07.005
null
cs.SD cs.AI cs.HC eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
In human-computer interaction, Speech Emotion Recognition (SER) plays an essential role in understanding the user's intent and improving the interactive experience. Similar sentimental speech may carry diverse speaker characteristics while sharing common antecedents and consequences, so an essential challenge for SER is how to produce robust and discriminative representations through the causality between speech emotions. In this paper, we propose a Gated Multi-scale Temporal Convolutional Network (GM-TCNet) to construct a novel emotional causality representation learning component with a multi-scale receptive field. GM-TCNet deploys this component to capture the dynamics of emotion across the time domain, constructed with dilated causal convolution layers and a gating mechanism. Besides, it utilizes skip connections fusing high-level features from different gated convolution blocks to capture abundant and subtle emotion changes in human speech. GM-TCNet first uses a single type of feature, mel-frequency cepstral coefficients, as input and passes it through the gated temporal convolutional module to generate high-level features. Finally, the features are fed to the emotion classifier to accomplish the SER task. The experimental results show that our model maintains the highest performance in most cases compared to state-of-the-art techniques.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 02:00:40 GMT" } ]
2022-11-08T00:00:00
[ [ "Ye", "Jia-Xin", "" ], [ "Wen", "Xin-Cheng", "" ], [ "Wang", "Xuan-Ze", "" ], [ "Xu", "Yong", "" ], [ "Luo", "Yan", "" ], [ "Wu", "Chang-Li", "" ], [ "Chen", "Li-Yan", "" ], [ "Liu", "Kun-Hong", "" ] ]
new_dataset
0.998945
2210.17146
Xunping Jiang
Ling Sun, Guiqiong Liu, Xunping Jiang, Junrui Liu, Xu Wang, Han Yang, Shiping Yang
LAD-RCNN: A Powerful Tool for Livestock Face Detection and Normalization
8 figures, 5 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the demand for standardized large-scale livestock farming and the development of artificial intelligence technology, much research on animal face recognition has been carried out on pigs, cattle, sheep and other livestock. Face recognition consists of three sub-tasks: face detection, face normalization and face identification. Most animal face recognition studies focus on face detection and face identification. Animals are often uncooperative when being photographed, so collected animal face images are often oriented in arbitrary directions. The use of non-standard images may significantly reduce the performance of a face recognition system. However, there is no study on normalizing animal face images with arbitrary orientations. In this study, we developed a light-weight angle detection and region-based convolutional network (LAD-RCNN), containing a new rotation angle coding method, that detects the rotation angle and the location of the animal face in one stage. LAD-RCNN runs at 72.74 FPS (including all steps) on a single GeForce RTX 2080 Ti GPU. LAD-RCNN has been evaluated on multiple datasets, including a goat dataset and goat infrared images. Evaluation results show that the AP of face detection was more than 95% and the deviation between the detected rotation angle and the ground-truth rotation angle was less than 0.036 (i.e. 6.48{\deg}) on all test datasets. This shows that LAD-RCNN has excellent performance on livestock face detection and direction estimation, and is therefore very suitable for livestock face detection and normalization. Code is available at https://github.com/SheepBreedingLab-HZAU/LAD-RCNN/
[ { "version": "v1", "created": "Mon, 31 Oct 2022 08:54:21 GMT" }, { "version": "v2", "created": "Sat, 5 Nov 2022 09:11:13 GMT" } ]
2022-11-08T00:00:00
[ [ "Sun", "Ling", "" ], [ "Liu", "Guiqiong", "" ], [ "Jiang", "Xunping", "" ], [ "Liu", "Junrui", "" ], [ "Wang", "Xu", "" ], [ "Yang", "Han", "" ], [ "Yang", "Shiping", "" ] ]
new_dataset
0.994319
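A side note on the numbers in the abstract above: equating a deviation of 0.036 with 6.48{\deg} implies the rotation angle is encoded as a fraction of 180 degrees. This is an inference from the reported figures, not a detail stated in the abstract; the arithmetic checks out:

```python
# 0.036 of a half turn (180 degrees) is exactly the 6.48 degrees quoted.
normalized_deviation = 0.036
print(normalized_deviation * 180)  # 6.48
```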
2211.02695
Hadi Salman
Hadi Salman, Caleb Parks, Shi Yin Hong, Justin Zhan
WaveNets: Wavelet Channel Attention Networks
IEEE BigData2022 conference
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Channel attention reigns supreme as an effective technique in the field of computer vision. However, the channel attention proposed by SENet suffers from information loss in feature learning caused by the use of Global Average Pooling (GAP) to represent channels as scalars. Thus, designing effective channel attention mechanisms requires finding a solution to enhance feature preservation when modeling channel inter-dependencies. In this work, we utilize wavelet-transform compression as a solution to the channel representation problem. We first test the wavelet transform as an auto-encoder model equipped with a conventional channel attention module. Next, we test the wavelet transform as a standalone channel compression method. We prove that global average pooling is equivalent to the recursive approximate Haar wavelet transform. With this proof, we generalize channel attention using wavelet compression and name it WaveNet. Our method can be embedded within existing channel attention methods with a couple of lines of code. We test the proposed method on the ImageNet dataset for the image classification task. Our method outperforms the baseline SENet and achieves state-of-the-art results. Our code implementation is publicly available at https://github.com/hady1011/WaveNet-C.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 18:26:47 GMT" } ]
2022-11-08T00:00:00
[ [ "Salman", "Hadi", "" ], [ "Parks", "Caleb", "" ], [ "Hong", "Shi Yin", "" ], [ "Zhan", "Justin", "" ] ]
new_dataset
0.999143
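The central claim of the abstract above - that global average pooling equals the recursive approximate Haar wavelet transform - is easy to check in one dimension: repeatedly averaging adjacent pairs of a length-2^k signal (the approximation path of a Haar transform, with the 1/sqrt(2) normalization folded into a plain average) converges to the global mean. A minimal sketch, assuming a 1-D signal stands in for one channel's spatial activations:

```python
import numpy as np

x = np.random.rand(8)  # one channel's activations, length 2**3

approx = x.copy()
while approx.size > 1:
    # Haar approximation coefficients are (a + b) / sqrt(2); using the
    # plain average (a + b) / 2 absorbs the normalization at each level.
    approx = (approx[0::2] + approx[1::2]) / 2.0

print(np.allclose(approx[0], x.mean()))  # True: recursive averaging == GAP
```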
2211.02903
Yongmao Zhang
Yongmao Zhang, Heyang Xue, Hanzhao Li, Lei Xie, Tingwei Guo, Ruixiong Zhang, Caixia Gong
VISinger 2: High-Fidelity End-to-End Singing Voice Synthesis Enhanced by Digital Signal Processing Synthesizer
Submitted to ICASSP 2023
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The end-to-end singing voice synthesis (SVS) model VISinger achieves better performance than the typical two-stage model with fewer parameters. However, VISinger has several problems: the text-to-phase problem, where the end-to-end model learns a meaningless text-to-phase mapping; the glitch problem, where the harmonic components corresponding to the periodic signal of voiced segments undergo sudden changes with audible artefacts; and the low sampling rate, where 24 kHz does not meet the needs of high-fidelity generation at the full-band rate (44.1 kHz or higher). In this paper, we propose VISinger 2, which addresses these issues by integrating digital signal processing (DSP) methods with VISinger. Specifically, inspired by recent advances in differentiable digital signal processing (DDSP), we incorporate a DSP synthesizer into the decoder. The DSP synthesizer consists of a harmonic synthesizer and a noise synthesizer to generate periodic and aperiodic signals, respectively, from the latent representation z in VISinger. It supervises the posterior encoder to extract a latent representation free of phase information and keeps the prior encoder from modelling the text-to-phase mapping. To avoid glitch artefacts, HiFi-GAN is modified to accept the waveforms generated by the DSP synthesizer as a condition for producing the singing voice. Moreover, with the improved waveform decoder, VISinger 2 manages to generate 44.1 kHz singing audio with richer expression and better quality. Experiments on the OpenCpop corpus show that VISinger 2 outperforms VISinger, CpopSing and RefineSinger in both subjective and objective metrics.
[ { "version": "v1", "created": "Sat, 5 Nov 2022 13:35:00 GMT" } ]
2022-11-08T00:00:00
[ [ "Zhang", "Yongmao", "" ], [ "Xue", "Heyang", "" ], [ "Li", "Hanzhao", "" ], [ "Xie", "Lei", "" ], [ "Guo", "Tingwei", "" ], [ "Zhang", "Ruixiong", "" ], [ "Gong", "Caixia", "" ] ]
new_dataset
0.998617
2211.02926
Konrad Staniszewski
Konrad Staniszewski (University of Warsaw, IDEAS NCBR Sp. z o.o.)
Parity Games of Bounded Tree-Depth
This is the full version of the paper that has been accepted at CSL 2023 and is going to be published in Leibniz International Proceedings in Informatics (LIPIcs)
null
null
null
cs.CC
http://creativecommons.org/licenses/by/4.0/
The exact complexity of solving parity games is a major open problem. Several authors have searched for efficient algorithms over specific classes of graphs. In particular, Obdr\v{z}\'{a}lek showed that for graphs of bounded tree-width or clique-width the problem is in $\mathrm{P}$; this was later improved by Ganardi, who showed that it is even in $\mathrm{LOGCFL}$ (with an additional assumption for the clique-width case). Here we extend this line of research by showing that for graphs of bounded tree-depth the problem of solving parity games is in logspace-uniform $\mathrm{AC}^0$. We achieve this by first considering a parameter obtained from a modification of clique-width, which we call shallow clique-width. We subsequently provide a suitable reduction.
[ { "version": "v1", "created": "Sat, 5 Nov 2022 15:14:15 GMT" } ]
2022-11-08T00:00:00
[ [ "Staniszewski", "Konrad", "", "University of Warsaw, IDEAS NCBR Sp. z o.o." ] ]
new_dataset
0.964278
2211.02950
Ivan Habernal
Leonard Bongard, Lena Held, Ivan Habernal
The Legal Argument Reasoning Task in Civil Procedure
Camera ready, to appear at the Natural Legal Language Processing Workshop 2022 co-located with EMNLP
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
We present a new NLP task and dataset from the domain of U.S. civil procedure. Each instance of the dataset consists of a general introduction to the case, a particular question, and a possible solution argument, accompanied by a detailed analysis of why the argument applies in that case. Since the dataset is based on a book aimed at law students, we believe that it represents a truly complex task for benchmarking modern legal language models. Our baseline evaluation shows that fine-tuning a legal transformer provides some advantage over random baseline models, but our analysis reveals that the actual ability to infer legal arguments remains a challenging open research question.
[ { "version": "v1", "created": "Sat, 5 Nov 2022 17:41:00 GMT" } ]
2022-11-08T00:00:00
[ [ "Bongard", "Leonard", "" ], [ "Held", "Lena", "" ], [ "Habernal", "Ivan", "" ] ]
new_dataset
0.99903
2211.03014
Ramviyas Parasuraman
Michael Starks, Aryan Gupta, Sanjay Sarma Oruganti Venkata, Ramviyas Parasuraman
HeRoSwarm: Fully-Capable Miniature Swarm Robot Hardware Design With Open-Source ROS Support
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experiments using large numbers of miniature swarm robots are desirable to teach, study, and test multi-robot and swarm intelligence algorithms and their applications. To realize the full potential of a swarm robot, it should provide not only motion but also sensing, computing, communication, and power management modules, each with multiple options. Current swarm robot platforms developed for commercial and academic research purposes lack several of these critical attributes, focusing on only a few of them. Therefore, in this paper, we propose HeRoSwarm, a fully-capable swarm robot platform with open-source hardware and software support. The proposed robot hardware is a low-cost design with commercial off-the-shelf components that uniquely integrates multiple sensing, communication, and computing modalities with various power management capabilities into a tiny footprint. Moreover, our swarm robot, with odometry capability and Robot Operating System (ROS) support, is unique of its kind. This simple yet powerful swarm robot design has been extensively verified with different prototyping variants and multi-robot experimental demonstrations.
[ { "version": "v1", "created": "Sun, 6 Nov 2022 03:07:58 GMT" } ]
2022-11-08T00:00:00
[ [ "Starks", "Michael", "" ], [ "Gupta", "Aryan", "" ], [ "Venkata", "Sanjay Sarma Oruganti", "" ], [ "Parasuraman", "Ramviyas", "" ] ]
new_dataset
0.992402
2211.03250
Zhitong Ni
Zhitong Ni, J. Andrew Zhang, Kai Wu, and Ren Ping Liu
Uplink Sensing Using CSI Ratio in Perceptive Mobile Networks
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-sa/4.0/
Uplink sensing in perceptive mobile networks (PMNs), which uses uplink communication signals to sense the environment around a base station, faces the challenging issues of clock asynchronism and the requirement of a line-of-sight (LOS) path between transmitters and receivers. The channel state information (CSI) ratio has been applied to resolve these issues; however, current research on the CSI ratio is limited to Doppler estimation in a single dynamic path. This paper proposes an advanced parameter estimation scheme that can extract multiple dynamic parameters, including Doppler frequency, angle-of-arrival (AoA), and delay, in a communication uplink channel, and accomplishes the localization of multiple moving targets. Our scheme is based on a multi-element Taylor series of the CSI ratio that converts a nonlinear function of the sensing parameters to linear forms and enables the application of traditional sensing algorithms. Using the truncated Taylor series, we develop novel multiple-signal-classification grid-searching algorithms for estimating Doppler frequencies and AoAs, and use the least-squares method to obtain delays. Both experimental and simulation results are provided, demonstrating that our proposed scheme achieves good performance in sensing both single and multiple dynamic paths, without requiring the presence of a LOS path.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 00:54:12 GMT" } ]
2022-11-08T00:00:00
[ [ "Ni", "Zhitong", "" ], [ "Zhang", "J. Andrew", "" ], [ "Wu", "Kai", "" ], [ "Liu", "Ren Ping", "" ] ]
new_dataset
0.989447
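The device named in the abstract above is a truncated multi-element Taylor series that turns a nonlinear function of the sensing parameters into a linear form. As a generic illustration only (the notation is assumed; the paper's actual expansion of the CSI ratio is more involved), a first-order truncation around a reference point $\boldsymbol{\theta}^{0}$ reads:

```latex
r(\boldsymbol{\theta}) \approx r(\boldsymbol{\theta}^{0})
  + \sum_{i} \left.\frac{\partial r}{\partial \theta_i}\right|_{\boldsymbol{\theta}^{0}}
    \left(\theta_i - \theta_i^{0}\right),
\qquad
\boldsymbol{\theta} = \left(f_{\mathrm{D}},\, \phi_{\mathrm{AoA}},\, \tau\right)
```

Once truncated, the unknown offsets $\theta_i - \theta_i^{0}$ enter linearly, which is what lets classical estimators such as multiple-signal-classification (MUSIC) grid searches be applied.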
2211.03251
Olivia Hsu
Olivia Hsu, Alexander Rucker, Tian Zhao, Kunle Olukotun, and Fredrik Kjolstad
Stardust: Compiling Sparse Tensor Algebra to a Reconfigurable Dataflow Architecture
15 pages, 13 figures, 6 tables
null
null
null
cs.PL cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Stardust, a compiler that compiles sparse tensor algebra to reconfigurable dataflow architectures (RDAs). Stardust introduces new user-provided data representation and scheduling language constructs for mapping to resource-constrained accelerated architectures. Stardust uses the information provided by these constructs to determine on-chip memory placement and to lower to the Capstan RDA through a parallel-patterns rewrite system that targets the Spatial programming model. The Stardust compiler is implemented as a new compilation path inside the TACO open-source system. Using cycle-accurate simulation, we demonstrate that Stardust can generate more Capstan tensor operations than its authors had implemented and that it results in 138$\times$ better performance than generated CPU kernels and 41$\times$ better performance than generated GPU kernels.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 01:01:43 GMT" } ]
2022-11-08T00:00:00
[ [ "Hsu", "Olivia", "" ], [ "Rucker", "Alexander", "" ], [ "Zhao", "Tian", "" ], [ "Olukotun", "Kunle", "" ], [ "Kjolstad", "Fredrik", "" ] ]
new_dataset
0.964876
2211.03313
Hojin Seo
Hojin Seo, Yeoun-Jae Kim, Jaesoon Choi, Youngjin Moon
Quasi-Static Analysis on Transoral Surgical Tendon-Driven Articulated Robot Units
null
null
null
null
cs.RO physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wire actuation in tendon-driven continuum robots enables the transmission of force from a distance, but it is understood that tension control problems can arise when a pulley is used to actuate two cables in a push-pull mode. This paper analyzes the relationship between the angle of rotation, the pressure, and the variables of a single continuum unit in quasi-static equilibrium. The primary objective of the quasi-static analysis was to output the pressure and the angle of rotation, given the applied tensions. The static equilibrium condition was established, and the bisection method was applied to solve for the angle of rotation. The function for the bisection method accounted for pressure-induced forces, friction forces, and weight. {\theta} was 17.14{\deg} and p was 405.6 Pa when Tl and Ts were given the values of 1 N and 2 N, respectively. The results appear consistent with the preliminary design specification, calling for further simulations and experiments.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 05:29:12 GMT" } ]
2022-11-08T00:00:00
[ [ "Seo", "Hojin", "" ], [ "Kim", "Yeoun-Jae", "" ], [ "Choi", "Jaesoon", "" ], [ "Moon", "Youngjin", "" ] ]
new_dataset
0.965512
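The numerical method named in the abstract above is standard bisection on the static-equilibrium residual. A minimal sketch, assuming a toy residual in place of the paper's force balance (which includes pressure-induced forces, friction forces, and weight):

```python
import math

def bisect(f, lo, hi, tol=1e-6, max_iter=100):
    """Find a root of f in [lo, hi]; f(lo) and f(hi) must differ in sign."""
    f_lo = f(lo)
    if f_lo * f(hi) > 0:
        raise ValueError("f must change sign over [lo, hi]")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        f_mid = f(mid)
        if abs(f_mid) < tol or (hi - lo) < tol:
            return mid
        if f_lo * f_mid < 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# Toy stand-in for the equilibrium residual as a function of the rotation
# angle in degrees; the root lands near the 17-degree range reported above.
residual = lambda theta_deg: math.cos(math.radians(theta_deg)) - 0.955
print(bisect(residual, 0.0, 90.0))  # ~17.25 degrees
```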
2211.03371
SeungHeon Doh
Taesu Kim, SeungHeon Doh, Gyunpyo Lee, Hyungseok Jeon, Juhan Nam, Hyeon-Jeong Suk
Hi,KIA: A Speech Emotion Recognition Dataset for Wake-Up Words
Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2022
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
A wake-up word (WUW) is a short utterance used to activate a speech recognition system so that it receives the user's speech input. WUW utterances include not only the lexical information for waking up the system but also non-lexical information such as speaker identity or emotion. In particular, recognizing the user's emotional state may enrich voice communication. However, there are few datasets in which the emotional state of WUW utterances is labeled. In this paper, we introduce Hi, KIA, a new WUW dataset which consists of 488 Korean-accented emotional utterances collected from four male and four female speakers, each labeled with one of four emotional states: anger, happy, sad, or neutral. We present the step-by-step procedure used to build the dataset, covering scenario selection, post-processing, and human validation for label agreement. We also provide two classification models for WUW speech emotion recognition using the dataset: one based on traditional hand-crafted features and the other a transfer-learning approach using a pre-trained neural network. These classification models can serve as benchmarks for further research.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 08:57:16 GMT" } ]
2022-11-08T00:00:00
[ [ "Kim", "Taesu", "" ], [ "Doh", "SeungHeon", "" ], [ "Lee", "Gyunpyo", "" ], [ "Jeon", "Hyungseok", "" ], [ "Nam", "Juhan", "" ], [ "Suk", "Hyeon-Jeong", "" ] ]
new_dataset
0.999828
2211.03375
Haoshu Fang
Hao-Shu Fang, Jiefeng Li, Hongyang Tang, Chao Xu, Haoyi Zhu, Yuliang Xiu, Yong-Lu Li, Cewu Lu
AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time
Documents for AlphaPose, accepted to TPAMI
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation, including the face, body, hand and foot, is essential over conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in real time. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum-Suppression (P-NMS) for eliminating redundant human detections, and Pose Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to the Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve accuracy. Our method is able to localize whole-body keypoints accurately and track humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source codes and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 09:15:38 GMT" } ]
2022-11-08T00:00:00
[ [ "Fang", "Hao-Shu", "" ], [ "Li", "Jiefeng", "" ], [ "Tang", "Hongyang", "" ], [ "Xu", "Chao", "" ], [ "Zhu", "Haoyi", "" ], [ "Xiu", "Yuliang", "" ], [ "Li", "Yong-Lu", "" ], [ "Lu", "Cewu", "" ] ]
new_dataset
0.997644
2211.03402
Liang Peng
Liang Peng, Jun Li, Wenbo Shao, and Hong Wang
PeSOTIF: a Challenging Visual Dataset for Perception SOTIF Problems in Long-tail Traffic Scenarios
7 pages, 5 figures, 4 tables, submitted to 2023 ICRA
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Perception algorithms in autonomous driving systems confront great challenges in long-tail traffic scenarios, where the problems of Safety of the Intended Functionality (SOTIF) could be triggered by the algorithm performance insufficiencies and dynamic operational environment. However, such scenarios are not systematically included in current open-source datasets, and this paper fills the gap accordingly. Based on the analysis and enumeration of trigger conditions, a high-quality diverse dataset is released, including various long-tail traffic scenarios collected from multiple resources. Considering the development of probabilistic object detection (POD), this dataset marks trigger sources that may cause perception SOTIF problems in the scenarios as key objects. In addition, an evaluation protocol is suggested to verify the effectiveness of POD algorithms in identifying the key objects via uncertainty. The dataset never stops expanding, and the first batch of open-source data includes 1126 frames with an average of 2.27 key objects and 2.47 normal objects in each frame. To demonstrate how to use this dataset for SOTIF research, this paper further quantifies the perception SOTIF entropy to confirm whether a scenario is unknown and unsafe for a perception system. The experimental results show that the quantified entropy can effectively and efficiently reflect the failure of the perception algorithm.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 10:07:30 GMT" } ]
2022-11-08T00:00:00
[ [ "Peng", "Liang", "" ], [ "Li", "Jun", "" ], [ "Shao", "Wenbo", "" ], [ "Wang", "Hong", "" ] ]
new_dataset
0.999775
2211.03433
Marco Guerini
Helena Bonaldi, Sara Dellantonio, Serra Sinem Tekiroglu, Marco Guerini
Human-Machine Collaboration Approaches to Build a Dialogue Dataset for Hate Speech Countering
To appear in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (long paper)
null
null
null
cs.CL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fighting online hate speech is a challenge that is usually addressed using Natural Language Processing via automatic detection and removal of hate content. Besides this approach, counter narratives have emerged as an effective tool employed by NGOs to respond to online hate on social media platforms. For this reason, Natural Language Generation is currently being studied as a way to automate counter narrative writing. However, the existing resources necessary to train NLG models are limited to 2-turn interactions (a hate speech instance and a counter narrative as the response), while in real life interactions can consist of multiple turns. In this paper, we present a hybrid approach to dialogical data collection, which combines the intervention of human expert annotators on machine-generated dialogues obtained using 19 different configurations. The result of this work is DIALOCONAN, the first dataset comprising over 3000 fictitious multi-turn dialogues between a hater and an NGO operator, covering 6 targets of hate.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 10:37:13 GMT" } ]
2022-11-08T00:00:00
[ [ "Bonaldi", "Helena", "" ], [ "Dellantonio", "Sara", "" ], [ "Tekiroglu", "Serra Sinem", "" ], [ "Guerini", "Marco", "" ] ]
new_dataset
0.972568
2211.03442
Prathamesh Kalamkar
Prathamesh Kalamkar, Astha Agarwal, Aman Tiwari, Smita Gupta, Saurabh Karn, Vivek Raghavan
Named Entity Recognition in Indian court judgments
to be published in NLLP 2022 Workshop at EMNLP
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Identification of named entities in legal texts is an essential building block for developing other legal Artificial Intelligence applications. Named entities in legal texts are slightly different from and more fine-grained than commonly used named entities like Person, Organization, Location, etc. In this paper, we introduce a new corpus of 46545 annotated legal named entities mapped to 14 legal entity types. A baseline model for extracting legal named entities from judgment text is also developed.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 10:44:44 GMT" } ]
2022-11-08T00:00:00
[ [ "Kalamkar", "Prathamesh", "" ], [ "Agarwal", "Astha", "" ], [ "Tiwari", "Aman", "" ], [ "Gupta", "Smita", "" ], [ "Karn", "Saurabh", "" ], [ "Raghavan", "Vivek", "" ] ]
new_dataset
0.96745
2211.03471
Ricardo J. Rodr\'iguez
Ricardo J. Rodr\'iguez and Jos\'e Luis Salazar and Juli\'an Fern\'andez-Navajas
Sittin' On the Dock of the (WiFi) Bay: On the Frame Aggregation under IEEE 802.11 DCF
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-sa/4.0/
It is well known that frame aggregation in Internet communications improves transmission efficiency. However, it also introduces a delay that is inappropriate for some real-time communications, thus creating a trade-off between efficiency and delay. In this paper, we establish the conditions under which frame aggregation under the IEEE 802.11 DCF protocol is beneficial for the average delay. To do so, we first describe the transmission time in IEEE 802.11 in a stochastic framework and then calculate the optimal number of frames that, when aggregated, saves transmission time in the long term. Our findings, discussed with numerical experimentation, show that frame aggregation reduces transmission congestion and transmission delays.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 11:33:58 GMT" } ]
2022-11-08T00:00:00
[ [ "Rodríguez", "Ricardo J.", "" ], [ "Salazar", "José Luis", "" ], [ "Fernández-Navajas", "Julián", "" ] ]
new_dataset
0.998689
2211.03475
Michele Wigger
Sara Faour, Mustapha Hamad, Mireille Sarkiss, and Michele Wigger
Testing Against Independence with an Eavesdropper
submitted to ITW 2023
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
We study a distributed binary hypothesis testing (HT) problem with communication and security constraints, involving three parties: a remote sensor called Alice, a legitimate decision centre called Bob, and an eavesdropper called Eve, all having their own source observations. In this system, Alice conveys a rate-R description of her observation to Bob, and Bob performs a binary hypothesis test on the joint distribution underlying his and Alice's observations. The goal of Alice and Bob is to maximise the exponential decay of Bob's miss-detection (type-II error) probability under two constraints: Bob's false-alarm (type-I error) probability has to stay below a given threshold, and Eve's uncertainty (equivocation) about Alice's observations should stay above a given security threshold even when Eve learns Alice's message. For the special case of testing against independence, we characterise the largest possible type-II error exponent under the described type-I error probability and security constraints.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 11:39:05 GMT" } ]
2022-11-08T00:00:00
[ [ "Faour", "Sara", "" ], [ "Hamad", "Mustapha", "" ], [ "Sarkiss", "Mireille", "" ], [ "Wigger", "Michele", "" ] ]
new_dataset
0.978255
2211.03484
Gerhard Kurz
Gerhard Kurz and Sebastian A. Scherer and Peter Biber and David Fleer
When Geometry is not Enough: Using Reflector Markers in Lidar SLAM
Accepted at IROS 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lidar-based SLAM systems perform well in a wide range of circumstances by relying on the geometry of the environment. However, even mature and reliable approaches struggle when the environment contains structureless areas such as long hallways. To allow the use of lidar-based SLAM in such environments, we propose to add reflector markers in specific locations that would otherwise be difficult to handle. We present an algorithm to reliably detect these markers and two approaches to fuse the detected markers with geometry-based scan matching. The performance of the proposed methods is demonstrated on real-world datasets from several industrial environments.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 12:07:11 GMT" } ]
2022-11-08T00:00:00
[ [ "Kurz", "Gerhard", "" ], [ "Scherer", "Sebastian A.", "" ], [ "Biber", "Peter", "" ], [ "Fleer", "David", "" ] ]
new_dataset
0.986367
2211.03506
Himanshu Thapliyal
Jun-Cheng Chin, Tyler Cultice and Himanshu Thapliyal
CAN Bus: The Future of Additive Manufacturing (3D Printing)
6 pages
IEEE Consumer Electronics Magazine, 2022
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Additive Manufacturing (AM) is gaining renewed popularity and attention due to low-cost fabrication systems proliferating in the market. Current communication protocols used in AM limit the connection flexibility between the control board and peripherals; they are often complex in their wiring and thus restrict avenues of expansion. The Controller Area Network (CAN) bus is therefore an attractive pathway for inter-hardware connections due to its innate qualities. However, the combination of CAN and AM is not well explored and documented in the existing literature. This article aims to provide examples of CAN bus applications in AM.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 13:26:53 GMT" } ]
2022-11-08T00:00:00
[ [ "Chin", "Jun-Cheng", "" ], [ "Cultice", "Tyler", "" ], [ "Thapliyal", "Himanshu", "" ] ]
new_dataset
0.974458
2211.03589
Ruofan Wang
Juan Xu, Ruofan Wang, Yan Zhang, Hongmin Huang
A Reliable Multipath Routing Protocol Based on Link Stability
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Wireless NanoSensor Network (WNSN) is a new type of sensor network with broad application prospects. In view of the limited energy of nanonodes and unstable links in WNSNs, we propose a reliable multipath routing protocol based on link stability (RMRLS). RMRLS first selects the optimal path, the one that performs best under the link stability evaluation model, and then selects an alternative path via the routing similarity judgment model. RMRLS uses the two paths to cope with changes in the network topology. Simulations show that the RMRLS protocol has advantages in packet transmission success rate and average throughput, and can improve the stability and reliability of the network.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 14:28:03 GMT" } ]
2022-11-08T00:00:00
[ [ "Xu", "Juan", "" ], [ "Wang", "Ruofan", "" ], [ "Zhang", "Yan", "" ], [ "Huang", "Hongmin", "" ] ]
new_dataset
0.99905
2211.03612
Ming Liu
Ming Liu, Yaojia LV, Jingrun Zhang, Ruiji Fu, Bing Qin
BigCilin: An Automatic Chinese Open-domain Knowledge Graph with Fine-grained Hypernym-Hyponym Relations
5 pages, 3 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents BigCilin, the first Chinese open-domain knowledge graph with fine-grained hypernym-hyponym relations, which are extracted automatically from multiple sources for Chinese named entities. With its fine-grained hypernym-hyponym relations, BigCilin has a flexible semantic hierarchical structure. Since the hypernym-hyponym paths are automatically generated and one entity may have several senses, we provide a path disambiguation solution to map a hypernym-hyponym path of an entity to one of its senses, on the condition that the path and the sense express the same meaning. To provide convenient access to our BigCilin knowledge graph, we offer a web interface that works in two ways. One supports querying any Chinese named entity and browsing the extracted hypernym-hyponym paths surrounding the query entity. The other gives a top-down browsing view to illustrate the overall hierarchical structure of the BigCilin knowledge graph over some sampled entities.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 15:05:01 GMT" } ]
2022-11-08T00:00:00
[ [ "Liu", "Ming", "" ], [ "LV", "Yaojia", "" ], [ "Zhang", "Jingrun", "" ], [ "Fu", "Ruiji", "" ], [ "Qin", "Bing", "" ] ]
new_dataset
0.997188
2211.03615
Ali Abedi
Ali Abedi, Faranak Dayyani, Charlene Chu, Shehroz S. Khan
MAISON -- Multimodal AI-based Sensor platform for Older Individuals
null
null
null
null
cs.LG cs.AI cs.DC eess.SP
http://creativecommons.org/licenses/by/4.0/
The global population is aging, creating a need for the right tools to enable older adults' greater independence and ability to age at home, as well as to assist healthcare workers. It is feasible to achieve this objective by building predictive models that assist healthcare workers in monitoring and analyzing older adults' behavioral, functional, and psychological data. To develop such models, a large amount of multimodal sensor data is typically required. In this paper, we propose MAISON, a scalable cloud-based platform of commercially available smart devices capable of collecting desired multimodal sensor data from older adults and patients living in their own homes. The MAISON platform is novel due to its ability to collect a greater variety of data modalities than existing platforms, as well as its new features that result in seamless data collection and ease of use for older adults who may not be digitally literate. We demonstrated the feasibility of the MAISON platform with two older adults discharged home from a large rehabilitation center. The results indicate that the MAISON platform was able to collect and store sensor data in the cloud without functional glitches or performance degradation. This paper also discusses the challenges faced during the development of the platform and data collection in the homes of older adults. MAISON is a novel platform designed to collect multimodal data and facilitate the development of predictive models for detecting key health indicators, including social isolation, depression, and functional decline, and is feasible to use with older adults in the community.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 15:09:04 GMT" } ]
2022-11-08T00:00:00
[ [ "Abedi", "Ali", "" ], [ "Dayyani", "Faranak", "" ], [ "Chu", "Charlene", "" ], [ "Khan", "Shehroz S.", "" ] ]
new_dataset
0.96775
2211.03662
William Buchanan Prof
Fawad Ahmed, Muneeb Ur Rehman, Jawad Ahmad, Muhammad Shahbaz Khan, Wadii Boulila, Gautam Srivastava, Jerry Chun-Wei Lin, William J. Buchanan
A DNA Based Colour Image Encryption Scheme Using A Convolutional Autoencoder
null
(2022) ACM Trans. Multimedia Comput. Commun. Appl
10.1145/3570165
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
With the advancement of technology, digital images can easily be transmitted and stored over the Internet. Encryption is used to avoid illegal interception of digital images. Encrypting large-sized colour images in their original dimension generally results in low encryption/decryption speed and burdens the limited bandwidth of the transmission channel. To address these issues, a new encryption scheme for colour images employing a convolutional autoencoder, DNA and chaos is presented in this paper. The proposed scheme has two main modules: the dimensionality conversion module using the proposed convolutional autoencoder, and the encryption/decryption module using DNA and chaos. The dimension of the input colour image is first reduced from N $\times$ M $\times$ 3 to a P $\times$ Q gray-scale image using the encoder. Encryption and decryption are then performed in the reduced dimension space. The decrypted gray-scale image is upsampled to obtain the original colour image of dimension N $\times$ M $\times$ 3. The training and validation accuracy of the proposed autoencoder is 97% and 95%, respectively. Once the autoencoder is trained, it can be used to reduce and subsequently increase the dimension of any arbitrary input colour image. The efficacy of the designed autoencoder has been demonstrated by the successful reconstruction of the compressed image into the original colour image with negligible perceptual distortion. The second major contribution presented in this paper is an image encryption scheme using DNA along with multiple chaotic sequences and substitution boxes. The security of the proposed image encryption algorithm has been gauged using several evaluation parameters, such as the histogram of the cipher image, entropy, NPCR, UACI, key sensitivity, and contrast.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 16:19:31 GMT" } ]
2022-11-08T00:00:00
[ [ "Ahmed", "Fawad", "" ], [ "Rehman", "Muneeb Ur", "" ], [ "Ahmad", "Jawad", "" ], [ "Khan", "Muhammad Shahbaz", "" ], [ "Boulila", "Wadii", "" ], [ "Srivastava", "Gautam", "" ], [ "Lin", "Jerry Chun-Wei", "" ], [ "Buchanan", "William J.", "" ] ]
new_dataset
0.993674
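As a minimal sketch of the dimensionality-conversion module described above: a convolutional autoencoder that maps an N x M x 3 colour image to a smaller single-channel latent (the "P x Q gray-scale image") and reconstructs it. The layer sizes below are illustrative assumptions, not the paper's architecture:

```python
from tensorflow.keras import layers, models

inp = layers.Input(shape=(256, 256, 3))          # N x M x 3 colour input
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D(2)(x)                    # 128 x 128
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D(2)(x)                    # 64 x 64
latent = layers.Conv2D(1, 3, activation="sigmoid",
                       padding="same", name="latent")(x)  # P x Q x 1

x = layers.Conv2D(8, 3, activation="relu", padding="same")(latent)
x = layers.UpSampling2D(2)(x)                    # 128 x 128
x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)                    # 256 x 256
out = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(x)

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
# The DNA/chaos encryption and decryption would then operate on the
# 64 x 64 single-channel latent rather than the full-size colour image.
```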
2211.03688
Zixin Yang
Zixin Yang, Richard Simon, Cristian A. Linte
Learning Feature Descriptors for Pre- and Intra-operative Point Cloud Matching for Laparoscopic Liver Registration
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Purpose: In laparoscopic liver surgery (LLS), pre-operative information can be overlaid onto the intra-operative scene by registering a 3D pre-operative model to the intra-operative partial surface reconstructed from the laparoscopic video. To assist with this task, we explore the use of learning-based feature descriptors, which, to our best knowledge, have not been explored for use in laparoscopic liver registration. Furthermore, a dataset to train and evaluate the use of learning-based descriptors does not exist. Methods: We present the LiverMatch dataset, consisting of 16 pre-operative models and their simulated intra-operative 3D surfaces. We also propose the LiverMatch network designed for this task, which outputs per-point feature descriptors, visibility scores, and matched points. Results: We compare the proposed LiverMatch network with a network closest to LiverMatch and a histogram-based 3D descriptor on the testing split of the LiverMatch dataset, which includes two unseen pre-operative models and 1400 intra-operative surfaces. The results suggest that our LiverMatch network can predict more accurate and dense matches than the other two methods and can be seamlessly integrated with a RANSAC-ICP-based registration algorithm to achieve an accurate initial alignment. Conclusion: The use of learning-based feature descriptors in LLR is promising, as it can help achieve an accurate initial rigid alignment, which, in turn, serves as an initialization for subsequent non-rigid registration. We will release the dataset and code upon acceptance.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 16:58:39 GMT" } ]
2022-11-08T00:00:00
[ [ "Yang", "Zixin", "" ], [ "Simon", "Richard", "" ], [ "Linte", "Cristian A.", "" ] ]
new_dataset
0.99764
2211.03690
Charles Fleming
Chengkai Yu and Charles Fleming and Hai-Ning Liang
Scale Invariant Privacy Preserving Video via Wavelet Decomposition
null
International Journal of Design, Analysis & Tools for Integrated Circuits & Systems 7.1 (2018)
null
null
cs.CR cs.CV
http://creativecommons.org/licenses/by/4.0/
Video surveillance has become ubiquitous in the modern world. Mobile devices, surveillance cameras, and IoT devices, all can record video that can violate our privacy. One proposed solution for this is privacy-preserving video, which removes identifying information from the video as it is produced. Several algorithms for this have been proposed, but all of them suffer from scale issues: in order to sufficiently anonymize near-camera objects, distant objects become unidentifiable. In this paper, we propose a scale-invariant method, based on wavelet decomposition.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 17:03:23 GMT" } ]
2022-11-08T00:00:00
[ [ "Yu", "Chengkai", "" ], [ "Fleming", "Charles", "" ], [ "Liang", "Hai-Ning", "" ] ]
new_dataset
0.988658
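The core mechanism named in the abstract above, wavelet decomposition, can be sketched with PyWavelets: decompose a frame over several scales, zero the detail subbands, and reconstruct, so that fine (identifying) detail is removed at every scale rather than only near the camera. The paper's actual scheme is more involved; this is only an illustration:

```python
import numpy as np
import pywt  # PyWavelets

frame = np.random.rand(256, 256)  # stand-in for one grayscale video frame

# Multilevel 2-D decomposition: [approximation, (H, V, D) per level].
coeffs = pywt.wavedec2(frame, "haar", level=3)

# Keep the coarse approximation, zero all detail subbands.
coeffs = [coeffs[0]] + [
    tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]
]

anonymized = pywt.waverec2(coeffs, "haar")
print(anonymized.shape)  # (256, 256): same size, fine detail suppressed
```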
2211.03730
Mehedi Hasan Bijoy
Mehedi Hasan Bijoy, Nahid Hossain, Salekul Islam, Swakkhar Shatabda
DPCSpell: A Transformer-based Detector-Purificator-Corrector Framework for Spelling Error Correction of Bangla and Resource Scarce Indic Languages
23 pages, 4 figures, and 7 tables
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Spelling error correction is the task of identifying and rectifying misspelled words in texts. It is an active research topic in Natural Language Processing because of its numerous applications in human language understanding. Phonetically or visually similar yet semantically distinct characters make it an arduous task in any language. Earlier efforts on spelling error correction in Bangla and resource-scarce Indic languages focused on rule-based, statistical, and machine learning-based methods, which we found rather inefficient. In particular, machine learning-based approaches, despite exhibiting superior performance to rule-based and statistical methods, are ineffective as they correct each character regardless of its appropriateness. In this work, we propose a novel detector-purificator-corrector framework based on denoising transformers that addresses these previous issues. Moreover, we present a method for large-scale corpus creation from scratch, which in turn resolves the resource limitation problem for any left-to-right scripted language. The empirical outcomes demonstrate the effectiveness of our approach, which outperforms previous state-of-the-art methods by a significant margin for Bangla spelling error correction. The models and corpus are publicly available at https://tinyurl.com/DPCSpell.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 17:59:05 GMT" } ]
2022-11-08T00:00:00
[ [ "Bijoy", "Mehedi Hasan", "" ], [ "Hossain", "Nahid", "" ], [ "Islam", "Salekul", "" ], [ "Shatabda", "Swakkhar", "" ] ]
new_dataset
0.999021
2211.03779
Maitreya Patel
Maitreya Patel and Tejas Gokhale and Chitta Baral and Yezhou Yang
CRIPP-VQA: Counterfactual Reasoning about Implicit Physical Properties via Video Question Answering
Accepted to EMNLP 2022; https://maitreyapatel.com/CRIPP-VQA/
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Videos often capture objects, their visible properties, their motion, and the interactions between different objects. Objects also have physical properties such as mass, which the imaging pipeline is unable to directly capture. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene. CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about the effect of actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects. The CRIPP-VQA test set enables evaluation under several out-of-distribution settings -- videos with objects with masses, coefficients of friction, and initial velocities that are not observed in the training distribution. Our experiments reveal a surprising and significant performance gap in terms of answering questions about implicit properties (the focus of this paper) and explicit properties of objects (the focus of prior work).
[ { "version": "v1", "created": "Mon, 7 Nov 2022 18:55:26 GMT" } ]
2022-11-08T00:00:00
[ [ "Patel", "Maitreya", "" ], [ "Gokhale", "Tejas", "" ], [ "Baral", "Chitta", "" ], [ "Yang", "Yezhou", "" ] ]
new_dataset
0.999863
1805.12262
Ghalia Hemrit
Ghalia Hemrit, Graham D. Finlayson, Arjan Gijsenij, Peter Gehler, Simone Bianco, Brian Funt, Mark Drew and Lilong Shi
Rehabilitating the ColorChecker Dataset for Illuminant Estimation
4 pages, 3 figures, 2 tables, Proceedings of the 26th Color and Imaging Conference
Color and Imaging Conference, 2018
10.2352/ISSN.2169-2629.2018.26.350
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a previous work, it was shown that there is a curious problem with the benchmark ColorChecker dataset for illuminant estimation. To wit, this dataset has at least 3 different sets of ground-truths. Typically, for a single algorithm a single ground-truth is used. But then different algorithms, whose performance is measured with respect to different ground-truths, are compared against each other and then ranked. This makes no sense. We show in this paper that there are also errors in how each ground-truth set was calculated. As a result, all performance rankings based on the ColorChecker dataset - and there are scores of these - are inaccurate. In this paper, we re-generate a new 'recommended' set of ground-truths based on the calculation methodology described by Shi and Funt. We then review the performance evaluation of a range of illuminant estimation algorithms. Compared with the legacy ground-truths, we find that the difference in how algorithms perform can be large, with many local rankings of algorithms being reversed. Finally, we draw the reader's attention to our new 'open' data repository which, we hope, will allow the ColorChecker set to be rehabilitated and once again become a useful benchmark for illuminant estimation algorithms.
[ { "version": "v1", "created": "Wed, 30 May 2018 23:41:17 GMT" }, { "version": "v2", "created": "Wed, 12 Sep 2018 11:30:31 GMT" }, { "version": "v3", "created": "Mon, 17 Sep 2018 16:53:27 GMT" } ]
2022-11-07T00:00:00
[ [ "Hemrit", "Ghalia", "" ], [ "Finlayson", "Graham D.", "" ], [ "Gijsenij", "Arjan", "" ], [ "Gehler", "Peter", "" ], [ "Bianco", "Simone", "" ], [ "Funt", "Brian", "" ], [ "Drew", "Mark", "" ], [ "Shi", "Lilong", "" ] ]
new_dataset
0.999022
2110.04792
Lu Zou
Lu Zou, Zhangjin Huang, Naijie Gu, Guoping Wang
6D-ViT: Category-Level 6D Object Pose Estimation via Transformer-based Instance Representation Learning
13 pages, 12 figures
IEEE Transactions on Image Processing 2022
10.1109/TIP.2022.3216980
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents 6D-ViT, a transformer-based instance representation learning network, which is suitable for highly accurate category-level object pose estimation on RGB-D images. Specifically, a novel two-stream encoder-decoder framework is dedicated to exploring complex and powerful instance representations from RGB images, point clouds and categorical shape priors. For this purpose, the whole framework consists of two main branches, named Pixelformer and Pointformer. The Pixelformer contains a pyramid transformer encoder with an all-MLP decoder to extract pixelwise appearance representations from RGB images, while the Pointformer relies on a cascaded transformer encoder and an all-MLP decoder to acquire the pointwise geometric characteristics from point clouds. Then, dense instance representations (i.e., correspondence matrix, deformation field) are obtained from a multi-source aggregation network with shape priors, appearance and geometric information as input. Finally, the instance 6D pose is computed by leveraging the correspondence among dense representations, shape priors, and the instance point clouds. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed 3D instance representation learning framework achieves state-of-the-art performance on both datasets, and significantly outperforms all existing methods.
[ { "version": "v1", "created": "Sun, 10 Oct 2021 13:34:16 GMT" }, { "version": "v2", "created": "Sat, 30 Oct 2021 07:44:57 GMT" } ]
2022-11-07T00:00:00
[ [ "Zou", "Lu", "" ], [ "Huang", "Zhangjin", "" ], [ "Gu", "Naijie", "" ], [ "Wang", "Guoping", "" ] ]
new_dataset
0.997696
2112.07471
Yufeng Zheng
Yufeng Zheng, Victoria Fern\'andez Abrevaya, Marcel C. B\"uhler, Xu Chen, Michael J. Black, Otmar Hilliges
I M Avatar: Implicit Morphable Head Avatars from Videos
Accepted at CVPR 2022 as an oral presentation. Project page https://ait.ethz.ch/projects/2022/IMavatar/ ; Github page: https://github.com/zhengyuf/IMavatar
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional 3D morphable face models (3DMMs) provide fine-grained control over expression but cannot easily capture geometric and appearance details. Neural volumetric representations approach photorealism but are hard to animate and do not generalize well to unseen expressions. To tackle this problem, we propose IMavatar (Implicit Morphable avatar), a novel method for learning implicit head avatars from monocular videos. Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose- related deformations via learned blendshapes and skinning fields. These attributes are pose-independent and can be used to morph the canonical geometry and texture fields given novel expression and pose parameters. We employ ray marching and iterative root-finding to locate the canonical surface intersection for each pixel. A key contribution is our novel analytical gradient formulation that enables end-to-end training of IMavatars from videos. We show quantitatively and qualitatively that our method improves geometry and covers a more complete expression space compared to state-of-the-art methods.
[ { "version": "v1", "created": "Tue, 14 Dec 2021 15:30:32 GMT" }, { "version": "v2", "created": "Wed, 15 Dec 2021 15:55:34 GMT" }, { "version": "v3", "created": "Wed, 30 Mar 2022 11:43:27 GMT" }, { "version": "v4", "created": "Mon, 4 Apr 2022 14:59:07 GMT" }, { "version": "v5", "created": "Tue, 19 Apr 2022 08:48:23 GMT" }, { "version": "v6", "created": "Fri, 4 Nov 2022 12:01:17 GMT" } ]
2022-11-07T00:00:00
[ [ "Zheng", "Yufeng", "" ], [ "Abrevaya", "Victoria Fernández", "" ], [ "Bühler", "Marcel C.", "" ], [ "Chen", "Xu", "" ], [ "Black", "Michael J.", "" ], [ "Hilliges", "Otmar", "" ] ]
new_dataset
0.981622
2204.13483
Lianqing Zheng
Lianqing Zheng, Zhixiong Ma, Xichan Zhu, Bin Tan, Sen Li, Kai Long, Weiqi Sun, Sihan Chen, Lu Zhang, Mengyue Wan, Libo Huang, Jie Bai
TJ4DRadSet: A 4D Radar Dataset for Autonomous Driving
2022 IEEE International Intelligent Transportation Systems Conference (ITSC 2022)
null
10.1109/ITSC55140.2022.9922539
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The next-generation high-resolution automotive radar (4D radar) can provide additional elevation measurement and denser point clouds, which has great potential for 3D sensing in autonomous driving. In this paper, we introduce a dataset named TJ4DRadSet with 4D radar points for autonomous driving research. The dataset was collected in various driving scenarios, with a total of 7757 synchronized frames in 44 consecutive sequences, which are well annotated with 3D bounding boxes and track ids. We provide a 4D radar-based 3D object detection baseline for our dataset to demonstrate the effectiveness of deep learning methods for 4D radar point clouds. The dataset can be accessed via the following link: https://github.com/TJRadarLab/TJ4DRadSet.
[ { "version": "v1", "created": "Thu, 28 Apr 2022 13:17:06 GMT" }, { "version": "v2", "created": "Sat, 30 Apr 2022 06:15:11 GMT" }, { "version": "v3", "created": "Wed, 27 Jul 2022 09:46:06 GMT" } ]
2022-11-07T00:00:00
[ [ "Zheng", "Lianqing", "" ], [ "Ma", "Zhixiong", "" ], [ "Zhu", "Xichan", "" ], [ "Tan", "Bin", "" ], [ "Li", "Sen", "" ], [ "Long", "Kai", "" ], [ "Sun", "Weiqi", "" ], [ "Chen", "Sihan", "" ], [ "Zhang", "Lu", "" ], [ "Wan", "Mengyue", "" ], [ "Huang", "Libo", "" ], [ "Bai", "Jie", "" ] ]
new_dataset
0.99983
2204.14264
Jinlan Fu
Jinlan Fu, See-Kiong Ng, Pengfei Liu
Polyglot Prompt: Multilingual Multitask PrompTraining
EMNLP 2022 (Main Conference)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e. without any task/language-specific module? The benefit of achieving this could open new doors for future multilingual research, including allowing systems trained on low resources to be further assisted by other languages as well as other tasks. We approach this goal by developing a learning framework named Polyglot Prompting to exploit prompting methods for learning a unified semantic space for different languages and tasks with multilingual prompt engineering. We performed a comprehensive evaluation of 6 tasks, namely topic classification, sentiment classification, named entity recognition, question answering, natural language inference, and summarization, covering 24 datasets and 49 languages. The experimental results demonstrated the efficacy of multilingual multitask prompt-based learning and led to inspiring observations. We also present an interpretable multilingual evaluation methodology and show how the proposed framework, multilingual multitask prompt training, works. We release all datasets prompted in the best setting and code.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 17:40:50 GMT" }, { "version": "v2", "created": "Fri, 4 Nov 2022 06:01:05 GMT" } ]
2022-11-07T00:00:00
[ [ "Fu", "Jinlan", "" ], [ "Ng", "See-Kiong", "" ], [ "Liu", "Pengfei", "" ] ]
new_dataset
0.995668
2205.12496
Harsh Trivedi
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal
Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts
Accepted at EMNLP'22
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion. Specifically, we use widely available QDMR representations to programmatically create hard-to-cheat synthetic contexts for real questions in six multi-step reasoning datasets. These contexts are carefully designed to avoid reasoning shortcuts prevalent in real contexts that prevent models from learning the right skills. This results in a pretraining dataset, named TeaBReaC, containing 525K multi-step questions (with associated formal programs) covering about 900 reasoning patterns. We show that pretraining standard language models (LMs) on TeaBReaC before fine-tuning them on target datasets improves their performance by up to 13 F1 points across 4 multi-step QA datasets, with up to 21 point gain on more complex questions. The resulting models also demonstrate higher robustness, with a 5-8 F1 point improvement on two contrast sets. Furthermore, TeaBReaC pretraining substantially improves model performance and robustness even when starting with numerate LMs pretrained using recent methods (e.g., PReasM, POET). Our work thus shows how to effectively use decomposition-guided contexts to robustly teach multi-step reasoning.
[ { "version": "v1", "created": "Wed, 25 May 2022 05:13:21 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 19:38:06 GMT" } ]
2022-11-07T00:00:00
[ [ "Trivedi", "Harsh", "" ], [ "Balasubramanian", "Niranjan", "" ], [ "Khot", "Tushar", "" ], [ "Sabharwal", "Ashish", "" ] ]
new_dataset
0.981168
2210.14136
Fazlourrahman Balouchzahi
Fazlourrahman Balouchzahi and Grigori Sidorov and Alexander Gelbukh
PolyHope: Two-Level Hope Speech Detection from Tweets
20 pages, 9 figures
null
null
null
cs.CL cs.AI cs.CY cs.LG
http://creativecommons.org/licenses/by/4.0/
Hope is characterized as openness of spirit toward the future, a desire, expectation, and wish for something to happen or to be true that remarkably affects a person's state of mind, emotions, behaviors, and decisions. Hope is usually associated with concepts of desired expectations and possibility/probability concerning the future. Despite its importance, hope has rarely been studied as a social media analysis task. This paper presents a hope speech dataset that classifies each tweet first into "Hope" and "Not Hope", then into three fine-grained hope categories: "Generalized Hope", "Realistic Hope", and "Unrealistic Hope" (along with "Not Hope"). English tweets from the first half of 2022 were collected to build this dataset. Furthermore, we describe our annotation process and guidelines in detail and discuss the challenges of classifying hope and the limitations of the existing hope speech detection corpora. In addition, we report several baselines based on different learning approaches, such as traditional machine learning, deep learning, and transformers, to benchmark our dataset. We evaluated our baselines using weighted-averaged and macro-averaged F1-scores. Observations show that a strict process for annotator selection and detailed annotation guidelines enhanced the dataset's quality. This strict annotation process resulted in promising performance for simple machine learning classifiers with only bi-grams; however, binary and multiclass hope speech detection results reveal that contextual embedding models have higher performance on this dataset.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 16:34:03 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 19:54:01 GMT" } ]
2022-11-07T00:00:00
[ [ "Balouchzahi", "Fazlourrahman", "" ], [ "Sidorov", "Grigori", "" ], [ "Gelbukh", "Alexander", "" ] ]
new_dataset
0.999811
2211.02141
Mohammad Imrul Jubair
Simanta Deb Turja, Mohammad Imrul Jubair, Md. Shafiur Rahman, Md. Hasib Al Zadid, Mohtasim Hossain Shovon, Md. Faraz Kabir Khan
Shapes2Toon: Generating Cartoon Characters from Simple Geometric Shapes
Accepted as a full paper in AICCSA2022 (19th ACS/IEEE International Conference on Computer Systems and Applications)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Cartoons are an important part of our entertainment culture. Though drawing a cartoon is not for everyone, approximating a character with an arrangement of basic geometric primitives is a fairly common technique in art. The key motivation behind this technique is that human bodies - as well as cartoon figures - can be broken down into various basic geometric primitives. Numerous tutorials are available that demonstrate how to draw figures using an appropriate arrangement of fundamental shapes, thus assisting us in creating cartoon characters. This technique is very beneficial for teaching children how to draw cartoons. In this paper, we develop a tool - shape2toon - that aims to automate this approach by utilizing a generative adversarial network which combines geometric primitives (e.g., circles) and generates a cartoon figure (e.g., Mickey Mouse) depending on the given approximation. For this purpose, we created a dataset of geometrically represented cartoon characters. We apply an image-to-image translation technique on our dataset and report the results in this paper. The experimental results show that our system can generate cartoon characters from an input layout of geometric shapes. In addition, we demonstrate a web-based tool as a practical implication of our work.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 20:52:19 GMT" } ]
2022-11-07T00:00:00
[ [ "Turja", "Simanta Deb", "" ], [ "Jubair", "Mohammad Imrul", "" ], [ "Rahman", "Md. Shafiur", "" ], [ "Zadid", "Md. Hasib Al", "" ], [ "Shovon", "Mohtasim Hossain", "" ], [ "Khan", "Md. Faraz Kabir", "" ] ]
new_dataset
0.999588
2211.02175
Bing Shuai
Bing Shuai, Alessandro Bergamo, Uta Buechler, Andrew Berneshawi, Alyssa Boden, Joseph Tighe
Large Scale Real-World Multi-Person Tracking
ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new large scale multi-person tracking dataset -- \texttt{PersonPath22}, which is over an order of magnitude larger than currently available high quality multi-object tracking datasets such as MOT17, HiEve, and MOT20. The lack of large scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems on a wide range of scenarios and conditions such as variations in person density, actions being performed, weather, and time of day. The \texttt{PersonPath22} dataset was specifically sourced to provide a wide variety of these conditions, and our annotations include rich meta-data such that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems. As such, the highest performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large scale video based training data.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 23:03:13 GMT" } ]
2022-11-07T00:00:00
[ [ "Shuai", "Bing", "" ], [ "Bergamo", "Alessandro", "" ], [ "Buechler", "Uta", "" ], [ "Berneshawi", "Andrew", "" ], [ "Boden", "Alyssa", "" ], [ "Tighe", "Joseph", "" ] ]
new_dataset
0.999315
2211.02179
Kevin Cheang
Kevin Cheang, Cameron Rasmussen, Dayeol Lee, David W. Kohlbrenner, Krste Asanovi\'c, Sanjit A. Seshia
Verifying RISC-V Physical Memory Protection
SECRISC-V 2019 Workshop
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We formally verify an open-source hardware implementation of physical memory protection (PMP) in RISC-V, which is a standard feature used for memory isolation in security critical systems such as the Keystone trusted execution environment. PMP provides per-hardware-thread machine-mode control registers that specify the access privileges for physical memory regions. We first formalize the functional property of the PMP rules based on the RISC-V ISA manual. Then, we use the LIME tool to translate an open-source implementation of the PMP hardware module written in Chisel to the UCLID5 formal verification language. We encode the formal specification in UCLID5 and verify the functional correctness of the hardware. This is an initial effort towards verifying the Keystone framework, where the trusted computing base (TCB) relies on PMP to provide security guarantees such as integrity and confidentiality.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 23:12:28 GMT" } ]
2022-11-07T00:00:00
[ [ "Cheang", "Kevin", "" ], [ "Rasmussen", "Cameron", "" ], [ "Lee", "Dayeol", "" ], [ "Kohlbrenner", "David W.", "" ], [ "Asanović", "Krste", "" ], [ "Seshia", "Sanjit A.", "" ] ]
new_dataset
0.998765
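The PMP verification record above centers on the functional rules for physical-memory region matching. As a rough illustration of what that specification covers, here is a minimal Python sketch of the standard TOR, NA4, and NAPOT address-matching modes from the RISC-V privileged ISA; the harness and the example register value are illustrative and unrelated to the paper's Chisel and UCLID5 models.

```python
# Minimal model of RISC-V PMP address matching (TOR / NA4 / NAPOT).
OFF, TOR, NA4, NAPOT = 0, 1, 2, 3

def napot_range(pmpaddr):
    """Decode a NAPOT-encoded pmpaddr into (base, size) in bytes."""
    k = 0
    while (pmpaddr >> k) & 1:   # k trailing ones => size = 2**(k+3) bytes
        k += 1
    base = (pmpaddr & ~((1 << (k + 1)) - 1)) << 2
    return base, 1 << (k + 3)

def pmp_match(addr, mode_a, pmpaddr, prev_pmpaddr=0):
    """Does a physical address fall inside the region of one PMP entry?"""
    if mode_a == TOR:    # top-of-range: [prev_pmpaddr<<2, pmpaddr<<2)
        return (prev_pmpaddr << 2) <= addr < (pmpaddr << 2)
    if mode_a == NA4:    # naturally aligned 4-byte region
        return (pmpaddr << 2) <= addr < (pmpaddr << 2) + 4
    if mode_a == NAPOT:  # naturally aligned power-of-two region
        base, size = napot_range(pmpaddr)
        return base <= addr < base + size
    return False         # OFF: entry disabled

base, size = napot_range(0x2007)            # hypothetical register value
assert (base, size) == (0x8000, 64)
assert pmp_match(base + 8, NAPOT, 0x2007)   # address inside the region
```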
2211.02223
Chunming Jiang
Chunming Jiang, Yilei Zhang
Adversarial Defense via Neural Oscillation inspired Gradient Masking
null
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking neural networks (SNNs) attract great attention due to their low power consumption, low latency, and biological plausibility. As they are widely deployed in neuromorphic devices for low-power brain-inspired computing, security issues become increasingly important. However, compared to deep neural networks (DNNs), SNNs currently lack specifically designed defense methods against adversarial attacks. Inspired by neural membrane potential oscillation, we propose a novel neural model that incorporates the bio-inspired oscillation mechanism to enhance the security of SNNs. Our experiments show that SNNs with neural oscillation neurons have better resistance to adversarial attacks than ordinary SNNs with LIF neurons across various architectures and datasets. Furthermore, we propose a defense method that changes the model's gradients by replacing the form of oscillation, which hides the original training gradients and confuses the attacker into using gradients of 'fake' neurons to generate invalid adversarial samples. Our experiments suggest that the proposed defense method can effectively resist both single-step and iterative attacks with comparable defense effectiveness and much lower computational cost than adversarial training methods on DNNs. To the best of our knowledge, this is the first work that establishes adversarial defense through masking surrogate gradients on SNNs.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 02:13:19 GMT" } ]
2022-11-07T00:00:00
[ [ "Jiang", "Chunming", "" ], [ "Zhang", "Yilei", "" ] ]
new_dataset
0.964993
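To give a feel for the kind of oscillation mechanism the record above builds on, here is a minimal numpy sketch of a leaky integrate-and-fire neuron whose firing threshold oscillates sinusoidally in time; all constants are illustrative assumptions, not the paper's neuron model.

```python
import numpy as np

def lif_oscillating(I, dt=1.0, tau=20.0, v_th=1.0, amp=0.2, freq=0.05, v_reset=0.0):
    """Leaky integrate-and-fire neuron with a time-oscillating threshold."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(I):
        v += dt / tau * (-v + i_t)                              # leaky integration
        theta = v_th + amp * np.sin(2 * np.pi * freq * t * dt)  # oscillating threshold
        if v >= theta:
            spikes.append(t)
            v = v_reset                                         # hard reset after a spike
    return spikes

rng = np.random.default_rng(0)
print(lif_oscillating(rng.uniform(0.0, 2.5, size=200))[:5])  # first few spike times
```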
2211.02269
Winston Wu
Changyuan Qiu, Winston Wu, Xinliang Frederick Zhang, Lu Wang
Late Fusion with Triplet Margin Objective for Multimodal Ideology Prediction and Analysis
EMNLP 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Prior work on ideology prediction has largely focused on single modalities, i.e., text or images. In this work, we introduce the task of multimodal ideology prediction, where a model predicts binary or five-point scale ideological leanings, given a text-image pair with political content. We first collect five new large-scale datasets with English documents and images along with their ideological leanings, covering news articles from a wide range of US mainstream media and social media posts from Reddit and Twitter. We conduct in-depth analyses of news articles and reveal differences in image content and usage across the political spectrum. Furthermore, we perform extensive experiments and ablation studies, demonstrating the effectiveness of targeted pretraining objectives on different model components. Our best-performing model, a late-fusion architecture pretrained with a triplet objective over multimodal content, outperforms the state-of-the-art text-only model by almost 4% and a strong multimodal baseline with no pretraining by over 3%.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 05:45:26 GMT" } ]
2022-11-07T00:00:00
[ [ "Qiu", "Changyuan", "" ], [ "Wu", "Winston", "" ], [ "Zhang", "Xinliang Frederick", "" ], [ "Wang", "Lu", "" ] ]
new_dataset
0.95154
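A minimal PyTorch sketch of a late-fusion encoder trained with a triplet margin objective, the general recipe named in the record above; the encoder shapes, feature dimensions, and random tensors are placeholders, not the paper's architecture or data.

```python
import torch
import torch.nn as nn

# Late fusion: encode text and image features separately, concatenate, project.
text_enc = nn.Sequential(nn.Linear(768, 256), nn.ReLU())
img_enc = nn.Sequential(nn.Linear(2048, 256), nn.ReLU())
fusion = nn.Linear(512, 128)

def embed(text_feat, img_feat):
    return fusion(torch.cat([text_enc(text_feat), img_enc(img_feat)], dim=-1))

triplet = nn.TripletMarginLoss(margin=1.0)
anchor = embed(torch.randn(8, 768), torch.randn(8, 2048))
positive = embed(torch.randn(8, 768), torch.randn(8, 2048))  # same-ideology pair
negative = embed(torch.randn(8, 768), torch.randn(8, 2048))  # opposite-ideology pair
loss = triplet(anchor, positive, negative)
loss.backward()  # gradients flow through both encoders and the fusion layer
```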
2211.02295
Tao Yu
Tao Yu, Kento Kajiwara, Kiyomichi Araki, Kei Sakaguchi
Experiment of Multi-UAV Full-Duplex System Equipped with Directional Antennas
The paper was accepted by IEEE Consumer Communications & Networking Conference (CCNC) 2023
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the key enablers for the realization of a variety of unmanned aerial vehicle (UAV)-based systems is a high-performance communication system linking many UAVs and the ground station. We have proposed a spectrum-efficient full-duplex directional-antennas-equipped multi-UAV communication system with low hardware complexity to address the low spectrum efficiency caused by co-channel interference in aerial channels. In this paper, using a prototype system consisting of UAVs and a ground station, field experiments are carried out to confirm the feasibility and effectiveness of the proposed system's key feature, i.e., co-channel interference cancellation among UAVs by directional antennas and UAV relative position control, instead of the energy-consuming dedicated self-interference cancellers on UAVs used in traditional full-duplex systems. Both uplink and downlink performance are tested. Specifically, in the downlink experiment, the interference channel power between a pair of UAVs is measured when the UAVs are in different positional relationships. The experiment results agree well with the designs and confirm that the proposed system can greatly improve system performance.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 07:28:16 GMT" } ]
2022-11-07T00:00:00
[ [ "Yu", "Tao", "" ], [ "Kajiwara", "Kento", "" ], [ "Araki", "Kiyomichi", "" ], [ "Sakaguchi", "Kei", "" ] ]
new_dataset
0.982254
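The interference-cancellation idea in the record above relies on directional antenna gain falling off away from boresight. The following toy numpy sketch shows the geometry using the Friis free-space equation and a parabolic main-lobe approximation; the transmit power, gains, beamwidth, and distances are invented for illustration and are not the paper's measured values.

```python
import numpy as np

def friis_rx_dbm(pt_dbm, gt_db, gr_db, dist_m, freq_hz=5.8e9):
    """Free-space received power (dBm) via the Friis transmission equation."""
    lam = 3e8 / freq_hz
    fspl_db = 20 * np.log10(4 * np.pi * dist_m / lam)
    return pt_dbm + gt_db + gr_db - fspl_db

def gain_db(theta_deg, g_max_db=12.0, hpbw_deg=30.0):
    """Parabolic main-lobe approximation; real antenna patterns differ."""
    return g_max_db - 12.0 * (theta_deg / hpbw_deg) ** 2

# Desired link: boresight-to-boresight. Interference: 60 degrees off-axis.
signal = friis_rx_dbm(20, gain_db(0), gain_db(0), dist_m=100)
interf = friis_rx_dbm(20, gain_db(60), gain_db(60), dist_m=80)
print(f"SIR ~ {signal - interf:.1f} dB")  # off-axis gain loss suppresses interference
```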
2211.02321
Zhao Zhang
Bo Wang, Zhao Zhang, Mingbo Zhao, Xiaojie Jin, Mingliang Xu, Meng Wang
OSIC: A New One-Stage Image Captioner Coined
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mainstream image caption models are usually two-stage captioners, i.e., calculating object features with a pre-trained detector and feeding them into a language model to generate text descriptions. However, such an operation causes a task-based information gap that decreases performance, since the object features in the detection task are suboptimal representations and cannot provide all the information necessary for subsequent text generation. Besides, object features are usually represented by last-layer features, which lose the local details of input images. In this paper, we propose a novel One-Stage Image Captioner (OSIC) with dynamic multi-sight learning, which directly transforms an input image into descriptive sentences in one stage. As a result, the task-based information gap can be greatly reduced. To obtain rich features, we use the Swin Transformer to calculate multi-level features, and then feed them into a novel dynamic multi-sight embedding module to exploit both the global structure and local texture of input images. To enhance the global modeling of the encoder for captioning, we propose a new dual-dimensional refining module to non-locally model the interaction of the embedded features. Finally, OSIC can obtain rich and useful information to improve the image caption task. Extensive comparisons on the benchmark MS-COCO dataset verified the superior performance of our method.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 08:50:09 GMT" } ]
2022-11-07T00:00:00
[ [ "Wang", "Bo", "" ], [ "Zhang", "Zhao", "" ], [ "Zhao", "Mingbo", "" ], [ "Jin", "Xiaojie", "" ], [ "Xu", "Mingliang", "" ], [ "Wang", "Meng", "" ] ]
new_dataset
0.993275
2211.02356
Rajat Tandon
Jeffrey Liu, Rajat Tandon, Uma Durairaj, Jiani Guo, Spencer Zahabizadeh, Sanjana Ilango, Jeremy Tang, Neelesh Gupta, Zoe Zhou, Jelena Mirkovic
Did your child get disturbed by an inappropriate advertisement on YouTube?
In Proceedings of KDD Undergraduate Consortium (KDD-UC 2022)
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
YouTube is a popular video platform for sharing creative content and ideas, targeting different demographics. Adults, older children, and young children are all avid viewers of YouTube videos. Meanwhile, countless young-kid-oriented channels have produced numerous instructional and age-appropriate videos for young children. However, inappropriate content for young children, such as violent or sexually suggestive content, still exists, and children lack the ability to decide whether a video is appropriate for them, which poses a serious risk to children's mental health. Prior works have focused on identifying YouTube videos that are inappropriate for children. However, these works ignore that not only the actual video content influences children, but also the advertisements that are shown with those videos. In this paper, we quantify the influence of inappropriate advertisements on YouTube videos that are appropriate for young children to watch. We analyze the advertising patterns of 24.6K diverse YouTube videos appropriate for young children. We find that 9.9% of the 4.6K unique advertisements shown on these 24.6K videos contain inappropriate content for young children. Moreover, we observe that 26.9% of all the 24.6K appropriate videos include at least one ad that is inappropriate for young children. Additionally, we publicly release our datasets and provide recommendations about how to address this issue.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 10:28:54 GMT" } ]
2022-11-07T00:00:00
[ [ "Liu", "Jeffrey", "" ], [ "Tandon", "Rajat", "" ], [ "Durairaj", "Uma", "" ], [ "Guo", "Jiani", "" ], [ "Zahabizadeh", "Spencer", "" ], [ "Ilango", "Sanjana", "" ], [ "Tang", "Jeremy", "" ], [ "Gupta", "Neelesh", "" ], [ "Zhou", "Zoe", "" ], [ "Mirkovic", "Jelena", "" ] ]
new_dataset
0.999453
2211.02443
Yuhang Gai
Yuhang Gai, Bing Wang, Jiwen Zhang, Dan Wu, and Ken Chen
Robotic Assembly Control Reconfiguration Based on Transfer Reinforcement Learning for Objects with Different Geometric Features
null
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Robotic force-based compliance control is a preferred approach to achieve high-precision assembly tasks. When the geometric features of assembly objects are asymmetric or irregular, reinforcement learning (RL) agents are gradually incorporated into the compliance controller to adapt to the complex force-pose mapping, which is hard to model analytically. Since the force-pose mapping is strongly dependent on geometric features, a compliance controller is only optimal for the current geometric features. To reduce the learning cost for assembly objects with different geometric features, this paper is devoted to answering how to reconfigure existing controllers for new assembly objects with different geometric features. In this paper, model-based parameters are first reconfigured based on the proposed Equivalent Theory of Compliance Law (ETCL). Then the RL agent is transferred based on the proposed Weighted Dimensional Policy Distillation (WDPD) method. The experiment results demonstrate that the control reconfiguration method costs less time and achieves better control performance, which confirms the validity of the proposed methods.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 13:31:11 GMT" } ]
2022-11-07T00:00:00
[ [ "Gai", "Yuhang", "" ], [ "Wang", "Bing", "" ], [ "Zhang", "Jiwen", "" ], [ "Wu", "Dan", "" ], [ "Chen", "Ken", "" ] ]
new_dataset
0.952423
2211.02567
Dazhen Deng
Dazhen Deng, Aoyu Wu, Haotian Li, Ji Lan, Yong Wang, Huamin Qu, Yingcai Wu
KB4VA: A Knowledge Base of Visualization Designs for Visual Analytics
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Visual analytics (VA) systems have been widely used to facilitate decision-making and analytical reasoning in various application domains. VA involves visual designs, interaction designs, and data mining, which is a systematic and complex paradigm. In this work, we focus on the design of effective visualizations for complex data and analytical tasks, which is a critical step in designing a VA system. This step is challenging because it requires extensive knowledge about domain problems and visualization to design effective encodings. Existing visualization designs published in top venues are valuable resources to inspire designs for problems with similar data structures and tasks. However, those designs are hard to understand, parse, and retrieve due to the lack of specifications. To address this problem, we build KB4VA, a knowledge base of visualization designs in VA systems with comprehensive labels about their analytical tasks and visual encodings. Our labeling scheme is inspired by a workshop study with 12 VA researchers to learn user requirements in understanding and retrieving professional visualization designs in VA systems. The scheme extends Vega-Lite specifications for describing advanced and composited visualization designs in a declarative manner, thus facilitating human understanding and automatic indexing. To demonstrate the usefulness of our knowledge base, we present a user study about design inspirations for VA tasks. In summary, our work opens new perspectives for enhancing the accessibility and reusability of professional visualization designs.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 01:58:13 GMT" } ]
2022-11-07T00:00:00
[ [ "Deng", "Dazhen", "" ], [ "Wu", "Aoyu", "" ], [ "Li", "Haotian", "" ], [ "Lan", "Ji", "" ], [ "Wang", "Yong", "" ], [ "Qu", "Huamin", "" ], [ "Wu", "Yingcai", "" ] ]
new_dataset
0.999033
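For context on the specification format the record above extends, here is a minimal Vega-Lite specification written as a Python dict; the chart, data values, and field names are illustrative only, not entries from KB4VA.

```python
import json

# A minimal declarative Vega-Lite bar chart specification.
bar_spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [{"task": "anomaly detection", "count": 12},
                        {"task": "clustering", "count": 7}]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "task", "type": "nominal"},
        "y": {"field": "count", "type": "quantitative"},
    },
}

print(json.dumps(bar_spec, indent=2))  # serialize for a Vega-Lite renderer
```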
2211.02598
Paolo Gibertini
Paolo Gibertini, Luca Fehlings, Suzanne Lancaster, Quang Duong, Thomas Mikolajick, Catherine Dubourdieu, Stefan Slesazeck, Erika Covi, Veeresh Deshpande
A Ferroelectric Tunnel Junction-based Integrate-and-Fire Neuron
null
null
null
null
cs.ET cs.NE
http://creativecommons.org/licenses/by/4.0/
Event-based neuromorphic systems provide a low-power solution by using artificial neurons and synapses to process data asynchronously in the form of spikes. Ferroelectric Tunnel Junctions (FTJs) are ultra low-power memory devices and are well-suited to be integrated in these systems. Here, we present a hybrid FTJ-CMOS Integrate-and-Fire neuron which constitutes a fundamental building block for new-generation neuromorphic networks for edge computing. We demonstrate electrically tunable neural dynamics achievable by tuning the switching of the FTJ device.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 17:13:58 GMT" } ]
2022-11-07T00:00:00
[ [ "Gibertini", "Paolo", "" ], [ "Fehlings", "Luca", "" ], [ "Lancaster", "Suzanne", "" ], [ "Duong", "Quang", "" ], [ "Mikolajick", "Thomas", "" ], [ "Dubourdieu", "Catherine", "" ], [ "Slesazeck", "Stefan", "" ], [ "Covi", "Erika", "" ], [ "Deshpande", "Veeresh", "" ] ]
new_dataset
0.999708
2211.02648
Juan Carlos Dibene Simental
Juan C. Dibene, Enrique Dunn
HoloLens 2 Sensor Streaming
Technical report
null
null
null
cs.MM
http://creativecommons.org/licenses/by/4.0/
We present a HoloLens 2 server application for streaming device data via TCP in real time. The server can stream data from the four grayscale cameras, depth sensor, IMU, front RGB camera, microphone, head tracking, eye tracking, and hand tracking. Each sent data frame has a timestamp and, optionally, the instantaneous pose of the device in 3D space. The server allows downloading device calibration data, such as camera intrinsics, and can be integrated into Unity projects as a plugin, with support for basic upstream capabilities. To achieve real time video streaming at full frame rate, we leverage the video encoding capabilities of the HoloLens 2. Finally, we present a Python library for receiving and decoding the data, which includes utilities that facilitate passing the data to other libraries. The source code, Python demos, and precompiled binaries are available at https://github.com/jdibenes/hl2ss.
[ { "version": "v1", "created": "Fri, 4 Nov 2022 17:58:52 GMT" } ]
2022-11-07T00:00:00
[ [ "Dibene", "Juan C.", "" ], [ "Dunn", "Enrique", "" ] ]
new_dataset
0.998505
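The server in the record above streams timestamped sensor frames over TCP. As a generic sketch of consuming such a stream, the following assumes a hypothetical length-prefixed wire format (8-byte little-endian timestamp, 4-byte payload length, then the payload); the actual hl2ss protocol and its Python library may differ, so treat this purely as an illustration of length-prefixed TCP framing.

```python
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from a TCP socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf

def read_frames(host, port):
    """Yield (timestamp, payload) pairs from a length-prefixed stream."""
    with socket.create_connection((host, port)) as sock:
        while True:
            timestamp, length = struct.unpack("<QI", recv_exact(sock, 12))
            yield timestamp, recv_exact(sock, length)
```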
2211.02652
Jianfei Zhou
Jianfei Zhou and Tianxing Jiang and Shuwei Song and Ting Chen
AntFuzzer: A Grey-Box Fuzzing Framework for EOSIO Smart Contracts
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past few years, several attacks against the vulnerabilities of EOSIO smart contracts have caused severe financial losses to this prevalent blockchain platform. As a lightweight test-generation approach, grey-box fuzzing can open up the possibility of improving the security of EOSIO smart contracts. However, developing a practical grey-box fuzzer for EOSIO smart contracts from scratch is time-consuming and requires a deep understanding of EOSIO internals. In this work, we propose AntFuzzer, the first highly extensible grey-box fuzzing framework for EOSIO smart contracts. AntFuzzer implements a novel approach that interfaces AFL to conduct AFL-style grey-box fuzzing on EOSIO smart contracts. Compared to black-box fuzzing tools, AntFuzzer can effectively trigger hard-to-cover branches. It achieved an improvement in code coverage on 37.5% of the smart contracts in our benchmark dataset. AntFuzzer provides unified interfaces for users to easily develop new detection plugins for continually emerging vulnerabilities. We have implemented 6 detection plugins on AntFuzzer to detect major vulnerabilities of EOSIO smart contracts. In our large-scale fuzzing experiments on 4,616 real-world smart contracts, AntFuzzer successfully detected 741 vulnerabilities. The results demonstrate the effectiveness and efficiency of AntFuzzer and our detection plugins.
[ { "version": "v1", "created": "Wed, 2 Nov 2022 08:29:21 GMT" } ]
2022-11-07T00:00:00
[ [ "Zhou", "Jianfei", "" ], [ "Jiang", "Tianxing", "" ], [ "Song", "Shuwei", "" ], [ "Chen", "Ting", "" ] ]
new_dataset
0.996203
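To make the AFL-style grey-box loop referenced above concrete, here is a toy coverage-guided fuzzing skeleton in Python: keep inputs that reach new coverage, mutate them, repeat. The mutation operator and the coverage hook are placeholders, and AntFuzzer itself drives EOSIO contracts through AFL rather than a loop like this.

```python
import random

def mutate(data: bytes) -> bytes:
    """Flip one random byte (a deliberately naive mutation operator)."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

def fuzz(run_target, seeds, iterations=10_000):
    """run_target(input) -> set of edge/branch ids hit by this execution."""
    corpus, seen = list(seeds), set()
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        coverage = run_target(candidate)
        if not coverage <= seen:     # new coverage -> keep the input
            seen |= coverage
            corpus.append(candidate)
    return corpus, seen
```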
1906.11898
L\'eonard Boussioux
L\'eonard Boussioux, Tom\'as Giro-Larraz, Charles Guille-Escuret, Mehdi Cherti, Bal\'azs K\'egl
InsectUp: Crowdsourcing Insect Observations to Assess Demographic Shifts and Improve Classification
Appearing at the International Conference on Machine Learning, AI for Social Good Workshop, Long Beach, United States, 2019; also appearing at the International Conference on Computer Vision, AI for Wildlife Conservation Workshop, Seoul, South Korea, 2019. 5 pages, 6 figures
null
null
null
cs.CV cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Insects play such a crucial role in ecosystems that a shift in demography of just a few species can have devastating consequences at environmental, social and economic levels. Despite this, evaluation of insect demography is strongly limited by the difficulty of collecting census data at sufficient scale. We propose a method to gather and leverage observations from bystanders, hikers, and entomology enthusiasts in order to provide researchers with data that could significantly help anticipate and identify environmental threats. Finally, we show that there is indeed interest on both sides for such collaboration.
[ { "version": "v1", "created": "Thu, 30 May 2019 00:57:15 GMT" }, { "version": "v2", "created": "Wed, 29 Jan 2020 18:39:03 GMT" } ]
2022-11-04T00:00:00
[ [ "Boussioux", "Léonard", "" ], [ "Giro-Larraz", "Tomás", "" ], [ "Guille-Escuret", "Charles", "" ], [ "Cherti", "Mehdi", "" ], [ "Kégl", "Balázs", "" ] ]
new_dataset
0.999484
2102.01468
Yinbo Yu
Yinbo Yu and Jiajia Liu
TAPInspector: Safety and Liveness Verification of Concurrent Trigger-Action IoT Systems
null
IEEE Transactions on Information Forensics and Security 2022
10.1109/TIFS.2022.3214084
null
cs.CR cs.HC cs.NI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trigger-action programming (TAP) is a popular end-user programming framework that can simplify Internet of Things (IoT) automation with simple trigger-action rules. However, it also introduces new security and safety threats. Many advanced techniques have been proposed to address this problem. Rigorously reasoning about the security of a TAP-based IoT system requires a well-defined model and a verification method that address both rule semantics and physical-world features, e.g., concurrency, rule latency, extended actions, tardy attributes, and connection-based rule interactions, which have been missing until now. By analyzing these features, we find 9 new types of rule interaction vulnerabilities and validate them on two commercial IoT platforms. We then present TAPInspector, a novel system to detect these interaction vulnerabilities in concurrent TAP-based IoT systems. It automatically extracts TAP rules from IoT apps, translates them into a hybrid model by model slicing and state compression, and performs semantic analysis and model checking with various safety and liveness properties. Our experiments corroborate that TAPInspector is practical: it identifies 533 violations related to rule interaction from 1108 real-world market IoT apps and is at least 60000 times faster than the baseline without optimization.
[ { "version": "v1", "created": "Tue, 2 Feb 2021 12:39:59 GMT" }, { "version": "v2", "created": "Fri, 6 May 2022 01:17:07 GMT" } ]
2022-11-04T00:00:00
[ [ "Yu", "Yinbo", "" ], [ "Liu", "Jiajia", "" ] ]
new_dataset
0.993593
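A toy sketch of the trigger-action rule chaining that underlies the interaction vulnerabilities described above: one rule's action can fire another rule's trigger. Real analyses such as TAPInspector also model latency, concurrency, and device semantics, all of which this fragment omits; the rule names and attributes are invented.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    trigger: str  # attribute/event the rule listens for
    action: str   # attribute/event the rule changes

rules = [
    Rule("R1", trigger="motion_detected", action="light_on"),
    Rule("R2", trigger="light_on", action="window_open"),
]

def chained_pairs(rules):
    """Naive check: rule A's action matches rule B's trigger."""
    return [(a.name, b.name) for a in rules for b in rules
            if a is not b and a.action == b.trigger]

print(chained_pairs(rules))  # [('R1', 'R2')]: motion indirectly opens a window
```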
2103.14027
Yosuke Shinya
Yosuke Shinya
USB: Universal-Scale Object Detection Benchmark
BMVC 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Benchmarks, such as COCO, play a crucial role in object detection. However, existing benchmarks are insufficient in scale variation, and their protocols are inadequate for fair comparison. In this paper, we introduce the Universal-Scale object detection Benchmark (USB). USB has variations in object scales and image domains by incorporating COCO with the recently proposed Waymo Open Dataset and Manga109-s dataset. To enable fair comparison and inclusive research, we propose training and evaluation protocols. They have multiple divisions for training epochs and evaluation image resolutions, like weight classes in sports, and compatibility across training protocols, like the backward compatibility of the Universal Serial Bus. Specifically, we request participants to report results with not only higher protocols (longer training) but also lower protocols (shorter training). Using the proposed benchmark and protocols, we conducted extensive experiments using 15 methods and found weaknesses of existing COCO-biased methods. The code is available at https://github.com/shinya7y/UniverseNet .
[ { "version": "v1", "created": "Thu, 25 Mar 2021 17:59:15 GMT" }, { "version": "v2", "created": "Wed, 8 Dec 2021 18:32:00 GMT" }, { "version": "v3", "created": "Wed, 2 Nov 2022 19:12:01 GMT" } ]
2022-11-04T00:00:00
[ [ "Shinya", "Yosuke", "" ] ]
new_dataset
0.999858
2105.06763
EPTCS
Matteo Capucci (University of Strathclyde), Neil Ghani (University of Strathclyde), J\'er\'emy Ledent (University of Strathclyde), Fredrik Nordvall Forsberg (University of Strathclyde)
Translating Extensive Form Games to Open Games with Agency
In Proceedings ACT 2021, arXiv:2211.01102
EPTCS 372, 2022, pp. 221-234
10.4204/EPTCS.372.16
null
cs.GT cs.MA math.CT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show open games cover extensive form games with both perfect and imperfect information. Doing so forces us to address two current weaknesses in open games: the lack of a notion of player and their agency within open games, and the lack of choice operators. Using the former we construct the latter, and these choice operators subsume previous proposed operators for open games, thereby making progress towards a core, canonical and ergonomic calculus of game operators. Collectively these innovations increase the level of compositionality of open games, and demonstrate their expressiveness.
[ { "version": "v1", "created": "Fri, 14 May 2021 11:15:25 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 14:09:57 GMT" } ]
2022-11-04T00:00:00
[ [ "Capucci", "Matteo", "", "University of Strathclyde" ], [ "Ghani", "Neil", "", "University of\n Strathclyde" ], [ "Ledent", "Jérémy", "", "University of Strathclyde" ], [ "Forsberg", "Fredrik Nordvall", "", "University of Strathclyde" ] ]
new_dataset
0.982387
2106.07763
EPTCS
Guillaume Boisseau (University of Oxford, UK), Pawe{\l} Soboci\'nski (Tallinn University of Technology, Estonia)
String Diagrammatic Electrical Circuit Theory
In Proceedings ACT 2021, arXiv:2211.01102
EPTCS 372, 2022, pp. 178-191
10.4204/EPTCS.372.13
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
We develop a comprehensive string diagrammatic treatment of electrical circuits. Building on previous, limited case studies, we introduce controlled sources and meters as elements, and the impedance calculus, a powerful toolbox for diagrammatic reasoning on circuit diagrams. We demonstrate the power of our approach by giving idiomatic proofs of several textbook results, including the superposition theorem and Thevenin's theorem.
[ { "version": "v1", "created": "Mon, 14 Jun 2021 21:21:52 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 14:18:10 GMT" } ]
2022-11-04T00:00:00
[ [ "Boisseau", "Guillaume", "", "University of Oxford, UK" ], [ "Sobociński", "Paweł", "", "Tallinn University of Technology, Estonia" ] ]
new_dataset
0.999164
2202.06633
Jianqiao Zhao
Jianqiao Zhao, Yanyang Li, Wanyu Du, Yangfeng Ji, Dong Yu, Michael R. Lyu, Liwei Wang
FlowEval: A Consensus-Based Dialogue Evaluation Framework Using Segment Act Flows
EMNLP 2022 camera-ready version
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite recent progress in open-domain dialogue evaluation, how to develop automatic metrics remains an open problem. We explore the potential of dialogue evaluation featuring dialog act information, which was hardly explicitly modeled in previous methods. However, defined at the utterance level in general, dialog act is of coarse granularity, as an utterance can contain multiple segments possessing different functions. Hence, we propose segment act, an extension of dialog act from utterance level to segment level, and crowdsource a large-scale dataset for it. To utilize segment act flows, sequences of segment acts, for evaluation, we develop the first consensus-based dialogue evaluation framework, FlowEval. This framework provides a reference-free approach for dialog evaluation by finding pseudo-references. Extensive experiments against strong baselines on three benchmark datasets demonstrate the effectiveness and other desirable characteristics of our FlowEval, pointing out a potential path for better dialogue evaluation.
[ { "version": "v1", "created": "Mon, 14 Feb 2022 11:37:20 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 07:36:50 GMT" } ]
2022-11-04T00:00:00
[ [ "Zhao", "Jianqiao", "" ], [ "Li", "Yanyang", "" ], [ "Du", "Wanyu", "" ], [ "Ji", "Yangfeng", "" ], [ "Yu", "Dong", "" ], [ "Lyu", "Michael R.", "" ], [ "Wang", "Liwei", "" ] ]
new_dataset
0.998644
2204.11235
Ga\"etan Dou\'eneau-Tabot
Olivier Carton, Ga\"etan Dou\'eneau-Tabot
Continuous rational functions are deterministic regular
41 pages
null
null
null
cs.FL
http://creativecommons.org/licenses/by-sa/4.0/
A word-to-word function is rational if it can be realized by a non-deterministic one-way transducer. Over finite words, it is a classical result that any rational function is regular, i.e. it can be computed by a deterministic two-way transducer, or equivalently, by a deterministic streaming string transducer (a one-way automaton which manipulates string registers). This result no longer holds for infinite words, since a non-deterministic one-way transducer can guess, and check along its run, properties such as infinitely many occurrences of some pattern, which is impossible for a deterministic machine. In this paper, we identify the class of rational functions over infinite words which are also computable by a deterministic two-way transducer. It coincides with the class of rational functions which are continuous, and this property can thus be decided. This solves an open question raised in a previous paper of Dave et al.
[ { "version": "v1", "created": "Sun, 24 Apr 2022 10:07:21 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 07:11:57 GMT" } ]
2022-11-04T00:00:00
[ [ "Carton", "Olivier", "" ], [ "Douéneau-Tabot", "Gaëtan", "" ] ]
new_dataset
0.99191
2207.04908
Aldi Piroli
Aldi Piroli, Vinzenz Dallabetta, Marc Walessa, Daniel Meissner, Johannes Kopp, Klaus Dietmayer
Detection of Condensed Vehicle Gas Exhaust in LiDAR Point Clouds
Accepted for ITSC2022
2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)
10.1109/ITSC55140.2022.9922475
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LiDAR sensors used in autonomous driving applications are negatively affected by adverse weather conditions. One common but understudied effect is the condensation of vehicle gas exhaust in cold weather. This everyday phenomenon can severely impact the quality of LiDAR measurements, resulting in a less accurate environment perception by creating artifacts like ghost object detections. In the literature, the semantic segmentation of adverse weather effects like rain and fog is achieved using learning-based approaches. However, such methods require large sets of labeled data, which can be extremely expensive and laborious to get. We address this problem by presenting a two-step approach for the detection of condensed vehicle gas exhaust. First, we identify for each vehicle in a scene its emission area and detect gas exhaust if present. Then, isolated clouds are detected by modeling through time the regions of space where gas exhaust is likely to be present. We test our method on real urban data, showing that our approach can reliably detect gas exhaust in different scenarios, making it appealing for offline pre-labeling and online applications such as ghost object detection.
[ { "version": "v1", "created": "Mon, 11 Jul 2022 14:36:27 GMT" } ]
2022-11-04T00:00:00
[ [ "Piroli", "Aldi", "" ], [ "Dallabetta", "Vinzenz", "" ], [ "Walessa", "Marc", "" ], [ "Meissner", "Daniel", "" ], [ "Kopp", "Johannes", "" ], [ "Dietmayer", "Klaus", "" ] ]
new_dataset
0.995557
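A rough numpy sketch of the first step described in the record above: carving out an emission region behind a detected vehicle and flagging it when enough LiDAR points fall inside. The box convention, region depth, and point threshold are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def exhaust_candidate(points, box, depth=1.0, min_points=20):
    """points: (N, 3) LiDAR points. box: (cx, cy, cz, l, w, h, yaw)."""
    cx, cy, cz, l, w, h, yaw = box
    # Rotate points into the vehicle frame (x forward, y left).
    c, s = np.cos(-yaw), np.sin(-yaw)
    local = (points[:, :2] - [cx, cy]) @ np.array([[c, -s], [s, c]]).T
    # Emission region: a slab of given depth directly behind the rear bumper.
    behind = (-l / 2 - depth <= local[:, 0]) & (local[:, 0] <= -l / 2)
    inside = behind & (np.abs(local[:, 1]) <= w / 2)
    low = np.abs(points[:, 2] - cz) <= h / 2
    return int(np.sum(inside & low)) >= min_points
```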
2209.13511
Yanbing Mao
Yanbing Mao, Lui Sha, Huajie Shao, Yuliang Gu, Qixin Wang, Tarek Abdelzaher
Phy-Taylor: Physics-Model-Based Deep Neural Networks
Working Paper
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Purely data-driven deep neural networks (DNNs) applied to physical engineering systems can infer relations that violate physics laws, thus leading to unexpected consequences. To address this challenge, we propose a physics-model-based DNN framework, called Phy-Taylor, that accelerates learning compliant representations with physical knowledge. The Phy-Taylor framework makes two key contributions: it introduces a new architectural Physics-compatible neural network (PhN), and features a novel compliance mechanism we call {\em Physics-guided Neural Network Editing}. The PhN aims to directly capture nonlinearities inspired by physical quantities, such as kinetic energy, potential energy, electrical power, and aerodynamic drag force. To do so, the PhN augments neural network layers with two key components: (i) monomials of Taylor series expansion of nonlinear functions capturing physical knowledge, and (ii) a suppressor for mitigating the influence of noise. The neural-network editing mechanism further modifies network links and activation functions consistently with physical knowledge. As an extension, we also propose a self-correcting Phy-Taylor framework that introduces two additional capabilities: (i) physics-model-based safety relationship learning, and (ii) automatic output correction when violations of safety occur. Through experiments, we show that (by expressing hard-to-learn nonlinearities directly and by constraining dependencies) Phy-Taylor features considerably fewer parameters and a remarkably accelerated training process, while offering enhanced model robustness and accuracy.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 16:30:35 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 04:44:33 GMT" } ]
2022-11-04T00:00:00
[ [ "Mao", "Yanbing", "" ], [ "Sha", "Lui", "" ], [ "Shao", "Huajie", "" ], [ "Gu", "Yuliang", "" ], [ "Wang", "Qixin", "" ], [ "Abdelzaher", "Tarek", "" ] ]
new_dataset
0.995443
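A simplified numpy sketch of the Taylor-monomial augmentation behind the PhN described above: appending all monomials of the inputs up to a given order so a linear layer can express polynomial physics terms (e.g., a v**2 drag term). The paper's layer additionally includes a noise suppressor and physics-guided editing, which are omitted here.

```python
import numpy as np
from itertools import combinations_with_replacement

def taylor_features(x, order=2):
    """x: (batch, d). Return x augmented with all monomials up to `order`."""
    cols = [np.ones((x.shape[0], 1))]  # degree-0 term
    for k in range(1, order + 1):
        for idx in combinations_with_replacement(range(x.shape[1]), k):
            cols.append(np.prod(x[:, idx], axis=1, keepdims=True))
    return np.concatenate(cols, axis=1)

x = np.random.randn(4, 3)
print(taylor_features(x, order=2).shape)  # (4, 1 + 3 + 6) = (4, 10)
```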
2210.02890
Soujanya Poria
Siqi Shen, Deepanway Ghosal, Navonil Majumder, Henry Lim, Rada Mihalcea, Soujanya Poria
Multiview Contextual Commonsense Inference: A New Dataset and Task
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Contextual commonsense inference is the task of generating various types of explanations around the events in a dyadic dialogue, including cause, motivation, emotional reaction, and others. Producing a coherent and non-trivial explanation requires awareness of the dialogue's structure and of how an event is grounded in the context. In this work, we create CICEROv2, a dataset consisting of 8,351 instances from 2,379 dialogues, containing multiple human-written answers for each contextual commonsense inference question, representing a type of explanation on cause, subsequent event, motivation, and emotional reaction. We show that the inferences in CICEROv2 are more semantically diverse than other contextual commonsense inference datasets. To solve the inference task, we propose a collection of pre-training objectives, including concept denoising and utterance sorting to prepare a pre-trained model for the downstream contextual commonsense inference task. Our results show that the proposed pre-training objectives are effective at adapting the pre-trained T5-Large model for the contextual commonsense inference task.
[ { "version": "v1", "created": "Thu, 6 Oct 2022 13:08:41 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 00:33:48 GMT" } ]
2022-11-04T00:00:00
[ [ "Shen", "Siqi", "" ], [ "Ghosal", "Deepanway", "" ], [ "Majumder", "Navonil", "" ], [ "Lim", "Henry", "" ], [ "Mihalcea", "Rada", "" ], [ "Poria", "Soujanya", "" ] ]
new_dataset
0.999891
2210.11703
Kaiyuan Chen
Kaiyuan Chen, Alexander Thomas, Hanming Lu, William Mullen, Jeffery Ichnowski, Rahul Arya, Nivedha Krishnakumar, Ryan Teoh, Willis Wang, Anthony Joseph, John Kubiatowicz
SCL: A Secure Concurrency Layer For Paranoid Stateful Lambdas
updated with acknowledgement; 14 pages, 11 figures, 2 tables
null
null
null
cs.CR cs.DC
http://creativecommons.org/licenses/by/4.0/
We propose a federated Function-as-a-Service (FaaS) execution model that provides secure and stateful execution in both Cloud and Edge environments. The FaaS workers, called Paranoid Stateful Lambdas (PSLs), collaborate with one another to perform large parallel computations. We exploit cryptographically hardened and mobile bundles of data, called DataCapsules, to provide persistent state for our PSLs, whose execution is protected using hardware-secured TEEs. To make PSLs easy to program and performant, we build the familiar Key-Value Store interface on top of DataCapsules in a way that allows amortization of cryptographic operations. We demonstrate PSLs functioning in an edge environment running on a group of Intel NUCs with SGXv2. As described, our Secure Concurrency Layer (SCL) provides eventually-consistent semantics over written values using untrusted and unordered multicast. All SCL communication is encrypted, unforgeable, and private. For durability, updates are recorded in replicated DataCapsules, which are append-only, cryptographically hardened blockchains with confidentiality, integrity, and provenance guarantees. Values for inactive keys are stored in a log-structured merge-tree (LSM) in the same DataCapsule. SCL features a variety of communication optimizations, such as an efficient message passing framework that reduces latency by up to 44x compared to the Intel SGX SDK, and an actor-based cryptographic processing architecture that batches cryptographic operations and increases throughput by 81x.
[ { "version": "v1", "created": "Fri, 21 Oct 2022 03:10:03 GMT" }, { "version": "v2", "created": "Wed, 2 Nov 2022 23:52:02 GMT" } ]
2022-11-04T00:00:00
[ [ "Chen", "Kaiyuan", "" ], [ "Thomas", "Alexander", "" ], [ "Lu", "Hanming", "" ], [ "Mullen", "William", "" ], [ "Ichnowski", "Jeffery", "" ], [ "Arya", "Rahul", "" ], [ "Krishnakumar", "Nivedha", "" ], [ "Teoh", "Ryan", "" ], [ "Wang", "Willis", "" ], [ "Joseph", "Anthony", "" ], [ "Kubiatowicz", "John", "" ] ]
new_dataset
0.998671
2211.01226
Artem Reshetnikov
Artem Reshetnikov, Maria-Cristina Marinescu, Joaquim More Lopez
DEArt: Dataset of European Art
VISART VI. Workshop at the European Conference of Computer Vision (ECCV)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Large datasets that were made publicly available to the research community over the last 20 years have been a key enabling factor for the advances in deep learning algorithms for NLP or computer vision. These datasets are generally pairs of aligned image / manually annotated metadata, where images are photographs of everyday life. Scholarly and historical content, on the other hand, treat subjects that are not necessarily popular to a general audience, they may not always contain a large number of data points, and new data may be difficult or impossible to collect. Some exceptions do exist, for instance, scientific or health data, but this is not the case for cultural heritage (CH). The poor performance of the best models in computer vision - when tested over artworks - coupled with the lack of extensively annotated datasets for CH, and the fact that artwork images depict objects and actions not captured by photographs, indicate that a CH-specific dataset would be highly valuable for this community. We propose DEArt, at this point primarily an object detection and pose classification dataset meant to be a reference for paintings between the XIIth and the XVIIIth centuries. It contains more than 15000 images, about 80% non-iconic, aligned with manual annotations for the bounding boxes identifying all instances of 69 classes as well as 12 possible poses for boxes identifying human-like objects. Of these, more than 50 classes are CH-specific and thus do not appear in other datasets; these reflect imaginary beings, symbolic entities and other categories related to art. Additionally, existing datasets do not include pose annotations. Our results show that object detectors for the cultural heritage domain can achieve a level of precision comparable to state-of-art models for generic images via transfer learning.
[ { "version": "v1", "created": "Wed, 2 Nov 2022 16:05:35 GMT" }, { "version": "v2", "created": "Thu, 3 Nov 2022 07:33:46 GMT" } ]
2022-11-04T00:00:00
[ [ "Reshetnikov", "Artem", "" ], [ "Marinescu", "Maria-Cristina", "" ], [ "Lopez", "Joaquim More", "" ] ]
new_dataset
0.999764
2211.01551
Faisal Tareque Shohan
Faisal Tareque Shohan, Abu Ubaida Akash, Muhammad Ibrahim, Mohammad Shafiul Alam
Crime Prediction using Machine Learning with a Novel Crime Dataset
24 pages
null
null
null
cs.LG cs.AI cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Crime is an unlawful act that carries legal repercussions. Bangladesh has a high crime rate due to poverty, population growth, and many other socio-economic issues. For law enforcement agencies, understanding crime patterns is essential for preventing future criminal activity, and for this purpose these agencies need a structured crime database. This paper introduces a novel crime dataset that contains temporal, geographic, weather, and demographic data about 6574 crime incidents in Bangladesh. We manually gather crime news articles spanning seven years from a daily newspaper archive and extract basic features from the raw text. Using these basic features, we then consult standard service providers of geo-location and weather data to gather such information for the collected crime incidents. Furthermore, we collect demographic information from Bangladesh National Census data. All this information is combined into a standard machine learning dataset. Together, 36 features are engineered for the crime prediction task. Five supervised machine learning classification algorithms are then evaluated on this newly built dataset and satisfactory results are achieved. We also conduct exploratory analysis on various aspects of the dataset. This dataset is expected to serve as the foundation for crime incidence prediction systems for Bangladesh and other countries. The findings of this study will help law enforcement agencies to forecast and contain crime as well as to ensure optimal resource allocation for crime patrol and prevention.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 01:55:52 GMT" } ]
2022-11-04T00:00:00
[ [ "Shohan", "Faisal Tareque", "" ], [ "Akash", "Abu Ubaida", "" ], [ "Ibrahim", "Muhammad", "" ], [ "Alam", "Mohammad Shafiul", "" ] ]
new_dataset
0.986548
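An illustrative scikit-learn baseline of the kind the record above evaluates, combining the four feature families into one pipeline; the file name and column names are hypothetical placeholders, not the dataset's actual schema.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("crime_incidents.csv")  # assumed local copy of the dataset
num = ["temperature", "population_density", "hour"]      # hypothetical columns
cat = ["district", "weather_condition", "weekday"]       # hypothetical columns

model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), num),
        ("cat", OneHotEncoder(handle_unknown="ignore"), cat),
    ])),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
scores = cross_val_score(model, df[num + cat], df["crime_type"], cv=5)
print(scores.mean())
```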
2211.01559
Yifan Gao
Yifan Gao, Danni Zhang and Haoyue Li
The ProfessionAl Go annotation datasEt (PAGE)
Journal version of arXiv:2205.00254, under review
null
null
null
cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The game of Go has been highly under-researched due to the lack of game records and analysis tools. In recent years, the increasing number of professional competitions and the advent of AlphaZero-based algorithms provide an excellent opportunity for analyzing human Go games on a large scale. In this paper, we present the ProfessionAl Go annotation datasEt (PAGE), containing 98,525 games played by 2,007 professional players and spans over 70 years. The dataset includes rich AI analysis results for each move. Moreover, PAGE provides detailed metadata for every player and game after manual cleaning and labeling. Beyond the preliminary analysis of the dataset, we provide sample tasks that benefit from our dataset to demonstrate the potential application of PAGE in multiple research directions. To the best of our knowledge, PAGE is the first dataset with extensive annotation in the game of Go. This work is an extended version of [1] where we perform a more detailed description, analysis, and application.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 02:41:41 GMT" } ]
2022-11-04T00:00:00
[ [ "Gao", "Yifan", "" ], [ "Zhang", "Danni", "" ], [ "Li", "Haoyue", "" ] ]
new_dataset
0.998482
2211.01566
Ramchander Rao Bhaskara
Roshan Thomas Eapen, Ramchander Rao Bhaskara, Manoranjan Majji
NaRPA: Navigation and Rendering Pipeline for Astronautics
49 pages, 22 figures
null
null
null
cs.GR cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents Navigation and Rendering Pipeline for Astronautics (NaRPA) - a novel ray-tracing-based computer graphics engine to model and simulate light transport for space-borne imaging. NaRPA incorporates lighting models with attention to atmospheric and shading effects for the synthesis of space-to-space and ground-to-space virtual observations. In addition to image rendering, the engine also possesses point cloud, depth, and contour map generation capabilities to simulate passive and active vision-based sensors and to facilitate the design, testing, and verification of visual navigation algorithms. The physically based rendering capabilities of NaRPA and the efficacy of the proposed rendering algorithm are demonstrated using applications in representative space-based environments. A key demonstration includes NaRPA as a tool for generating stereo imagery and its application to 3D coordinate estimation using triangulation. In another prominent application, a novel differentiable rendering approach for image-based attitude estimation is proposed to highlight the efficacy of the NaRPA engine in simulating vision-based navigation and guidance operations.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 03:07:21 GMT" } ]
2022-11-04T00:00:00
[ [ "Eapen", "Roshan Thomas", "" ], [ "Bhaskara", "Ramchander Rao", "" ], [ "Majji", "Manoranjan", "" ] ]
new_dataset
0.999449
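The stereo triangulation application mentioned in the record above can be sketched with the classic linear (DLT) method; the projection matrices below form a toy two-camera setup, not NaRPA's configuration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: P1, P2 are 3x4 projections; x1, x2 matching pixels."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null-space vector = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Toy check: identity camera and a camera shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, x1, x2))  # ~ [0.5, 0.2, 4.0]
```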
2211.01585
Ao Zhang
Ao Zhang, Fan Yu, Kaixun Huang, Lei Xie, Longbiao Wang, Eng Siong Chng, Hui Bu, Binbin Zhang, Wei Chen, Xin Xu
The ISCSLP 2022 Intelligent Cockpit Speech Recognition Challenge (ICSRC): Dataset, Tracks, Baseline and Results
Accepted by ISCSLP2022
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
This paper summarizes the outcomes from the ISCSLP 2022 Intelligent Cockpit Speech Recognition Challenge (ICSRC). We first address the necessity of the challenge and then introduce the associated dataset, collected from a new-energy vehicle (NEV) and covering a variety of cockpit acoustic conditions and linguistic contents. We then describe the track arrangement and the baseline system. Specifically, we set up two tracks in terms of allowed model/system size to investigate resource-constrained and -unconstrained setups, targeting vehicle-embedded and cloud ASR systems, respectively. Finally, we summarize the challenge results and provide the major observations from the submitted systems.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 04:45:28 GMT" } ]
2022-11-04T00:00:00
[ [ "Zhang", "Ao", "" ], [ "Yu", "Fan", "" ], [ "Huang", "Kaixun", "" ], [ "Xie", "Lei", "" ], [ "Wang", "Longbiao", "" ], [ "Chng", "Eng Siong", "" ], [ "Bu", "Hui", "" ], [ "Zhang", "Binbin", "" ], [ "Chen", "Wei", "" ], [ "Xu", "Xin", "" ] ]
new_dataset
0.999676
2211.01589
Yuan Hu
Yuan Hu, Zhibin Wang, Zhou Huang, Yu Liu
PolyBuilding: Polygon Transformer for End-to-End Building Extraction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present PolyBuilding, a fully end-to-end polygon Transformer for building extraction. PolyBuilding directly predicts vector representations of buildings from remote sensing images. It builds upon an encoder-decoder transformer architecture and simultaneously outputs building bounding boxes and polygons. Given a set of polygon queries, the model learns the relations among them and encodes context information from the image to predict the final set of building polygons with fixed vertex numbers. Corner classification is performed to distinguish the building corners from the sampled points, which can be used to remove redundant vertices along the building walls during inference. A 1-d non-maximum suppression (NMS) is further applied to reduce vertex redundancy near the building corners. With these refinement operations, polygons with regular shapes and low complexity can be effectively obtained. Comprehensive experiments are conducted on the CrowdAI dataset. Quantitative and qualitative results show that our approach outperforms prior polygonal building extraction methods by a large margin. It also achieves a new state-of-the-art in terms of pixel-level coverage, instance-level precision and recall, and geometry-level properties (including contour regularity and polygon complexity).
[ { "version": "v1", "created": "Thu, 3 Nov 2022 04:53:17 GMT" } ]
2022-11-04T00:00:00
[ [ "Hu", "Yuan", "" ], [ "Wang", "Zhibin", "" ], [ "Huang", "Zhou", "" ], [ "Liu", "Yu", "" ] ]
new_dataset
0.996229
2211.01600
Lily Goli
Lily Goli, Daniel Rebain, Sara Sabour, Animesh Garg, Andrea Tagliasacchi
nerf2nerf: Pairwise Registration of Neural Radiance Fields
null
null
null
null
cs.CV cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
We introduce a technique for pairwise registration of neural fields that extends classical optimization-based local registration (i.e. ICP) to operate on Neural Radiance Fields (NeRF) -- neural 3D scene representations trained from collections of calibrated images. NeRF does not decompose illumination and color, so to make registration invariant to illumination, we introduce the concept of a ''surface field'' -- a field distilled from a pre-trained NeRF model that measures the likelihood of a point being on the surface of an object. We then cast nerf2nerf registration as a robust optimization that iteratively seeks a rigid transformation that aligns the surface fields of the two scenes. We evaluate the effectiveness of our technique by introducing a dataset of pre-trained NeRF scenes -- our synthetic scenes enable quantitative evaluations and comparisons to classical registration techniques, while our real scenes demonstrate the validity of our technique in real-world scenarios. Additional results available at: https://nerf2nerf.github.io
[ { "version": "v1", "created": "Thu, 3 Nov 2022 06:04:59 GMT" } ]
2022-11-04T00:00:00
[ [ "Goli", "Lily", "" ], [ "Rebain", "Daniel", "" ], [ "Sabour", "Sara", "" ], [ "Garg", "Animesh", "" ], [ "Tagliasacchi", "Andrea", "" ] ]
new_dataset
0.989798
2211.01604
Alex Beatson
Tian Qin, Alex Beatson, Deniz Oktay, Nick McGreivy, Ryan P. Adams
Meta-PDE: Learning to Solve PDEs Quickly Without a Mesh
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Partial differential equations (PDEs) are often computationally challenging to solve, and in many settings many related PDEs must be solved either at every timestep or for a variety of candidate boundary conditions, parameters, or geometric domains. We present a meta-learning based method which learns to rapidly solve problems from a distribution of related PDEs. We use meta-learning (MAML and LEAP) to identify initializations for a neural network representation of the PDE solution such that a residual of the PDE can be quickly minimized on a novel task. We apply our meta-solving approach to a nonlinear Poisson's equation, 1D Burgers' equation, and hyperelasticity equations with varying parameters, geometries, and boundary conditions. The resulting Meta-PDE method finds qualitatively accurate solutions to most problems within a few gradient steps; for the nonlinear Poisson and hyperelasticity equations this yields an intermediate-accuracy approximation up to an order of magnitude faster than a baseline finite element analysis (FEA) solver with equivalent accuracy. In comparison to other learned solvers and surrogate models, this meta-learning approach can be trained without supervision from expensive ground-truth data, does not require a mesh, and can even be used when the geometry and topology vary between tasks.
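A hedged sketch of the MAML-style inner loop the abstract describes: starting from meta-learned parameters, a few gradient steps minimize the PDE residual on a new task. The `residual_loss` function and its functional-parameter interface are hypothetical stand-ins, not the paper's code.

```python
# Hypothetical MAML-style adaptation sketch: fine-tune meta-learned parameters
# by minimizing a PDE residual on sampled collocation points of a new task.
import torch

def adapt(model, residual_loss, collocation_pts, inner_lr=1e-3, steps=5):
    # work on a differentiable copy of the meta-learned parameters
    params = [p.clone().requires_grad_(True) for p in model.parameters()]
    for _ in range(steps):
        # residual_loss is assumed to evaluate the network functionally with
        # `params` and return the mean squared PDE residual at the points
        loss = residual_loss(model, params, collocation_pts)
        grads = torch.autograd.grad(loss, params)
        # plain SGD step (first-order view of the inner loop)
        params = [(p - inner_lr * g).requires_grad_(True)
                  for p, g in zip(params, grads)]
    return params
```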
[ { "version": "v1", "created": "Thu, 3 Nov 2022 06:17:52 GMT" } ]
2022-11-04T00:00:00
[ [ "Qin", "Tian", "" ], [ "Beatson", "Alex", "" ], [ "Oktay", "Deniz", "" ], [ "McGreivy", "Nick", "" ], [ "Adams", "Ryan P.", "" ] ]
new_dataset
0.990197
2211.01629
Omkar Ranadive
Omkar Ranadive, Jisu Kim, Serin Lee, Youngseo Cha, Heechan Park, Minkook Cho, Young K. Hwang
Image-based Early Detection System for Wildfires
Published in Tackling Climate Change with Machine Learning workshop, Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wildfires are a disastrous phenomenon which cause damage to land, loss of property, air pollution, and even loss of human life. Due to the warmer and drier conditions created by climate change, more severe and uncontrollable wildfires are expected to occur in the coming years. This could lead to a global wildfire crisis and have dire consequences for our planet. Hence, it has become imperative to use technology to help prevent the spread of wildfires. One way to prevent the spread of wildfires before they become too large is to perform early detection, i.e., detecting the smoke before the actual fire starts. In this paper, we present our Wildfire Detection and Alert System, which uses machine learning to detect wildfire smoke with a high degree of accuracy and can send immediate alerts to users. Our technology is currently being used in the USA to monitor data coming in from hundreds of cameras daily. We show that our system has a high true detection rate and a low false detection rate. Our performance evaluation study also shows that on average our system detects wildfire smoke faster than an actual person.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 07:38:30 GMT" } ]
2022-11-04T00:00:00
[ [ "Ranadive", "Omkar", "" ], [ "Kim", "Jisu", "" ], [ "Lee", "Serin", "" ], [ "Cha", "Youngseo", "" ], [ "Park", "Heechan", "" ], [ "Cho", "Minkook", "" ], [ "Hwang", "Young K.", "" ] ]
new_dataset
0.992941
2211.01644
Kai Chen
Kai Chen, Stephen James, Congying Sui, Yun-Hui Liu, Pieter Abbeel, Qi Dou
StereoPose: Category-Level 6D Transparent Object Pose Estimation from Stereo Images via Back-View NOCS
7 pages, 6 figures, Project homepage: https://appsrv.cse.cuhk.edu.hk/~kaichen/stereopose.html
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing methods for category-level pose estimation rely on object point clouds. However, when considering transparent objects, depth cameras are usually not able to capture meaningful data, resulting in point clouds with severe artifacts. Without a high-quality point cloud, existing methods are not applicable to challenging transparent objects. To tackle this problem, we present StereoPose, a novel stereo image framework for category-level object pose estimation, ideally suited for transparent objects. For a robust estimation from pure stereo images, we develop a pipeline that decouples category-level pose estimation into object size estimation, initial pose estimation, and pose refinement. StereoPose then estimates object pose based on a representation in the normalized object coordinate space (NOCS). To address the issue of image content aliasing, we further define a back-view NOCS map for the transparent object. The back-view NOCS aims to reduce the network learning ambiguity caused by content aliasing and to leverage informative cues on the back of the transparent object for more accurate pose estimation. To further improve the performance of the stereo framework, StereoPose is equipped with a parallax attention module for stereo feature fusion and an epipolar loss for improving the stereo-view consistency of network predictions. Extensive experiments on the public TOD dataset demonstrate the superiority of the proposed StereoPose framework for category-level 6D transparent object pose estimation.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 08:36:09 GMT" } ]
2022-11-04T00:00:00
[ [ "Chen", "Kai", "" ], [ "James", "Stephen", "" ], [ "Sui", "Congying", "" ], [ "Liu", "Yun-Hui", "" ], [ "Abbeel", "Pieter", "" ], [ "Dou", "Qi", "" ] ]
new_dataset
0.969713
2211.01705
Jihyun Mun
Jihyun Mun, Sunhee Kim, Myeong Ju Kim, Jiwon Ryu, Sejoong Kim, Minhwa Chung
A speech corpus for chronic kidney disease
null
null
null
null
cs.CL
http://creativecommons.org/publicdomain/zero/1.0/
In this study, we present a speech corpus of patients with chronic kidney disease (CKD) that will be used for research on pathological voice analysis, automatic illness identification, and severity prediction. This paper introduces the steps involved in creating this corpus, including the choice of speech-related parameters and speech lists as well as the recording technique. The speakers in this corpus, 289 CKD patients with varying degrees of severity who were categorized based on estimated glomerular filtration rate (eGFR), delivered sustained vowels, sentence, and paragraph stimuli. This study compared and analyzed the voice characteristics of CKD patients with those of the control group; the results revealed differences in voice quality, phoneme-level pronunciation, prosody, glottal source, and aerodynamic parameters.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 10:57:48 GMT" } ]
2022-11-04T00:00:00
[ [ "Mun", "Jihyun", "" ], [ "Kim", "Sunhee", "" ], [ "Kim", "Myeong Ju", "" ], [ "Ryu", "Jiwon", "" ], [ "Kim", "Sejoong", "" ], [ "Chung", "Minhwa", "" ] ]
new_dataset
0.970115
2211.01730
Mehmet Emre Ozfatura
Emre Ozfatura and Yulin Shao and Amin Ghazanfari and Alberto Perotti and Branislav Popovic and Deniz Gunduz
Feedback is Good, Active Feedback is Better: Block Attention Active Feedback Codes
null
null
null
null
cs.IT cs.AI cs.LG eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural network (DNN)-assisted channel coding designs, such as low-complexity neural decoders for existing codes or end-to-end neural-network-based auto-encoder designs, have been gaining interest recently due to their improved performance and flexibility, particularly for communication scenarios in which high-performing structured code designs do not exist. Communication in the presence of feedback is one such scenario, and practical code design for feedback channels has remained an open challenge in coding theory for many decades. Recently, DNN-based designs have shown impressive results in exploiting feedback. In particular, generalized block attention feedback (GBAF) codes, which utilize the popular transformer architecture, achieved significant improvement in terms of block error rate (BLER) performance. However, previous works have focused mainly on passive feedback, where the transmitter observes a noisy version of the signal at the receiver. In this work, we show that GBAF codes can also be used for channels with active feedback. We implement a pair of transformer architectures, at the transmitter and the receiver, which interact with each other sequentially, and achieve a new state-of-the-art BLER performance, especially in the low-SNR regime.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 11:44:06 GMT" } ]
2022-11-04T00:00:00
[ [ "Ozfatura", "Emre", "" ], [ "Shao", "Yulin", "" ], [ "Ghazanfari", "Amin", "" ], [ "Perotti", "Alberto", "" ], [ "Popovic", "Branislav", "" ], [ "Gunduz", "Deniz", "" ] ]
new_dataset
0.99281
2211.01812
Hadi Hajieghrary
Sevag Tafnakaji and Hadi Hajieghrary and Quentin Teixeira and Yasemin Bekiroglu
Benchmarking local motion planners for navigation of mobile manipulators
Accepted to be presented at 2023 IEEE/SICE International Symposium on System Integration
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
There are various trajectory planners for mobile manipulators. It is often challenging to compare their performance under similar circumstances due to differences in hardware, dissimilarity of tasks and objectives, as well as uncertainties in measurements and operating environments. In this paper, we propose a simulation framework to evaluate the performance of the local trajectory planners to generate smooth, and dynamically and kinematically feasible trajectories for mobile manipulators in the same environment. We focus on local planners as they are key components that provide smooth trajectories while carrying a load, react to dynamic obstacles, and avoid collisions. We evaluate two prominent local trajectory planners, Dynamic-Window Approach (DWA) and Time Elastic Band (TEB) using the metrics that we introduce. Moreover, our software solution is applicable to any other local planners used in the Robot Operating System (ROS) framework, without additional programming effort.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 13:45:55 GMT" } ]
2022-11-04T00:00:00
[ [ "Tafnakaji", "Sevag", "" ], [ "Hajieghrary", "Hadi", "" ], [ "Teixeira", "Quentin", "" ], [ "Bekiroglu", "Yasemin", "" ] ]
new_dataset
0.998955
2211.01829
Seulbae Kim
Seulbae Kim and Major Liu and Junghwan "John" Rhee and Yuseok Jeon and Yonghwi Kwon and Chung Hwan Kim
DriveFuzz: Discovering Autonomous Driving Bugs through Driving Quality-Guided Fuzzing
This is the full version of the paper published at ACM CCS 2022. This version includes the appendices (pages 14 and 15)
null
10.1145/3548606.3560558
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Autonomous driving has become real; semi-autonomous driving vehicles in an affordable price range are already on the streets, and major automotive vendors are actively developing full self-driving systems to deploy them in this decade. Before rolling the products out to the end-users, it is critical to test and ensure the safety of the autonomous driving systems, consisting of multiple layers intertwined in a complicated way. However, while safety-critical bugs may exist in any layer and even across layers, relatively little attention has been given to testing the entire driving system across all the layers. Prior work mainly focuses on white-box testing of individual layers and preventing attacks on each layer. In this paper, we aim at holistic testing of autonomous driving systems that have a whole stack of layers integrated in their entirety. Instead of looking into the individual layers, we focus on the vehicle states that the system continuously changes in the driving environment. This allows us to design DriveFuzz, a new systematic fuzzing framework that can uncover potential vulnerabilities regardless of their locations. DriveFuzz automatically generates and mutates driving scenarios based on diverse factors leveraging a high-fidelity driving simulator. We build novel driving test oracles based on the real-world traffic rules to detect safety-critical misbehaviors, and guide the fuzzer towards such misbehaviors through driving quality metrics referring to the physical states of the vehicle. DriveFuzz has discovered 30 new bugs in various layers of two autonomous driving systems (Autoware and CARLA Behavior Agent) and three additional bugs in the CARLA simulator. We further analyze the impact of these bugs and how an adversary may exploit them as security vulnerabilities to cause critical accidents in the real world.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 19:31:55 GMT" } ]
2022-11-04T00:00:00
[ [ "Kim", "Seulbae", "" ], [ "Liu", "Major", "" ], [ "Rhee", "Junghwan \"John\"", "" ], [ "Jeon", "Yuseok", "" ], [ "Kwon", "Yonghwi", "" ], [ "Kim", "Chung Hwan", "" ] ]
new_dataset
0.98524
2211.01839
Filip Szatkowski
Filip Szatkowski, Karol J. Piczak, Przemys{\l}aw Spurek, Jacek Tabor, Tomasz Trzci\'nski
HyperSound: Generating Implicit Neural Representations of Audio Signals with Hypernetworks
null
null
null
null
cs.SD cs.AI cs.LG cs.NE eess.AS
http://creativecommons.org/licenses/by/4.0/
Implicit neural representations (INRs) are a rapidly growing research field, which provides alternative ways to represent multimedia signals. Recent applications of INRs include image super-resolution, compression of high-dimensional signals, and 3D rendering. However, these solutions usually focus on visual data, and adapting them to the audio domain is not trivial. Moreover, they require a separately trained model for every data sample. To address this limitation, we propose HyperSound, a meta-learning method leveraging hypernetworks to produce INRs for audio signals unseen at training time. We show that our approach can reconstruct sound waves with quality comparable to other state-of-the-art models.
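The hypernetwork idea can be sketched as follows: an audio embedding is mapped to the flattened weights of a tiny sine-activated INR that maps time to amplitude. All layer sizes and the embedding source are illustrative assumptions, not HyperSound's actual architecture.

```python
# Hypothetical hypernetwork-to-INR sketch: an embedding is decoded into the
# weights of a two-layer, sine-activated network u(t) -> amplitude.
import torch
import torch.nn as nn

in_dim, hidden, out_dim = 1, 64, 1
n_weights = in_dim * hidden + hidden + hidden * out_dim + out_dim

hypernet = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                         nn.Linear(256, n_weights))

def render_inr(embedding, t):
    """embedding: (128,) audio code; t: (N, 1) time samples."""
    w = hypernet(embedding)
    i = 0
    w1 = w[i:i + in_dim * hidden].view(hidden, in_dim); i += in_dim * hidden
    b1 = w[i:i + hidden]; i += hidden
    w2 = w[i:i + hidden * out_dim].view(out_dim, hidden); i += hidden * out_dim
    b2 = w[i:i + out_dim]
    h = torch.sin(t @ w1.T + b1)  # sine activation, as in SIREN-style INRs
    return h @ w2.T + b2          # predicted waveform amplitude at each t
```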
[ { "version": "v1", "created": "Thu, 3 Nov 2022 14:20:32 GMT" } ]
2022-11-04T00:00:00
[ [ "Szatkowski", "Filip", "" ], [ "Piczak", "Karol J.", "" ], [ "Spurek", "Przemysław", "" ], [ "Tabor", "Jacek", "" ], [ "Trzciński", "Tomasz", "" ] ]
new_dataset
0.98776
2211.01859
Ramtin Gharleghi
Ramtin Gharleghi, Dona Adikari, Katy Ellenberger, Mark Webster, Chris Ellis, Arcot Sowmya, Sze-Yuan Ooi, Susann Beier
Computed tomography coronary angiogram images, annotations and associated data of normal and diseased arteries
10 pages, 3 figures. Submitted to the journal Scientific Data. For associated challenge, see https://asoca.grand-challenge.org/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computed Tomography Coronary Angiography (CTCA) is a non-invasive method to evaluate coronary artery anatomy and disease. CTCA is ideal for geometry reconstruction to create virtual models of coronary arteries. To our knowledge there is no public dataset that includes centrelines and segmentation of the full coronary tree. We provide anonymized CTCA images, voxel-wise annotations and associated data in the form of centrelines, calcification scores and meshes of the coronary lumen in 20 normal and 20 diseased cases. Images were obtained along with patient information with informed, written consent as part of Coronary Atlas (https://www.coronaryatlas.org/). Cases were classified as normal (zero calcium score with no signs of stenosis) or diseased (confirmed coronary artery disease). Manual voxel-wise segmentations by three experts were combined using majority voting to generate the final annotations. Provided data can be used for a variety of research purposes, such as 3D printing patient-specific models, development and validation of segmentation algorithms, education and training of medical personnel and in-silico analyses such as testing of medical devices.
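The majority-voting step used to merge the three expert segmentations admits a one-line voxel-wise rule; the toy sketch below (with hypothetical 2x2 masks standing in for 3D volumes) keeps a voxel when at least two annotators marked it.

```python
# Voxel-wise majority vote over three binary expert masks.
import numpy as np

def majority_vote(mask_a, mask_b, mask_c):
    votes = (mask_a.astype(np.uint8) + mask_b.astype(np.uint8)
             + mask_c.astype(np.uint8))
    return votes >= 2  # keep voxels labeled by at least two of three experts

a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 1], [0, 1]])
c = np.array([[0, 0], [1, 1]])
print(majority_vote(a, b, c))  # [[ True False] [ True  True]]
```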
[ { "version": "v1", "created": "Thu, 3 Nov 2022 14:50:43 GMT" } ]
2022-11-04T00:00:00
[ [ "Gharleghi", "Ramtin", "" ], [ "Adikari", "Dona", "" ], [ "Ellenberger", "Katy", "" ], [ "Webster", "Mark", "" ], [ "Ellis", "Chris", "" ], [ "Sowmya", "Arcot", "" ], [ "Ooi", "Sze-Yuan", "" ], [ "Beier", "Susann", "" ] ]
new_dataset
0.997954
2211.01917
Joel Brogan
David Cornett III and Joel Brogan and Nell Barber and Deniz Aykac and Seth Baird and Nick Burchfield and Carl Dukes and Andrew Duncan and Regina Ferrell and Jim Goddard and Gavin Jager and Matt Larson and Bart Murphy and Christi Johnson and Ian Shelley and Nisha Srinivas and Brandon Stockwell and Leanne Thompson and Matt Yohe and Robert Zhang and Scott Dolvin and Hector J. Santos-Villalobos and David S. Bolme
Expanding Accurate Person Recognition to New Altitudes and Ranges: The BRIAR Dataset
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications require lower resolutions, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.
[ { "version": "v1", "created": "Thu, 3 Nov 2022 15:51:39 GMT" } ]
2022-11-04T00:00:00
[ [ "Cornett", "David", "III" ], [ "Brogan", "Joel", "" ], [ "Barber", "Nell", "" ], [ "Aykac", "Deniz", "" ], [ "Baird", "Seth", "" ], [ "Burchfield", "Nick", "" ], [ "Dukes", "Carl", "" ], [ "Duncan", "Andrew", "" ], [ "Ferrell", "Regina", "" ], [ "Goddard", "Jim", "" ], [ "Jager", "Gavin", "" ], [ "Larson", "Matt", "" ], [ "Murphy", "Bart", "" ], [ "Johnson", "Christi", "" ], [ "Shelley", "Ian", "" ], [ "Srinivas", "Nisha", "" ], [ "Stockwell", "Brandon", "" ], [ "Thompson", "Leanne", "" ], [ "Yohe", "Matt", "" ], [ "Zhang", "Robert", "" ], [ "Dolvin", "Scott", "" ], [ "Santos-Villalobos", "Hector J.", "" ], [ "Bolme", "David S.", "" ] ]
new_dataset
0.967951
2211.01941
Wei Sun
Rushmian Annoy Wadud, Wei Sun
DyOb-SLAM : Dynamic Object Tracking SLAM System
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Simultaneous Localization & Mapping (SLAM) is the process of building a mutual relationship between the localization of a subject and the mapping of its surrounding environment. With the help of different sensors, various types of SLAM systems have been developed to address this problem. A limitation of many SLAM systems is the lack of consideration of dynamic objects when mapping the environment. We propose Dynamic Object Tracking SLAM (DyOb-SLAM), a visual SLAM system that can localize and map the surrounding dynamic objects in the environment as well as track the dynamic objects in each frame. With the help of a neural network and a dense optical flow algorithm, dynamic and static objects in an environment can be differentiated. DyOb-SLAM creates two separate maps for static and dynamic content: a sparse map for the static features and a global trajectory map for the dynamic content. The result is a frame-to-frame, real-time dynamic object tracking system. By computing the poses of the dynamic objects and the camera, DyOb-SLAM can estimate the speed of the dynamic objects over time. The performance of DyOb-SLAM is evaluated by comparing it with a similar visual SLAM system, VDO-SLAM, measuring the camera and object pose errors as well as the object speed error.
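The speed estimate mentioned above follows directly from consecutive object poses and the frame interval; the tiny sketch below uses only the translation component and assumes a fixed, known time step (both simplifying assumptions).

```python
# Speed from two consecutive object positions and the frame interval.
import numpy as np

def object_speed(t_prev, t_curr, dt):
    """t_prev, t_curr: 3D object positions in the world frame; dt: seconds."""
    return np.linalg.norm(np.asarray(t_curr) - np.asarray(t_prev)) / dt

print(object_speed([0.0, 0.0, 0.0], [0.5, 0.0, 0.0], 0.05))  # 10.0 m/s
```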
[ { "version": "v1", "created": "Thu, 3 Nov 2022 16:28:19 GMT" } ]
2022-11-04T00:00:00
[ [ "Wadud", "Rushmian Annoy", "" ], [ "Sun", "Wei", "" ] ]
new_dataset
0.956951
1906.04376
Xuewen Yang
Xuewen Yang, Xin Wang
Recognizing License Plates in Real-Time
License Plate Detection and Recognition, Computer Vision, Supervised Learning
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
License plate detection and recognition (LPDR) is of growing importance for enabling intelligent transportation and ensuring the security and safety of cities. However, LPDR faces big challenges in practical environments. License plates can have extremely diverse sizes, fonts, and colors, and plate images are usually of poor quality due to skewed capturing angles, uneven lighting, occlusion, and blurring. Applications such as surveillance often require fast processing. To enable real-time and accurate license plate recognition, in this work we propose a set of techniques: 1) a contour reconstruction method along with edge detection to quickly detect candidate plates; 2) a simple zero-one-alternation scheme to effectively remove the fake top and bottom borders around plates and facilitate more accurate segmentation of characters on plates; 3) a set of techniques to augment the training data, incorporate SIFT features into the CNN network, and exploit transfer learning to obtain the initial parameters for more effective training; and 4) a two-phase verification procedure to determine the correct plate at low cost, with statistical filtering in the plate detection stage to quickly remove unwanted candidates, and reuse of the accurate character recognition (CR) results to perform further plate verification without additional processing. We implement a complete LPDR system based on our algorithms. The experimental results demonstrate that our system can accurately recognize license plates in real time. Additionally, it works robustly under various levels of illumination and noise, and in the presence of car movement. Compared to peer schemes, our system is not only among the most accurate ones but is also the fastest, and can be easily applied to other scenarios.
[ { "version": "v1", "created": "Tue, 11 Jun 2019 03:45:49 GMT" }, { "version": "v2", "created": "Sun, 5 Apr 2020 15:44:44 GMT" }, { "version": "v3", "created": "Tue, 14 Sep 2021 05:16:37 GMT" }, { "version": "v4", "created": "Thu, 19 May 2022 23:33:06 GMT" }, { "version": "v5", "created": "Mon, 13 Jun 2022 05:56:06 GMT" }, { "version": "v6", "created": "Wed, 2 Nov 2022 16:04:38 GMT" } ]
2022-11-03T00:00:00
[ [ "Yang", "Xuewen", "" ], [ "Wang", "Xin", "" ] ]
new_dataset
0.999586
1910.06452
Gabriele Dragotto
Margarida Carvalho, Gabriele Dragotto, Felipe Feijoo, Andrea Lodi, Sriram Sankaranarayanan
When Nash Meets Stackelberg
null
null
null
null
cs.GT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article introduces a class of $Nash$ games among $Stackelberg$ players ($NASPs$), namely, a class of simultaneous non-cooperative games where the players solve sequential Stackelberg games. Specifically, each player solves a Stackelberg game where a leader optimizes a (parametrized) linear objective function subject to linear constraints while its followers solve convex quadratic problems subject to the standard optimistic assumption. Although we prove that deciding if a $NASP$ instance admits a Nash equilibrium is generally a $\Sigma^2_p$-hard decision problem, we devise two exact and computationally-efficient algorithms to compute and select Nash equilibria or certify that no equilibrium exists. We apply $NASPs$ to model the hierarchical interactions of international energy markets where climate-change aware regulators oversee the operations of profit-driven energy producers. By combining real-world data with our models, we find that Nash equilibria provide informative, and often counterintuitive, managerial insights for market regulators.
[ { "version": "v1", "created": "Mon, 14 Oct 2019 22:32:13 GMT" }, { "version": "v2", "created": "Sun, 22 Dec 2019 10:23:53 GMT" }, { "version": "v3", "created": "Tue, 21 Apr 2020 16:12:53 GMT" }, { "version": "v4", "created": "Thu, 18 Jun 2020 14:34:43 GMT" }, { "version": "v5", "created": "Tue, 7 Sep 2021 22:13:26 GMT" }, { "version": "v6", "created": "Wed, 2 Nov 2022 16:22:00 GMT" } ]
2022-11-03T00:00:00
[ [ "Carvalho", "Margarida", "" ], [ "Dragotto", "Gabriele", "" ], [ "Feijoo", "Felipe", "" ], [ "Lodi", "Andrea", "" ], [ "Sankaranarayanan", "Sriram", "" ] ]
new_dataset
0.984416
2002.01924
Remi Chou
Remi A. Chou
Explicit Wiretap Channel Codes via Source Coding, Universal Hashing, and Distribution Approximation, When the Channels' Statistics are Uncertain
16 pages, two-column, 3 figures, accepted to IEEE Transactions on Information Forensics and Security
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider wiretap channels with uncertainty on the eavesdropper channel under (i) noisy blockwise type II, (ii) compound, or (iii) arbitrarily varying models. We present explicit wiretap codes that can handle these models in a unified manner and only rely on three primitives, namely source coding with side information, universal hashing, and distribution approximation. Our explicit wiretap codes achieve the best known single-letter achievable rates, previously obtained non-constructively, for the models considered. Our results are obtained for strong secrecy, do not require a pre-shared secret between the legitimate users, and do not require any symmetry properties on the channel. An extension of our results to compound main channels is also derived via new capacity-achieving polar coding schemes for compound settings.
[ { "version": "v1", "created": "Wed, 5 Feb 2020 18:59:09 GMT" }, { "version": "v2", "created": "Sun, 13 Dec 2020 01:05:59 GMT" }, { "version": "v3", "created": "Tue, 1 Nov 2022 18:52:32 GMT" } ]
2022-11-03T00:00:00
[ [ "Chou", "Remi A.", "" ] ]
new_dataset
0.992945
2105.01208
Sanja Rukavina
Sara Ban, Sanja Rukavina
Type IV-II codes over Z4 constructed from generalized bent functions
16 pages
Australas. J. Combin., 84 (3) (2022), 341-356
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A Type IV-II Z4-code is a self-dual code over Z4 with the property that all Euclidean weights are divisible by eight and all codewords have even Hamming weight. In this paper we use generalized bent functions for a construction of self-orthogonal codes over Z4 of length $2^m$, for $m$ odd, $m \geq 3$, and prove that for $m \geq 5$ those codes can be extended to Type IV-II Z4-codes. From that family of Type IV-II Z4-codes, we obtain a family of self-dual Type II binary codes by using Gray map. We also consider the weight distributions of the obtained codes and the structure of the supports of the minimum weight codewords.
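The Gray map that carries Z4-codes to binary codes, as used above, is the standard symbol-wise map 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10, which doubles the length; a minimal sketch:

```python
# Standard Gray map from Z4 symbols to binary pairs.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(codeword):
    """Map a Z4 codeword (symbols in {0,1,2,3}) to its binary Gray image."""
    bits = []
    for symbol in codeword:
        bits.extend(GRAY[symbol])
    return tuple(bits)

# A length-4 Z4 word becomes a length-8 binary word:
print(gray_image((0, 1, 2, 3)))  # (0, 0, 0, 1, 1, 1, 1, 0)
```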
[ { "version": "v1", "created": "Mon, 3 May 2021 22:56:08 GMT" } ]
2022-11-03T00:00:00
[ [ "Ban", "Sara", "" ], [ "Rukavina", "Sanja", "" ] ]
new_dataset
0.99214
2106.01601
Jiao Sun
Jiao Sun and Nanyun Peng
Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia
ACL 2021
null
null
null
cs.CL cs.AI cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human activities can be seen as sequences of events, which are crucial to understanding societies. Disproportionate event distributions for different demographic groups can manifest and amplify social stereotypes, and potentially jeopardize the ability of members of some groups to pursue certain goals. In this paper, we present the first event-centric study of gender biases in a Wikipedia corpus. To facilitate the study, we curate a corpus of career and personal life descriptions with demographic information consisting of 7,854 fragments from 10,412 celebrities. Then we detect events with a state-of-the-art event detection model, calibrate the results using strategically generated templates, and extract events that have asymmetric associations with genders. Our study discovers that Wikipedia pages tend to intermingle personal life events with professional events for females but not for males, which calls for the awareness of the Wikipedia community to formalize guidelines and train the editors to mind the implicit biases that contributors carry. Our work also lays the foundation for future work on quantifying and discovering event biases at the corpus level.
[ { "version": "v1", "created": "Thu, 3 Jun 2021 05:22:16 GMT" } ]
2022-11-03T00:00:00
[ [ "Sun", "Jiao", "" ], [ "Peng", "Nanyun", "" ] ]
new_dataset
0.990737
2110.15221
Ivan Carvalho
Matthew Treinish and Ivan Carvalho and Georgios Tsilimigkounakis and Nahum S\'a
rustworkx: A High-Performance Graph Library for Python
null
null
10.21105/joss.03968
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
In rustworkx, we provide a high-performance, flexible graph library for Python. rustworkx is inspired by NetworkX but addresses many performance concerns of the latter. rustworkx is written in Rust and is particularly suited for performance-sensitive applications that use graph representations.
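A short usage sketch of the library's NetworkX-style interface; the calls below (PyGraph, add_nodes_from, add_edges_from, dijkstra_shortest_path_lengths) are part of rustworkx's public API, while the toy graph itself is made up.

```python
import rustworkx as rx

graph = rx.PyGraph()
nodes = graph.add_nodes_from(["a", "b", "c"])  # returns node indices
# edges are (source_index, target_index, weight) triples
graph.add_edges_from([(nodes[0], nodes[1], 1.0),
                      (nodes[1], nodes[2], 2.0),
                      (nodes[0], nodes[2], 4.0)])

# Dijkstra from "a"; the edge_cost_fn extracts the numeric weight payload.
lengths = rx.dijkstra_shortest_path_lengths(graph, nodes[0],
                                            edge_cost_fn=lambda w: w)
print(lengths)  # shortest-path length from "a" to every reachable node
```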
[ { "version": "v1", "created": "Thu, 28 Oct 2021 15:34:21 GMT" }, { "version": "v2", "created": "Sat, 26 Feb 2022 16:39:47 GMT" }, { "version": "v3", "created": "Tue, 2 Aug 2022 00:40:57 GMT" }, { "version": "v4", "created": "Wed, 2 Nov 2022 00:29:03 GMT" } ]
2022-11-03T00:00:00
[ [ "Treinish", "Matthew", "" ], [ "Carvalho", "Ivan", "" ], [ "Tsilimigkounakis", "Georgios", "" ], [ "Sá", "Nahum", "" ] ]
new_dataset
0.999704
2111.02276
Diancheng Li
Diancheng Li, Dongliang Fan, Renjie Zhu, Qiaozhi Lei, Yuxuan Liao, Xin Yang, Yang Pan, Zheng Wang, Yang Wu, Sicong Liu, Hongqiang Wang
Origami-inspired soft twisting actuator
9 figures. Soft Robotics (2022)
null
10.1089/soro.2021.0185
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Soft actuators have shown great advantages in compliance and morphology matched for manipulation of delicate objects and inspection in a confined space. There is an unmet need for a soft actuator that can provide torsional motion to e.g. enlarge working space and increase degrees of freedom. Towards this goal, we present origami-inspired soft pneumatic actuators (OSPAs) made from silicone. The prototype can output a rotation of more than one revolution (up to 435{\deg}), more significant than its counterparts. Its rotation ratio (=rotation angle/ aspect ratio) is more than 136{\deg}, about twice the largest one in other literature. We describe the design and fabrication method, build the analytical model and simulation model, and analyze and optimize the parameters. Finally, we demonstrate the potentially extensive utility of the OSPAs through their integration into a gripper capable of simultaneously grasping and lifting fragile or flat objects, a versatile robot arm capable of picking and placing items at the right angle with the twisting actuators, and a soft snake robot capable of changing attitude and directions by torsion of the twisting actuators.
[ { "version": "v1", "created": "Wed, 3 Nov 2021 15:13:27 GMT" }, { "version": "v2", "created": "Wed, 2 Nov 2022 15:11:18 GMT" } ]
2022-11-03T00:00:00
[ [ "Li", "Diancheng", "" ], [ "Fan", "Dongliang", "" ], [ "Zhu", "Renjie", "" ], [ "Lei", "Qiaozhi", "" ], [ "Liao", "Yuxuan", "" ], [ "Yang", "Xin", "" ], [ "Pan", "Yang", "" ], [ "Wang", "Zheng", "" ], [ "Wu", "Yang", "" ], [ "Liu", "Sicong", "" ], [ "Wang", "Hongqiang", "" ] ]
new_dataset
0.971612
2203.07852
DeLesley Hutchins
DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Dyer, Behnam Neyshabur
Block-Recurrent Transformers
Update to NeurIPS camera-ready version
null
null
null
cs.LG cs.AI cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the Block-Recurrent Transformer, which applies a transformer layer in a recurrent fashion along a sequence and has linear complexity with respect to sequence length. Our recurrent cell operates on blocks of tokens rather than single tokens during training, and leverages parallel computation within a block in order to make efficient use of accelerator hardware. The cell itself is strikingly simple. It is merely a transformer layer: it uses self-attention and cross-attention to efficiently compute a recurrent function over a large set of state vectors and tokens. Our design was inspired in part by LSTM cells, and it uses LSTM-style gates, but it scales the typical LSTM cell up by several orders of magnitude. Our implementation of recurrence has the same cost in both computation time and parameter count as a conventional transformer layer, but offers dramatically improved perplexity in language modeling tasks over very long sequences. Our model outperforms a long-range Transformer XL baseline by a wide margin, while running twice as fast. We demonstrate its effectiveness on PG19 (books), arXiv papers, and GitHub source code. Our code has been released as open source.
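A heavily simplified, hedged sketch of the gated recurrent update described above: cross-attention reads a token block into the state vectors, and an LSTM-style sigmoid gate mixes the attended update with the previous state. The module shapes and single-gate design are illustrative, not the paper's exact cell.

```python
# Illustrative gated state update: cross-attention plus an LSTM-style gate.
import torch
import torch.nn as nn

class GatedStateUpdate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # dim must be divisible by num_heads
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, state, tokens):
        # state: (B, S, D) recurrent state vectors; tokens: (B, T, D) block
        update, _ = self.attn(state, tokens, tokens)  # state attends to block
        g = torch.sigmoid(self.gate(torch.cat([state, update], dim=-1)))
        return g * update + (1 - g) * state           # gated, LSTM-like mix
```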
[ { "version": "v1", "created": "Fri, 11 Mar 2022 23:44:33 GMT" }, { "version": "v2", "created": "Sat, 17 Sep 2022 01:31:49 GMT" }, { "version": "v3", "created": "Wed, 2 Nov 2022 00:35:56 GMT" } ]
2022-11-03T00:00:00
[ [ "Hutchins", "DeLesley", "" ], [ "Schlag", "Imanol", "" ], [ "Wu", "Yuhuai", "" ], [ "Dyer", "Ethan", "" ], [ "Neyshabur", "Behnam", "" ] ]
new_dataset
0.999198
2204.02550
Neekon Vafa
Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan
Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures
Fixed bugs in Lemma 9 and Section 6
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
We show direct and conceptually simple reductions between the classical learning with errors (LWE) problem and its continuous analog, CLWE (Bruna, Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful machinery of LWE-based cryptography to the applications of CLWE. For example, we obtain the hardness of CLWE under the classical worst-case hardness of the gap shortest vector problem. Previously, this was known only under quantum worst-case hardness of lattice problems. More broadly, with our reductions between the two problems, any future developments to LWE will also apply to CLWE and its downstream applications. As a concrete application, we show an improved hardness result for density estimation for mixtures of Gaussians. In this computational problem, given sample access to a mixture of Gaussians, the goal is to output a function that estimates the density function of the mixture. Under the (plausible and widely believed) exponential hardness of the classical LWE problem, we show that Gaussian mixture density estimation in $\mathbb{R}^n$ with roughly $\log n$ Gaussian components given $\mathsf{poly}(n)$ samples requires time quasi-polynomial in $n$. Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for $n^{\epsilon}$ Gaussians for any constant $\epsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions. Our key technical tool is a reduction from classical LWE to LWE with $k$-sparse secrets where the multiplicative increase in the noise is only $O(\sqrt{k})$, independent of the ambient dimension $n$.
[ { "version": "v1", "created": "Wed, 6 Apr 2022 03:03:39 GMT" }, { "version": "v2", "created": "Tue, 7 Jun 2022 18:45:32 GMT" }, { "version": "v3", "created": "Wed, 2 Nov 2022 05:06:35 GMT" } ]
2022-11-03T00:00:00
[ [ "Gupte", "Aparna", "" ], [ "Vafa", "Neekon", "" ], [ "Vaikuntanathan", "Vinod", "" ] ]
new_dataset
0.97441
2204.11641
Maryam Motallebighomi
Maryam Motallebighomi, Harshad Sathaye, Mridula Singh, Aanjhan Ranganathan
Cryptography Is Not Enough: Relay Attacks on Authenticated GNSS Signals
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Civilian-GNSS is vulnerable to signal spoofing attacks, and countermeasures based on cryptographic authentication are being proposed to protect against these attacks. Both Galileo and GPS are currently testing broadcast authentication techniques based on the delayed key disclosure to validate the integrity of navigation messages. These authentication mechanisms have proven secure against record now and replay later attacks, as navigation messages become invalid after keys are released. This work analyzes the security guarantees of cryptographically protected GNSS signals and shows the possibility of spoofing a receiver to an arbitrary location without breaking any cryptographic operation. In contrast to prior work, we demonstrate the ability of an attacker to receive signals close to the victim receiver and generate spoofing signals for a different target location without modifying the navigation message contents. Our strategy exploits the essential common reception and transmission time method used to estimate pseudorange in GNSS receivers, thereby rendering any cryptographic authentication useless. We evaluate our attack on a commercial receiver (ublox M9N) and a software-defined GNSS receiver (GNSS-SDR) using a combination of open-source tools, commercial GNSS signal generators, and software-defined radio hardware platforms. Our results show that it is possible to spoof a victim receiver to locations around 4000 km away from the true location without requiring any high-speed communication networks or modifying the message contents. Through this work, we further highlight the fundamental limitations in securing a broadcast signaling-based localization system even if all communications are cryptographically protected.
[ { "version": "v1", "created": "Mon, 25 Apr 2022 13:19:57 GMT" }, { "version": "v2", "created": "Wed, 28 Sep 2022 19:38:09 GMT" }, { "version": "v3", "created": "Wed, 2 Nov 2022 01:42:30 GMT" } ]
2022-11-03T00:00:00
[ [ "Motallebighomi", "Maryam", "" ], [ "Sathaye", "Harshad", "" ], [ "Singh", "Mridula", "" ], [ "Ranganathan", "Aanjhan", "" ] ]
new_dataset
0.979475
2206.00006
Baoyu Jing
Baoyu Jing, Yuchen Yan, Yada Zhu and Hanghang Tong
COIN: Co-Cluster Infomax for Bipartite Graphs
NeurIPS 2022 GLFrontiers Workshop
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Bipartite graphs are powerful data structures to model interactions between two types of nodes, and they have been used in a variety of applications, such as recommender systems, information retrieval, and drug discovery. A fundamental challenge for bipartite graphs is how to learn informative node embeddings. Despite the success of recent self-supervised learning methods on bipartite graphs, their objectives discriminate instance-wise positive and negative node pairs, which could contain cluster-level errors. In this paper, we introduce a novel co-cluster infomax (COIN) framework, which captures cluster-level information by maximizing the mutual information of co-clusters. Unlike previous infomax methods, which estimate mutual information with neural networks, COIN can calculate mutual information directly. Besides, COIN is an end-to-end co-clustering method which can be trained jointly with other objective functions and optimized via back-propagation. Furthermore, we also provide a theoretical analysis of COIN. We theoretically prove that COIN is able to effectively increase the mutual information of node embeddings and that COIN is upper-bounded by the prior distributions of the nodes. We extensively evaluate the proposed COIN framework on various benchmark datasets and tasks to demonstrate its effectiveness.
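The co-cluster mutual information COIN maximizes has a closed form once a joint probability matrix over row and column clusters is available; a small sketch follows, with a made-up 2x2 joint distribution.

```python
# Mutual information of a discrete joint distribution over co-clusters.
import numpy as np

def mutual_information(joint):
    """joint[i, j] = p(row-cluster i, column-cluster j); entries sum to 1."""
    p_row = joint.sum(axis=1, keepdims=True)
    p_col = joint.sum(axis=0, keepdims=True)
    mask = joint > 0  # skip zero cells, where p log p -> 0
    return float(np.sum(joint[mask]
                        * np.log(joint[mask] / (p_row @ p_col)[mask])))

joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(mutual_information(joint))  # ~0.19 nats: the clusters are dependent
```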
[ { "version": "v1", "created": "Tue, 31 May 2022 10:20:07 GMT" }, { "version": "v2", "created": "Wed, 2 Nov 2022 16:38:37 GMT" } ]
2022-11-03T00:00:00
[ [ "Jing", "Baoyu", "" ], [ "Yan", "Yuchen", "" ], [ "Zhu", "Yada", "" ], [ "Tong", "Hanghang", "" ] ]
new_dataset
0.977094
2206.00208
Kun Song
Kun Song, Heyang Xue, Xinsheng Wang, Jian Cong, Yongmao Zhang, Lei Xie, Bing Yang, Xiong Zhang, Dan Su
AdaVITS: Tiny VITS for Low Computing Resource Speaker Adaptation
Accepted by ISCSLP 2022
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speaker adaptation in text-to-speech synthesis (TTS) is to finetune a pre-trained TTS model to adapt to new target speakers with limited data. While much effort has been directed towards this task, little work has addressed low computational resource scenarios, due to the challenges raised by the requirements of lightweight models and low computational complexity. In this paper, a tiny VITS-based TTS model, named AdaVITS, for low-computing-resource speaker adaptation is proposed. To effectively reduce the parameters and computational complexity of VITS, an iSTFT-based wave construction decoder is proposed to replace the upsampling-based decoder, which is resource-consuming in the original VITS. Besides, NanoFlow is introduced to share the density estimate across flow blocks to reduce the parameters of the prior encoder. Furthermore, to reduce the computational complexity of the textual encoder, scaled-dot attention is replaced with linear attention. To deal with the instability caused by the simplified model, instead of using the original text encoder, a phonetic posteriorgram (PPG) is utilized as the linguistic feature via a text-to-PPG module, which is then used as input for the encoder. Experiments show that AdaVITS can generate stable and natural speech in speaker adaptation with 8.97M model parameters and 0.72 GFlops computational complexity.
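Linear attention of the kind mentioned above can be sketched as follows, in the style of Katharopoulos et al. with an elu(x)+1 feature map keeping scores positive; whether AdaVITS uses this exact variant is an assumption.

```python
# Non-causal linear attention sketch; shapes are (batch, length, dim).
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    q = F.elu(q) + 1.0  # positive feature map
    k = F.elu(k) + 1.0
    # O(N d^2) instead of O(N^2 d): summarize keys/values once,
    # then query the summary.
    kv = torch.einsum("bnd,bne->bde", k, v)
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)
```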
[ { "version": "v1", "created": "Wed, 1 Jun 2022 03:09:18 GMT" }, { "version": "v2", "created": "Mon, 31 Oct 2022 14:17:47 GMT" }, { "version": "v3", "created": "Wed, 2 Nov 2022 13:04:35 GMT" } ]
2022-11-03T00:00:00
[ [ "Song", "Kun", "" ], [ "Xue", "Heyang", "" ], [ "Wang", "Xinsheng", "" ], [ "Cong", "Jian", "" ], [ "Zhang", "Yongmao", "" ], [ "Xie", "Lei", "" ], [ "Yang", "Bing", "" ], [ "Zhang", "Xiong", "" ], [ "Su", "Dan", "" ] ]
new_dataset
0.988637
2206.04186
Hanyang Jiang
Hanyang Jiang, Yuehaw Khoo, Haizhao Yang
Reinforced Inverse Scattering
null
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inverse wave scattering aims at determining the properties of an object using data on how the object scatters incoming waves. In order to collect information, sensors are put in different locations to send and receive waves from each other. The choice of sensor positions and incident wave frequencies determines the reconstruction quality of the scatterer properties. This paper introduces reinforcement learning to develop precision imaging that adaptively decides sensor positions and wave frequencies for different scatterers, thus obtaining a significant improvement in reconstruction quality with limited imaging resources. Extensive numerical results are provided to demonstrate the superiority of the proposed method over existing methods.
[ { "version": "v1", "created": "Wed, 8 Jun 2022 22:56:09 GMT" }, { "version": "v2", "created": "Wed, 2 Nov 2022 15:10:16 GMT" } ]
2022-11-03T00:00:00
[ [ "Jiang", "Hanyang", "" ], [ "Khoo", "Yuehaw", "" ], [ "Yang", "Haizhao", "" ] ]
new_dataset
0.952727
2208.00627
Yilan Zhang
Yilan Zhang, Fengying Xie, Xuedong Song, Hangning Zhou, Yiguang Yang, Haopeng Zhang, Jie Liu
A Rotation Meanout Network with Invariance for Dermoscopy Image Classification and Retrieval
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The computer-aided diagnosis (CAD) system can provide a reference basis for the clinical diagnosis of skin diseases. Convolutional neural networks (CNNs) can extract not only visual elements such as colors and shapes but also semantic features. As such, they have brought great improvements to many tasks on dermoscopy images. Dermoscopy imaging has no principal orientation, which means that datasets contain a large number of rotated skin lesions. However, CNNs lack rotation invariance, which is bound to affect their robustness against rotations. To tackle this issue, we propose a rotation meanout (RM) network to extract rotation-invariant features from dermoscopy images. In RM, each set of rotated feature maps corresponds to a set of outputs of weight-sharing convolutions, and these are fused using a meanout strategy to obtain the final feature maps. Through theoretical derivation, the proposed RM network is shown to be rotation-equivariant and can extract rotation-invariant features when followed by a global average pooling (GAP) operation. The extracted rotation-invariant features can better represent the original data in classification and retrieval tasks for dermoscopy images. RM is a general operation which does not change the network structure or add any parameters, and can be flexibly embedded in any part of a CNN. Extensive experiments are conducted on a dermoscopy image dataset. The results show our method outperforms other anti-rotation methods and achieves great improvements in dermoscopy image classification and retrieval tasks, indicating the potential of rotation invariance in the field of dermoscopy images.
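A minimal sketch of the rotation-meanout idea, restricted to 90-degree rotations: one weight-shared convolution is applied to rotated copies of the input, each output is rotated back, the copies are averaged (the "meanout"), and GAP then yields a rotation-invariant vector. Layer sizes are arbitrary, and the restriction to right-angle rotations is a simplifying assumption.

```python
# Rotation-meanout sketch over the four 90-degree rotations.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # one shared-weight conv

def rotation_meanout(x):
    """x: (B, 3, H, W) image batch -> (B, 16) rotation-invariant features."""
    outs = []
    for k in range(4):  # rotate by 0, 90, 180, 270 degrees
        y = conv(torch.rot90(x, k, dims=(2, 3)))
        outs.append(torch.rot90(y, -k, dims=(2, 3)))  # undo the rotation
    feat = torch.stack(outs).mean(dim=0)  # meanout over rotated copies
    return feat.mean(dim=(2, 3))          # GAP -> invariant to 90-deg rotations
```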
[ { "version": "v1", "created": "Mon, 1 Aug 2022 06:15:52 GMT" }, { "version": "v2", "created": "Wed, 2 Nov 2022 09:06:47 GMT" } ]
2022-11-03T00:00:00
[ [ "Zhang", "Yilan", "" ], [ "Xie", "Fengying", "" ], [ "Song", "Xuedong", "" ], [ "Zhou", "Hangning", "" ], [ "Yang", "Yiguang", "" ], [ "Zhang", "Haopeng", "" ], [ "Liu", "Jie", "" ] ]
new_dataset
0.983747
2209.02577
Yixue Zhao
Yixue Zhao, Saghar Talebipour, Kesina Baral, Hyojae Park, Leon Yee, Safwat Ali Khan, Yuriy Brun, Nenad Medvidovic, Kevin Moran
Avgust: Automating Usage-Based Test Generation from Videos of App Executions
null
ESEC/FSE 2022
10.1145/3540250.3549134
null
cs.SE cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Writing and maintaining UI tests for mobile apps is a time-consuming and tedious task. While decades of research have produced automated approaches for UI test generation, these approaches typically focus on testing for crashes or maximizing code coverage. By contrast, recent research has shown that developers prefer usage-based tests, which center around specific uses of app features, to help support activities such as regression testing. Very few existing techniques support the generation of such tests, as doing so requires automating the difficult task of understanding the semantics of UI screens and user inputs. In this paper, we introduce Avgust, which automates key steps of generating usage-based tests. Avgust uses neural models for image understanding to process video recordings of app uses to synthesize an app-agnostic state-machine encoding of those uses. Then, Avgust uses this encoding to synthesize test cases for a new target app. We evaluate Avgust on 374 videos of common uses of 18 popular apps and show that 69% of the tests Avgust generates successfully execute the desired usage, and that Avgust's classifiers outperform the state of the art.
[ { "version": "v1", "created": "Tue, 6 Sep 2022 15:36:03 GMT" }, { "version": "v2", "created": "Mon, 10 Oct 2022 23:03:31 GMT" }, { "version": "v3", "created": "Mon, 31 Oct 2022 15:56:55 GMT" }, { "version": "v4", "created": "Tue, 1 Nov 2022 18:52:39 GMT" } ]
2022-11-03T00:00:00
[ [ "Zhao", "Yixue", "" ], [ "Talebipour", "Saghar", "" ], [ "Baral", "Kesina", "" ], [ "Park", "Hyojae", "" ], [ "Yee", "Leon", "" ], [ "Khan", "Safwat Ali", "" ], [ "Brun", "Yuriy", "" ], [ "Medvidovic", "Nenad", "" ], [ "Moran", "Kevin", "" ] ]
new_dataset
0.990692
2209.03625
Devarsh Patel
Devarsh Patel, Sarthak Patel, Megh Patel
Application of image-to-image translation in improving pedestrian detection
This is a working draft and not indented for publication
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The lack of effective target regions makes it difficult to perform several visual tasks in low-intensity light, including pedestrian recognition and image-to-image translation. In this situation, the accumulation of high-quality information through the combined use of infrared and visible images makes it possible to detect pedestrians even in low light. In this study, we apply advanced deep learning models, pix2pixGAN and YOLOv7, to the LLVIP dataset, which contains visible-infrared image pairs for low-light vision. This dataset contains 33672 images, most of which were captured in dark scenes and are tightly synchronized in time and location.
[ { "version": "v1", "created": "Thu, 8 Sep 2022 08:07:01 GMT" }, { "version": "v2", "created": "Wed, 2 Nov 2022 12:22:44 GMT" } ]
2022-11-03T00:00:00
[ [ "Patel", "Devarsh", "" ], [ "Patel", "Sarthak", "" ], [ "Patel", "Megh", "" ] ]
new_dataset
0.996732
2209.14156
Jaemin Cho
Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal
TVLT: Textless Vision-Language Transformer
NeurIPS 2022 Oral (21 pages; the first three authors contributed equally)
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text. Our code and checkpoints are available at: https://github.com/zinengtang/TVLT
[ { "version": "v1", "created": "Wed, 28 Sep 2022 15:08:03 GMT" }, { "version": "v2", "created": "Wed, 2 Nov 2022 16:48:00 GMT" } ]
2022-11-03T00:00:00
[ [ "Tang", "Zineng", "" ], [ "Cho", "Jaemin", "" ], [ "Nie", "Yixin", "" ], [ "Bansal", "Mohit", "" ] ]
new_dataset
0.999381
2210.17349
Kun Song
Kun Song, Jian Cong, Xinsheng Wang, Yongmao Zhang, Lei Xie, Ning Jiang, Haiying Wu
Robust MelGAN: A robust universal neural vocoder for high-fidelity TTS
Accepted by ISCSLP 2022
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the current two-stage neural text-to-speech (TTS) paradigm, it is ideal to have a universal neural vocoder which, once trained, is robust to imperfect mel-spectrograms predicted by the acoustic model. To this end, we propose the Robust MelGAN vocoder, solving the original multi-band MelGAN's metallic sound problem and increasing its generalization ability. Specifically, we introduce a fine-grained network dropout strategy to the generator. With a specifically designed over-smoothing handler that separates the speech signal into periodic and aperiodic components, we only apply network dropout to the aperiodic components, which alleviates the metallic sound and maintains good speaker similarity. To further improve generalization ability, we introduce several data augmentation methods to augment fake data in the discriminator, including harmonic shift, harmonic noise, and phase noise. Experiments show that Robust MelGAN can be used as a universal vocoder, significantly improving sound quality in TTS systems built on various types of data.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 14:24:10 GMT" }, { "version": "v2", "created": "Tue, 1 Nov 2022 03:30:50 GMT" }, { "version": "v3", "created": "Wed, 2 Nov 2022 13:05:46 GMT" } ]
2022-11-03T00:00:00
[ [ "Song", "Kun", "" ], [ "Cong", "Jian", "" ], [ "Wang", "Xinsheng", "" ], [ "Zhang", "Yongmao", "" ], [ "Xie", "Lei", "" ], [ "Jiang", "Ning", "" ], [ "Wu", "Haiying", "" ] ]
new_dataset
0.967835
2211.00718
Andrew J
Jomin Jose, Andrew J, Kumudha Raimond, Shweta Vincent
SleepyWheels: An Ensemble Model for Drowsiness Detection leading to Accident Prevention
20 pages
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Around 40 percent of accidents related to driving on highways in India occur because the driver fell asleep behind the steering wheel. Much research is ongoing on detecting driver drowsiness, but existing approaches suffer from model complexity and cost. In this paper, SleepyWheels, a method that uses a lightweight neural network in conjunction with facial landmark identification, is proposed to identify driver fatigue in real time. SleepyWheels is successful in a wide range of test scenarios, including the absence of facial characteristics when the eyes or mouth are covered, drivers' varying skin tones, different camera placements, and varying observational angles. It works well when deployed in real-time systems. SleepyWheels utilizes EfficientNetV2 and a facial landmark detector for drowsiness detection. The model is trained on a specially created dataset on driver sleepiness and achieves an accuracy of 97 percent. The model is lightweight, hence it can be further deployed as a mobile application for various platforms.
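For illustration, one common landmark-based drowsiness cue is the eye aspect ratio (EAR); whether SleepyWheels uses this particular cue is an assumption, so treat the sketch below as a generic example of turning facial landmarks into a fatigue signal.

```python
# Generic eye-aspect-ratio sketch from six eye landmarks (an assumed cue,
# not necessarily the one SleepyWheels computes).
import numpy as np

def eye_aspect_ratio(p):
    """p: (6, 2) array of eye landmarks; a persistently small EAR over
    consecutive frames suggests eye closure."""
    v1 = np.linalg.norm(p[1] - p[5])   # first vertical distance
    v2 = np.linalg.norm(p[2] - p[4])   # second vertical distance
    h = np.linalg.norm(p[0] - p[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)
```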
[ { "version": "v1", "created": "Tue, 1 Nov 2022 19:36:47 GMT" } ]
2022-11-03T00:00:00
[ [ "Jose", "Jomin", "" ], [ "J", "Andrew", "" ], [ "Raimond", "Kumudha", "" ], [ "Vincent", "Shweta", "" ] ]
new_dataset
0.99401
2211.00746
Jyoti Kini
Jyoti Kini, Ajmal Mian, Mubarak Shah
3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D Point Clouds
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose a method for joint detection and tracking of multiple objects in 3D point clouds, a task conventionally treated as a two-step process comprising object detection followed by data association. Our method embeds both steps into a single end-to-end trainable network, eliminating the dependency on external object detectors. Our model exploits temporal information, employing multiple frames to detect objects and track them in a single network, thereby making it a utilitarian formulation for real-world scenarios. Computing an affinity matrix from feature similarity across consecutive point cloud scans forms an integral part of visual tracking. We propose an attention-based refinement module to refine the affinity matrix by suppressing erroneous correspondences. The module is designed to capture the global context of the affinity matrix by employing self-attention within each affinity matrix and cross-attention across pairs of affinity matrices. Unlike competing approaches, our network does not require complex post-processing algorithms and processes raw LiDAR frames to directly output tracking results. We demonstrate the effectiveness of our method on three tracking benchmarks: JRDB, Waymo, and KITTI. Experimental evaluations indicate the ability of our model to generalize well across datasets.
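A hedged sketch of the affinity computation described above, using plain cosine similarity between detection features of two consecutive scans; the attention-based refinement itself is omitted, and the greedy association shown is only one simple way to read the matrix.

```python
# Cosine-similarity affinity between detections in consecutive scans.
import numpy as np

def affinity_matrix(feats_t, feats_t1):
    """feats_t: (N, D) features at time t; feats_t1: (M, D) at time t+1."""
    a = feats_t / np.linalg.norm(feats_t, axis=1, keepdims=True)
    b = feats_t1 / np.linalg.norm(feats_t1, axis=1, keepdims=True)
    return a @ b.T  # entry (i, j): similarity of object i to candidate j

# Greedy association: each tracked object picks its best-matching candidate.
aff = affinity_matrix(np.random.rand(4, 32), np.random.rand(5, 32))
matches = aff.argmax(axis=1)
```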
[ { "version": "v1", "created": "Tue, 1 Nov 2022 20:59:38 GMT" } ]
2022-11-03T00:00:00
[ [ "Kini", "Jyoti", "" ], [ "Mian", "Ajmal", "" ], [ "Shah", "Mubarak", "" ] ]
new_dataset
0.995852
2211.00752
Artem Lykov
Artem Lykov, Aleksey Fedoseev, and Dzmitry Tsetserukou
DeltaFinger: a 3-DoF Wearable Haptic Display Enabling High-Fidelity Force Vector Presentation at a User Finger
13 pages, 8 figures, accepted version to AsiaHaptics 2022
null
null
null
cs.HC cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents DeltaFinger, a novel haptic device designed to deliver the force of interaction with virtual objects by guiding the user's finger with a wearable delta mechanism. The developed interface is capable of delivering a 3D force vector to the fingertip of the user's index finger, allowing complex rendering of a virtual reality (VR) environment. The device can produce kinesthetic feedback of up to 1.8 N in the vertical projection and 0.9 N in the horizontal projection without restricting the motion freedom of the remaining fingers. The experimental results showed sufficient precision in the perception of the force vector with DeltaFinger (mean force vector error of 0.6 rad). The proposed device can potentially be applied to VR communications, medicine, and navigation for people with vision problems.
[ { "version": "v1", "created": "Tue, 1 Nov 2022 21:15:49 GMT" } ]
2022-11-03T00:00:00
[ [ "Lykov", "Artem", "" ], [ "Fedoseev", "Aleksey", "" ], [ "Tsetserukou", "Dzmitry", "" ] ]
new_dataset
0.999255