column          type           min    max
id              stringlengths  9      10
submitter       stringlengths  2      52
authors         stringlengths  4      6.51k
title           stringlengths  4      246
comments        stringlengths  1      523
journal-ref     stringlengths  4      345
doi             stringlengths  11     120
report-no       stringlengths  2      243
categories      stringlengths  5      98
license         stringclasses  9 values
abstract        stringlengths  33     3.33k
versions        list
update_date     timestamp[s]
authors_parsed  list
prediction      stringclasses  1 value
probability     float64        0.95   1
2211.13327
Anna Feldman
Patrick Lee and Anna Feldman and Jing Peng
A Report on the Euphemisms Detection Shared Task
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper presents the Shared Task on Euphemism Detection for the Third Workshop on Figurative Language Processing (FigLang 2022), held in conjunction with EMNLP 2022. Participants were invited to investigate the euphemism detection task: given input text, identify whether it contains a euphemism. The input data is a corpus of sentences containing potentially euphemistic terms (PETs) collected from the GloWbE corpus (Davies and Fuchs, 2015); each sentence is human-annotated as containing either a euphemistic or literal usage of a PET. In this paper, we present the results and analyze the common themes, methods and findings of the participating teams.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 22:06:35 GMT" }, { "version": "v2", "created": "Sat, 3 Dec 2022 17:26:25 GMT" } ]
2022-12-06T00:00:00
[ [ "Lee", "Patrick", "" ], [ "Feldman", "Anna", "" ], [ "Peng", "Jing", "" ] ]
new_dataset
0.953125
2211.14206
Belkacem Imine
Belkacem Imine, Naima Hadj-Said, Adda Ali-Pacha
McEliece cryptosystem based on Plotkin construction with QC-MDPC and QC-LDPC codes
11 pages
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a new variant of the McEliece cryptosystem using two families of quasi-cyclic codes: quasi-cyclic low-density parity-check (QC-LDPC) codes and quasi-cyclic moderate-density parity-check (QC-MDPC) codes. Due to the low-weight codewords in the dual of LDPC codes, this family of codes is vulnerable to dual code attacks, making it unsuitable for use with the McEliece cryptosystem. However, this is not the case in our proposal, which uses the (U|U+V) construction to concatenate LDPC codes with MDPC codes. We demonstrate that our proposed cryptosystem can withstand dual code and generic decoding attacks, and that the public key can be reduced by leveraging the quasi-cyclic property and the Plotkin construction.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 16:13:43 GMT" }, { "version": "v2", "created": "Mon, 28 Nov 2022 18:08:58 GMT" }, { "version": "v3", "created": "Fri, 2 Dec 2022 19:08:32 GMT" } ]
2022-12-06T00:00:00
[ [ "Imine", "Belkacem", "" ], [ "Hadj-Said", "Naima", "" ], [ "Ali-Pacha", "Adda", "" ] ]
new_dataset
0.999737
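To make the (U|U+V) Plotkin construction mentioned in the abstract above concrete, here is a minimal sketch of how two binary generator matrices combine under it; the toy codes U and V are illustrative assumptions, not the QC-LDPC/QC-MDPC codes of the paper.

```python
# Sketch (assumed toy codes, not the paper's): Plotkin (U|U+V) construction
# over GF(2). Codewords have the form (u, u+v) for u in U, v in V.
import numpy as np

def plotkin_generator(G_u: np.ndarray, G_v: np.ndarray) -> np.ndarray:
    """Generator of the (U|U+V) code: [[G_u, G_u], [0, G_v]] (mod 2)."""
    k_u, n = G_u.shape
    k_v, n_v = G_v.shape
    assert n == n_v, "U and V must have the same block length"
    top = np.hstack([G_u, G_u])                               # rows encode (u, u)
    bottom = np.hstack([np.zeros((k_v, n), dtype=int), G_v])  # rows encode (0, v)
    return np.vstack([top, bottom]) % 2

# Toy example: U = [3,1] repetition code, V = [3,2] single-parity-check code.
G_u = np.array([[1, 1, 1]])
G_v = np.array([[1, 0, 1],
                [0, 1, 1]])
G = plotkin_generator(G_u, G_v)
msg = np.array([1, 0, 1])        # k_u + k_v = 3 message bits
print(msg @ G % 2)               # a length-6 codeword of the combined code
```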
2211.16922
Jianwei Li
Jianwei Li, Zitong Yu, Jingang Shi
Learning Motion-Robust Remote Photoplethysmography through Arbitrary Resolution Videos
Accepted by AAAI 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Remote photoplethysmography (rPPG) enables non-contact heart rate (HR) estimation from facial videos, which offers significant convenience compared with traditional contact-based measurements. In real-world long-term health monitoring scenarios, the distance of the participants and their head movements usually vary over time, resulting in inaccurate rPPG measurement due to the varying face resolution and complex motion artifacts. Different from previous rPPG models designed for a constant distance between camera and participants, in this paper we propose two plug-and-play blocks (i.e., a physiological signal feature extraction block (PFE) and a temporal face alignment block (TFA)) to alleviate the degradation caused by changing distance and head motion. On one side, guided by representative-area information, PFE adaptively encodes arbitrary-resolution facial frames into fixed-resolution facial structure features. On the other side, leveraging the estimated optical flow, TFA is able to counteract the rPPG signal confusion caused by head movement and thus benefit motion-robust rPPG signal recovery. Besides, we also train the model with a cross-resolution constraint using a two-stream dual-resolution framework, which further helps PFE learn resolution-robust facial rPPG features. Extensive experiments on three benchmark datasets (UBFC-rPPG, COHFACE and PURE) demonstrate the superior performance of the proposed method. One highlight is that with PFE and TFA, off-the-shelf spatio-temporal rPPG models can predict more robust rPPG signals under both varying face resolution and severe head movement scenarios. The code is available at https://github.com/LJW-GIT/Arbitrary_Resolution_rPPG.
[ { "version": "v1", "created": "Wed, 30 Nov 2022 11:50:08 GMT" }, { "version": "v2", "created": "Thu, 1 Dec 2022 03:01:44 GMT" }, { "version": "v3", "created": "Fri, 2 Dec 2022 19:40:26 GMT" } ]
2022-12-06T00:00:00
[ [ "Li", "Jianwei", "" ], [ "Yu", "Zitong", "" ], [ "Shi", "Jingang", "" ] ]
new_dataset
0.996537
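The PFE block in the abstract above encodes arbitrary-resolution facial frames into fixed-resolution structure features. As a minimal stand-in for that fixed-output idea (not the authors' block), adaptive pooling in PyTorch produces the same spatial size for any input resolution; the layer sizes are assumptions.

```python
# Hedged sketch: a stand-in for "arbitrary resolution in, fixed size out".
# The conv/pool sizes are illustrative assumptions, not the PFE design.
import torch
import torch.nn as nn

encode = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((32, 32)),   # fixed 32x32 output for any input size
)

for size in [(64, 64), (96, 128), (50, 70)]:
    frame = torch.rand(1, 3, *size)   # one RGB face frame at this resolution
    print(encode(frame).shape)        # always torch.Size([1, 16, 32, 32])
```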
2212.01387
Maria Maistro
Mirko Biasini, Vittorio Carmignani, Nicola Ferro, Panagiotis Filianos, Maria Maistro, Giorgio Maria di Nunzio
FullBrain: a Social E-learning Platform
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
We present FullBrain, a social e-learning platform where students share and track their knowledge. FullBrain users can post notes, ask questions and share learning resources in dedicated course and concept spaces. We detail two components of FullBrain: a SIR system equipped with query autocomplete and query autosuggestion, and a Leaderboard module to improve user experience. We analyzed users' day-to-day usage of the SIR system, measuring a time to complete a request below 0.11 s, matching or exceeding our UX targets. Moreover, we performed stress tests which pave the way for more detailed analysis. Through a preliminary user study and log data analysis, we observe that 97% of the users' activity is directed to the top 4 positions in the leaderboard.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 13:58:54 GMT" } ]
2022-12-06T00:00:00
[ [ "Biasini", "Mirko", "" ], [ "Carmignani", "Vittorio", "" ], [ "Ferro", "Nicola", "" ], [ "Filianos", "Panagiotis", "" ], [ "Maistro", "Maria", "" ], [ "di Nunzio", "Giorgio Maria", "" ] ]
new_dataset
0.998332
2212.01424
Orr Zohar Mr
Orr Zohar, Kuan-Chieh Wang, Serena Yeung
PROB: Probabilistic Objectness for Open World Object Detection
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Open World Object Detection (OWOD) is a new and challenging computer vision task that bridges the gap between classic object detection (OD) benchmarks and object detection in the real world. In addition to detecting and classifying seen/labeled objects, OWOD algorithms are expected to detect novel/unknown objects - which can be classified and incrementally learned. In standard OD, object proposals not overlapping with a labeled object are automatically classified as background. Therefore, simply applying OD methods to OWOD fails as unknown objects would be predicted as background. The challenge of detecting unknown objects stems from the lack of supervision in distinguishing unknown objects and background object proposals. Previous OWOD methods have attempted to overcome this issue by generating supervision using pseudo-labeling - however, unknown object detection has remained low. Probabilistic/generative models may provide a solution for this challenge. Herein, we introduce a novel probabilistic framework for objectness estimation, where we alternate between probability distribution estimation and objectness likelihood maximization of known objects in the embedded feature space - ultimately allowing us to estimate the objectness probability of different proposals. The resulting Probabilistic Objectness transformer-based open-world detector, PROB, integrates our framework into traditional object detection models, adapting them for the open-world setting. Comprehensive experiments on OWOD benchmarks show that PROB outperforms all existing OWOD methods in both unknown object detection ($\sim 2\times$ unknown recall) and known object detection ($\sim 10\%$ mAP). Our code will be made available upon publication at https://github.com/orrzohar/PROB.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 20:04:24 GMT" } ]
2022-12-06T00:00:00
[ [ "Zohar", "Orr", "" ], [ "Wang", "Kuan-Chieh", "" ], [ "Yeung", "Serena", "" ] ]
new_dataset
0.968535
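The objectness estimation sketched in the abstract above alternates distribution estimation with likelihood maximization in an embedding space. A heavily simplified illustration of that idea (not PROB's implementation) fits a Gaussian to known-object embeddings and ranks proposals by Mahalanobis distance; the dimensions and synthetic data are assumptions.

```python
# Simplified sketch (not PROB): score proposals by how well they fit a
# Gaussian estimated from the embeddings of known (labeled) objects.
import numpy as np

def fit_gaussian(embeddings: np.ndarray):
    """Mean and (regularized) covariance of known-object embeddings."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
    return mu, cov

def objectness_score(x: np.ndarray, mu: np.ndarray, cov: np.ndarray) -> float:
    """Negative squared Mahalanobis distance: higher = more object-like."""
    diff = x - mu
    return float(-diff @ np.linalg.solve(cov, diff))

rng = np.random.default_rng(0)
known = rng.normal(loc=1.0, scale=0.5, size=(500, 8))  # known-object embeddings
mu, cov = fit_gaussian(known)
obj_like = rng.normal(loc=1.0, scale=0.5, size=8)      # near the known cluster
bg_like = rng.normal(loc=-3.0, scale=0.5, size=8)      # far from it
print(objectness_score(obj_like, mu, cov) > objectness_score(bg_like, mu, cov))  # True
```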
2212.01444
Omur Arslan
\"Om\"ur Arslan
Time Governors for Safe Path-Following Control
11 pages, 6 figures, submitted to a journal publication
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safe and smooth robot motion around obstacles is an essential skill for autonomous robots, especially when operating around people and other robots. Conventionally, due to real-time operation requirements and onboard computation limitations, many robot motion planning and control methods follow a two-step approach: first construct a (e.g., piecewise linear) collision-free reference path for a simplified robot model, and then execute the reference plan via path-following control for a more accurate and complex robot model. A challenge of such a decoupled robot motion planning and control method for highly dynamic robotic systems is ensuring the safety of path-following control as well as the successful completion of the reference plan. In this paper, we introduce a novel dynamical systems approach for online closed-loop time parametrization, called $\textit{a time governor}$, of a reference path for provably correct and safe path-following control based on feedback motion prediction, where the safety of robot motion under path-following control is continuously monitored using predicted robot motion. After introducing the general framework of time governors for safe path following, we present an example application for fully actuated high-order robot dynamics using proportional-and-higher-order-derivative (PhD) path-following control, whose feedback motion prediction is performed by Lyapunov ellipsoids and Vandermonde simplexes. In numerical simulations, we investigate the role of reference position and velocity feedback, as well as motion prediction, on path-following performance and robot motion.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 20:54:52 GMT" } ]
2022-12-06T00:00:00
[ [ "Arslan", "Ömür", "" ] ]
new_dataset
0.957598
2212.01540
Hossein Rastgoftar
Aeris El Asslouj and Hossein Rastgoftar
Quadcopter Tracking Using Euler-Angle-Free Flatness-Based Control
8 pages
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quadcopter trajectory tracking control has been extensively investigated and implemented in the past. Available controls mostly use the Euler angle standards to describe the quadcopter's rotational kinematics and dynamics. As a result, the same rotation can be translated into different roll, pitch, and yaw angles because there are multiple Euler angle standards for characterization of rotation in a 3-dimensional motion space. Additionally, it is computationally expensive to convert a quadcopter's orientation to the associated roll, pitch, and yaw angles, which may make it difficult to track quick and aggressive trajectories. To address these issues, this paper develops a flatness-based trajectory tracking control without using Euler angles. We assess and test the proposed control's performance in the Gazebo simulation environment and contrast its functionality with the existing Mellinger controller, which has been widely adopted by the robotics and unmanned aerial system (UAS) communities.
[ { "version": "v1", "created": "Sat, 3 Dec 2022 05:20:20 GMT" } ]
2022-12-06T00:00:00
[ [ "Asslouj", "Aeris El", "" ], [ "Rastgoftar", "Hossein", "" ] ]
new_dataset
0.998147
2212.01638
Jintao Lin
Jintao Lin, Zhaoyang Liu, Wenhai Wang, Wayne Wu, Limin Wang
VLG: General Video Recognition with Web Textual Knowledge
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video recognition in an open and dynamic world is quite challenging, as we need to handle different settings such as close-set, long-tail, few-shot and open-set. By leveraging semantic knowledge from noisy text descriptions crawled from the Internet, we focus on the general video recognition (GVR) problem of solving different recognition tasks within a unified framework. The core contribution of this paper is twofold. First, we build a comprehensive video recognition benchmark of Kinetics-GVR, including four sub-task datasets to cover the mentioned settings. To facilitate the research of GVR, we propose to utilize external textual knowledge from the Internet and provide multi-source text descriptions for all action classes. Second, inspired by the flexibility of language representation, we present a unified visual-linguistic framework (VLG) to solve the problem of GVR by an effective two-stage training paradigm. Our VLG is first pre-trained on video and language datasets to learn a shared feature space, and then devises a flexible bi-modal attention head to collaborate high-level semantic concepts under different settings. Extensive results show that our VLG obtains state-of-the-art performance under four settings. The superior performance demonstrates the effectiveness and generalization ability of our proposed framework. We hope our work makes a step towards general video recognition and could serve as a baseline for future research. The code and models will be available at https://github.com/MCG-NJU/VLG.
[ { "version": "v1", "created": "Sat, 3 Dec 2022 15:46:49 GMT" } ]
2022-12-06T00:00:00
[ [ "Lin", "Jintao", "" ], [ "Liu", "Zhaoyang", "" ], [ "Wang", "Wenhai", "" ], [ "Wu", "Wayne", "" ], [ "Wang", "Limin", "" ] ]
new_dataset
0.999724
2212.01648
Christopher Tralie
Christopher J. Tralie, Zachary Schlamowitz, Jose Arbelo, Antonio I. Delgado, Charley Kirk, Nicholas A. Scoville
The DOPE Distance is SIC: A Stable, Informative, and Computable Metric on Time Series And Ordered Merge Trees
31 pages, 12 Figures
null
null
null
cs.IR math.AT
http://creativecommons.org/licenses/by/4.0/
Metrics for merge trees that are simultaneously stable, informative, and efficiently computable have so far eluded researchers. We show in this work that it is possible to devise such a metric when restricting merge trees to ordered domains such as the interval and the circle. We present the "dynamic ordered persistence editing" (DOPE) distance, which we prove is stable and informative while satisfying metric properties. We then devise a simple $O(N^2)$ dynamic programming algorithm to compute it on the interval and an $O(N^3)$ algorithm to compute it on the circle. Surprisingly, we accomplish this by ignoring all of the hierarchical information of the merge tree and simply focusing on a sequence of ordered critical points, which can be interpreted as a time series. Thus our algorithm is more similar to string edit distance and dynamic time warping than it is to more conventional merge tree comparison algorithms. In the context of time series with the interval as a domain, we show empirically on the UCR time series classification dataset that DOPE performs better than bottleneck/Wasserstein distances between persistence diagrams.
[ { "version": "v1", "created": "Sat, 3 Dec 2022 16:34:19 GMT" } ]
2022-12-06T00:00:00
[ [ "Tralie", "Christopher J.", "" ], [ "Schlamowitz", "Zachary", "" ], [ "Arbelo", "Jose", "" ], [ "Delgado", "Antonio I.", "" ], [ "Kirk", "Charley", "" ], [ "Scoville", "Nicholas A.", "" ] ]
new_dataset
0.995503
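The abstract above notes that the $O(N^2)$ interval algorithm has the structure of string edit distance over a sequence of ordered critical points. The sketch below mirrors only that dynamic programming structure; the actual DOPE cost terms are defined in the paper, so the delete and match costs here are placeholders.

```python
# Edit-distance-style O(n*m) DP over two sequences of critical-point values.
# The delete/match costs are placeholder assumptions, not the DOPE costs.
def edit_style_distance(a, b,
                        delete_cost=lambda v: abs(v),
                        match_cost=lambda u, v: abs(u - v)):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n:            # delete a critical point of the first series
                D[i + 1][j] = min(D[i + 1][j], D[i][j] + delete_cost(a[i]))
            if j < m:            # delete a critical point of the second series
                D[i][j + 1] = min(D[i][j + 1], D[i][j] + delete_cost(b[j]))
            if i < n and j < m:  # match a pair of critical points
                D[i + 1][j + 1] = min(D[i + 1][j + 1],
                                      D[i][j] + match_cost(a[i], b[j]))
    return D[n][m]

# Two time series reduced to their ordered critical values (minima/maxima).
print(edit_style_distance([0.0, 3.0, 1.0, 4.0], [0.0, 2.9, 1.2, 4.1]))
```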
2212.01651
Slobodan Djukanovi\'c
Slobodan Djukanovi\'c, Nikola Bulatovi\'c, Ivana \v{C}avor
A dataset for audio-video based vehicle speed estimation
30th Telecommunications Forum TELFOR 2022, Belgrade, Serbia, November 15-16, 2022. 5 pages, 2 figures, 1 table
null
null
null
cs.LG cs.CV cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Accurate speed estimation of road vehicles is important for several reasons. One is speed limit enforcement, which represents a crucial tool in decreasing traffic accidents and fatalities. Compared with other research areas and domains, the number of available datasets for vehicle speed estimation is still very limited. We present a dataset of on-road audio-video recordings of single vehicles passing by a camera at known speeds, maintained stable by the on-board cruise control. The dataset contains thirteen vehicles, selected to be as diverse as possible in terms of manufacturer, production year, engine type, power and transmission, resulting in a total of $400$ annotated audio-video recordings. The dataset is fully available and intended as a public benchmark to facilitate research in audio-video vehicle speed estimation. In addition to the dataset, we propose a cross-validation strategy which can be used in a machine learning model for vehicle speed estimation. Two approaches to training-validation split of the dataset are proposed.
[ { "version": "v1", "created": "Sat, 3 Dec 2022 17:02:57 GMT" } ]
2022-12-06T00:00:00
[ [ "Djukanović", "Slobodan", "" ], [ "Bulatović", "Nikola", "" ], [ "Čavor", "Ivana", "" ] ]
new_dataset
0.999822
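The abstract above announces a cross-validation strategy without spelling it out. Purely as an illustration of the kind of split such data calls for (an assumption, not the paper's protocol), the sketch below implements a leave-one-vehicle-out split, so recordings of the same vehicle never appear in both training and validation.

```python
# Hypothetical leave-one-vehicle-out split for audio-video speed estimation;
# the actual proposed splits are described in the paper itself.
from collections import defaultdict

def leave_one_vehicle_out(recordings):
    """recordings: list of (vehicle_id, clip). Yields (held_out, train, val)."""
    by_vehicle = defaultdict(list)
    for vehicle_id, clip in recordings:
        by_vehicle[vehicle_id].append(clip)
    for held_out in by_vehicle:
        val = by_vehicle[held_out]
        train = [c for v, clips in by_vehicle.items() if v != held_out for c in clips]
        yield held_out, train, val

recordings = [("car_01", "clip_a.mp4"), ("car_01", "clip_b.mp4"),
              ("car_02", "clip_c.mp4"), ("car_03", "clip_d.mp4")]
for vehicle, train, val in leave_one_vehicle_out(recordings):
    print(vehicle, len(train), len(val))
```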
2212.01672
Lorenzo Giusti
Lorenzo Giusti, Josue Garcia, Steven Cozine, Darrick Suen, Christina Nguyen, Ryan Alimo
MaRF: Representing Mars as Neural Radiance Fields
ECCV 2022 (oral)
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
The aim of this work is to introduce MaRF, a novel framework able to synthesize the Martian environment using several collections of images from rover cameras. The idea is to generate a 3D scene of Mars' surface to address key challenges in planetary surface exploration such as planetary geology, simulated navigation and shape analysis. Although there exist different methods to enable a 3D reconstruction of Mars' surface, they rely on classical computer graphics techniques that incur high amounts of computational resources during the reconstruction process, and have limitations in generalizing reconstructions to unseen scenes and adapting to new images coming from rover cameras. The proposed framework solves the aforementioned limitations by exploiting Neural Radiance Fields (NeRFs), a method that synthesizes complex scenes by optimizing a continuous volumetric scene function using a sparse set of images. To speed up the learning process, we replace the sparse set of rover images with their neural graphics primitives (NGPs), a set of fixed-length vectors that are learned to preserve the information of the original images at a significantly smaller size. In the experimental section, we demonstrate the environments created from actual Mars datasets captured by the Curiosity rover, the Perseverance rover and the Ingenuity helicopter, all of which are available on the Planetary Data System (PDS).
[ { "version": "v1", "created": "Sat, 3 Dec 2022 18:58:00 GMT" } ]
2022-12-06T00:00:00
[ [ "Giusti", "Lorenzo", "" ], [ "Garcia", "Josue", "" ], [ "Cozine", "Steven", "" ], [ "Suen", "Darrick", "" ], [ "Nguyen", "Christina", "" ], [ "Alimo", "Ryan", "" ] ]
new_dataset
0.999161
2212.01745
Vibhakar Mohta
Vibhakar Mohta, Adarsh Patnaik, Shivam Kumar Panda, Siva Vignesh Krishnan, Abhinav Gupta, Abhay Shukla, Gauri Wadhwa, Shrey Verma, Aditya Bandopadhyay
Design of an All-Purpose Terrace Farming Robot
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Automation in farming processes is a growing field of research in both academia and industries. A considerable amount of work has been put into this field to develop systems robust enough for farming. Terrace farming, in particular, provides a varying set of challenges, including robust stair climbing methods and stable navigation in unstructured terrains. We propose the design of a novel autonomous terrace farming robot, Aarohi, that can effectively climb steep terraces of considerable heights and execute several farming operations. The design optimisation strategy for the overall mechanical structure is elucidated. Further, the embedded and software architecture along with fail-safe strategies are presented for a working prototype. Algorithms for autonomous traversal over the terrace steps using the scissor lift mechanism and performing various farming operations have also been discussed. The adaptability of the design to specific operational requirements and modular farm tools allow Aarohi to be customised for a wide variety of use cases.
[ { "version": "v1", "created": "Sun, 4 Dec 2022 05:45:25 GMT" } ]
2022-12-06T00:00:00
[ [ "Mohta", "Vibhakar", "" ], [ "Patnaik", "Adarsh", "" ], [ "Panda", "Shivam Kumar", "" ], [ "Krishnan", "Siva Vignesh", "" ], [ "Gupta", "Abhinav", "" ], [ "Shukla", "Abhay", "" ], [ "Wadhwa", "Gauri", "" ], [ "Verma", "Shrey", "" ], [ "Bandopadhyay", "Aditya", "" ] ]
new_dataset
0.998391
2212.01769
Zicheng Zhang
Zicheng Zhang, Yi Zhu, Jianzhuang Liu, Xiaodan Liang, Wei Ke
CoupAlign: Coupling Word-Pixel with Sentence-Mask Alignments for Referring Image Segmentation
accept to NeurIPS 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Referring image segmentation aims at localizing all pixels of the visual objects described by a natural language sentence. Previous works learn to straightforwardly align the sentence embedding and pixel-level embedding for highlighting the referred objects, but ignore the semantic consistency of pixels within the same object, leading to incomplete masks and localization errors in predictions. To tackle this problem, we propose CoupAlign, a simple yet effective multi-level visual-semantic alignment method, to couple sentence-mask alignment with word-pixel alignment to enforce object mask constraint for achieving more accurate localization and segmentation. Specifically, the Word-Pixel Alignment (WPA) module performs early fusion of linguistic and pixel-level features in intermediate layers of the vision and language encoders. Based on the word-pixel aligned embedding, a set of mask proposals are generated to hypothesize possible objects. Then in the Sentence-Mask Alignment (SMA) module, the masks are weighted by the sentence embedding to localize the referred object, and finally projected back to aggregate the pixels for the target. To further enhance the learning of the two alignment modules, an auxiliary loss is designed to contrast the foreground and background pixels. By hierarchically aligning pixels and masks with linguistic features, our CoupAlign captures the pixel coherence at both visual and semantic levels, thus generating more accurate predictions. Extensive experiments on popular datasets (e.g., RefCOCO and G-Ref) show that our method achieves consistent improvements over state-of-the-art methods, e.g., about 2% oIoU increase on the validation and testing set of RefCOCO. Especially, CoupAlign has remarkable ability in distinguishing the target from multiple objects of the same class.
[ { "version": "v1", "created": "Sun, 4 Dec 2022 08:53:42 GMT" } ]
2022-12-06T00:00:00
[ [ "Zhang", "Zicheng", "" ], [ "Zhu", "Yi", "" ], [ "Liu", "Jianzhuang", "" ], [ "Liang", "Xiaodan", "" ], [ "Ke", "Wei", "" ] ]
new_dataset
0.988851
2212.01791
Md Parvez Mollah
Md Parvez Mollah
An LSTM model for Twitter Sentiment Analysis
3 pages
null
null
null
cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
Sentiment analysis on social media such as Twitter provides organizations and individuals an effective way to monitor public emotions towards them and their competitors. As a result, sentiment analysis has become an important and challenging task. In this work, we have collected seven publicly available and manually annotated Twitter sentiment datasets. We create a new training and testing dataset from the collected datasets. We develop an LSTM model to classify the sentiment of a tweet and evaluate the model on the new dataset.
[ { "version": "v1", "created": "Sun, 4 Dec 2022 10:42:46 GMT" } ]
2022-12-06T00:00:00
[ [ "Mollah", "Md Parvez", "" ] ]
new_dataset
0.992389
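As a minimal sketch of the kind of model the abstract above describes, the PyTorch module below maps token ids of a tweet to sentiment logits; the vocabulary size, dimensions, and binary output are illustrative assumptions, not the paper's configuration.

```python
# Minimal LSTM sentiment classifier sketch; hyperparameters are assumptions.
import torch
import torch.nn as nn

class LSTMSentiment(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):        # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        _, (h_n, _) = self.lstm(x)       # final hidden state: (1, batch, hidden)
        return self.head(h_n[-1])        # logits: (batch, num_classes)

model = LSTMSentiment()
dummy_batch = torch.randint(1, 20000, (4, 30))  # 4 tweets, 30 tokens each
print(model(dummy_batch).shape)                 # torch.Size([4, 2])
```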
2212.01934
Benedikt Kolbe
Vincent Despr\'e, Benedikt Kolbe, Hugo Parlier, Monique Teillaud
Computing a Dirichlet domain for a hyperbolic surface
15 pages, 5 figures
null
null
null
cs.CG math.DG math.GT
http://creativecommons.org/licenses/by/4.0/
The goal of this paper is to exhibit and analyze an algorithm that takes a given closed orientable hyperbolic surface and outputs an explicit Dirichlet domain. The input is a fundamental polygon with side pairings. While grounded in topological considerations, the algorithm makes key use of the geometry of the surface. We introduce data structures that reflect this interplay between geometry and topology and show that the algorithm finishes in polynomial time, in terms of the initial perimeter and the genus of the surface.
[ { "version": "v1", "created": "Sun, 4 Dec 2022 21:58:41 GMT" } ]
2022-12-06T00:00:00
[ [ "Despré", "Vincent", "" ], [ "Kolbe", "Benedikt", "" ], [ "Parlier", "Hugo", "" ], [ "Teillaud", "Monique", "" ] ]
new_dataset
0.973381
2212.01967
Zhaozhen Xu
Zhaozhen Xu, Nello Cristianini
QBERT: Generalist Model for Processing Questions
null
null
null
null
cs.CL cs.AI cs.IR cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Using a single model across various tasks is beneficial for training and applying deep neural sequence models. We address the problem of developing generalist representations of text that can be used to perform a range of different tasks rather than being specialised to a single application. We focus on processing short questions and developing an embedding for these questions that is useful on a diverse set of problems, such as question topic classification, equivalent question recognition, and question answering. This paper introduces QBERT, a generalist model for processing questions. With QBERT, we demonstrate how we can train a multi-task network that performs all question-related tasks and achieves performance similar to that of its corresponding single-task models.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 00:56:28 GMT" } ]
2022-12-06T00:00:00
[ [ "Xu", "Zhaozhen", "" ], [ "Cristianini", "Nello", "" ] ]
new_dataset
0.994098
2212.02007
Jianghong Dong
Jianghong Dong, Qing Xu, Jiawei Wang, Chunying Yang, Mengchi Cai, Chaoyi Chen, Jianqiang Wang and Keqiang Li
Mixed Cloud Control Testbed: Validating Vehicle-Road-Cloud Integration via Mixed Digital Twin
13 pages, 13 figures
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reliable and efficient validation technologies are critical for the recent development of multi-vehicle cooperation and vehicle-road-cloud integration. In this paper, we introduce our miniature experimental platform, the Mixed Cloud Control Testbed (MCCT), developed based on a new notion of Mixed Digital Twin (mixedDT). Combining Mixed Reality with Digital Twin, mixedDT integrates the virtual and physical spaces into a mixed one, where physical entities coexist and interact with virtual entities via their digital counterparts. Under the framework of mixedDT, MCCT contains three major experimental platforms in the physical, virtual and mixed spaces respectively, and provides unified access for various human-machine interfaces and external devices such as driving simulators. A cloud unit, where the mixed experimental platform is deployed, is responsible for fusing multi-platform information and assigning control instructions, contributing to synchronous operation and real-time cross-platform interaction. Particularly, MCCT allows for multi-vehicle coordination composed of different multi-source vehicles (e.g., physical vehicles, virtual vehicles and human-driven vehicles). Validations on vehicle platooning demonstrate the flexibility and scalability of MCCT.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 03:39:31 GMT" } ]
2022-12-06T00:00:00
[ [ "Dong", "Jianghong", "" ], [ "Xu", "Qing", "" ], [ "Wang", "Jiawei", "" ], [ "Yang", "Chunying", "" ], [ "Cai", "Mengchi", "" ], [ "Chen", "Chaoyi", "" ], [ "Wang", "Jianqiang", "" ], [ "Li", "Keqiang", "" ] ]
new_dataset
0.995881
2212.02077
Zhongyang Zhu
Xuebo Tian, Zhongyang Zhu, Junqiao Zhao, Gengxuan Tian, and Chen Ye
DL-SLOT: Dynamic LiDAR SLAM and object tracking based on collaborative graph optimization
10 pages, 10 figures, this work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ego-pose estimation and dynamic object tracking are two critical problems for autonomous driving systems. The solutions to these problems are generally based on their respective assumptions, i.e., the static world assumption for simultaneous localization and mapping (SLAM) and the accurate ego-pose assumption for object tracking. However, these assumptions are challenging to hold in dynamic road scenarios, where SLAM and object tracking become closely correlated. Therefore, we propose DL-SLOT, a dynamic LiDAR SLAM and object tracking method, to simultaneously address these two coupled problems. This method integrates the state estimations of both the autonomous vehicle and the stationary and dynamic objects in the environment into a unified optimization framework. First, we use object detection to identify all points belonging to potentially dynamic objects. Subsequently, LiDAR odometry is conducted using the filtered point cloud. Simultaneously, we propose a sliding window-based object association method that accurately associates objects according to the historical trajectories of tracked objects. The ego-states and those of the stationary and dynamic objects are integrated into the sliding window-based collaborative graph optimization. The stationary objects are subsequently restored from the potentially dynamic object set. Finally, a global pose-graph optimization is implemented to eliminate the accumulated error. Experiments on KITTI datasets demonstrate that our method achieves better accuracy than SLAM and object tracking baseline methods. This confirms that solving SLAM and object tracking simultaneously is mutually advantageous, dramatically improving the robustness and accuracy of SLAM and object tracking in dynamic road scenarios.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 07:46:14 GMT" } ]
2022-12-06T00:00:00
[ [ "Tian", "Xuebo", "" ], [ "Zhu", "Zhongyang", "" ], [ "Zhao", "Junqiao", "" ], [ "Tian", "Gengxuan", "" ], [ "Ye", "Chen", "" ] ]
new_dataset
0.999345
2212.02127
\v{Z}iga Babnik
\v{Z}iga Babnik, Peter Peer, Vitomir \v{S}truc
FaceQAN: Face Image Quality Assessment Through Adversarial Noise Exploration
The content of this paper was published in ICPR 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent state-of-the-art face recognition (FR) approaches have achieved impressive performance, yet unconstrained face recognition still represents an open problem. Face image quality assessment (FIQA) approaches aim to estimate the quality of the input samples that can help provide information on the confidence of the recognition decision and eventually lead to improved results in challenging scenarios. While much progress has been made in face image quality assessment in recent years, computing reliable quality scores for diverse facial images and FR models remains challenging. In this paper, we propose a novel approach to face image quality assessment, called FaceQAN, that is based on adversarial examples and relies on the analysis of adversarial noise which can be calculated with any FR model learned by using some form of gradient descent. As such, the proposed approach is the first to link image quality to adversarial attacks. Comprehensive (cross-model as well as model-specific) experiments are conducted with four benchmark datasets, i.e., LFW, CFP-FP, XQLFW and IJB-C, four FR models, i.e., CosFace, ArcFace, CurricularFace and ElasticFace, and in comparison to seven state-of-the-art FIQA methods to demonstrate the performance of FaceQAN. Experimental results show that FaceQAN achieves competitive results, while exhibiting several desirable characteristics.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 09:37:32 GMT" } ]
2022-12-06T00:00:00
[ [ "Babnik", "Žiga", "" ], [ "Peer", "Peter", "" ], [ "Štruc", "Vitomir", "" ] ]
new_dataset
0.981157
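FaceQAN, per the abstract above, ties image quality to adversarial noise computed with any gradient-trained FR model. The sketch below captures that link only in spirit: a random-start FGSM step against a stand-in embedding model, with post-attack embedding similarity as a crude quality proxy. Both the model and the scoring are assumptions, not the published method.

```python
# Illustration only (not FaceQAN's scoring): one random-start FGSM step,
# then measure how far the embedding moved. The "FR model" is a stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))

def adversarial_similarity(image: torch.Tensor, eps=1e-2, init=1e-2) -> float:
    base = embedder(image).detach()
    # Random start (as in PGD-style attacks) so the gradient is non-zero.
    x = (image + init * torch.randn_like(image)).requires_grad_(True)
    sim = F.cosine_similarity(embedder(x), base, dim=1).mean()
    sim.backward()
    x_adv = x.detach() - eps * x.grad.sign()   # one FGSM step lowering similarity
    return F.cosine_similarity(embedder(x_adv), base, dim=1).mean().item()

face = torch.rand(1, 3, 112, 112)
print(adversarial_similarity(face))  # closer to 1.0 ~ more attack-resistant sample
```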
2212.02159
Jian Wang
Yourui Huangfu and Jian Wang and Shengchen Dai and Rong Li and Jun Wang and Chongwen Huang and Zhaoyang Zhang
WAIR-D: Wireless AI Research Dataset
5 pages, 8 figures
null
null
null
cs.LG cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is common sense that datasets with high-quality data samples play an important role in artificial intelligence (AI), machine learning (ML) and related studies. However, although AI/ML was introduced into wireless research a long time ago, few datasets are commonly used in the research community. Without a common dataset, AI-based methods proposed for wireless systems are hard to compare with both the traditional baselines and even each other. Existing wireless AI research usually relies on datasets generated from statistical models or ray-tracing simulations with limited environments. The statistical data hinder the trained AI models from further fine-tuning for a specific scenario, and ray-tracing data with limited environments lower the generalization capability of the trained AI models. In this paper, we present the Wireless AI Research Dataset (WAIR-D), which consists of two scenarios. Scenario 1 contains 10,000 environments with sparsely dropped user equipments (UEs), and Scenario 2 contains 100 environments with densely dropped UEs. The environments are randomly picked from more than 40 cities on the real-world map. The large volume of the data guarantees that the trained AI models enjoy good generalization capability, while fine-tuning can be easily carried out on a specific chosen environment. Moreover, both the wireless channels and the corresponding environmental information are provided in WAIR-D, so that extra-information-aided communication mechanisms can be designed and evaluated. WAIR-D provides researchers benchmarks to compare their different designs or reproduce others' results. In this paper, we show the detailed construction of this dataset and examples of using it.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 10:59:05 GMT" } ]
2022-12-06T00:00:00
[ [ "Huangfu", "Yourui", "" ], [ "Wang", "Jian", "" ], [ "Dai", "Shengchen", "" ], [ "Li", "Rong", "" ], [ "Wang", "Jun", "" ], [ "Huang", "Chongwen", "" ], [ "Zhang", "Zhaoyang", "" ] ]
new_dataset
0.999844
2212.02168
Mika H\"am\"al\"ainen
Mika H\"am\"al\"ainen and Khalid Alnajjar and Thierry Poibeau
Video Games as a Corpus: Sentiment Analysis using Fallout New Vegas Dialog
FDG 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present a method for extracting a multilingual sentiment-annotated dialog data set from Fallout New Vegas. The game developers have preannotated every line of dialog in the game with one of 8 different sentiments: anger, disgust, fear, happy, neutral, pained, sad and surprised. The game has been translated into English, Spanish, German, French and Italian. We conduct experiments on multilingual, multilabel sentiment analysis on the extracted data set using multilingual BERT, XLMRoBERTa and language-specific BERT models. In our experiments, multilingual BERT outperformed XLMRoBERTa for most of the languages; language-specific models were also slightly better than multilingual BERT for most of the languages. The best overall accuracy was 54% and it was achieved by using multilingual BERT on Spanish data. The extracted data set presents a challenging task for sentiment analysis. We have released the data, including the testing and training splits, openly on Zenodo. The data set has been shuffled for copyright reasons.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 11:09:05 GMT" } ]
2022-12-06T00:00:00
[ [ "Hämäläinen", "Mika", "" ], [ "Alnajjar", "Khalid", "" ], [ "Poibeau", "Thierry", "" ] ]
new_dataset
0.995867
2212.02192
Arsi Ik\"aheimonen MSc
A. Ik\"aheimonen, A.M. Triana, N. Luong, A. Ziaei, J. Rantaharju, R. Darst, and T. Aledavood
Niimpy: a toolbox for behavioral data analysis
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Behavioral studies using personal digital devices typically produce rich longitudinal datasets of mixed data types. These data provide information about the behavior of users of these devices in real time and in the users' natural environments. Analyzing the data requires multidisciplinary expertise and dedicated software. Currently, no generalizable, device-agnostic, freely available software exists within the Python scientific computing ecosystem to preprocess and analyze such data. This paper introduces a Python package, Niimpy, for analyzing digital behavioral data. The Niimpy toolbox is a user-friendly open-source package that can quickly be expanded and adapted to specific research requirements. The toolbox facilitates the analysis phase by offering tools for preprocessing, extracting features, and exploring the data. It also aims to educate the user on behavioral data analysis and promotes open science practices. Over time, Niimpy will expand with extra data analysis features developed by the core group, new users, and developers. Niimpy can help the fast-growing number of researchers with diverse backgrounds who collect data from personal and consumer digital devices to systematically and efficiently analyze the data and extract useful information. This novel information is vital for answering research questions in various fields, from medicine to psychology, sociology, and others.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 11:58:42 GMT" } ]
2022-12-06T00:00:00
[ [ "Ikäheimonen", "A.", "" ], [ "Triana", "A. M.", "" ], [ "Luong", "N.", "" ], [ "Ziaei", "A.", "" ], [ "Rantaharju", "J.", "" ], [ "Darst", "R.", "" ], [ "Aledavood", "T.", "" ] ]
new_dataset
0.992864
2212.02228
Philippe Lacomme Dr
Lacomme Philippe, Prins Christian, Tanguy Alain
First Competitive Ant Colony Scheme for the CARP
null
null
null
Research Report LIMOS/RR-04-21
cs.NE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the Capacitated Arc Routing Problem (CARP) using an Ant Colony Optimization scheme. Ant Colony schemes can compute solutions for medium-scale instances of the VRP. The proposed Ant Colony is dedicated to large-scale instances of the CARP with more than 140 nodes and 190 arcs to service. The Ant Colony scheme is coupled with a local search procedure and provides high-quality solutions. The benchmarks we carried out prove that solutions as profitable as CARPET's can be obtained using such a scheme when a sufficient number of iterations is devoted to the ants. It competes with the Genetic Algorithm of Lacomme et al. regarding solution quality but is more time consuming on large-scale instances. The method has been intensively benchmarked on the well-known instances of Eglese, DeArmon, and the latest ones of Belenguer and Benavent. This research report is a step toward CARP resolution by Ant Colony, proving ant schemes can compete with Tabu search methods and Genetic Algorithms.
[ { "version": "v1", "created": "Sat, 19 Nov 2022 10:31:27 GMT" } ]
2022-12-06T00:00:00
[ [ "Philippe", "Lacomme", "" ], [ "Christian", "Prins", "" ], [ "Alain", "Tanguy", "" ] ]
new_dataset
0.999237
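For readers unfamiliar with Ant Colony schemes, the classic pheromone/heuristic transition rule underlying such algorithms is sketched below; the paper's CARP-specific costs, local search, and pheromone updates are omitted, and all numbers are illustrative.

```python
# Generic ant-colony transition rule: pick the next arc with probability
# proportional to pheromone^alpha * heuristic^beta. Values are toy data.
import random

def choose_next(candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    weights = [(pheromone[a] ** alpha) * (heuristic[a] ** beta) for a in candidates]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for arc, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return arc
    return candidates[-1]   # numerical safety net

arcs = ["a", "b", "c"]
tau = {"a": 0.5, "b": 1.5, "c": 1.0}              # pheromone trails
eta = {"a": 1 / 4.0, "b": 1 / 2.0, "c": 1 / 3.0}  # heuristic: 1 / arc cost
print(choose_next(arcs, tau, eta))
```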
2212.02231
Junjie Lu
Junjie Lu, Bi Zeng, Jingtao Tang, and Tin Lun Lam
TMSTC*: A Turn-minimizing Algorithm For Multi-robot Coverage Path Planning
8 pages, 9 figures, submitted to RA-L
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coverage path planning is a major application for mobile robots, which requires robots to move along a planned path to cover the entire map. For large-scale tasks, coverage path planning benefits greatly from multiple robots. In this paper, we describe Turn-minimizing Multirobot Spanning Tree Coverage Star (TMSTC*), an improved multirobot coverage path planning (mCPP) algorithm based on MSTC*. Our algorithm partitions the map into minimum bricks as the tree's branches, thereby transforming the problem into finding a maximum independent set of a bipartite graph. We then connect bricks with a greedy strategy to form a tree, aiming to reduce the number of turns in the corresponding circumnavigating coverage path. Our experimental results show that our approach enables multiple robots to make fewer turns and thus complete terrain coverage tasks faster than other popular algorithms.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 13:00:25 GMT" } ]
2022-12-06T00:00:00
[ [ "Lu", "Junjie", "" ], [ "Zeng", "Bi", "" ], [ "Tang", "Jingtao", "" ], [ "Lam", "Tin Lun", "" ] ]
new_dataset
0.993432
2212.02248
Juncheng Wang
Qi Wang, Juncheng Wang, Junyu Gao, Yuan Yuan, Xuelong Li
Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results. Since the density representation of one head accords with its adjacent distribution, such methods embed objects of the same category with varying values, whereas human counting models invariant features, namely similarity to objects. Inspired by this, we propose a rational and anthropoid crowd counting framework. To begin with, we leverage the counting scalar as the supervision signal, which provides global and implicit guidance to similar matters. Then, a large-kernel CNN is utilized to imitate the paradigm of human beings, which models invariant knowledge first and slides to compare similarity. Later, re-parameterization of pre-trained parallel parameters is presented to cater to the inner-class variance in similarity comparison. Finally, Random Scaling patches Yield (RSY) is proposed to facilitate similarity modeling on long-distance dependencies. Extensive experiments on five challenging crowd counting benchmarks show that the proposed framework achieves state-of-the-art performance.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 07:00:53 GMT" } ]
2022-12-06T00:00:00
[ [ "Wang", "Qi", "" ], [ "Wang", "Juncheng", "" ], [ "Gao", "Junyu", "" ], [ "Yuan", "Yuan", "" ], [ "Li", "Xuelong", "" ] ]
new_dataset
0.989334
2212.02265
Burak Ekim
Burak Ekim, Timo T. Stomberg, Ribana Roscher, Michael Schmitt
MapInWild: A Remote Sensing Dataset to Address the Question What Makes Nature Wild
9 pages, 9 figures. Accepted for inclusion in a future issue of the IEEE Geoscience and Remote Sensing Magazine
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Anthropogenic pressure (i.e., human influence) on the environment is one of the largest causes of the loss of biological diversity. Wilderness areas, in contrast, are home to undisturbed ecological processes. However, there is no biophysical definition of the term wilderness. Instead, wilderness is more of a philosophical or cultural concept and thus cannot be easily delineated or categorized in a technical manner. With this paper, (i) we introduce the task of wilderness mapping by means of machine learning applied to satellite imagery and (ii) publish MapInWild, a large-scale benchmark dataset curated for that task. MapInWild is a multi-modal dataset and comprises various geodata acquired and formed from a diverse set of Earth observation sensors. The dataset consists of 8144 images with a shape of 1920 x 1920 pixels and is approximately 350 GB in size. The images are weakly annotated with three classes derived from the World Database on Protected Areas: Strict Nature Reserves, Wilderness Areas, and National Parks. With the dataset, which shall serve as a testbed for developments in fields such as explainable machine learning and environmental remote sensing, we hope to contribute to a deepening of our understanding of the question "What makes nature wild?".
[ { "version": "v1", "created": "Mon, 5 Dec 2022 13:45:06 GMT" } ]
2022-12-06T00:00:00
[ [ "Ekim", "Burak", "" ], [ "Stomberg", "Timo T.", "" ], [ "Roscher", "Ribana", "" ], [ "Schmitt", "Michael", "" ] ]
new_dataset
0.999835
2212.02352
Berta Chulvi
Berta Chulvi, Alejandro Toselli, Paolo Rosso
Fake News and Hate Speech: Language in Common
2 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper we raise the research question of whether fake news and hate speech spreaders share common patterns in language. We compute a novel index, the ingroup vs outgroup index, in three different datasets and we show that both phenomena share an "us vs them" narrative.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 15:35:10 GMT" } ]
2022-12-06T00:00:00
[ [ "Chulvi", "Berta", "" ], [ "Toselli", "Alejandro", "" ], [ "Rosso", "Paolo", "" ] ]
new_dataset
0.956972
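The abstract above does not define the ingroup vs outgroup index, so the following is a purely hypothetical illustration of what a pronoun-based "us vs them" contrast could look like; the word lists and formula are assumptions, not the authors' index.

```python
# Hypothetical sketch, NOT the paper's index: contrast first-person-plural
# (ingroup) against third-person-plural (outgroup) pronoun counts.
INGROUP = {"we", "us", "our", "ours"}
OUTGROUP = {"they", "them", "their", "theirs"}

def in_out_index(text: str) -> float:
    tokens = [t.strip(".,!?;:\"'").lower() for t in text.split()]
    n_in = sum(t in INGROUP for t in tokens)
    n_out = sum(t in OUTGROUP for t in tokens)
    total = n_in + n_out
    return 0.0 if total == 0 else (n_in - n_out) / total  # in [-1, 1]

print(in_out_index("We must protect our families from them."))  # 0.33...
```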
2212.02425
Pedro Barroso
Pedro Barroso, M\'ario Pereira and Ant\'onio Ravara
Leroy and Blazy were right: their memory model soundness proof is automatable (Extended Version)
To be published in VSTTE'22
null
null
null
cs.LO cs.PL
http://creativecommons.org/licenses/by/4.0/
Xavier Leroy and Sandrine Blazy in 2007 conducted a formal verification, using the Coq proof assistant, of a memory model for low-level imperative languages such as C. Considering that their formalization was performed essentially in first-order logic, one question left open by the authors was whether their proofs could be automated using a verification framework for first-order logic. We took the challenge and automated their formalization using Why3, significantly reducing the proof effort. We systematically followed the Coq proofs and realized that in many cases, at around one third of the way through, Why3 was able to discharge all VCs. Furthermore, the proofs still requiring interaction (e.g., induction, witnesses for existential proofs, assertions) were factorized by isolating auxiliary results that we stated explicitly. In this way, we achieved an almost-automatic soundness and safety proof of the memory model. Nonetheless, our development allows the extraction of a correct-by-construction concrete memory model, thus going further than the preliminary Why version of Leroy and Blazy.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 17:08:18 GMT" } ]
2022-12-06T00:00:00
[ [ "Barroso", "Pedro", "" ], [ "Pereira", "Mário", "" ], [ "Ravara", "António", "" ] ]
new_dataset
0.952702
2212.02439
Laurence Pelletier
Jason Lequyer, Wen-Hsin Hsu, Reuben Philip, Anna Christina Erpf, Laurence Pelletier
Domino Denoise: An Accurate Blind Zero-Shot Denoiser using Domino Tilings
null
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Because noise can interfere with downstream analysis, image denoising has come to occupy an important place in the image processing toolbox. The most accurate state-of-the-art denoisers typically train on a representative dataset. But gathering a training set is not always feasible, so interest has grown in blind zero-shot denoisers that train only on the image they are denoising. The most accurate blind zero-shot methods are blind-spot networks, which mask pixels and attempt to infer them from their surroundings. Other methods exist where all neurons participate in forward inference; however, they are not as accurate and are susceptible to overfitting. Here we present a hybrid approach. We first introduce a semi blind-spot network where the network can see only a small percentage of inputs during gradient update. We then resolve overfitting by introducing a validation scheme where we split pixels into two groups and fill in pixel gaps using domino tilings. Our method achieves an average PSNR increase of $0.28$ and a threefold increase in speed over the current gold standard blind zero-shot denoiser Self2Self on synthetic Gaussian noise. We demonstrate the broader applicability of Pixel Domino Tiling by inserting it into a previously published method.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 17:34:47 GMT" } ]
2022-12-06T00:00:00
[ [ "Lequyer", "Jason", "" ], [ "Hsu", "Wen-Hsin", "" ], [ "Philip", "Reuben", "" ], [ "Erpf", "Anna Christina", "" ], [ "Pelletier", "Laurence", "" ] ]
new_dataset
0.99104
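As a toy illustration of the domino idea in the abstract above (an assumption about the mechanics, not the paper's scheme), the sketch below pairs horizontally adjacent pixels into dominoes, assigns one pixel of each domino to each of two groups, and fills every masked pixel from its domino partner.

```python
# Toy domino-style pixel split; the paper's tiling and validation scheme
# are more elaborate than this horizontal-dominoes-only sketch.
import numpy as np

def domino_split(img: np.ndarray, seed: int = 0):
    """Return (img_a, img_b): each keeps one pixel per horizontal domino,
    with the missing pixel filled in from its partner."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    assert w % 2 == 0, "toy version assumes an even width"
    img_a, img_b = img.copy(), img.copy()
    for i in range(h):
        for j in range(0, w, 2):          # one horizontal domino: (i,j),(i,j+1)
            if rng.random() < 0.5:
                img_a[i, j + 1] = img[i, j]   # group A keeps left, fills right
                img_b[i, j] = img[i, j + 1]   # group B keeps right, fills left
            else:
                img_a[i, j] = img[i, j + 1]
                img_b[i, j + 1] = img[i, j]
    return img_a, img_b

noisy = np.random.default_rng(1).normal(size=(4, 6))
a, b = domino_split(noisy)
print(a.shape, b.shape)
```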
2212.02462
James P. Crutchfield
James P. Crutchfield and Alexandra M. Jurgens
Whale Casting: Remote mobile streaming humpback whale vocalizations to the world
6 pages, 3 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/whalecasting.html
null
null
null
cs.HC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over several days in early August 2021, while at sea in Chatham Strait, Southeast Alaska, aboard M/Y Blue Pearl, an online twitch.tv stream broadcast in real time humpback whale vocalizations monitored via hydrophone. Dozens on mainland North America and around the planet listened in and chatted via the stream. The webcasts demonstrated a proof of concept: only relatively inexpensive commercial off-the-shelf equipment is required for remote mobile streaming at sea. These notes document what was required and make recommendations for higher-quality and larger-scale deployments. One conclusion is that real-time, automated audio documenting of whale acoustic behavior is readily accessible and, using the cloud, can be directly integrated into behavioral databases -- information sources that now often focus exclusively on non-real-time visual-sighting narrative reports and photography.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 18:08:40 GMT" } ]
2022-12-06T00:00:00
[ [ "Crutchfield", "James P.", "" ], [ "Jurgens", "Alexandra M.", "" ] ]
new_dataset
0.994278
2004.02227
Mohammad Reza Zarrabi
Mohammad Reza Zarrabi, Nasrollah Moghaddam Charkari
A sufficient condition for visibility paths in simple polygons
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this note is to give a simple proof of a necessary and sufficient condition for visibility paths in simple polygons. A visibility path is a curve such that every point inside a simple polygon is visible from at least one point on the path. This result is essential for finding the shortest watchman route inside a simple polygon, especially when the route is restricted to curved paths.
[ { "version": "v1", "created": "Sun, 5 Apr 2020 15:08:31 GMT" }, { "version": "v2", "created": "Tue, 5 May 2020 17:00:34 GMT" }, { "version": "v3", "created": "Wed, 15 Jul 2020 11:19:29 GMT" }, { "version": "v4", "created": "Mon, 7 Nov 2022 15:37:54 GMT" }, { "version": "v5", "created": "Fri, 2 Dec 2022 16:02:31 GMT" } ]
2022-12-05T00:00:00
[ [ "Zarrabi", "Mohammad Reza", "" ], [ "Charkari", "Nasrollah Moghaddam", "" ] ]
new_dataset
0.997389
2109.13098
Cencheng Shen
Cencheng Shen, Qizhe Wang, Carey E. Priebe
One-Hot Graph Encoder Embedding
7 pages main + 7 pages appendix
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
10.1109/TPAMI.2022.3225073
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a lightning-fast graph embedding method called one-hot graph encoder embedding. It has linear computational complexity and the capacity to process billions of edges within minutes on a standard PC, making it an ideal candidate for huge graph processing. It is applicable to either the adjacency matrix or the graph Laplacian, and can be viewed as a transformation of the spectral embedding. Under random graph models, the graph encoder embedding is approximately normally distributed per vertex, and asymptotically converges to its mean. We showcase three applications: vertex classification, vertex clustering, and graph bootstrap. In every case, the graph encoder embedding exhibits unrivalled computational advantages.
[ { "version": "v1", "created": "Mon, 27 Sep 2021 14:49:44 GMT" }, { "version": "v2", "created": "Tue, 23 Aug 2022 13:33:52 GMT" }, { "version": "v3", "created": "Fri, 2 Dec 2022 02:45:11 GMT" } ]
2022-12-05T00:00:00
[ [ "Shen", "Cencheng", "" ], [ "Wang", "Qizhe", "" ], [ "Priebe", "Carey E.", "" ] ]
new_dataset
0.990165
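The core transformation the abstract above describes is simple enough to sketch: project the adjacency matrix onto class-normalized one-hot label vectors, so each vertex's embedding is its average connectivity to each class. The toy graph is an assumption; the paper's variants (graph Laplacian input, asymptotic analysis) are not reproduced.

```python
# Sketch of one-hot graph encoder embedding on a toy graph: Z = A @ W,
# where W holds one-hot label vectors scaled by 1/(class size).
import numpy as np

def encoder_embedding(A: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """A: (n, n) adjacency; labels: (n,) ints in [0, K). Returns (n, K)."""
    n = A.shape[0]
    K = labels.max() + 1
    counts = np.bincount(labels, minlength=K)
    W = np.zeros((n, K))
    W[np.arange(n), labels] = 1.0 / counts[labels]  # class-normalized one-hot
    return A @ W  # row v: average connectivity of vertex v to each class

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])
labels = np.array([0, 0, 1, 1])
print(encoder_embedding(A, labels))
```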
2111.03823
Thanapong Chuangyanyong
Thanapong Chuangyanyong, Panusorn Chinsakuljaroen, Worachit Ketrungsri and Thanacha Choopojcharoen
Flying Trapeze Act Motion Planning Algorithm for Two-Link Free-Flying Acrobatic Robot
7 pages, 8 figures, 2 tables
null
10.1109/ICARM54641.2022.9959158
null
cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
A flying trapeze act can be a challenging task for a robotic system, since some acts require the performer to catch another trapeze or a catcher at the end, after being airborne. The objective of this paper is to design and validate a motion planning algorithm for a two-link free-flying acrobatic robot that can accurately land on another trapeze after free-flying in the air. First, the proposed algorithm plans the robot trajectory via constrained nonlinear optimization. Then, a feedback controller is implemented to stabilize the posture. However, since the spatial position of the robot's center of mass cannot be controlled, this paper proposes a trajectory correction scheme that manipulates the robot's posture such that the robot is still able to land on the target. Lastly, the whole algorithm is validated in a simulation that mimics real-world circumstances.
[ { "version": "v1", "created": "Sat, 6 Nov 2021 07:32:49 GMT" }, { "version": "v2", "created": "Mon, 28 Feb 2022 13:02:46 GMT" } ]
2022-12-05T00:00:00
[ [ "Chuangyanyong", "Thanapong", "" ], [ "Chinsakuljaroen", "Panusorn", "" ], [ "Ketrungsri", "Worachit", "" ], [ "Choopojcharoen", "Thanacha", "" ] ]
new_dataset
0.997756
2112.06596
Hang Zhou
Hang Zhou, Rui Ma, Ling-Xiao Zhang, Lin Gao, Ali Mahdavi-Amiri, Hao Zhang
SAC-GAN: Structure-Aware Image Composition
Accepted to TVCG. Code: https://github.com/RyanHangZhou/SAC-GAN
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an end-to-end learning framework for image-to-image composition, aiming to plausibly compose an object, represented as a cropped patch from an object image, into a background scene image. As our approach emphasizes the semantic and structural coherence of the composed images rather than their pixel-level RGB accuracy, we tailor the input and output of our network with structure-aware features and design our network losses accordingly, with ground truth established in a self-supervised setting through the object cropping. Specifically, our network takes the semantic layout features from the input scene image, features encoded from the edges and silhouette in the input object patch, as well as a latent code as inputs, and generates a 2D spatial affine transform defining the translation and scaling of the object patch. The learned parameters are further fed into a differentiable spatial transformer network to transform the object patch into the target image, where our model is trained adversarially using an affine transform discriminator and a layout discriminator. We evaluate our network, coined SAC-GAN, for various image composition scenarios in terms of quality, composability, and generalizability of the composite images. Comparisons are made to state-of-the-art alternatives, including Instance Insertion, ST-GAN, CompGAN and PlaceNet, confirming the superiority of our method.
[ { "version": "v1", "created": "Mon, 13 Dec 2021 12:24:50 GMT" }, { "version": "v2", "created": "Thu, 30 Dec 2021 08:14:38 GMT" }, { "version": "v3", "created": "Sat, 8 Jan 2022 04:10:44 GMT" }, { "version": "v4", "created": "Tue, 5 Jul 2022 10:07:40 GMT" }, { "version": "v5", "created": "Fri, 2 Dec 2022 09:27:41 GMT" } ]
2022-12-05T00:00:00
[ [ "Zhou", "Hang", "" ], [ "Ma", "Rui", "" ], [ "Zhang", "Ling-Xiao", "" ], [ "Gao", "Lin", "" ], [ "Mahdavi-Amiri", "Ali", "" ], [ "Zhang", "Hao", "" ] ]
new_dataset
0.991545
2206.14077
Jos\'e \'Alamos
Jos\'e \'Alamos and Peter Kietzmann and Thomas Schmidt and Matthias W\"ahlisch
DSME-LoRa: Seamless Long Range Communication Between Arbitrary Nodes in the Constrained IoT
44 pages (incl. References), 27 figures,8 tables
ACM Transactions on Sensor Networks, Vol. 18, No. 4 (November 2022), 43 pages
10.1145/3552432
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long range radio communication is preferred in many IoT deployments as it avoids the complexity of multi-hop wireless networks. LoRa is a popular, energy-efficient wireless modulation but its networking substrate LoRaWAN introduces severe limitations to its users. In this paper, we present and thoroughly analyze DSME-LoRa, a system design of LoRa with IEEE 802.15.4 DSME as a MAC layer. DSME-LoRa offers the advantage of seamless client-to-client communication beyond the pure gateway-centric transmission of LoRaWAN. We evaluate its feasibility via a full-stack implementation on the popular RIOT operating system, assess its steady-state packet flows in an analytical stochastic Markov model, and quantify its scalability in massive communication scenarios using large scale network simulations. Our findings indicate that DSME-LoRa is indeed a powerful approach that opens LoRa to standard network layers and outperforms LoRaWAN in many dimensions.
[ { "version": "v1", "created": "Tue, 28 Jun 2022 15:18:14 GMT" }, { "version": "v2", "created": "Fri, 26 Aug 2022 12:23:45 GMT" } ]
2022-12-05T00:00:00
[ [ "Álamos", "José", "" ], [ "Kietzmann", "Peter", "" ], [ "Schmidt", "Thomas", "" ], [ "Wählisch", "Matthias", "" ] ]
new_dataset
0.998491
2208.09885
Bingchen Li
Bingchen Li, Xin Li, Yiting Lu, Sen Liu, Ruoyu Feng, Zhibo Chen
HST: Hierarchical Swin Transformer for Compressed Image Super-resolution
Accepted by ECCV2022 Workshop (AIM2022)
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compressed image super-resolution has attracted great attention in recent years, where images are degraded by both compression artifacts and low-resolution artifacts. Owing to these complex hybrid distortions, it is hard to restore the distorted image through the simple cooperation of super-resolution and compression artifact removal. In this paper, we take a step forward and propose the Hierarchical Swin Transformer (HST) network to restore low-resolution compressed images, which jointly captures hierarchical feature representations and enhances each-scale representation with a Swin transformer. Moreover, we find that pretraining on a super-resolution (SR) task is vital for compressed image super-resolution. To explore the effects of different SR pretraining, we take commonly-used SR tasks (e.g., bicubic and different real super-resolution simulations) as our pretraining tasks, and reveal that SR plays an irreplaceable role in compressed image super-resolution. With the cooperation of HST and pretraining, our HST achieves fifth place in the AIM 2022 challenge on the low-quality compressed image super-resolution track, with a PSNR of 23.51 dB. Extensive experiments and ablation studies have validated the effectiveness of our proposed methods. The code and models are available at https://github.com/USTC-IMCL/HST-for-Compressed-Image-SR.
[ { "version": "v1", "created": "Sun, 21 Aug 2022 13:41:51 GMT" }, { "version": "v2", "created": "Fri, 2 Dec 2022 02:54:40 GMT" } ]
2022-12-05T00:00:00
[ [ "Li", "Bingchen", "" ], [ "Li", "Xin", "" ], [ "Lu", "Yiting", "" ], [ "Liu", "Sen", "" ], [ "Feng", "Ruoyu", "" ], [ "Chen", "Zhibo", "" ] ]
new_dataset
0.995022
2211.10973
Peng Qi
Peng Qi, Yuyan Bu, Juan Cao, Wei Ji, Ruihao Shui, Junbin Xiao, Danding Wang, Tat-Seng Chua
FakeSV: A Multimodal Benchmark with Rich Social Context for Fake News Detection on Short Video Platforms
To appear in AAAI 2023 AISI track. This version contains appendix with additional details
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Short video platforms have become an important channel for news sharing, but also a new breeding ground for fake news. To mitigate this problem, research on fake news video detection has recently received a lot of attention. Existing works face two roadblocks: the scarcity of comprehensive, large-scale datasets and the insufficient utilization of multimodal information. Therefore, in this paper, we construct FakeSV, the largest Chinese short-video dataset for fake news detection, which includes news content, user comments, and publisher profiles simultaneously. To understand the characteristics of fake news videos, we conduct an exploratory analysis of FakeSV from different perspectives. Moreover, we provide a new multimodal detection model named SV-FEND, which exploits cross-modal correlations to select the most informative features and utilizes social context information for detection. Extensive experiments demonstrate the superiority of the proposed method and provide detailed comparisons of different methods and modalities for future works.
[ { "version": "v1", "created": "Sun, 20 Nov 2022 12:57:54 GMT" }, { "version": "v2", "created": "Fri, 2 Dec 2022 12:43:33 GMT" } ]
2022-12-05T00:00:00
[ [ "Qi", "Peng", "" ], [ "Bu", "Yuyan", "" ], [ "Cao", "Juan", "" ], [ "Ji", "Wei", "" ], [ "Shui", "Ruihao", "" ], [ "Xiao", "Junbin", "" ], [ "Wang", "Danding", "" ], [ "Chua", "Tat-Seng", "" ] ]
new_dataset
0.999747
2211.15848
Chenyan Xiong
Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan
ClueWeb22: 10 Billion Web Documents with Visual and Semantic Information
null
null
null
null
cs.IR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages accompanied by rich information. Its design was influenced by the need for a high-quality, large-scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher quality, and aligned with the document distributions in commercial web search. Besides raw HTML, ClueWeb22 includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text to lower the barrier to entry. Many of these signals have been widely used in industry but are available to the research community for the first time at this scale.
[ { "version": "v1", "created": "Tue, 29 Nov 2022 00:49:40 GMT" }, { "version": "v2", "created": "Fri, 2 Dec 2022 03:38:26 GMT" } ]
2022-12-05T00:00:00
[ [ "Overwijk", "Arnold", "" ], [ "Xiong", "Chenyan", "" ], [ "Liu", "Xiao", "" ], [ "VandenBerg", "Cameron", "" ], [ "Callan", "Jamie", "" ] ]
new_dataset
0.999872
2212.00229
Shicheng Xu
Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng
NIR-Prompt: A Multi-task Generalized Neural Information Retrieval Training Framework
This article is the extension of arXiv:2204.02725
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information retrieval (IR) aims to find information that meets users' needs from a corpus. Different needs correspond to different IR tasks, such as document retrieval, open-domain question answering, and retrieval-based dialogue, yet these tasks share the same schema for estimating the relationship between texts. This suggests that a good IR model should generalize to different tasks and domains. However, previous studies indicate that state-of-the-art neural information retrieval (NIR) models, e.g., pre-trained language models (PLMs), generalize poorly, mainly because the end-to-end fine-tuning paradigm makes the model overemphasize task-specific signals and domain biases while losing the ability to capture generalized essential signals. To address this problem, we propose a novel NIR training framework named NIR-Prompt for the retrieval and reranking stages, based on the idea of decoupling signal capturing and combination. NIR-Prompt exploits an Essential Matching Module (EMM) to capture the essential matching signals and obtains a description of the task from a Matching Description Module (MDM). The description is used as task-adaptation information to combine the essential matching signals to adapt to different tasks. Experiments under in-domain multi-task, out-of-domain multi-task, and new-task adaptation settings show that NIR-Prompt can improve the generalization of PLMs in NIR for both the retrieval and reranking stages compared with baselines.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 02:26:52 GMT" }, { "version": "v2", "created": "Fri, 2 Dec 2022 02:30:19 GMT" } ]
2022-12-05T00:00:00
[ [ "Xu", "Shicheng", "" ], [ "Pang", "Liang", "" ], [ "Shen", "Huawei", "" ], [ "Cheng", "Xueqi", "" ] ]
new_dataset
0.992202
2212.00352
Kaibing Xie
Kaibing Xie (1), Jian Yang (1), Kang Qiu (1) ((1) Peng Cheng Laboratory, Shenzhen, China)
A Dataset with Multibeam Forward-Looking Sonar for Underwater Object Detection
null
null
10.1038/s41597-022-01854-w
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multibeam forward-looking sonar (MFLS) plays an important role in underwater detection, but research on underwater object detection with MFLS faces several challenges. First, the field lacks publicly available datasets. Second, sonar images, generally processed at the pixel level and transformed into a sector representation to suit human visual habits, are disadvantageous for research in artificial intelligence (AI). To address these challenges, we present a novel dataset, the underwater acoustic target detection (UATD) dataset, consisting of over 9000 MFLS images captured using a Tritech Gemini 1200ik sonar. Our dataset provides raw sonar images with annotations for 10 categories of target objects (cube, cylinder, tyres, etc.). The data was collected from a lake and shallow water. To verify the practicality of UATD, we apply the dataset to state-of-the-art detectors and provide corresponding benchmarks for accuracy and efficiency.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 08:26:03 GMT" }, { "version": "v2", "created": "Fri, 2 Dec 2022 01:38:51 GMT" } ]
2022-12-05T00:00:00
[ [ "Xie", "Kaibing", "" ], [ "Yang", "Jian", "" ], [ "Qiu", "Kang", "" ] ]
new_dataset
0.98297
2212.00851
Tharindu Ranasinghe Dr
Tharindu Ranasinghe, Isuri Anuradha, Damith Premasiri, Kanishka Silva, Hansi Hettiarachchi, Lasitha Uyangodage, Marcos Zampieri
SOLD: Sinhala Offensive Language Dataset
This is a preprint of an article submitted to Applied Intelligence, Springer
null
null
null
cs.CL cs.AI cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
The widespread presence of offensive content online, such as hate speech and cyber-bullying, is a global phenomenon. This has sparked interest in the artificial intelligence (AI) and natural language processing (NLP) communities, motivating the development of various systems trained to detect potentially harmful content automatically. These systems require annotated datasets to train the machine learning (ML) models. However, with a few notable exceptions, most datasets on this topic have dealt with English and a few other high-resource languages. As a result, research on offensive language identification has been limited to these languages. This paper addresses this gap by tackling offensive language identification in Sinhala, a low-resource Indo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both the sentence level and the token level, improving the explainability of the ML models. SOLD is the first large publicly available offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 20:18:21 GMT" } ]
2022-12-05T00:00:00
[ [ "Ranasinghe", "Tharindu", "" ], [ "Anuradha", "Isuri", "" ], [ "Premasiri", "Damith", "" ], [ "Silva", "Kanishka", "" ], [ "Hettiarachchi", "Hansi", "" ], [ "Uyangodage", "Lasitha", "" ], [ "Zampieri", "Marcos", "" ] ]
new_dataset
0.99987
2212.00891
Ian McQuillan
Oscar H. Ibarra, Jozef Jir\'asek, Ian McQuillan, and Luca Prigioniero
Space Complexity of Stack Automata Models
23 pages, 1 figure, 2 tables
International Journal of Foundations of Computer Science, 32 (6), 801--823 (2021)
10.1142/S0129054121420090
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
This paper examines several measures of space complexity of variants of stack automata: non-erasing stack automata and checking stack automata. These measures capture the minimum stack size required to accept every word in the language of the automaton (weak measure), the maximum stack size used in any accepting computation on any accepted word (accept measure), and the maximum stack size used in any computation (strong measure). We give a detailed characterization of the accept and strong space complexity measures for checking stack automata. Exactly one of three cases can occur: the complexity is either bounded by a constant, behaves like a linear function, or cannot be bounded by any function of the length of the input word (and it is decidable which case occurs). However, this result does not hold for non-erasing stack automata; we provide an example where the space complexity grows proportionally to the square root of the length of the input. Furthermore, we study the complexity bounds of machines which accept a given language, and the decidability of space complexity properties.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 22:16:42 GMT" } ]
2022-12-05T00:00:00
[ [ "Ibarra", "Oscar H.", "" ], [ "Jirásek", "Jozef", "" ], [ "McQuillan", "Ian", "" ], [ "Prigioniero", "Luca", "" ] ]
new_dataset
0.984117
2212.00903
Xiaoran Wu
Xiaoran Wu, Zihan Yan, Xiang Anthony Chen
DeclutterCam: A Photographic Assistant System with Clutter Detection and Removal
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Photographs convey the stories of photographers to the audience. However, this story-telling aspect of photography is easily undermined by visual clutter. Informed by a pilot study, we identified the kinds of clutter that amateurs frequently include in their photos. We were thus inspired to develop DeclutterCam, a photographic assistant system that incorporates novel user interactions and AI algorithms for photographic decluttering. Clutter elements are detected by an aesthetic quality evaluation algorithm and are highlighted so that users can interactively identify distracting elements. A GAN-based iterative clutter removal tool enables users to test their photographic ideas in real time. User studies with 32 photography beginners demonstrate that our system provides flexible interfaces, accurate algorithms, and immediate feedback that allow users to avoid clutter and explore more photographic ideas. Evaluations by photography experts show that users can take higher-quality photos that better convey the intended story using our system.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 23:02:37 GMT" } ]
2022-12-05T00:00:00
[ [ "Wu", "Xiaoran", "" ], [ "Yan", "Zihan", "" ], [ "Chen", "Xiang Anthony", "" ] ]
new_dataset
0.999543
2212.00928
Manuel Ballester
Manuel Ballester, Heming Wang, Jiren Li, Oliver Cossairt, Florian Willomitzer
Single-shot ToF sensing with sub-mm precision using conventional CMOS sensors
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel single-shot interferometric ToF camera targeted at precise 3D measurements of dynamic objects. The camera concept is based on Synthetic Wavelength Interferometry, a technique that allows retrieval of depth maps of objects with optically rough surfaces at submillimeter depth precision. In contrast to conventional ToF cameras, our device uses only off-the-shelf CCD/CMOS detectors and works at their native chip resolution (as of today, theoretically up to 20 Mp and beyond). Moreover, we can obtain a full 3D model of the object in a single shot, meaning that no temporal sequence of exposures or temporal illumination modulation (such as amplitude or frequency modulation) is necessary, which makes our camera robust against object motion. In this paper, we introduce the novel camera concept and show first measurements that demonstrate the capabilities of our system. We present 3D measurements of small (cm-sized) objects with > 2 Mp point cloud resolution (the resolution of our used detector) and up to sub-mm depth precision. We also report a "single-shot 3D video" acquisition and a first single-shot "Non-Line-of-Sight" measurement. Our technique has great potential for high-precision applications with dynamic object movement, e.g., in AR/VR, industrial inspection, medical imaging, and imaging through scattering media like fog or human tissue.
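For context, a minimal sketch of the standard synthetic-wavelength relations behind this measurement principle; the notation is generic textbook SWI, not necessarily the paper's own:

```latex
% Two lasers at wavelengths \lambda_1, \lambda_2 interfere to form a much
% longer "synthetic" wavelength, which sets the unambiguous depth range:
\[
  \Lambda = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert}
\]
% In reflection geometry, the depth then follows from the measured phase
% \Delta\varphi of the synthetic wave:
\[
  d = \frac{\Lambda}{4\pi}\,\Delta\varphi
\]
```

Because $\Lambda$ can be millimeters to meters long while $\lambda_1, \lambda_2$ are sub-micron, phase measurements on optically rough surfaces remain usable at sub-mm depth precision.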
[ { "version": "v1", "created": "Fri, 2 Dec 2022 01:50:36 GMT" } ]
2022-12-05T00:00:00
[ [ "Ballester", "Manuel", "" ], [ "Wang", "Heming", "" ], [ "Li", "Jiren", "" ], [ "Cossairt", "Oliver", "" ], [ "Willomitzer", "Florian", "" ] ]
new_dataset
0.980414
2212.00973
Zixun Guo
Z. Guo, J. Kang, D. Herremans
A Domain-Knowledge-Inspired Music Embedding Space and a Novel Attention Mechanism for Symbolic Music Modeling
This paper is accepted at AAAI 2023
null
null
null
cs.SD cs.AI eess.AS eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Following the success of the transformer architecture in the natural language domain, transformer-like architectures have recently been widely applied to the domain of symbolic music. Symbolic music and text, however, are two different modalities. Symbolic music contains multiple attributes, both absolute attributes (e.g., pitch) and relative attributes (e.g., pitch interval), and these relative attributes shape human perception of musical motifs. These important relative attributes, however, are mostly ignored in existing symbolic music modeling methods, the main reason being the lack of a musically-meaningful embedding space in which both the absolute and relative embeddings of symbolic music tokens can be efficiently represented. In this paper, we propose the Fundamental Music Embedding (FME) for symbolic music, based on a bias-adjusted sinusoidal encoding, within which both the absolute and the relative attributes can be embedded and fundamental musical properties (e.g., translational invariance) are explicitly preserved. Taking advantage of the proposed FME, we further propose a novel attention mechanism based on relative index, pitch and onset embeddings (RIPO attention), such that musical domain knowledge can be fully utilized for symbolic music modeling. Experimental results show that our proposed RIPO transformer, which utilizes FME and RIPO attention, outperforms state-of-the-art transformers (i.e., music transformer, linear transformer) on a melody completion task. Moreover, using the RIPO transformer in a downstream music generation task, we observe that the notorious degeneration phenomenon no longer exists, and the music generated by the RIPO transformer outperforms that generated by state-of-the-art transformer models in both subjective and objective evaluations.
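As a sketch of the translational-invariance property the abstract leans on, below is a plain sinusoidal embedding of a scalar musical attribute with an optional additive bias standing in for the "bias adjustment"; the exact form of FME's learnable bias is an assumption here, and all names are illustrative.

```python
import numpy as np

def sinusoidal_embedding(value, d=64, base=10000.0, bias=None):
    """Sinusoidal embedding of a scalar attribute (e.g., MIDI pitch).

    A translation value -> value + k acts as a fixed rotation on each
    (sin, cos) pair, so relative attributes (intervals) are recoverable
    from pairs of absolute embeddings. `bias` is a placeholder for the
    learnable bias adjustment described in the abstract.
    """
    i = np.arange(d // 2)
    freq = base ** (-2.0 * i / d)            # geometric frequency ladder
    emb = np.concatenate([np.sin(value * freq), np.cos(value * freq)])
    return emb if bias is None else emb + bias

# Absolute embeddings of two pitches; their relation encodes the interval.
e_c4, e_g4 = sinusoidal_embedding(60), sinusoidal_embedding(67)
```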
[ { "version": "v1", "created": "Fri, 2 Dec 2022 05:04:31 GMT" } ]
2022-12-05T00:00:00
[ [ "Guo", "Z.", "" ], [ "Kang", "J.", "" ], [ "Herremans", "D.", "" ] ]
new_dataset
0.970299
2212.01022
Indranil Saha
Nikhil Kumar Singh and Indranil Saha
STL-Based Synthesis of Feedback Controllers Using Reinforcement Learning
Full version of the paper to be published in AAAI 2023
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Reinforcement Learning (DRL) has the potential to be used for synthesizing feedback controllers (agents) for various complex systems with unknown dynamics. These systems are expected to satisfy diverse safety and liveness properties best captured using temporal logic. In RL, the reward function plays a crucial role in specifying the desired behaviour of these agents. However, the problem of designing the reward function for an RL agent to satisfy complex temporal logic specifications has received limited attention in the literature. To address this, we provide a systematic way of generating rewards in real-time by using the quantitative semantics of Signal Temporal Logic (STL), a widely used temporal logic to specify the behaviour of cyber-physical systems. We propose a new quantitative semantics for STL having several desirable properties, making it suitable for reward generation. We evaluate our STL-based reinforcement learning mechanism on several complex continuous control benchmarks and compare our STL semantics with those available in the literature in terms of their efficacy in synthesizing the controller agent. Experimental results establish our new semantics to be the most suitable for synthesizing feedback controllers for complex continuous dynamical systems through reinforcement learning.
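As a concrete baseline for what "quantitative semantics" means here, below is a minimal Python sketch of the classical STL robustness operators over a sampled signal (conjunction as pointwise minimum, eventually/always as windowed max/min). The paper proposes a new semantics with different properties, so this is only the standard construction such a reward generator would start from; all names are illustrative.

```python
import numpy as np

def rob_pred(x, mu):            # predicate robustness: mu(x_t), positive = satisfied
    return np.array([mu(v) for v in x])

def rob_not(r):                 # negation flips the sign
    return -r

def rob_and(r1, r2):            # conjunction: pointwise minimum
    return np.minimum(r1, r2)

def rob_eventually(r, a, b):    # F_[a,b]: max over a sliding window
    return np.array([r[t + a: t + b + 1].max() for t in range(len(r) - b)])

def rob_always(r, a, b):        # G_[a,b]: min over a sliding window
    return np.array([r[t + a: t + b + 1].min() for t in range(len(r) - b)])

# Example reward signal: "stay within 0.1 of the setpoint for the next 10 steps"
x = np.cumsum(np.random.randn(100)) * 0.01          # toy trajectory
reward = rob_always(rob_pred(x, lambda v: 0.1 - abs(v)), 0, 10)
```

Feeding such per-step robustness values to the RL agent as rewards is what ties the logical specification to the learned controller.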
[ { "version": "v1", "created": "Fri, 2 Dec 2022 08:31:46 GMT" } ]
2022-12-05T00:00:00
[ [ "Singh", "Nikhil Kumar", "" ], [ "Saha", "Indranil", "" ] ]
new_dataset
0.974779
2212.01033
Jaidev Shriram
Jaidev Shriram and Makarand Tapaswi and Vinoo Alluri
Sonus Texere! Automated Dense Soundtrack Construction for Books using Movie Adaptations
Accepted to ISMIR 2022. Project page: https://auto-book-soundtrack.github.io/
null
null
null
cs.SD cs.AI cs.MM eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Reading, much like music listening, is an immersive experience that transports readers while taking them on an emotional journey. Listening to complementary music has the potential to amplify the reading experience, especially when the music is stylistically cohesive and emotionally relevant. In this paper, we propose the first fully automatic method to build a dense soundtrack for books, which can play high-quality instrumental music for the entirety of the reading duration. Our work employs a unique text processing and music weaving pipeline that determines the context and emotional composition of scenes in a chapter. This allows our method to identify and play relevant excerpts from the soundtrack of the book's movie adaptation. By relying on the movie composer's craftsmanship, our book soundtracks include expert-made motifs and other scene-specific musical characteristics. We validate the design decisions of our approach through a perceptual study. Our readers note that the book soundtrack greatly enhanced their reading experience, due to high immersiveness granted via uninterrupted and style-consistent music, and a heightened emotional state attained via high precision emotion and scene context recognition.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 08:57:20 GMT" } ]
2022-12-05T00:00:00
[ [ "Shriram", "Jaidev", "" ], [ "Tapaswi", "Makarand", "" ], [ "Alluri", "Vinoo", "" ] ]
new_dataset
0.998441
2212.01039
Yichong Leng
Yichong Leng, Xu Tan, Wenjie Liu, Kaitao Song, Rui Wang, Xiang-Yang Li, Tao Qin, Edward Lin, Tie-Yan Liu
SoftCorrect: Error Correction with Soft Detection for Automatic Speech Recognition
AAAI 2023
null
null
null
cs.CL cs.LG eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Error correction in automatic speech recognition (ASR) aims to correct the incorrect words in sentences generated by ASR models. Since recent ASR models usually have a low word error rate (WER), error correction models should modify only the incorrect words in order to avoid affecting originally correct tokens, and therefore detecting incorrect words is important for error correction. Previous works on error correction either implicitly detect error words through target-source attention or CTC (connectionist temporal classification) loss, or explicitly locate specific deletion/substitution/insertion errors. However, implicit error detection does not provide a clear signal about which tokens are incorrect, and explicit error detection suffers from low detection accuracy. In this paper, we propose SoftCorrect with a soft error detection mechanism to avoid the limitations of both explicit and implicit error detection. Specifically, we first detect whether a token is correct or not through a probability produced by a dedicatedly designed language model, and then design a constrained CTC loss that duplicates only the detected incorrect tokens to let the decoder focus on the correction of error tokens. Compared with implicit error detection with CTC loss, SoftCorrect provides an explicit signal about which words are incorrect and thus does not need to duplicate every token but only incorrect tokens; compared with explicit error detection, SoftCorrect does not detect specific deletion/substitution/insertion errors but just leaves that to the CTC loss. Experiments on the AISHELL-1 and Aidatatang datasets show that SoftCorrect achieves 26.1% and 9.4% CER reduction respectively, outperforming previous works by a large margin, while still enjoying the fast speed of parallel generation.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 09:11:32 GMT" } ]
2022-12-05T00:00:00
[ [ "Leng", "Yichong", "" ], [ "Tan", "Xu", "" ], [ "Liu", "Wenjie", "" ], [ "Song", "Kaitao", "" ], [ "Wang", "Rui", "" ], [ "Li", "Xiang-Yang", "" ], [ "Qin", "Tao", "" ], [ "Lin", "Edward", "" ], [ "Liu", "Tie-Yan", "" ] ]
new_dataset
0.986807
2212.01042
Hui Zhuang
Pengfei Hu, Hui Zhuang, Panneer Selvam Santhalingam, Riccardo Spolaor, Parth Pathak, Guoming Zhang, Xiuzhen Cheng
AccEar: Accelerometer Acoustic Eavesdropping with Unconstrained Vocabulary
2022 IEEE Symposium on Security and Privacy (SP)
2022 IEEE Symposium on Security and Privacy (SP)
10.1109/SP46214.2022.9833716
null
cs.SD cs.CR eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing popularity of voice-based applications, acoustic eavesdropping has become a serious threat to users' privacy. While on smartphones the access to microphones requires an explicit user permission, acoustic eavesdropping attacks can rely on motion sensors (such as the accelerometer and gyroscope), whose access is unrestricted. However, previous instances of such attacks can only recognize a limited set of pre-trained words or phrases. In this paper, we present AccEar, an accelerometer-based acoustic eavesdropping attack that can reconstruct any audio played on the smartphone's loudspeaker with unconstrained vocabulary. We show that an attacker can employ a conditional Generative Adversarial Network (cGAN) to reconstruct high-fidelity audio from low-frequency accelerometer signals. The presented cGAN model learns to recreate high-frequency components of the user's voice from low-frequency accelerometer signals through spectrogram enhancement. We assess the feasibility and effectiveness of the AccEar attack in a thorough set of experiments using audio from 16 public personalities. As shown by the results of both objective and subjective evaluations, AccEar successfully reconstructs user speech from accelerometer signals in different scenarios, including varying sampling rates, audio volumes, device models, etc.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 09:13:28 GMT" } ]
2022-12-05T00:00:00
[ [ "Hu", "Pengfei", "" ], [ "Zhuang", "Hui", "" ], [ "Santhalingamy", "Panneer Selvam", "" ], [ "Spolaor", "Riccardo", "" ], [ "Pathaky", "Parth", "" ], [ "Zhang", "Guoming", "" ], [ "Cheng", "Xiuzhen", "" ] ]
new_dataset
0.97891
2212.01210
Nedim Osmic
Nedim Osmic, Adnan Tahirovic and Bakir Lacevic
Octocopter Design: Modelling, Control and Motion Planning
100 pages, 57 Figures, 16 Tables
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This book provides a solution to the control and motion planning design for an octocopter system. It includes a particular choice of control and motion planning algorithms based on the authors' previous research work, so it can be used as reference design guidance for students, researchers, and autonomous-vehicle hobbyists. The control is constructed based on a fault-tolerant approach that aims to increase the system's chances of detecting and isolating a potential failure, in order to produce feasible control signals for the remaining active motors. The motion planning algorithm is risk-aware in the sense that, during the planning stage, it takes into account the constraints related to the fault-dependent and mission-related maneuverability analysis of the octocopter system. Such a planner generates only those reference trajectories along which the octocopter system would be safe and capable of good tracking in the case of a single motor fault and in the majority of double motor fault scenarios. The control and motion planning algorithms presented in the book aim to increase the overall reliability of the system for completing the mission.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 14:43:25 GMT" } ]
2022-12-05T00:00:00
[ [ "Osmic", "Nedim", "" ], [ "Tahirovic", "Adnan", "" ], [ "Lacevic", "Bakir", "" ] ]
new_dataset
0.999799
2212.01247
Tobias Fischer
Tobias Fischer, Yung-Hsu Yang, Suryansh Kumar, Min Sun, Fisher Yu
CC-3DT: Panoramic 3D Object Tracking via Cross-Camera Fusion
Project page: https://www.vis.xyz/pub/cc-3dt/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
To track the 3D locations and trajectories of the other traffic participants at any given time, modern autonomous vehicles are equipped with multiple cameras that cover the vehicle's full surroundings. Yet, camera-based 3D object tracking methods prioritize optimizing the single-camera setup and resort to post-hoc fusion in a multi-camera setup. In this paper, we propose a method for panoramic 3D object tracking, called CC-3DT, that associates and models object trajectories both temporally and across views, and improves the overall tracking consistency. In particular, our method fuses 3D detections from multiple cameras before association, reducing identity switches significantly and improving motion modeling. Our experiments on large-scale driving datasets show that fusion before association leads to a large margin of improvement over post-hoc fusion. We set a new state-of-the-art with 12.6% improvement in average multi-object tracking accuracy (AMOTA) among all camera-based methods on the competitive NuScenes 3D tracking benchmark, outperforming previously published methods by 6.5% in AMOTA with the same 3D detector.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 15:43:55 GMT" } ]
2022-12-05T00:00:00
[ [ "Fischer", "Tobias", "" ], [ "Yang", "Yung-Hsu", "" ], [ "Kumar", "Suryansh", "" ], [ "Sun", "Min", "" ], [ "Yu", "Fisher", "" ] ]
new_dataset
0.996995
2212.01260
Maxim Khomiakov
Maxim Khomiakov, Julius Holbech Radzikowski, Carl Anton Schmidt, Mathias Bonde S{\o}rensen, Mads Andersen, Michael Riis Andersen and Jes Frellsen
SolarDK: A high-resolution urban solar panel image classification and localization dataset
7 pages, 2 figures, to access the dataset, see https://osf.io/aj539/
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
The body of research on classification of solar panel arrays from aerial imagery is increasing, yet there are still not many public benchmark datasets. This paper introduces two novel benchmark datasets for classifying and localizing solar panel arrays in Denmark: A human annotated dataset for classification and segmentation, as well as a classification dataset acquired using self-reported data from the Danish national building registry. We explore the performance of prior works on the new benchmark dataset, and present results after fine-tuning models using a similar approach as recent works. Furthermore, we train models of newer architectures and provide benchmark baselines to our datasets in several scenarios. We believe the release of these datasets may improve future research in both local and global geospatial domains for identifying and mapping of solar panel arrays from aerial imagery. The data is accessible at https://osf.io/aj539/.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 15:56:56 GMT" } ]
2022-12-05T00:00:00
[ [ "Khomiakov", "Maxim", "" ], [ "Radzikowski", "Julius Holbech", "" ], [ "Schmidt", "Carl Anton", "" ], [ "Sørensen", "Mathias Bonde", "" ], [ "Andersen", "Mads", "" ], [ "Andersen", "Michael Riis", "" ], [ "Frellsen", "Jes", "" ] ]
new_dataset
0.999856
2212.01298
Yushan Siriwardhana
Sehan Samarakoon, Yushan Siriwardhana, Pawani Porambage, Madhusanka Liyanage, Sang-Yoon Chang, Jinoh Kim, Jonghyun Kim, Mika Ylianttila
5G-NIDD: A Comprehensive Network Intrusion Detection Dataset Generated over 5G Wireless Network
Link to the Dataset http://ieee-dataport.org/10203
null
null
null
cs.CR cs.NI
http://creativecommons.org/publicdomain/zero/1.0/
With a plethora of new connections, features, and services introduced, 5th generation (5G) wireless technology reflects the development of mobile communication networks and is here to stay for the next decade. The multitude of services and technologies that 5G incorporates has made modern communication networks very complex and sophisticated. This complexity, along with the incorporation of Machine Learning (ML) and Artificial Intelligence (AI), provides the opportunity for attackers to launch intelligent attacks against the network and network devices. These attacks often go undetected due to the lack of intelligent security mechanisms to counter them. Therefore, the implementation of real-time, proactive, and self-adaptive security mechanisms throughout the network will be an integral part of 5G as well as future communication systems, and large amounts of data collected from real networks will play an important role in training AI/ML models to identify and detect malicious content in network traffic. This work presents 5G-NIDD, a fully labeled dataset built on a functional 5G test network that can be used by those who develop and test AI/ML solutions. The work further analyses the collected data using common ML models and reports the achieved accuracy levels.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 16:42:46 GMT" } ]
2022-12-05T00:00:00
[ [ "Samarakoon", "Sehan", "" ], [ "Siriwardhana", "Yushan", "" ], [ "Porambage", "Pawani", "" ], [ "Liyanage", "Madhusanka", "" ], [ "Chang", "Sang-Yoon", "" ], [ "Kim", "Jinoh", "" ], [ "Kim", "Jonghyun", "" ], [ "Ylianttila", "Mika", "" ] ]
new_dataset
0.999786
2212.01301
Ian McQuillan
Oscar H. Ibarra and Ian McQuillan
Semilinearity of Families of Languages
20 pages
International Journal of Foundations of Computer Science, 31 (8), 1179-1198 (2020)
10.1142/S0129054120420095
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
Techniques are developed for creating new and general language families of only semilinear languages, and for showing families only contain semilinear languages. It is shown that for language families L that are semilinear full trios, the smallest full AFL containing L that is also closed under intersection with languages in NCM (where NCM is the family of languages accepted by NFAs augmented with reversal-bounded counters), is also semilinear. If these closure properties are effective, this also immediately implies decidability of membership, emptiness, and infiniteness for these general families. From the general techniques, new grammar systems are given that are extensions of well-known families of semilinear full trios, whereby it is implied that these extensions must only describe semilinear languages. This also implies positive decidability properties for the new systems. Some characterizations of the new families are also given.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 16:49:56 GMT" } ]
2022-12-05T00:00:00
[ [ "Ibarra", "Oscar H.", "" ], [ "McQuillan", "Ian", "" ] ]
new_dataset
0.998306
2212.01372
Mustafa Doger
Mustafa Doger and Sennur Ulukus
Bitcoin Security-Latency Under Network Delay
null
null
null
null
cs.CR cs.DC cs.DM cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We improve security-latency bounds of Nakamoto consensus by analyzing the race between adversarial and honest chains in three different phases: pre-mining, confirmation and post-confirmation. We find the probability distribution of the length of the adversarial chain and the rigged adversarial chain under jumper models during the confirmation interval. We analyze certain properties of this race to model pre-mining and post-confirmation phases with random walks that provide tighter bounds than existing results. Combining all three phases provides novel upper and lower bounds for blockchains with small $\lambda\Delta$.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 18:54:30 GMT" } ]
2022-12-05T00:00:00
[ [ "Doger", "Mustafa", "" ], [ "Ulukus", "Sennur", "" ] ]
new_dataset
0.984655
2006.02901
Gang Liu
Gang Liu, Yajing Pang, Shuai Yin, Xiaoke Niu, Jing Wang, Hong Wan
Dendrite Net with Acceleration Module for Faster Nonlinear Mapping and System Identification
Published in Mathematics
null
10.3390/math10234477
null
cs.LG cs.CV stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Nonlinear mapping is an essential and common requirement in online systems, such as sensor systems and mobile phones, and accelerating nonlinear mapping will directly speed up these systems. Previously, the authors of this paper proposed Dendrite Net (DD), which has enormously lower time complexity than existing nonlinear mapping algorithms; however, there are still redundant calculations in DD. This paper presents a DD with an acceleration module (AC) to accelerate nonlinear mapping further. We conduct three experiments to verify whether DD with AC has lower time complexity while retaining DD's nonlinear mapping and system identification properties. The first experiment concerns the precision and identification of unary nonlinear mappings, reflecting the calculation performance of DD with AC for basic functions in online systems. The second experiment concerns the mapping precision and identification of multi-input nonlinear systems, reflecting the performance when designing online systems via DD with AC. Finally, this paper compares the time complexity of DD and DD with AC and analyzes the theoretical reasons through repeated experiments. Results: DD with AC retains DD's excellent mapping and identification properties and has lower time complexity. Significance: DD with AC can be used for most engineering systems, such as sensor systems, and will speed up computation in these online systems. The code of DD with AC is available at https://github.com/liugang1234567/Gang-neuron
[ { "version": "v1", "created": "Thu, 4 Jun 2020 17:56:24 GMT" }, { "version": "v2", "created": "Thu, 1 Dec 2022 17:51:04 GMT" } ]
2022-12-02T00:00:00
[ [ "Liu", "Gang", "" ], [ "Pang", "Yajing", "" ], [ "Yin", "Shuai", "" ], [ "Niu", "Xiaoke", "" ], [ "Wang", "Jing", "" ], [ "Wan", "Hong", "" ] ]
new_dataset
0.994855
2106.04618
Laurens Bliek
Laurens Bliek, Arthur Guijt, Rickard Karlsson, Sicco Verwer, Mathijs de Weerdt
EXPObench: Benchmarking Surrogate-based Optimisation Algorithms on Expensive Black-box Functions
33 pages
null
null
null
cs.LG cs.NE math.OC
http://creativecommons.org/licenses/by/4.0/
Surrogate algorithms such as Bayesian optimisation are especially designed for black-box optimisation problems with expensive objectives, such as hyperparameter tuning or simulation-based optimisation. In the literature, these algorithms are usually evaluated with synthetic benchmarks which are well established but have no expensive objective, and only on one or two real-life applications which vary wildly between papers. There is a clear lack of standardisation when it comes to benchmarking surrogate algorithms on real-life, expensive, black-box objective functions. This makes it very difficult to draw conclusions on the effect of algorithmic contributions and to give substantial advice on which method to use when. A new benchmark library, EXPObench, provides first steps towards such a standardisation. The library is used to provide an extensive comparison of six different surrogate algorithms on four expensive optimisation problems from different real-life applications. This has led to new insights regarding the relative importance of exploration, the evaluation time of the objective, and the used model. We also provide rules of thumb for which surrogate algorithm to use in which situation. A further contribution is that we make the algorithms and benchmark problem instances publicly available, contributing to more uniform analysis of surrogate algorithms. Most importantly, we include the performance of the six algorithms on all evaluated problem instances. This results in a unique new dataset that lowers the bar for researching new methods as the number of expensive evaluations required for comparison is significantly reduced.
[ { "version": "v1", "created": "Tue, 8 Jun 2021 18:17:42 GMT" }, { "version": "v2", "created": "Thu, 1 Dec 2022 16:37:41 GMT" } ]
2022-12-02T00:00:00
[ [ "Bliek", "Laurens", "" ], [ "Guijt", "Arthur", "" ], [ "Karlsson", "Rickard", "" ], [ "Verwer", "Sicco", "" ], [ "de Weerdt", "Mathijs", "" ] ]
new_dataset
0.992043
2110.06321
Jie Chen
Jie Chen, Prasanna Date, Nicholas Chancellor, Mohammed Atiquzzaman, Cormac Sreenan
Controller-based Energy-Aware Wireless Sensor Network Routing using Quantum Algorithms
null
IEEE Transactions on Quantum Engineering, vol. 3, pp. 1-12, 2022, Art no. 3102912
10.1109/TQE.2022.3217297
null
cs.ET quant-ph
http://creativecommons.org/licenses/by/4.0/
Energy-efficient routing in wireless sensor networks has attracted attention from researchers in both academia and industry, most recently motivated by the opportunity to use SDN (software-defined networking)-inspired approaches. These problems are NP-hard, with algorithms needing computation time that scales faster than polynomially in the problem size. Consequently, heuristic algorithms are used in practice, which are unable to guarantee optimality. In this short paper, we show proof-of-principle for the use of a quantum annealing processor instead of a classical processor to find optimal or near-optimal solutions very quickly. Our preliminary results for small networks show that this approach using quantum computing has great promise and may open the door to other significant improvements in the efficacy of network algorithms.
[ { "version": "v1", "created": "Tue, 12 Oct 2021 20:16:21 GMT" } ]
2022-12-02T00:00:00
[ [ "Chen", "Jie", "" ], [ "Date", "Prasanna", "" ], [ "Chancellor", "Nicholas", "" ], [ "Atiquzzaman", "Mohammed", "" ], [ "Sreenan", "Cormac", "" ] ]
new_dataset
0.996689
2201.01051
Ashirbad Pradhan
Ashirbad Pradhan, Jiayuan He, Ning Jiang
Open Access Dataset for Electromyography based Multi-code Biometric Authentication
manuscript for open access dataset (paper and appendix)
Sci Data 9, 733 (2022)
10.1038/s41597-022-01836-y
null
cs.CR eess.SP stat.ML
http://creativecommons.org/licenses/by/4.0/
Recently, surface electromyogram (EMG) has been proposed as a novel biometric trait for addressing some key limitations of current biometrics, such as spoofing and liveness. The EMG signals possess a unique characteristic: they are inherently different for individuals (biometrics), and they can be customized to realize multi-length codes or passwords (for example, by performing different gestures). However, current EMG-based biometric research has two critical limitations: 1) a small subject pool, compared to other more established biometric traits, and 2) limited to single-session or single-day data sets. In this study, forearm and wrist EMG data were collected from 43 participants over three different days with long separation while they performed static hand and wrist gestures. The multi-day biometric authentication resulted in a median EER of 0.017 for the forearm setup and 0.025 for the wrist setup, comparable to well-established biometric traits suggesting consistent performance over multiple days. The presented large-sample multi-day data set and findings could facilitate further research on EMG-based biometrics and other gesture recognition-based applications.
[ { "version": "v1", "created": "Tue, 4 Jan 2022 09:20:34 GMT" }, { "version": "v2", "created": "Wed, 5 Jan 2022 07:15:08 GMT" } ]
2022-12-02T00:00:00
[ [ "Pradhan", "Ashirbad", "" ], [ "He", "Jiayuan", "" ], [ "Jiang", "Ning", "" ] ]
new_dataset
0.999626
2201.08383
Chao-Yuan Wu
Chao-Yuan Wu, Yanghao Li, Karttikeya Mangalam, Haoqi Fan, Bo Xiong, Jitendra Malik, Christoph Feichtenhofer
MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition
Technical report. arXiv v2: add link to code
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While today's video recognition systems parse snapshots or short clips accurately, they cannot yet connect the dots and reason across a longer range of time. Most existing video architectures can only process <5 seconds of video without hitting computation or memory bottlenecks. In this paper, we propose a new strategy to overcome this challenge. Instead of trying to process more frames at once like most existing methods, we propose to process videos in an online fashion and cache "memory" at each iteration. Through the memory, the model can reference prior context for long-term modeling, at only a marginal cost. Based on this idea, we build MeMViT, a Memory-augmented Multiscale Vision Transformer, which has a temporal support 30x longer than existing models with only 4.5% more compute; traditional methods need >3,000% more compute to do the same. On a wide range of settings, the increased temporal support enabled by MeMViT brings consistently large gains in recognition accuracy. MeMViT obtains state-of-the-art results on the AVA, EPIC-Kitchens-100 action classification, and action anticipation datasets. Code and models are available at https://github.com/facebookresearch/memvit.
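The "cache memory at each iteration" idea can be illustrated with a small PyTorch module that attends over the current clip's tokens plus detached tokens cached from earlier clips. The strided subsampling used for compression, the head count, and the cache length below are placeholders, not MeMViT's actual design (see the linked code for that).

```python
import torch

class MemoryAttention(torch.nn.Module):
    """Toy online attention with a cross-clip memory cache (sketch only)."""

    def __init__(self, dim, mem_len=4):        # dim must divide by num_heads
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.mem, self.mem_len = [], mem_len

    def forward(self, x):                       # x: (B, T, dim) tokens of one clip
        # Keys/values = cached tokens from earlier clips + the current clip.
        kv = torch.cat(self.mem + [x], dim=1) if self.mem else x
        out, _ = self.attn(x, kv, kv)
        # Cache a compressed, gradient-free copy of this clip for later clips;
        # detaching keeps memory/compute marginal, as the abstract describes.
        self.mem.append(x.detach()[:, ::4])     # stride-4 pooling as a stand-in
        self.mem = self.mem[-self.mem_len:]     # bounded memory length
        return out
```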
[ { "version": "v1", "created": "Thu, 20 Jan 2022 18:59:54 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2022 19:40:55 GMT" } ]
2022-12-02T00:00:00
[ [ "Wu", "Chao-Yuan", "" ], [ "Li", "Yanghao", "" ], [ "Mangalam", "Karttikeya", "" ], [ "Fan", "Haoqi", "" ], [ "Xiong", "Bo", "" ], [ "Malik", "Jitendra", "" ], [ "Feichtenhofer", "Christoph", "" ] ]
new_dataset
0.998505
2204.06676
Khushal Sethi
Khushal Sethi
DRAGON (Differentiable Graph Execution): A suite of Hardware Simulation and Optimization tools for Modern AI/Non-AI Workloads
null
null
null
null
cs.AR cs.AI cs.ET cs.PF
http://creativecommons.org/licenses/by/4.0/
We introduce DRAGON, an open-source, fast and explainable hardware simulation and optimization toolchain that enables hardware architects to simulate hardware designs and to optimize those designs to execute workloads efficiently. The DRAGON toolchain provides the following tools: Hardware Model Generator (DGen), Hardware Simulator (DSim) and Hardware Optimizer (DOpt). DSim simulates the execution of algorithms (represented as data-flow graphs) on the described hardware. DGen describes the hardware in detail from user-input architecture/technology parameters (represented in a custom description language). A novel methodology of gradient descent from the simulation allows us to optimize the hardware model, giving directions for improvements in technology and design parameters, as provided by DOpt. The DRAGON framework (DSim) is much faster than previously available simulation tools, which is made possible through performance-first coding practices, mathematical formulas for common computing operations that avoid cycle-accurate simulation steps, efficient algorithms for mapping, and data-structure representations of hardware state. The DRAGON framework (DOpt) generates performance-optimized architectures for both AI and non-AI workloads, and provides technology improvement directions for 100x-1000x better future computing systems.
[ { "version": "v1", "created": "Wed, 13 Apr 2022 23:57:12 GMT" }, { "version": "v2", "created": "Mon, 25 Apr 2022 04:50:22 GMT" }, { "version": "v3", "created": "Wed, 4 May 2022 04:23:46 GMT" }, { "version": "v4", "created": "Mon, 16 May 2022 02:08:48 GMT" }, { "version": "v5", "created": "Mon, 30 May 2022 17:47:34 GMT" }, { "version": "v6", "created": "Sat, 3 Sep 2022 21:28:41 GMT" }, { "version": "v7", "created": "Wed, 30 Nov 2022 20:07:07 GMT" } ]
2022-12-02T00:00:00
[ [ "Sethi", "Khushal", "" ] ]
new_dataset
0.97574
2204.09069
Roberto Bigazzi
Roberto Bigazzi, Federico Landi, Silvia Cascianelli, Marcella Cornia, Lorenzo Baraldi and Rita Cucchiara
Embodied Navigation at the Art Gallery
Accepted by 21st International Conference on Image Analysis and Processing (ICIAP 2021)
null
10.1007/978-3-031-06427-2_61
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embodied agents, trained to explore and navigate indoor photorealistic environments, have achieved impressive results on standard datasets and benchmarks. So far, experiments and evaluations have involved domestic and working scenes like offices, flats, and houses. In this paper, we build and release a new 3D space with unique characteristics: that of a complete art museum. We name this environment ArtGallery3D (AG3D). Compared with existing 3D scenes, the collected space is ampler, richer in visual features, and provides very sparse occupancy information. This is challenging for occupancy-based agents, which are usually trained in crowded domestic environments with plenty of occupancy information. Additionally, we annotate the coordinates of the main points of interest inside the museum, such as paintings, statues, and other items. Thanks to this manual process, we deliver a new benchmark for PointGoal navigation inside this new space. Trajectories in this dataset are far more complex and lengthy than existing ground-truth paths for navigation in Gibson and Matterport3D. We carry out an extensive experimental evaluation in our new space and show that existing methods hardly adapt to this scenario. As such, we believe that the availability of this 3D model will foster future research and help improve existing solutions.
[ { "version": "v1", "created": "Tue, 19 Apr 2022 18:00:06 GMT" } ]
2022-12-02T00:00:00
[ [ "Bigazzi", "Roberto", "" ], [ "Landi", "Federico", "" ], [ "Cascianelli", "Silvia", "" ], [ "Cornia", "Marcella", "" ], [ "Baraldi", "Lorenzo", "" ], [ "Cucchiara", "Rita", "" ] ]
new_dataset
0.963657
2205.00030
Syed Mohsin Abbas Dr.
Syed Mohsin Abbas, Marwan Jalaleddine and Warren J. Gross
GRAND for Rayleigh Fading Channels
To appear in IEEE Global Communications Conference (GLOBECOM) 2022 Workshops
GLOBECOM 2022 Workshops
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Guessing Random Additive Noise Decoding (GRAND) is a code-agnostic decoding technique for short-length and high-rate channel codes. GRAND tries to guess the channel noise by generating test error patterns (TEPs), and the sequence of the TEPs is the main difference between different GRAND variants. In this work, we extend the application of GRAND to multipath frequency non-selective Rayleigh fading communication channels, and we refer to this GRAND variant as Fading-GRAND. The proposed Fading-GRAND adapts its TEP generation to the fading conditions of the underlying communication channel, outperforming traditional channel code decoders in scenarios with $L$ spatial diversity branches as well as scenarios with no diversity. Numerical simulation results show that the Fading-GRAND outperforms the traditional Berlekamp-Massey (B-M) decoder for decoding BCH code $(127,106)$ and BCH code $(127,113)$ by $\mathbf{0.5\sim6.5}$ dB at a target FER of $10^{-7}$. Similarly, Fading-GRAND outperforms GRANDAB, the hard-input variation of GRAND, by $0.2\sim8$ dB at a target FER of $10^{-7}$ with CRC $(128,104)$ code and RLC $(128,104)$. Furthermore the average complexity of Fading-GRAND, at $\frac{E_b}{N_0}$ corresponding to target FER of $10^{-7}$, is $\frac{1}{2}\times\sim \frac{1}{46}\times$ the complexity of GRANDAB.
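To illustrate the TEP-guessing loop that all GRAND variants share, here is a toy hard-decision sketch in Python. The weight-ordered pattern schedule corresponds to GRANDAB; Fading-GRAND would instead reorder the test error patterns according to the fading statistics of the channel. Function names and the abandonment threshold are illustrative.

```python
import itertools
import numpy as np

def grand_decode(y, H, max_queries=10**5):
    """Toy hard-input GRAND over GF(2).

    y: received binary word (numpy int array of length n)
    H: parity-check matrix, shape (n - k, n); zero syndrome <=> codeword
    Guesses noise patterns in increasing Hamming weight and returns the
    first codeword found, or None on abandonment (the "AB" in GRANDAB).
    """
    n = len(y)
    queries = 0
    for w in range(n + 1):                         # weight-ordered TEPs
        for flips in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1                     # candidate noise pattern
            c = (y + e) % 2                        # strip the guessed noise
            queries += 1
            if not ((H @ c) % 2).any():            # zero syndrome -> codeword
                return c
            if queries >= max_queries:
                return None                        # abandon
    return None
```

Because GRAND only queries codebook membership, the same loop serves any linear code; only the TEP ordering changes between variants, which is exactly the knob Fading-GRAND adapts.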
[ { "version": "v1", "created": "Fri, 29 Apr 2022 18:22:06 GMT" }, { "version": "v2", "created": "Thu, 1 Dec 2022 03:50:37 GMT" } ]
2022-12-02T00:00:00
[ [ "Abbas", "Syed Mohsin", "" ], [ "Jalaleddine", "Marwan", "" ], [ "Gross", "Warren J.", "" ] ]
new_dataset
0.967278
2208.13422
Hao Xu
Hao Xu, Bo Li and Fei Zhong
Light-YOLOv5: A Lightweight Algorithm for Improved YOLOv5 in Complex Fire Scenarios
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fire-detection technology is of great importance for successful fire-prevention measures, and image-based fire detection is one effective method. At present, object-detection algorithms struggle to balance detection speed and accuracy when applied to complex fire scenarios. In this study, a lightweight fire-detection algorithm, Light-YOLOv5 (You Only Look Once version five), is presented. First, a separable vision transformer (SepViT) block is used to replace several C3 modules in the final layer of the backbone network, enhancing both the backbone network's contact with global information and the extraction of flame and smoke features; second, a light bidirectional feature pyramid network (Light-BiFPN) is designed to lighten the model while improving feature extraction and balancing speed and accuracy during fire detection; third, a global attention mechanism (GAM) is fused into the network so that the model focuses more on global dimensional features, further improving detection accuracy; and finally, the Mish activation function and SIoU loss are utilized to simultaneously increase the convergence speed and enhance the accuracy. The experimental results show that, compared to the original algorithm, the mean average precision (mAP) of Light-YOLOv5 increases by 3.3%, the number of parameters decreases by 27.1%, and the floating point operations (FLOPs) decrease by 19.1%. The detection speed reaches 91.1 FPS, which enables real-time detection of targets in complex fire scenarios.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 08:36:04 GMT" }, { "version": "v2", "created": "Mon, 3 Oct 2022 08:38:40 GMT" }, { "version": "v3", "created": "Thu, 1 Dec 2022 16:24:31 GMT" } ]
2022-12-02T00:00:00
[ [ "Xu", "Hao", "" ], [ "Li", "Bo", "" ], [ "Zhong", "Fei", "" ] ]
new_dataset
0.997972
2210.12985
Christian Khairallah (Cayralat)
Shahd Dibas, Christian Khairallah, Nizar Habash, Omar Fayez Sadi, Tariq Sairafy, Karmel Sarabta and Abrar Ardah
Maknuune: A Large Open Palestinian Arabic Lexicon
Fixed errors in the Total row of Table 4 on page 7
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present Maknuune, a large open lexicon for the Palestinian Arabic dialect. Maknuune has over 36K entries from 17K lemmas, and 3.7K roots. All entries include diacritized Arabic orthography, phonological transcription and English glosses. Some entries are enriched with additional information such as broken plurals and templatic feminine forms, associated phrases and collocations, Standard Arabic glosses, and examples or notes on grammar, usage, or location of collected entry.
[ { "version": "v1", "created": "Mon, 24 Oct 2022 07:19:03 GMT" }, { "version": "v2", "created": "Thu, 1 Dec 2022 14:27:38 GMT" } ]
2022-12-02T00:00:00
[ [ "Dibas", "Shahd", "" ], [ "Khairallah", "Christian", "" ], [ "Habash", "Nizar", "" ], [ "Sadi", "Omar Fayez", "" ], [ "Sairafy", "Tariq", "" ], [ "Sarabta", "Karmel", "" ], [ "Ardah", "Abrar", "" ] ]
new_dataset
0.999821
2211.10716
Fanze Kong
Fanze Kong, Xiyuan Liu, Benxu Tang, Jiarong Lin, Yunfan Ren, Yixi Cai, Fangcheng Zhu, Nan Chen, Fu Zhang
MARSIM: A light-weight point-realistic simulator for LiDAR-based UAVs
8 pages, 13 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of low-cost, small-form-factor and light-weight solid-state LiDAR sensors has brought new opportunities for autonomous unmanned aerial vehicles (UAVs) by advancing navigation safety and computation efficiency. Yet the successful development of LiDAR-based UAVs relies on extensive simulation. Existing simulators can hardly perform simulations of real-world environments due to their requirement for dense mesh maps, which are difficult to obtain. In this paper, we develop a point-realistic simulator of real-world scenes for LiDAR-based UAVs. The key idea is the underlying point-rendering method, in which we construct a depth image directly from the point cloud map and interpolate it to obtain realistic LiDAR point measurements. Our simulator is able to run on a light-weight computing platform and supports the simulation of LiDARs with different resolutions and scanning patterns, dynamic obstacles, and multi-UAV systems. Developed in the ROS framework, the simulator can easily communicate with other key modules of an autonomous robot, such as perception, state estimation, planning, and control. Finally, the simulator provides 10 high-resolution point cloud maps of various real-world environments, including forests of different densities, a historic building, an office, a parking garage, and various complex indoor environments. These realistic maps provide diverse testing scenarios for an autonomous UAV. Evaluation results show that the developed simulator achieves superior performance in terms of time and memory consumption compared against Gazebo, and that the simulated UAV flights closely match actual flights in real-world environments. We believe such a point-realistic and light-weight simulator is crucial to bridging the gap between UAV simulation and experiments and will significantly facilitate research on LiDAR-based autonomous UAVs in the future.
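A minimal sketch of the underlying point-rendering idea follows: project the map points into a depth image and keep the nearest return per pixel. MARSIM additionally interpolates the depth image and matches specific LiDAR scan patterns, which this toy pinhole version omits; all names are illustrative.

```python
import numpy as np

def render_depth(points_cam, K, h, w):
    """Z-buffer a point cloud into an (h, w) depth image.

    points_cam: (N, 3) map points already in the sensor frame
    K:          3x3 pinhole intrinsics (a stand-in for a LiDAR scan model)
    Empty pixels stay at inf; a real simulator would interpolate them.
    """
    depth = np.full((h, w), np.inf)
    pts = points_cam[points_cam[:, 2] > 0]        # keep points in front
    uvw = (K @ pts.T).T                           # project to image plane
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[ok], v[ok], pts[ok, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)    # nearest point wins
    return depth
```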
[ { "version": "v1", "created": "Sat, 19 Nov 2022 15:08:44 GMT" }, { "version": "v2", "created": "Thu, 1 Dec 2022 13:41:30 GMT" } ]
2022-12-02T00:00:00
[ [ "Kong", "Fanze", "" ], [ "Liu", "Xiyuan", "" ], [ "Tang", "Benxu", "" ], [ "Lin", "Jiarong", "" ], [ "Ren", "Yunfan", "" ], [ "Cai", "Yixi", "" ], [ "Zhu", "Fangcheng", "" ], [ "Chen", "Nan", "" ], [ "Zhang", "Fu", "" ] ]
new_dataset
0.986733
2212.00003
Xinquan Wen
Xinquan Wen, Yiying Wu
Slowing Plants, Slowing Home
null
null
10.1145/3547522.3547691
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
The Anthropocene has been causing a global crisis in recent decades. Facing this challenge, increasing attempts are being made to explore the more-than-human-centred perspective in HCI and design. Our research sets out to explore ways of experiencing and interacting with plants, with a case study on the slowness of plants. Utilising existing time-lapse technology, we investigate the role of IoT technologies in associating biological slowness with the networked technological environment of the home. In the experiment, we chose the humidity level of the environment as the variable to synchronise the movement of smart curtains and plants. We propose a relationship-centred strategy that uses an inclusive feature of a microclimate, like humidity, instead of the plant itself, for human-plant interaction. Furthermore, it indicates a 'plant-decentred' perspective to spark critical reflection on the taken-for-granted perception of isolating a person or a plant as an individual entity.
[ { "version": "v1", "created": "Tue, 4 Oct 2022 07:16:36 GMT" } ]
2022-12-02T00:00:00
[ [ "Wen", "Xinquan", "" ], [ "Wu", "Yiying", "" ] ]
new_dataset
0.995233
2212.00013
Vivien Van Veldhuizen
Vivien van Veldhuizen
Autotuning PID control using Actor-Critic Deep Reinforcement Learning
null
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
This work is exploratory research concerned with determining in what way reinforcement learning can be used to predict optimal PID parameters for a robot designed for apple harvest. To study this, an algorithm called Advantage Actor Critic (A2C) is implemented on a simulated robot arm. The simulation primarily relies on the ROS framework. Experiments for tuning one actuator at a time and two actuators at a time are run, which both show that the model is able to predict PID gains that perform better than the set baseline. In addition, we study whether the model is able to predict PID parameters based on where an apple is located. Initial tests show that the model is indeed able to adapt its predictions to apple locations, making it an adaptive controller.
[ { "version": "v1", "created": "Tue, 29 Nov 2022 11:15:50 GMT" } ]
2022-12-02T00:00:00
[ [ "van Veldhuizen", "Vivien", "" ] ]
new_dataset
0.979884
2212.00069
Tushar Agarwal
Tushar Agarwal, Nithin Sugavanam, and Emre Ertin
MrSARP: A Hierarchical Deep Generative Prior for SAR Image Super-resolution
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.CV cs.LG eess.SP
http://creativecommons.org/licenses/by-sa/4.0/
Generative models learned using deep learning methods can be used as priors in under-determined inverse problems, including imaging from a sparse set of measurements. In this paper, we present a novel hierarchical deep-generative model, MrSARP, for SAR imagery that can jointly synthesize SAR images of a target at different resolutions. MrSARP is trained in conjunction with a critic that scores multi-resolution images jointly to decide if they are realistic images of a target at different resolutions. We show how this deep generative model can be used to retrieve the high spatial resolution image from low resolution images of the same target. The cost function of the generator is modified to improve its capability to retrieve the input parameters for a given set of resolution images. We evaluate the model's performance using the three standard error metrics for evaluating super-resolution performance on simulated data and compare it to upsampling and sparsity-based image sharpening approaches.
[ { "version": "v1", "created": "Wed, 30 Nov 2022 19:12:21 GMT" } ]
2022-12-02T00:00:00
[ [ "Agarwal", "Tushar", "" ], [ "Sugavanam", "Nithin", "" ], [ "Ertin", "Emre", "" ] ]
new_dataset
0.957848
2212.00089
Yixin Xu
Yixin Xu, Zijian Zhao, Yi Xiao, Tongguang Yu, Halid Mulaosmanovic, Dominik Kleimaier, Stefan Duenkel, Sven Beyer, Xiao Gong, Rajiv Joshi, X. Sharon Hu, Shixian Wen, Amanda Sofie Rios, Kiran Lekkala, Laurent Itti, Eric Homan, Sumitha George, Vijaykrishnan Narayanan, Kai Ni
Ferroelectric FET based Context-Switching FPGA Enabling Dynamic Reconfiguration for Adaptive Deep Learning Machines
54 pages, 15 figures
null
null
null
cs.AR cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Field Programmable Gate Array (FPGA) is widely used in the acceleration of deep learning applications because of its reconfigurability, flexibility, and fast time-to-market. However, conventional FPGA suffers from a tradeoff between chip area and reconfiguration latency, making efficient FPGA accelerations that require switching between multiple configurations still elusive. In this paper, we perform technology-circuit-architecture co-design to break this tradeoff with no additional area cost and lower power consumption compared with conventional designs, while providing dynamic reconfiguration, which can hide the reconfiguration time behind the execution time. Leveraging the intrinsic transistor structure and non-volatility of the ferroelectric FET (FeFET), we propose and experimentally verify compact FPGA primitives, including a 1FeFET look-up table (LUT) cell and a 1FeFET routing cell for connection blocks (CBs) and switch boxes (SBs). To support dynamic reconfiguration, two local copies of the primitives are placed in parallel, which enables loading of an arbitrary configuration without interrupting the execution of the active configuration. A comprehensive evaluation shows that, compared with the SRAM-based FPGA, our dynamic reconfiguration design achieves 63.0%/71.1% reduction in LUT/CB area and 82.7%/53.6% reduction in CB/SB power consumption with a minimal penalty in the critical path delay (9.6%). We further implement a Super-Sub network model to show the benefit of the context-switching capability of our design. We also evaluate the timing performance of our design against conventional FPGA in various application scenarios. In one scenario where users switch between two preloaded configurations, our design yields significant time savings of 78.7% on average. In the other scenario of implementing multiple configurations with dynamic reconfiguration, our design offers time savings of 20.3% on average.
[ { "version": "v1", "created": "Wed, 30 Nov 2022 20:00:20 GMT" } ]
2022-12-02T00:00:00
[ [ "Xu", "Yixin", "" ], [ "Zhao", "Zijian", "" ], [ "Xiao", "Yi", "" ], [ "Yu", "Tongguang", "" ], [ "Mulaosmanovic", "Halid", "" ], [ "Kleimaier", "Dominik", "" ], [ "Duenkel", "Stefan", "" ], [ "Beyer", "Sven", "" ], [ "Gong", "Xiao", "" ], [ "Joshi", "Rajiv", "" ], [ "Hu", "X. Sharon", "" ], [ "Wen", "Shixian", "" ], [ "Rios", "Amanda Sofie", "" ], [ "Lekkala", "Kiran", "" ], [ "Itti", "Laurent", "" ], [ "Homan", "Eric", "" ], [ "George", "Sumitha", "" ], [ "Narayanan", "Vijaykrishnan", "" ], [ "Ni", "Kai", "" ] ]
new_dataset
0.998174
2212.00227
Maojun Zhang
Maojun Zhang, Yang Li, Zezhong Zhang, Guangxu Zhu, Caijun Zhong
Wireless Image Transmission with Semantic and Security Awareness
Submitted to IEEE WCL for possible publication
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Semantic communication is an increasingly popular framework for wireless image transmission due to its high communication efficiency. With the aid of the joint-source-and-channel (JSC) encoder implemented by a neural network, semantic communication directly maps original images into symbol sequences containing semantic information. Compared with the traditional separate source and channel coding design used in bit-level communication systems, semantic communication systems are known to be more efficient and accurate, especially in the low signal-to-noise ratio (SNR) regime. This, however, prompts a critical yet unaddressed issue of security in semantic communication: it becomes easier for an eavesdropper to crack the semantic information, as it can be decoded even over a quite noisy channel. In this letter, we develop a semantic communication framework that accounts for both semantic meaning decoding efficiency and the risk of privacy leakage. To achieve this, targeting wireless image transmission, we propose, on the one hand, a JSC autoencoder featuring residual connections for efficient semantic meaning extraction and transmission, and, on the other hand, a data-driven scheme that balances the efficiency-privacy tradeoff. Extensive experimental results are provided to show the effectiveness and robustness of the proposed scheme.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 02:22:08 GMT" } ]
2022-12-02T00:00:00
[ [ "Zhang", "Maojun", "" ], [ "Li", "Yang", "" ], [ "Zhang", "Zezhong", "" ], [ "Zhu", "Guangxu", "" ], [ "Zhong", "Caijun", "" ] ]
new_dataset
0.982062
2212.00228
N. Benjamin Erichson
N. Benjamin Erichson and Soon Hoe Lim and Michael W. Mahoney
Gated Recurrent Neural Networks with Weighted Time-Delay Feedback
null
null
null
null
cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism in order to improve the modeling of long-term dependencies in sequential data. This model is a discretized version of a continuous-time formulation of a recurrent unit, where the dynamics are governed by delay differential equations (DDEs). By considering a suitable time-discretization scheme, we propose $\tau$-GRU, a discrete-time gated recurrent unit with delay. We prove the existence and uniqueness of solutions for the continuous-time model, and we demonstrate that the proposed feedback mechanism can help improve the modeling of long-term dependencies. Our empirical results show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures on a range of tasks, including time-series classification, human activity recognition, and speech recognition.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 02:26:34 GMT" } ]
2022-12-02T00:00:00
[ [ "Erichson", "N. Benjamin", "" ], [ "Lim", "Soon Hoe", "" ], [ "Mahoney", "Michael W.", "" ] ]
new_dataset
0.981518
2212.00244
Xidong Peng
Xidong Peng, Xinge Zhu, Yuexin Ma
CL3D: Unsupervised Domain Adaptation for Cross-LiDAR 3D Detection
Accepted by AAAI 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Domain adaptation for cross-LiDAR 3D detection is challenging due to the large gap in the raw data representation, with disparate point densities and point arrangements. By exploring domain-invariant 3D geometric characteristics and motion patterns, we present an unsupervised domain adaptation method that overcomes the above difficulties. First, we propose the Spatial Geometry Alignment module to extract similar 3D shape geometric features of the same object class to align the two domains, while eliminating the effect of distinct point distributions. Second, we present the Temporal Motion Alignment module to utilize motion features in sequential frames of data to match the two domains. Prototypes generated from the two modules are incorporated into the pseudo-label reweighting procedure and contribute to our effective self-training framework for the target domain. Extensive experiments show that our method achieves state-of-the-art performance on cross-device datasets, especially for the datasets with large gaps captured by mechanical scanning LiDARs and solid-state LiDARs in various scenes. The project homepage is at https://github.com/4DVLab/CL3D.git
[ { "version": "v1", "created": "Thu, 1 Dec 2022 03:22:55 GMT" } ]
2022-12-02T00:00:00
[ [ "Peng", "Xidong", "" ], [ "Zhu", "Xinge", "" ], [ "Ma", "Yuexin", "" ] ]
new_dataset
0.982394
2212.00265
Nicolas Gu\'enon des Mesnards
Konstantine Arkoudas, Nicolas Guenon des Mesnards, Melanie Rubino, Sandesh Swamy, Saarthak Khanna, Weiqi Sun, Khan Haidar
PIZZA: A new benchmark for complex end-to-end task-oriented parsing
Accepted for publication at AMLC 2022
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Much recent work in task-oriented parsing has focused on finding a middle ground between flat slots and intents, which are inexpressive but easy to annotate, and powerful representations such as the lambda calculus, which are expressive but costly to annotate. This paper continues the exploration of task-oriented parsing by introducing a new dataset for parsing pizza and drink orders, whose semantics cannot be captured by flat slots and intents. We perform an extensive evaluation of deep-learning techniques for task-oriented parsing on this dataset, including different flavors of seq2seq systems and RNNGs. The dataset comes in two main versions, one in a recently introduced utterance-level hierarchical notation that we call TOP, and one whose targets are executable representations (EXR). We demonstrate empirically that training the parser to directly generate EXR notation not only solves the problem of entity resolution in one fell swoop and overcomes a number of expressive limitations of TOP notation, but also results in significantly greater parsing accuracy.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 04:20:07 GMT" } ]
2022-12-02T00:00:00
[ [ "Arkoudas", "Konstantine", "" ], [ "Mesnards", "Nicolas Guenon des", "" ], [ "Rubino", "Melanie", "" ], [ "Swamy", "Sandesh", "" ], [ "Khanna", "Saarthak", "" ], [ "Sun", "Weiqi", "" ], [ "Haidar", "Khan", "" ] ]
new_dataset
0.998292
2212.00280
Jialian Wu
Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, Lijuan Wang
GRiT: A Generative Region-to-text Transformer for Object Understanding
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a Generative RegIon-to-Text transformer, GRiT, for object understanding. The spirit of GRiT is to formulate object understanding as <region, text> pairs, where region locates objects and text describes objects. For example, the text in object detection denotes class names while that in dense captioning refers to descriptive sentences. Specifically, GRiT consists of a visual encoder to extract image features, a foreground object extractor to localize objects, and a text decoder to generate open-set object descriptions. With the same model architecture, GRiT can understand objects via not only simple nouns, but also rich descriptive sentences including object attributes or actions. Experimentally, we apply GRiT to object detection and dense captioning tasks. GRiT achieves 60.4 AP on COCO 2017 test-dev for object detection and 15.5 mAP on Visual Genome for dense captioning. Code is available at https://github.com/JialianW/GRiT
[ { "version": "v1", "created": "Thu, 1 Dec 2022 04:59:44 GMT" } ]
2022-12-02T00:00:00
[ [ "Wu", "Jialian", "" ], [ "Wang", "Jianfeng", "" ], [ "Yang", "Zhengyuan", "" ], [ "Gan", "Zhe", "" ], [ "Liu", "Zicheng", "" ], [ "Yuan", "Junsong", "" ], [ "Wang", "Lijuan", "" ] ]
new_dataset
0.99965
2212.00305
Trung Nghia Le
Tuan-Luc Huynh, Khoi-Nguyen Nguyen-Ngoc, Chi-Bien Chu, Minh-Triet Tran, Trung-Nghia Le
Multilingual Communication System with Deaf Individuals Utilizing Natural and Visual Languages
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
According to the World Federation of the Deaf, more than two hundred sign languages exist. Therefore, it is challenging to understand deaf individuals, even proficient sign language users, resulting in a barrier between the deaf community and the rest of society. To bridge this language barrier, we propose a novel multilingual communication system, namely MUGCAT, to improve the communication efficiency of sign language users. By converting specific recognized hand gestures into expressive pictures, which are universally understandable and language independent, our MUGCAT system significantly helps deaf people convey their thoughts. To overcome the limitation that sign language is mostly impossible to translate into complete sentences for ordinary people, we propose to reconstruct meaningful sentences from the incomplete translation of sign language. We also measure the semantic similarity of generated sentences with the fragmented recognized hand gestures to preserve the original meaning. Experimental results show that the proposed system can work in a real-time manner and synthesize stunning illustrations and meaningful sentences from a few hand gestures of sign language. This proves that our MUGCAT has promising potential in assisting deaf communication.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 06:43:44 GMT" } ]
2022-12-02T00:00:00
[ [ "Huynh", "Tuan-Luc", "" ], [ "Nguyen-Ngoc", "Khoi-Nguyen", "" ], [ "Chu", "Chi-Bien", "" ], [ "Tran", "Minh-Triet", "" ], [ "Le", "Trung-Nghia", "" ] ]
new_dataset
0.95376
2212.00339
Zihao He
Kai Chen, Zihao He, Rong-Ching Chang, Jonathan May, Kristina Lerman
Anger Breeds Controversy: Analyzing Controversy and Emotions on Reddit
null
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
Emotions play an important role in interpersonal interactions and social conflict, yet their function in the development of controversy and disagreement in online conversations has not been explored. To address this gap, we study controversy on Reddit, a popular network of online discussion forums. We collect discussions from a wide variety of topical forums and use emotion detection to recognize a range of emotions from text, including anger, fear, joy, admiration, etc. Our study has three main findings. First, controversial comments express more anger and less admiration, joy and optimism than non-controversial comments. Second, controversial comments affect the emotions of downstream comments in a discussion, usually resulting in a long-term increase in anger and a decrease in positive emotions, although the magnitude and direction of emotional change depend on the forum. Finally, we show that emotions help better predict which comments will become controversial. Understanding the emotional dynamics of online discussions can help communities to better manage conversations.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 07:57:54 GMT" } ]
2022-12-02T00:00:00
[ [ "Chen", "Kai", "" ], [ "He", "Zihao", "" ], [ "Chang", "Rong-Ching", "" ], [ "May", "Jonathan", "" ], [ "Lerman", "Kristina", "" ] ]
new_dataset
0.995699
2212.00342
Balaji Ganesan
Sukriti Jaitly, Deepa Mariam George, Balaji Ganesan, Muhammad Ameen, Srinivas Pusapati
xEM: Explainable Entity Matching in Customer 360
4 pages, 5 figures. CODS-COMAD 2023 Demo
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entity matching in Customer 360 is the task of determining if multiple records represent the same real world entity. Entities are typically people, organizations, locations, and events represented as attributed nodes in a graph, though they can also be represented as records in relational data. While probabilistic matching engines and artificial neural network models exist for this task, explaining entity matching has received less attention. In this demo, we present our Explainable Entity Matching (xEM) system and discuss the different AI/ML considerations that went into its implementation.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 08:01:01 GMT" } ]
2022-12-02T00:00:00
[ [ "Jaitly", "Sukriti", "" ], [ "George", "Deepa Mariam", "" ], [ "Ganesan", "Balaji", "" ], [ "Ameen", "Muhammad", "" ], [ "Pusapati", "Srinivas", "" ] ]
new_dataset
0.980059
2212.00486
Jind\v{r}ich Libovick\'y
Martin Popel, Jind\v{r}ich Libovick\'y, Jind\v{r}ich Helcl
CUNI Systems for the WMT22 Czech-Ukrainian Translation Task
6 pages; System description paper at WMT22
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
We present Charles University submissions to the WMT22 General Translation Shared Task on Czech-Ukrainian and Ukrainian-Czech machine translation. We present two constrained submissions based on block back-translation and tagged back-translation and experiment with rule-based romanization of Ukrainian. Our results show that the romanization only has a minor effect on the translation quality. Further, we describe Charles Translator, a system that was developed in March 2022 as a response to the migration from Ukraine to the Czech Republic. Compared to our constrained systems, it did not use the romanization and used some proprietary data sources.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 13:25:10 GMT" } ]
2022-12-02T00:00:00
[ [ "Popel", "Martin", "" ], [ "Libovický", "Jindřich", "" ], [ "Helcl", "Jindřich", "" ] ]
new_dataset
0.998908
2212.00500
Xiaohuan Zhou
Xiaohuan Zhou, Jiaming Wang, Zeyu Cui, Shiliang Zhang, Zhijie Yan, Jingren Zhou, Chang Zhou
MMSpeech: Multi-modal Multi-task Encoder-Decoder Pre-training for Speech Recognition
Submitted to ICASSP 2023
null
null
null
cs.MM cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel multi-modal multi-task encoder-decoder pre-training framework (MMSpeech) for Mandarin automatic speech recognition (ASR), which employs both unlabeled speech and text data. The main difficulty in speech-text joint pre-training comes from the significant difference between speech and text modalities, especially for Mandarin speech and text. Unlike English and other languages with an alphabetic writing system, Mandarin uses an ideographic writing system where character and sound are not tightly mapped to one another. Therefore, we propose to introduce the phoneme modality into pre-training, which can help capture modality-invariant information between Mandarin speech and text. Specifically, we employ a multi-task learning framework including five self-supervised and supervised tasks with speech and text data. For end-to-end pre-training, we introduce self-supervised speech-to-pseudo-codes (S2C) and phoneme-to-text (P2T) tasks utilizing unlabeled speech and text data, where speech-pseudo-code pairs and phoneme-text pairs are a supplement to the supervised speech-text pairs. To train the encoder to learn better speech representations, we introduce self-supervised masked speech prediction (MSP) and supervised phoneme prediction (PP) tasks to learn to map speech into phonemes. Besides, we directly add the downstream supervised speech-to-text (S2T) task into the pre-training process, which can further improve the pre-training performance and achieve better recognition results even without fine-tuning. Experiments on AISHELL-1 show that our proposed method achieves state-of-the-art performance, with a more than 40% relative improvement compared with other pre-training methods.
[ { "version": "v1", "created": "Tue, 29 Nov 2022 13:16:09 GMT" } ]
2022-12-02T00:00:00
[ [ "Zhou", "Xiaohuan", "" ], [ "Wang", "Jiaming", "" ], [ "Cui", "Zeyu", "" ], [ "Zhang", "Shiliang", "" ], [ "Yan", "Zhijie", "" ], [ "Zhou", "Jingren", "" ], [ "Zhou", "Chang", "" ] ]
new_dataset
0.976329
2212.00586
Jiaan Wang
Shaohui Zheng, Zhixu Li, Jiaan Wang, Jianfeng Qu, An Liu, Lei Zhao, Zhigang Chen
Long-Document Cross-Lingual Summarization
Accepted by WSDM 2023
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-Lingual Summarization (CLS) aims at generating summaries in one language for the given documents in another language. CLS has attracted wide research attention due to its practical significance in the multi-lingual world. Though great contributions have been made, existing CLS works typically focus on short documents, such as news articles, short dialogues and guides. Different from these short texts, long documents such as academic articles and business reports usually discuss complicated subjects and consist of thousands of words, making them non-trivial to process and summarize. To promote CLS research on long documents, we construct Perseus, the first long-document CLS dataset which collects about 94K Chinese scientific documents paired with English summaries. The average length of documents in Perseus is more than two thousand tokens. As a preliminary study on long-document CLS, we build and evaluate various CLS baselines, including pipeline and end-to-end methods. Experimental results on Perseus show the superiority of the end-to-end baseline, outperforming the strong pipeline models equipped with sophisticated machine translation systems. Furthermore, to provide a deeper understanding, we manually analyze the model outputs and discuss specific challenges faced by current approaches. We hope that our work could benchmark long-document CLS and benefit future studies.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 15:24:16 GMT" } ]
2022-12-02T00:00:00
[ [ "Zheng", "Shaohui", "" ], [ "Li", "Zhixu", "" ], [ "Wang", "Jiaan", "" ], [ "Qu", "Jianfeng", "" ], [ "Liu", "An", "" ], [ "Zhao", "Lei", "" ], [ "Chen", "Zhigang", "" ] ]
new_dataset
0.995074
2212.00638
Sachin Goyal
Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, and Aditi Raghunathan
Finetune like you pretrain: Improved finetuning of zero-shot vision models
20 Pages, 7 Tables, 5 Figures
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Finetuning image-text models such as CLIP achieves state-of-the-art accuracies on a variety of benchmarks. However, recent works like WiseFT (Wortsman et al., 2021) and LP-FT (Kumar et al., 2022) have shown that even subtle differences in the finetuning process can lead to surprisingly large differences in the final performance, both for in-distribution (ID) and out-of-distribution (OOD) data. In this work, we show that a natural and simple approach of mimicking contrastive pretraining consistently outperforms alternative finetuning approaches. Specifically, we cast downstream class labels as text prompts and continue optimizing the contrastive loss between image embeddings and class-descriptive prompt embeddings (contrastive finetuning). Our method consistently outperforms baselines across 7 distribution shifts, 6 transfer learning, and 3 few-shot learning benchmarks. On WILDS-iWILDCam, our proposed approach FLYP outperforms the top of the leaderboard by $2.3\%$ ID and $2.7\%$ OOD, giving the highest reported accuracy. Averaged across 7 OOD datasets (2 WILDS and 5 ImageNet associated shifts), FLYP gives gains of $4.2\%$ OOD over standard finetuning and outperforms the current state of the art (LP-FT) by more than $1\%$ both ID and OOD. Similarly, on 3 few-shot learning benchmarks, our approach gives gains up to $4.6\%$ over standard finetuning and $4.4\%$ over the state of the art. In total, these benchmarks establish contrastive finetuning as a simple, intuitive, and state-of-the-art approach for supervised finetuning of image-text models like CLIP. Code is available at https://github.com/locuslab/FLYP.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 16:37:46 GMT" } ]
2022-12-02T00:00:00
[ [ "Goyal", "Sachin", "" ], [ "Kumar", "Ananya", "" ], [ "Garg", "Sankalp", "" ], [ "Kolter", "Zico", "" ], [ "Raghunathan", "Aditi", "" ] ]
new_dataset
0.985268
2212.00689
Babak Jalalzadeh Fard
B. Jalalzadeh Fard, S. A. Hasan, J. E. Bell
CliMedBERT: A Pre-trained Language Model for Climate and Health-related Text
5 pages, 1 figure. Presented at Tackling Climate Change with Machine Learning: workshop at NeurIPS 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Climate change is threatening human health to an unprecedented degree and in many ways. These threats are expected to grow unless effective and evidence-based policies are developed and acted upon to minimize or eliminate them. Attaining such a task requires the highest degree of knowledge flow from science into policy. The multidisciplinary and location-specific nature, as well as the vastness, of published science makes it challenging to keep track of novel work in this area, and renders traditional knowledge synthesis methods inefficient in infusing science into policy. To this end, we consider developing multiple domain-specific language models (LMs) with different variations from climate- and health-related information, which can serve as a foundational step toward capturing available knowledge to enable solving different tasks, such as detecting similarities between climate- and health-related concepts, fact-checking, relation extraction, evidence of health effects to policy text generation, and more. To our knowledge, this is the first work that proposes developing multiple domain-specific language models for the considered domains. We will make the developed models, resources, and codebase available for the researchers.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 17:44:09 GMT" } ]
2022-12-02T00:00:00
[ [ "Fard", "B. Jalalzadeh", "" ], [ "Hasan", "S. A.", "" ], [ "Bell", "J. E.", "" ] ]
new_dataset
0.998123
2212.00760
Kritika Garg
Kritika Garg, Himarsha R. Jayanetti, Sawood Alam, Michele C. Weigle, Michael L. Nelson
Caching HTTP 404 Responses Eliminates Unnecessary Archival Replay Requests
null
null
null
null
cs.NI cs.DL
http://creativecommons.org/licenses/by-sa/4.0/
Upon replay, JavaScript on archived web pages can generate recurring HTTP requests that lead to unnecessary traffic to the web archive. In one example, an archived page averaged more than 1000 requests per minute. These requests are not visible to the user, so if a user leaves such an archived page open in a browser tab, they would be unaware that their browser is continuing to generate traffic to the web archive. We found that web pages that require regular updates (e.g., radio playlists, updates for sports scores, image carousels) are more likely to make such recurring requests. If the resources requested by the web page are not archived, some web archives may attempt to patch the archive by requesting the resources from the live web. If the requested resources are unavailable on the live web, the resources cannot be archived, and the responses remain HTTP 404. Some archived pages continue to poll the server as frequently as they did on the live web, while some pages poll the server even more frequently if their requests return HTTP 404 responses, creating a high amount of unnecessary traffic. On a large scale, such web pages are effectively a denial of service attack on the web archive. Significant computational, network and storage resources are required for web archives to archive and then successfully replay pages as they were on the live web, and these resources should not be spent on unnecessary HTTP traffic. Our proposed solution is to optimize archival replay using Cache-Control HTTP response headers. We implemented this approach in a test environment and cached HTTP 404 responses that prevented the browser's requests from reaching the web archive server.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 18:50:02 GMT" } ]
2022-12-02T00:00:00
[ [ "Garg", "Kritika", "" ], [ "Jayanetti", "Himarsha R.", "" ], [ "Alam", "Sawood", "" ], [ "Weigle", "Michele C.", "" ], [ "Nelson", "Michael L.", "" ] ]
new_dataset
0.997426
2106.08684
Niccol\`o Di Marco
Niccol\`o Di Marco, Matteo Cinelli, Walter Quattrociocchi
Reliability of Content and Echo Chambers on YouTube during the COVID-19 Debate
null
null
10.36190/2022.64
null
cs.CY physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spread of inaccurate and misleading information may alter behaviours and complicate crisis management, especially during an emergency like the COVID-19 pandemic. This paper aims to investigate information diffusion during the COVID-19 pandemic by evaluating news consumption on YouTube. First, we analyse more than 2 million users' engagement with 13,000 videos released by 68 YouTube channels, labelled with a political bias and fact-checking index. Then, we study the relationship between each user's political preference and their consumption of questionable (i.e., poorly fact-checked) and reliable information. Our results, quantified using measures from information theory, provide evidence for the existence of echo chambers across two dimensions represented by political bias and the trustworthiness of information channels. We observe that the echo chamber structure cannot be reproduced after properly randomising the users' interaction patterns. Moreover, we observe a relation between the political bias of users and their tendency to consume highly questionable news.
[ { "version": "v1", "created": "Wed, 16 Jun 2021 10:44:29 GMT" }, { "version": "v2", "created": "Tue, 29 Nov 2022 21:10:12 GMT" } ]
2022-12-01T00:00:00
[ [ "Di Marco", "Niccolò", "" ], [ "Cinelli", "Matteo", "" ], [ "Quattrociocchi", "Walter", "" ] ]
new_dataset
0.969242
2202.05728
Ahmad Hammoudeh
Ahmad Hammoudeh, Bastien Vanderplaetse, St\'ephane Dupont
Deep soccer captioning with transformer: dataset, semantics-related losses, and multi-level evaluation
null
null
10.1016/j.procs.2022.10.125
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
This work aims at generating captions for soccer videos using deep learning. In this context, this paper introduces a dataset, model, and triple-level evaluation. The dataset consists of 22k caption-clip pairs and three visual features (images, optical flow, inpainting) for ~500 hours of \emph{SoccerNet} videos. The model is divided into three parts: a transformer learns language, ConvNets learn vision, and a fusion of linguistic and visual features generates captions. The paper suggests evaluating generated captions at three levels: syntax (the commonly used evaluation metrics such as BLEU-score and CIDEr), meaning (the quality of descriptions for a domain expert), and corpus (the diversity of generated captions). The paper shows that the diversity of generated captions has improved (from 0.07 reaching 0.18) with semantics-related losses that prioritize selected words. Semantics-related losses and the utilization of more visual features (optical flow, inpainting) improved the normalized captioning score by 28\%. The web page of this work: https://sites.google.com/view/soccercaptioning
[ { "version": "v1", "created": "Fri, 11 Feb 2022 16:04:03 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2022 12:26:31 GMT" } ]
2022-12-01T00:00:00
[ [ "Hammoudeh", "Ahmad", "" ], [ "Vanderplaetse", "Bastien", "" ], [ "Dupont", "Stéphane", "" ] ]
new_dataset
0.993936
2202.11984
Jan Luxemburk
Jan Luxemburk, Tom\'a\v{s} \v{C}ejka
Fine-grained TLS services classification with reject option
null
Computer Networks, vol. 220, p. 109467, Jan. 2023
10.1016/j.comnet.2022.109467
null
cs.LG cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent success and proliferation of machine learning and deep learning have provided powerful tools, which are also utilized for encrypted traffic analysis, classification, and threat detection in computer networks. These methods, neural networks in particular, are often complex and require a huge corpus of training data. Therefore, this paper focuses on collecting a large up-to-date dataset with almost 200 fine-grained service labels and 140 million network flows extended with packet-level metadata. The number of flows is three orders of magnitude higher than in other existing public labeled datasets of encrypted traffic. The number of service labels, which is important to make the problem hard and realistic, is four times higher than in the public dataset with the most class labels. The published dataset is intended as a benchmark for identifying services in encrypted traffic. Service identification can be further extended with the task of "rejecting" unknown services, i.e., the traffic not seen during the training phase. Neural networks offer superior performance for tackling this more challenging problem. To showcase the dataset's usefulness, we implemented a neural network with a multi-modal architecture, which is the state-of-the-art approach, and achieved 97.04% classification accuracy and detected 91.94% of unknown services with 5% false positive rate.
[ { "version": "v1", "created": "Thu, 24 Feb 2022 09:44:12 GMT" }, { "version": "v2", "created": "Tue, 29 Nov 2022 19:05:29 GMT" } ]
2022-12-01T00:00:00
[ [ "Luxemburk", "Jan", "" ], [ "Čejka", "Tomáš", "" ] ]
new_dataset
0.99885
2203.07473
Gunnar Kudrjavets
Gunnar Kudrjavets (University of Groningen), Nachiappan Nagappan (Microsoft Research), Ayushi Rastogi (University of Groningen)
The Unexplored Treasure Trove of Phabricator Code Review
5 pages. To be published in Proceedings of MSR '22: Proceedings of the 19th International Conference on Mining Software Repositories (MSR 2022). ACM, New York, NY, USA
null
10.1145/3524842.3528005
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phabricator is a modern code collaboration tool used by popular projects like FreeBSD and Mozilla. However, unlike the other well-known code review environments, such as Gerrit or GitHub, there is no readily accessible public code review dataset for Phabricator. This paper describes our experience mining code reviews from five different projects that use Phabricator (Blender, FreeBSD, KDE, LLVM, and Mozilla). We discuss the challenges associated with the data retrieval process and our solutions, resulting in a dataset with details regarding 317,476 Phabricator code reviews. Our dataset is available in both JSON and MySQL database dump formats. The dataset enables analyses of the history of code reviews at a more granular level than other platforms. In addition, given that the projects we mined are publicly accessible via the Conduit API, our dataset can be used as a foundation to fetch additional details and insights.
[ { "version": "v1", "created": "Mon, 14 Mar 2022 20:14:49 GMT" } ]
2022-12-01T00:00:00
[ [ "Kudrjavets", "Gunnar", "", "University of Groningen" ], [ "Nagappan", "Nachiappan", "", "Microsoft Research" ], [ "Rastogi", "Ayushi", "", "University of Groningen" ] ]
new_dataset
0.996926
2203.11876
Yuhang Zang
Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, Chen Change Loy
Open-Vocabulary DETR with Conditional Matching
ECCV 2022 Oral
null
10.1007/978-3-031-20077-9_7
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Open-vocabulary object detection, which is concerned with the problem of detecting novel objects guided by natural language, has gained increasing attention from the community. Ideally, we would like to extend an open-vocabulary detector such that it can produce bounding box predictions based on user inputs in the form of either natural language or an exemplar image. This offers great flexibility and user experience for human-computer interaction. To this end, we propose a novel open-vocabulary detector based on DETR -- hence the name OV-DETR -- which, once trained, can detect any object given its class name or an exemplar image. The biggest challenge of turning DETR into an open-vocabulary detector is that it is impossible to calculate the classification cost matrix of novel classes without access to their labeled images. To overcome this challenge, we formulate the learning objective as a binary matching one between input queries (class name or exemplar image) and the corresponding objects, which learns useful correspondence to generalize to unseen queries during testing. For training, we choose to condition the Transformer decoder on the input embeddings obtained from a pre-trained vision-language model like CLIP, in order to enable matching for both text and image queries. With extensive experiments on LVIS and COCO datasets, we demonstrate that our OV-DETR -- the first end-to-end Transformer-based open-vocabulary detector -- achieves non-trivial improvements over the current state of the art.
[ { "version": "v1", "created": "Tue, 22 Mar 2022 16:54:52 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2022 02:42:54 GMT" } ]
2022-12-01T00:00:00
[ [ "Zang", "Yuhang", "" ], [ "Li", "Wei", "" ], [ "Zhou", "Kaiyang", "" ], [ "Huang", "Chen", "" ], [ "Loy", "Chen Change", "" ] ]
new_dataset
0.994484
2204.06972
Geri Skenderi
Geri Skenderi, Christian Joppi, Matteo Denitto, Berniero Scarpa, Marco Cristani
The multi-modal universe of fast-fashion: the Visuelle 2.0 benchmark
Accepted at the 5th Workshop on Computer Vision for Fashion, Art, and Design @ CVPR22
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present Visuelle 2.0, the first dataset useful for addressing the diverse prediction problems that a fast-fashion company has to manage routinely. Furthermore, we demonstrate that the use of computer vision is substantial in this scenario. Visuelle 2.0 contains data for 6 seasons / 5355 clothing products of Nuna Lie, a famous Italian company with hundreds of shops located in different areas within the country. In particular, we focus on a specific prediction problem, namely short-observation new product sale forecasting (SO-fore). SO-fore assumes that the season has started and a set of new products is on the shelves of the different stores. The goal is to forecast the sales for a particular horizon, given a short available past (a few weeks), since no earlier statistics are available. To be successful, SO-fore approaches should capture this short past and exploit other modalities or exogenous data. To these aims, Visuelle 2.0 is equipped with disaggregated data at the item-shop level and multi-modal information for each clothing item, allowing computer vision approaches to come into play. The main message that we deliver is that the use of image data with deep networks boosts the performance obtained when using the time series alone in long-term forecasting scenarios, improving the WAPE and MAE by up to 5.48% and 7% respectively, compared to competitive baseline methods. The dataset is available at https://humaticslab.github.io/forecasting/visuelle
[ { "version": "v1", "created": "Thu, 14 Apr 2022 13:53:46 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2022 15:06:22 GMT" } ]
2022-12-01T00:00:00
[ [ "Skenderi", "Geri", "" ], [ "Joppi", "Christian", "" ], [ "Denitto", "Matteo", "" ], [ "Scarpa", "Berniero", "" ], [ "Cristani", "Marco", "" ] ]
new_dataset
0.999766
2205.09045
Renjie Li
Xinyu Chen, Renjie Li, Yueyao Yu, Yuanwen Shen, Wenye Li, Zhaoyu Zhang, Yin Zhang
POViT: Vision Transformer for Multi-objective Design and Characterization of Nanophotonic Devices
The loss function should have been RMSE, not MSE, in the model evaluation section. As a result, the training results are all wrong. We need to withdraw this paper until we have come up with a solution to this issue
null
null
null
cs.LG physics.app-ph physics.optics
http://creativecommons.org/licenses/by-nc-sa/4.0/
We solve a fundamental challenge in semiconductor IC design: the fast and accurate characterization of nanoscale photonic devices. Much like the fusion between AI and EDA, many efforts have been made to apply DNNs such as convolutional neural networks (CNN) to prototype and characterize next-gen optoelectronic devices commonly found in photonic integrated circuits (PIC) and LiDAR. These prior works generally strive to predict the quality factor (Q) and modal volume (V) of, for instance, photonic crystals with ultra-high accuracy and speed. However, state-of-the-art models are still far from being directly applicable in the real world: e.g. the correlation coefficient of V ($V_{coeff}$) is only about 80%, which is much lower than what it takes to generate reliable and reproducible nanophotonic designs. Recently, attention-based transformer models have attracted extensive interest and been widely used in CV and NLP. In this work, we propose the first-ever Transformer model (POViT) to efficiently design and simulate semiconductor photonic devices with multiple objectives. Unlike the standard Vision Transformer (ViT), we supplied photonic crystals as data input and changed the activation layer from GELU to an absolute-value function (ABS). Our experiments show that POViT significantly exceeds the results reported by previous models. The correlation coefficient $V_{coeff}$ increases by over 12% (i.e., to 92.0%) and the prediction errors of Q are reduced by an order of magnitude, among several other key metric improvements. Our work has the potential to drive the expansion of EDA to fully automated photonic design. The complete dataset and code will be released to aid researchers endeavoring in the interdisciplinary field of physics and computer science.
[ { "version": "v1", "created": "Tue, 17 May 2022 01:58:34 GMT" }, { "version": "v2", "created": "Tue, 29 Nov 2022 00:42:12 GMT" }, { "version": "v3", "created": "Wed, 30 Nov 2022 01:10:56 GMT" } ]
2022-12-01T00:00:00
[ [ "Chen", "Xinyu", "" ], [ "Li", "Renjie", "" ], [ "Yu", "Yueyao", "" ], [ "Shen", "Yuanwen", "" ], [ "Li", "Wenye", "" ], [ "Zhang", "Zhaoyu", "" ], [ "Zhang", "Yin", "" ] ]
new_dataset
0.99935
2210.12152
Matthew Ho
Matthew Ho, Aditya Sharma, Justin Chang, Michael Saxon, Sharon Levy, Yujie Lu, William Yang Wang
WikiWhy: Answering and Explaining Cause-and-Effect Questions
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As large language models (LLMs) grow larger and more sophisticated, assessing their "reasoning" capabilities in natural language grows more challenging. Recent question answering (QA) benchmarks that attempt to assess reasoning are often limited by a narrow scope of covered situations and subject matters. We introduce WikiWhy, a QA dataset built around a novel auxiliary task: explaining why an answer is true in natural language. WikiWhy contains over 9,000 "why" question-answer-rationale triples, grounded on Wikipedia facts across a diverse set of topics. Each rationale is a set of supporting statements connecting the question to the answer. WikiWhy serves as a benchmark for the reasoning capabilities of LLMs because it demands rigorous explicit rationales for each answer to demonstrate the acquisition of implicit commonsense knowledge, which is unlikely to be easily memorized. GPT-3 baselines achieve only 38.7% human-evaluated correctness in the end-to-end answer & explain condition, leaving significant room for future improvements.
[ { "version": "v1", "created": "Fri, 21 Oct 2022 17:59:03 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2022 07:49:19 GMT" } ]
2022-12-01T00:00:00
[ [ "Ho", "Matthew", "" ], [ "Sharma", "Aditya", "" ], [ "Chang", "Justin", "" ], [ "Saxon", "Michael", "" ], [ "Levy", "Sharon", "" ], [ "Lu", "Yujie", "" ], [ "Wang", "William Yang", "" ] ]
new_dataset
0.999595
2211.07610
Ashwin Rao
Pulak Malhotra and Ashwin Rao
Pied Piper: Meta Search for Music
9 pages, 6 figures. To be published in conference proceedings of International Conference on Innovations in Computational Intelligence and Computer Vision (ICICV) 2022
International Conference on Innovations in Computational Intelligence and Computer Vision (ICICV) 2022
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Internet search engines have become an integral part of life, but for pop music, people still rely on textual search engines like Google. We propose Pied Piper, a meta search engine for music. It can take music lyrics, song metadata, song audio, or a combination of any of these as the input query and efficiently return the relevant results.
[ { "version": "v1", "created": "Mon, 14 Nov 2022 18:31:41 GMT" } ]
2022-12-01T00:00:00
[ [ "Malhotra", "Pulak", "" ], [ "Rao", "Ashwin", "" ] ]
new_dataset
0.998509
2211.11982
Guangsen Wang
Guangsen Wang, Samson Tan, Shafiq Joty, Gang Wu, Jimmy Au, Steven Hoi
BotSIM: An End-to-End Bot Simulation Framework for Commercial Task-Oriented Dialog Systems
Paper accepted by the EMNLP 2022 System Demo Track; We have open-sourced the toolkit at https://github.com/salesforce/botsim
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present BotSIM, a data-efficient end-to-end Bot SIMulation toolkit for commercial text-based task-oriented dialog (TOD) systems. BotSIM consists of three major components: 1) a Generator that can infer semantic-level dialog acts and entities from bot definitions and generate user queries via model-based paraphrasing; 2) an agenda-based dialog user Simulator (ABUS) to simulate conversations with the dialog agents; 3) a Remediator to analyze the simulated conversations, visualize the bot health reports and provide actionable remediation suggestions for bot troubleshooting and improvement. We demonstrate BotSIM's effectiveness in end-to-end evaluation, remediation and multi-intent dialog generation via case studies on two commercial bot platforms. BotSIM's "generation-simulation-remediation" paradigm accelerates the end-to-end bot evaluation and iteration process by: 1) reducing manual test case creation efforts; 2) enabling a holistic gauge of the bot in terms of NLU and end-to-end performance via extensive dialog simulation; 3) improving the bot troubleshooting process with actionable suggestions. A demo of our system can be found at https://tinyurl.com/mryu74cd and a demo video at https://youtu.be/qLi5iSoly30. We have open-sourced the toolkit at https://github.com/salesforce/botsim
[ { "version": "v1", "created": "Tue, 22 Nov 2022 03:34:36 GMT" }, { "version": "v2", "created": "Fri, 25 Nov 2022 02:11:49 GMT" }, { "version": "v3", "created": "Wed, 30 Nov 2022 12:37:08 GMT" } ]
2022-12-01T00:00:00
[ [ "Wang", "Guangsen", "" ], [ "Tan", "Samson", "" ], [ "Joty", "Shafiq", "" ], [ "Wu", "Gang", "" ], [ "Au", "Jimmy", "" ], [ "Hoi", "Steven", "" ] ]
new_dataset
0.987828
2211.13523
Jacob Solawetz
Floriana Ciaglia, Francesco Saverio Zuppichini, Paul Guerrie, Mark McQuade, and Jacob Solawetz
Roboflow 100: A Rich, Multi-Domain Object Detection Benchmark
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The evaluation of object detection models is usually performed by optimizing a single metric, e.g. mAP, on a fixed set of datasets, e.g. Microsoft COCO and Pascal VOC. Due to image retrieval and annotation costs, these datasets consist largely of images found on the web and do not represent many real-life domains that are being modelled in practice, e.g. satellite, microscopic and gaming, making it difficult to assess the degree of generalization learned by the model. We introduce Roboflow-100 (RF100), consisting of 100 datasets, 7 imagery domains, 224,714 images, and 805 class labels with over 11,170 labelling hours. We derived RF100 from over 90,000 public datasets and 60 million public images that are actively being assembled and labelled by computer vision practitioners in the open on the web application Roboflow Universe. By releasing RF100, we aim to provide a semantically diverse, multi-domain benchmark of datasets to help researchers test their model's generalizability with real-life data. RF100 download and benchmark replication are available on GitHub.
[ { "version": "v1", "created": "Thu, 24 Nov 2022 10:44:06 GMT" }, { "version": "v2", "created": "Mon, 28 Nov 2022 22:04:16 GMT" }, { "version": "v3", "created": "Wed, 30 Nov 2022 14:53:33 GMT" } ]
2022-12-01T00:00:00
[ [ "Ciaglia", "Floriana", "" ], [ "Zuppichini", "Francesco Saverio", "" ], [ "Guerrie", "Paul", "" ], [ "McQuade", "Mark", "" ], [ "Solawetz", "Jacob", "" ] ]
new_dataset
0.990378
2211.15516
Shilong Liu
Shilong Liu, Yaoyuan Liang, Feng Li, Shijia Huang, Hao Zhang, Hang Su, Jun Zhu, Lei Zhang
DQ-DETR: Dual Query Detection Transformer for Phrase Extraction and Grounding
Accepted to AAAI 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the problem of visual grounding by considering both phrase extraction and grounding (PEG). In contrast to the previous phrase-known-at-test setting, PEG requires a model to extract phrases from text and locate objects in images simultaneously, which is a more practical setting in real applications. As phrase extraction can be regarded as a $1$D text segmentation problem, we formulate PEG as a dual detection problem and propose a novel DQ-DETR model, which introduces dual queries to probe different features from image and text for object prediction and phrase mask prediction. Each pair of dual queries is designed to have shared positional parts but different content parts. Such a design effectively alleviates the difficulty of modality alignment between image and text (in contrast to a single query design) and empowers the Transformer decoder to leverage phrase mask-guided attention to improve performance. To evaluate the performance of PEG, we also propose a new metric CMAP (cross-modal average precision), analogous to the AP metric in object detection. The new metric overcomes the ambiguity of Recall@1 in many-box-to-one-phrase cases in phrase grounding. As a result, our PEG pre-trained DQ-DETR establishes new state-of-the-art results on all visual grounding benchmarks with a ResNet-101 backbone. For example, it achieves $91.04\%$ and $83.51\%$ in terms of recall rate on RefCOCO testA and testB with a ResNet-101 backbone. Code will be available at \url{https://github.com/IDEA-Research/DQ-DETR}.
[ { "version": "v1", "created": "Mon, 28 Nov 2022 16:30:46 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2022 17:49:14 GMT" } ]
2022-12-01T00:00:00
[ [ "Liu", "Shilong", "" ], [ "Liang", "Yaoyuan", "" ], [ "Li", "Feng", "" ], [ "Huang", "Shijia", "" ], [ "Zhang", "Hao", "" ], [ "Su", "Hang", "" ], [ "Zhu", "Jun", "" ], [ "Zhang", "Lei", "" ] ]
new_dataset
0.958286
2211.15916
Guangsen Wang
Guangsen Wang and Shafiq Joty and Junnan Li and Steven Hoi
BotSIM: An End-to-End Bot Simulation Toolkit for Commercial Task-Oriented Dialog Systems
Accompanying code documentation at https://opensource.salesforce.com/botsim/latest/index.html. arXiv admin note: text overlap with arXiv:2211.11982
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
We introduce BotSIM, a modular, open-source Bot SIMulation environment with dialog generation, user simulation and conversation analytics capabilities. BotSIM aims to serve as a one-stop solution for large-scale data-efficient end-to-end evaluation, diagnosis and remediation of commercial task-oriented dialog (TOD) systems to significantly accelerate commercial bot development and evaluation, reduce cost and time-to-market. BotSIM adopts a layered design comprising the infrastructure layer, the adaptor layer and the application layer. The infrastructure layer hosts key models and components to support BotSIM's major functionalities via a streamlined "generation-simulation-remediation" pipeline. The adaptor layer is used to extend BotSIM to accommodate new bot platforms. The application layer provides a suite of command line tools and a Web App to significantly lower the entry barrier for BotSIM users such as bot admins or practitioners. In this report, we focus on the technical designs of various system components. A detailed case study using Einstein BotBuilder is also presented to show how to apply BotSIM pipeline for bot evaluation and remediation. The detailed system descriptions can be found in our system demo paper. The toolkit is available at: https://github.com/salesforce/BotSIM .
[ { "version": "v1", "created": "Tue, 29 Nov 2022 04:13:25 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2022 12:42:43 GMT" } ]
2022-12-01T00:00:00
[ [ "Wang", "Guangsen", "" ], [ "Joty", "Shafiq", "" ], [ "Li", "Junnan", "" ], [ "Hoi", "Steven", "" ] ]
new_dataset
0.998676
2211.16135
Xiaochen Li
Sicong Liu, Xiaochen Li, Zimu Zhou, Bin Guo, Meng Zhang, Haochen Shen and Zhiwen Yu
AdaEnlight: Energy-aware Low-light Video Stream Enhancement on Mobile Devices
null
null
10.1145/3569464
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The ubiquity of camera-embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services for privacy and robustness concerns. However, the performance of these applications is constrained by the raw video streams, which tend to be taken with small-aperture cameras of ubiquitous mobile platforms in dim light. Despite extensive low-light video enhancement solutions, they are unfit for deployment to mobile devices due to their complex models and their ignorance of system dynamics like energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system on mobile devices. It achieves real-time video enhancement with competitive visual quality while allowing runtime behavior adaptation to the platform-imposed dynamic energy budgets. We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions.
[ { "version": "v1", "created": "Tue, 29 Nov 2022 12:12:34 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2022 03:27:27 GMT" } ]
2022-12-01T00:00:00
[ [ "Liu", "Sicong", "" ], [ "Li", "Xiaochen", "" ], [ "Zhou", "Zimu", "" ], [ "Guo", "Bin", "" ], [ "Zhang", "Meng", "" ], [ "Shen", "Haochen", "" ], [ "Yu", "Zhiwen", "" ] ]
new_dataset
0.999133
2211.16611
Zhijie Qiao
Zhijie Qiao, Gedaliah Knizhnik, and Mark Yim
Holonomic Control of Arbitrary Configurations of Docked Modboats
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
The Modboat is a low-cost, underactuated, modular robot capable of surface swimming, docking to other modules, and undocking from them using only a single motor and two passive flippers. Undocking is achieved by causing intentional self-collision between the tails of neighboring modules in certain configurations; this becomes a challenge, however, when collective swimming as one connected component is desirable. Prior work has developed controllers that turn arbitrary configurations of docked Modboats into steerable vehicles, but they cannot counteract lateral forces and disturbances. In this work we present a centralized control strategy to create holonomic vehicles out of arbitrary configurations of docked Modboats using an iterative potential-field based search. We experimentally demonstrate that our controller performs well and can control surge and sway velocities and yaw angle simultaneously.
[ { "version": "v1", "created": "Tue, 29 Nov 2022 22:14:46 GMT" } ]
2022-12-01T00:00:00
[ [ "Qiao", "Zhijie", "" ], [ "Knizhnik", "Gedaliah", "" ], [ "Yim", "Mark", "" ] ]
new_dataset
0.999571
2211.16649
Vishnu Sashank Dorbala
Vishnu Sashank Dorbala, Gunnar Sigurdsson, Robinson Piramuthu, Jesse Thomason, Gaurav S. Sukhatme
CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation
8 pages, Accepted at LangRob Workshop at Conference on Robot Learning (CoRL), 2022
null
null
null
cs.CV cs.AI cs.CL cs.RO
http://creativecommons.org/publicdomain/zero/1.0/
Household environments are visually diverse. Embodied agents performing Vision-and-Language Navigation (VLN) in the wild must be able to handle this diversity, while also following arbitrary language instructions. Recently, Vision-Language models like CLIP have shown great performance on the task of zero-shot object recognition. In this work, we ask if these models are also capable of zero-shot language grounding. In particular, we utilize CLIP to tackle the novel problem of zero-shot VLN using natural language referring expressions that describe target objects, in contrast to past work that used simple language templates describing object classes. We examine CLIP's capability to make sequential navigational decisions without any dataset-specific finetuning, and study how it influences the path that an agent takes. Our results on the coarse-grained instruction following task of REVERIE demonstrate the navigational capability of CLIP, surpassing the supervised baseline in terms of both success rate (SR) and success weighted by path length (SPL). More importantly, we quantitatively show that our CLIP-based zero-shot approach generalizes better than SOTA fully supervised learning approaches, achieving consistent performance across environments when evaluated via Relative Change in Success (RCS).
[ { "version": "v1", "created": "Wed, 30 Nov 2022 00:38:54 GMT" } ]
2022-12-01T00:00:00
[ [ "Dorbala", "Vishnu Sashank", "" ], [ "Sigurdsson", "Gunnar", "" ], [ "Piramuthu", "Robinson", "" ], [ "Thomason", "Jesse", "" ], [ "Sukhatme", "Gaurav S.", "" ] ]
new_dataset
0.994289