Dataset schema (one field per line in each record below; "null" marks an empty field):

  id              string, 9-10 chars
  submitter       string, 2-52 chars
  authors         string, 4-6.51k chars
  title           string, 4-246 chars
  comments        string, 1-523 chars
  journal-ref     string, 4-345 chars
  doi             string, 11-120 chars
  report-no       string, 2-243 chars
  categories      string, 5-98 chars
  license         string, 9 distinct values
  abstract        string, 33-3.33k chars
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      string, 1 distinct value
  probability     float64, 0.95-1
2210.13047
Gangtao Xin
Gangtao Xin and Pingyi Fan
EXK-SC: A Semantic Communication Model Based on Information Framework Expansion and Knowledge Collision
null
null
10.3390/e24121842
null
cs.IT cs.GT cs.LG eess.SP math.IT math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantic communication is not focused on improving the accuracy of transmitted symbols, but is concerned with expressing the expected meaning that the symbol sequence exactly carries. However, the measurement of semantic messages and their corresponding codebook generation are still open issues. Expansion, which integrates simple things into a complex system and even generates intelligence, is truly consistent with the evolution of the human language system. We apply this idea to the semantic communication system, quantifying semantic transmission by symbol sequences and investigating the semantic information system in a similar way to Shannon's method for digital communication systems. This work is the first to discuss semantic expansion and knowledge collision in the semantic information framework. Some important theoretical results are presented, including the relationship between semantic expansion and the transmission information rate. We believe such a semantic information framework may provide a new paradigm for semantic communications, and semantic expansion and knowledge collision will be the cornerstone of semantic information theory.
[ { "version": "v1", "created": "Mon, 24 Oct 2022 09:00:14 GMT" }, { "version": "v2", "created": "Wed, 21 Dec 2022 08:18:13 GMT" } ]
2023-01-04T00:00:00
[ [ "Xin", "Gangtao", "" ], [ "Fan", "Pingyi", "" ] ]
new_dataset
0.967134
2212.00968
Danfeng Hong
Xin Wu and Danfeng Hong and Jocelyn Chanussot
UIU-Net: U-Net in U-Net for Infrared Small Object Detection
null
IEEE Transactions on Image Processing, 2022
10.1109/TIP.2022.3228497
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning-based infrared small object detection methods currently rely heavily on the classification backbone network. This tends to result in tiny object loss and feature distinguishability limitations as the network depth increases. Furthermore, small objects in infrared images frequently appear bright and dark, posing severe demands for obtaining precise object contrast information. For this reason, in this paper we propose a simple and effective ``U-Net in U-Net'' framework, UIU-Net for short, to detect small objects in infrared images. As the name suggests, UIU-Net embeds a tiny U-Net into a larger U-Net backbone, enabling the multi-level and multi-scale representation learning of objects. Moreover, UIU-Net can be trained from scratch, and the learned features can enhance global and local contrast information effectively. More specifically, the UIU-Net model is divided into two modules: the resolution-maintenance deep supervision (RM-DS) module and the interactive-cross attention (IC-A) module. RM-DS integrates Residual U-blocks into a deep supervision network to generate deep multi-scale resolution-maintenance features while learning global context information. Further, IC-A encodes the local context information between the low-level details and high-level semantic features. Extensive experiments conducted on two infrared single-frame image datasets, i.e., SIRST and Synthetic datasets, show the effectiveness and superiority of the proposed UIU-Net in comparison with several state-of-the-art infrared small object detection methods. The proposed UIU-Net also produces powerful generalization performance for video sequence infrared small object datasets, e.g., ATR ground/air video sequence dataset. The codes of this work are available openly at \url{https://github.com/danfenghong/IEEE_TIP_UIU-Net}.
[ { "version": "v1", "created": "Fri, 2 Dec 2022 04:52:26 GMT" } ]
2023-01-04T00:00:00
[ [ "Wu", "Xin", "" ], [ "Hong", "Danfeng", "" ], [ "Chanussot", "Jocelyn", "" ] ]
new_dataset
0.966255
2212.14731
Asterios Bampakis
Asterios Bampakis, Sofia Yfantidou, Athena Vakali
UBIWEAR: An end-to-end, data-driven framework for intelligent physical activity prediction to empower mHealth interventions
2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom), Pages 56-62
null
10.1109/HealthCom54947.2022.9982730
null
cs.AI cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
It is indisputable that physical activity is vital for an individual's health and wellness. However, a global prevalence of physical inactivity has induced significant personal and socioeconomic implications. In recent years, a significant amount of work has showcased the capabilities of self-tracking technology to create positive health behavior change. This work is motivated by the potential of personalized and adaptive goal-setting techniques in encouraging physical activity via self-tracking. To this end, we propose UBIWEAR, an end-to-end framework for intelligent physical activity prediction, with the ultimate goal of empowering data-driven goal-setting interventions. To achieve this, we experiment with numerous machine learning and deep learning paradigms as a robust benchmark for physical activity prediction tasks. To train our models, we utilize "MyHeart Counts", an open, large-scale dataset collected in-the-wild from thousands of users. We also propose a prescriptive framework for self-tracking aggregated data preprocessing, to facilitate data wrangling of real-world, noisy data. Our best model achieves a MAE of 1087 steps, 65% lower than the state of the art in terms of absolute error, proving the feasibility of the physical activity prediction task, and paving the way for future research.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 14:18:39 GMT" }, { "version": "v2", "created": "Tue, 3 Jan 2023 15:43:24 GMT" } ]
2023-01-04T00:00:00
[ [ "Bampakis", "Asterios", "" ], [ "Yfantidou", "Sofia", "" ], [ "Vakali", "Athena", "" ] ]
new_dataset
0.998345
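For reference, the MAE figure quoted in the UBIWEAR abstract above is the mean absolute error between predicted and actual daily step counts. A minimal sketch of the metric, with made-up numbers rather than the paper's data:

```python
import numpy as np

# Mean absolute error in daily steps -- the metric behind the "MAE of 1087
# steps" result quoted above. The numbers here are invented for illustration.
y_true = np.array([8200, 4300, 10150, 6400])   # actual daily step counts
y_pred = np.array([7600, 5100, 9800, 7200])    # model predictions
mae = np.abs(y_true - y_pred).mean()
print(f"MAE: {mae:.0f} steps")                 # -> MAE: 638 steps
```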
2301.00835
Manar Alalfi
Jian Chen and Manar H. Alalfi and Thomas R. Dean
Timed Model-Based Mutation Operators for Simulink Models
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Model-based mutation analysis is a recent research area, and real-time system testing can benefit from using model mutants. Model-based mutation testing (MBMT) is a particular branch of model-based testing. It generates faulty versions of a model using mutation operators to evaluate and improve test cases. Mutation testing is an effective way to ensure software correctness and has been applied to various application areas. Simulink is a vital modeling language for real-time systems. This paper introduces Simulink model mutation analysis to improve Model-in-the-loop (MIL) testing. We propose a set of Simulink mutation operators based on AUTOSAR, which reflects the temporal correctness when a Simulink model is mapped to Operating System tasks. We implement a mutation framework that generates mutants for implicit clock Simulink models. Finally, we demonstrate how this framework generates mutants to reveal task interference issues in the simulation. Our work integrates the Simulink model with the timed systems to better support mutation testing automation.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 19:05:17 GMT" } ]
2023-01-04T00:00:00
[ [ "Chen", "Jian", "" ], [ "Alalfi", "Manar H.", "" ], [ "Dean", "Thomas R.", "" ] ]
new_dataset
0.970458
2301.00836
Vishweshwar Dixit
Vishweshwar V. Dixit
Kannudi -- A Reference Editor for Kannada
7 pages, 2 figures, 4 tables
null
null
null
cs.HC cs.CL
http://creativecommons.org/licenses/by/4.0/
Kannudi is a reference editor for Kannada based on OPOK! and OHOK! principles, and domain knowledge. It introduces a method of input for Kannada, called OHOK!, that is, Ottu Haku Ottu Kodu! (apply pressure and give ottu). This is especially suited for pressure-sensitive input devices, though the current online implementation uses the regular mechanical keyboard. OHOK! has three possible modes, namely, sva-ottu (self-conjunct), kandante (as you see), and andante (as you say). It may be noted that kandante mode does not follow the phonetic order. However, this mode may work well for those who are inclined to visualize as they type rather than vocalizing the sounds. Kannudi also demonstrates how domain knowledge can be effectively used to potentially increase speed, accuracy, and user friendliness. For example, selection of a default vowel, automatic shunyification, and arkification. Also implemented are four types of Deletes that are necessary for phono-syllabic languages like Kannada.
[ { "version": "v1", "created": "Sat, 24 Dec 2022 01:40:56 GMT" } ]
2023-01-04T00:00:00
[ [ "Dixit", "Vishweshwar V.", "" ] ]
new_dataset
0.999638
2301.00880
Cristian Alecsa
Cristian Daniel Alecsa
OF-AE: Oblique Forest AutoEncoders
11 pages, 12 figures, 2 tables
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present work we propose an unsupervised ensemble method consisting of oblique trees that can address the task of auto-encoding, namely Oblique Forest AutoEncoders (briefly OF-AE). Our method is a natural extension of the eForest encoder introduced in [1]. More precisely, by employing oblique splits consisting in multivariate linear combination of features instead of the axis-parallel ones, we will devise an auto-encoder method through the computation of a sparse solution of a set of linear inequalities consisting of feature values constraints. The code for reproducing our results is available at https://github.com/CDAlecsa/Oblique-Forest-AutoEncoders.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 21:23:37 GMT" } ]
2023-01-04T00:00:00
[ [ "Alecsa", "Cristian Daniel", "" ] ]
new_dataset
0.99409
2301.00891
Samiran Gode
Samiran Gode, Supreeth Bare, Bhiksha Raj, Hyungon Yoo
Understanding Political Polarisation using Language Models: A dataset and method
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our paper aims to analyze political polarization in the US political system using Language Models, and thereby help voters make an informed decision. The availability of this information will help voters understand their candidates' views on the economy, healthcare, education and other social issues. Our main contributions are a dataset extracted from Wikipedia that spans the past 120 years and a language-model-based method that helps analyze how polarized a candidate is. Our data is divided into 2 parts, background information and political information about a candidate, since our hypothesis is that the political views of a candidate should be based on reason and be independent of factors such as birthplace, alma mater, etc. We further split this data into 4 phases chronologically, to help understand if and how the polarization amongst candidates changes. This data has been cleaned to remove biases. To understand the polarization, we begin by showing results from classical language models, Word2Vec and Doc2Vec, and then use more powerful techniques like the Longformer, a transformer-based encoder, to assimilate more information and find the nearest neighbors of each candidate based on their political view and their background.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 22:15:04 GMT" } ]
2023-01-04T00:00:00
[ [ "Gode", "Samiran", "" ], [ "Bare", "Supreeth", "" ], [ "Raj", "Bhiksha", "" ], [ "Yoo", "Hyungon", "" ] ]
new_dataset
0.997353
2301.00933
Wen Haifeng
Haifeng Wen, Weijie Yuan, Zilong Liu, Shuangyang Li
OTFS-SCMA: A Downlink NOMA Scheme for Massive Connectivity in High Mobility Channels
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
This paper studies a downlink system that combines orthogonal-time-frequency-space (OTFS) modulation and sparse code multiple access (SCMA) to support massive connectivity in high-mobility environments. We propose a cross-domain receiver for the considered OTFS-SCMA system which efficiently carries out OTFS symbol estimation and SCMA decoding in a joint manner. This is done by iteratively passing the extrinsic information between the time domain and the delay-Doppler (DD) domain via the corresponding unitary transformation to ensure the principal orthogonality of errors from each domain. We show that the proposed OTFS-SCMA detection algorithm reaches a fixed point in the state evolution when it converges. To further enhance the error performance of the proposed OTFS-SCMA system, we investigate the cooperation between downlink users to exploit the diversity gains and develop a distributed cooperative detection (DCD) algorithm with the aid of belief consensus. Our numerical results demonstrate the effectiveness and convergence of the proposed algorithm and show an increased spectral efficiency compared to the conventional OTFS transmission.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 02:42:08 GMT" } ]
2023-01-04T00:00:00
[ [ "Wen", "Haifeng", "" ], [ "Yuan", "Weijie", "" ], [ "Liu", "Zilong", "" ], [ "Li", "Shuangyang", "" ] ]
new_dataset
0.989153
2301.00936
Ilya Semenov
Ilya Semenov, Robert Brown, Michael Otte
Control and Dynamic Motion Planning for a Hybrid Air-Underwater Quadrotor: Minimizing Energy Use in a Flooded Cave Environment
8 pages, 9 figures, written in 2020
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a dynamic path planning algorithm to navigate an amphibious rotor craft through a concave time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotor craft dynamics above and below the water. The 6-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow and the required control inputs. The rotor craft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 02:58:20 GMT" } ]
2023-01-04T00:00:00
[ [ "Semenov", "Ilya", "" ], [ "Brown", "Robert", "" ], [ "Otte", "Michael", "" ] ]
new_dataset
0.996041
2301.00964
Aswani Kumar Cherukuri Dr
Abhiruph Chakravarty, Jatin Karthik Tripathy, Sibi Chakkaravarthy S, Aswani Kumar Cherukuri, S. Anitha, Firuz Kamalov, Annapurna Jonnalagadda
e-Inu: Simulating A Quadruped Robot With Emotional Sentience
null
null
null
null
cs.RO cs.HC cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the usage of such a robot in a domestic setting is still very much a subject of research. This paper discusses the understanding and virtual simulation of such a robot capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expression on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains and detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish the framework of simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio response. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, but still managed an accuracy of 63.5%. The video emotion detection system produced results that are almost on par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm was extremely rapid to learn, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 06:28:45 GMT" } ]
2023-01-04T00:00:00
[ [ "Chakravarty", "Abhiruph", "" ], [ "Tripathy", "Jatin Karthik", "" ], [ "S", "Sibi Chakkaravarthy", "" ], [ "Cherukuri", "Aswani Kumar", "" ], [ "Anitha", "S.", "" ], [ "Kamalov", "Firuz", "" ], [ "Jonnalagadda", "Annapurna", "" ] ]
new_dataset
0.990351
2301.00975
Jun Wan
Hao Fang, Ajian Liu, Jun Wan, Sergio Escalera, Chenxu Zhao, Xu Zhang, Stan Z. Li, Zhen Lei
Surveillance Face Anti-spoofing
15 pages, 9 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In such scenes, low image resolution and noise interference are new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by combining the super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping contrastive learning strategies obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, a large number of experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 07:09:57 GMT" } ]
2023-01-04T00:00:00
[ [ "Fang", "Hao", "" ], [ "Liu", "Ajian", "" ], [ "Wan", "Jun", "" ], [ "Escalera", "Sergio", "" ], [ "Zhao", "Chenxu", "" ], [ "Zhang", "Xu", "" ], [ "Li", "Stan Z.", "" ], [ "Lei", "Zhen", "" ] ]
new_dataset
0.972233
2301.01057
Janne Mustaniemi
Janne Mustaniemi, Juho Kannala, Esa Rahtu, Li Liu and Janne Heikkil\"a
BS3D: Building-scale 3D Reconstruction from RGB-D Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various datasets have been proposed for simultaneous localization and mapping (SLAM) and related problems. Existing datasets often include small environments, have incomplete ground truth, or lack important sensor data, such as depth and infrared images. We propose an easy-to-use framework for acquiring building-scale 3D reconstruction using a consumer depth camera. Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms. Compared to similar systems, we utilize raw depth maps for odometry computation and loop closure refinement which results in better reconstructions. We acquire a building-scale 3D dataset (BS3D) and demonstrate its value by training an improved monocular depth estimation model. As a unique experiment, we benchmark visual-inertial odometry methods using both color and active infrared images.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 11:46:14 GMT" } ]
2023-01-04T00:00:00
[ [ "Mustaniemi", "Janne", "" ], [ "Kannala", "Juho", "" ], [ "Rahtu", "Esa", "" ], [ "Liu", "Li", "" ], [ "Heikkilä", "Janne", "" ] ]
new_dataset
0.999561
2301.01116
Irene Marcovici
Chlo\'e Boisson, Damien Jamet and Ir\`ene Marcovici
On a probabilistic extension of the Oldenburger-Kolakoski sequence
null
null
null
null
cs.DM math.CO math.PR
http://creativecommons.org/licenses/by/4.0/
The Oldenburger-Kolakoski sequence is the only infinite sequence over the alphabet $\{1,2\}$ that starts with $1$ and is its own run-length encoding. In the present work, we take a step back from this widely known and studied sequence by introducing some randomness in the choice of the letters written. This enables us to provide some results on the convergence of the density of $1$'s in the resulting sequence. When the choice of the letters is given by an infinite sequence of i.i.d. random variables or by a Markov chain, the average densities of letters converge. Moreover, in the case of i.i.d. random variables, we are able to prove that the densities even almost surely converge.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 14:18:39 GMT" } ]
2023-01-04T00:00:00
[ [ "Boisson", "Chloé", "" ], [ "Jamet", "Damien", "" ], [ "Marcovici", "Irène", "" ] ]
new_dataset
0.993859
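The Oldenburger-Kolakoski sequence in the abstract above is easy to generate. The sketch below produces the classic sequence and a randomized variant in which each new run's letter is drawn i.i.d.; the randomized construction (and the parameter p) is an illustrative guess at the paper's probabilistic extension, not its actual definition:

```python
import random

def kolakoski(n):
    """First n terms of the classic Oldenburger-Kolakoski sequence over {1, 2}:
    it starts with 1 and equals its own run-length encoding."""
    s = [1, 2, 2]
    i = 2                                # index of the run length being read
    while len(s) < n:
        sym = 1 if s[-1] == 2 else 2     # runs alternate between 1 and 2
        s.extend([sym] * s[i])
        i += 1
    return s[:n]

def randomized_kolakoski(n, p=0.5, seed=0):
    """Illustrative probabilistic variant: each new run's letter is drawn
    i.i.d. (1 with probability p) instead of alternating deterministically."""
    rng = random.Random(seed)
    s = [1]
    i = 0                                # run lengths are still read from s itself
    while len(s) < n:
        sym = 1 if rng.random() < p else 2
        s.extend([sym] * s[i])
        i += 1
    return s[:n]

s = kolakoski(100_000)
print("density of 1s (classic):   ", s.count(1) / len(s))
r = randomized_kolakoski(100_000)
print("density of 1s (randomized):", r.count(1) / len(r))
```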
2301.01134
Mika H\"am\"al\"ainen
Khalid Alnajjar, Mika H\"am\"al\"ainen, Shuo Zhang
Ring That Bell: A Corpus and Method for Multimodal Metaphor Detection in Videos
Figlang 2022
null
null
null
cs.MM cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
We present the first openly available multimodal metaphor annotated corpus. The corpus consists of videos including audio and subtitles that have been annotated by experts. Furthermore, we present a method for detecting metaphors in the new dataset based on the textual content of the videos. The method achieves a high F1-score (62\%) for metaphorical labels. We also experiment with other modalities and multimodal methods; however, these methods did not outperform the text-based model. In our error analysis, we identify cases where video could help in disambiguating metaphors; however, the visual cues are too subtle for our model to capture. The data is available on Zenodo.
[ { "version": "v1", "created": "Thu, 15 Dec 2022 17:11:35 GMT" } ]
2023-01-04T00:00:00
[ [ "Alnajjar", "Khalid", "" ], [ "Hämäläinen", "Mika", "" ], [ "Zhang", "Shuo", "" ] ]
new_dataset
0.998783
2301.01145
Joseph Saverin Dr.-Ing.
Joseph Saverin
SailFFish: A Lightweight, Parallelised Fast Poisson Solver Library
null
null
null
null
cs.MS cs.NA math.NA
http://creativecommons.org/licenses/by/4.0/
A solver for the Poisson equation for 1D, 2D and 3D regular grids is presented. The solver applies the convolution theorem in order to efficiently solve the Poisson equation in spectral space over a rectangular computational domain. Conversion to and from the spectral space is achieved through the use of discrete Fourier transforms, allowing for the application of highly optimised $O(N \log N)$ algorithms. The data structure is configured to be modular such that the underlying interface for operations to, from and within the spectral space may be interchanged. For computationally demanding tasks, the library is optimised by making use of parallel processing architectures. A range of boundary conditions can be applied to the domain including periodic, Dirichlet, Neumann and fully unbounded. In the case of Neumann and Dirichlet boundary conditions, arbitrary inhomogeneous boundary conditions may be specified. The desired solution may be found either on regular (cell-boundary) or staggered (cell-centre) grid configurations. For problems with periodic, Dirichlet or Neumann boundary conditions either a pseudo-spectral or a second-order finite difference operator may be applied. For unbounded boundary conditions a range of Green's functions are available. In addition to this, a range of differential operators may be applied in the spectral space in order to treat different forms of the Poisson equation or to extract highly accurate gradients of the input fields. The underlying framework of the solver is first detailed, followed by a range of validations for each of the available boundary condition types. Finally, the performance of the library is investigated. The code is free and publicly available under a GNU v3.0 license.
[ { "version": "v1", "created": "Sun, 1 Jan 2023 23:02:19 GMT" } ]
2023-01-04T00:00:00
[ [ "Saverin", "Joseph", "" ] ]
new_dataset
0.963256
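The convolution-theorem approach described in the SailFFish abstract above can be illustrated in a few lines: on a periodic grid the FFT diagonalizes the Laplacian, so u''(x) = f(x) is solved by dividing by -k^2 in Fourier space. This NumPy sketch of the 1D periodic case is independent of the library itself:

```python
import numpy as np

# Solve u''(x) = f(x) on [0, 2*pi) with periodic boundaries, pseudo-spectrally:
# in Fourier space the Laplacian becomes multiplication by -k^2.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = -9.0 * np.sin(3.0 * x)                # manufactured RHS; exact u = sin(3x)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi / N)  # integer wavenumbers
fh = np.fft.fft(f)
uh = np.zeros_like(fh)
uh[k != 0] = -fh[k != 0] / k[k != 0] ** 2 # invert -k^2; k = 0 mode set to 0
u = np.fft.ifft(uh).real                  # (zero-mean gauge)

print("max error:", np.abs(u - np.sin(3.0 * x)).max())  # ~ machine precision
```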
2301.01147
Patrick Wenzel
Patrick Wenzel, Nan Yang, Rui Wang, Niclas Zeller, Daniel Cremers
4Seasons: Benchmarking Visual SLAM and Long-Term Localization for Autonomous Driving in Challenging Conditions
arXiv admin note: substantial text overlap with arXiv:2009.06364
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a novel visual SLAM and long-term localization benchmark for autonomous driving in challenging conditions based on the large-scale 4Seasons dataset. The proposed benchmark provides drastic appearance variations caused by seasonal changes and diverse weather and illumination conditions. While significant progress has been made in advancing visual SLAM on small-scale datasets with similar conditions, there is still a lack of unified benchmarks representative of real-world scenarios for autonomous driving. We introduce a new unified benchmark for jointly evaluating visual odometry, global place recognition, and map-based visual localization performance which is crucial to successfully enable autonomous driving in any condition. The data has been collected for more than one year, resulting in more than 300 km of recordings in nine different environments ranging from a multi-level parking garage to urban (including tunnels) to countryside and highway. We provide globally consistent reference poses with up to centimeter-level accuracy obtained from the fusion of direct stereo-inertial odometry with RTK GNSS. We evaluate the performance of several state-of-the-art visual odometry and visual localization baseline approaches on the benchmark and analyze their properties. The experimental results provide new insights into current approaches and show promising potential for future research. Our benchmark and evaluation protocols will be available at https://www.4seasons-dataset.com/.
[ { "version": "v1", "created": "Sat, 31 Dec 2022 13:52:36 GMT" } ]
2023-01-04T00:00:00
[ [ "Wenzel", "Patrick", "" ], [ "Yang", "Nan", "" ], [ "Wang", "Rui", "" ], [ "Zeller", "Niclas", "" ], [ "Cremers", "Daniel", "" ] ]
new_dataset
0.99966
2301.01191
Kevin Moran
Carlos Bernal-C\'ardenas, Nathan Cooper, Madeleine Havranek, Kevin Moran, Oscar Chaparro, Denys Poshyvanyk, Andrian Marcus
Translating Video Recordings of Complex Mobile App UI Gestures into Replayable Scenarios
Accepted to IEEE Transactions on Software Engineering. arXiv admin note: substantial text overlap with arXiv:2005.09057
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S+, an automated approach for translating video recordings of Android app usages into replayable scenarios. V2S+ is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user gestures captured in a video, and convert these into a replayable test scenario. Given that V2S+ takes a computer vision-based approach, it is applicable to both hybrid and native Android applications. We performed an extensive evaluation of V2S+ involving 243 videos depicting 4,028 GUI-based actions collected from users exercising features and reproducing bugs from a collection of over 90 popular native and hybrid Android apps. Our results illustrate that V2S+ can accurately replay scenarios from screen recordings, and is capable of reproducing $\approx$ 90.2% of sequential actions recorded in native application scenarios on physical devices, and $\approx$ 83% of sequential actions recorded in hybrid application scenarios on emulators, both with low overhead. A case study with three industrial partners illustrates the potential usefulness of V2S+ from the viewpoint of developers.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 16:47:42 GMT" } ]
2023-01-04T00:00:00
[ [ "Bernal-Cárdenas", "Carlos", "" ], [ "Cooper", "Nathan", "" ], [ "Havranek", "Madeleine", "" ], [ "Moran", "Kevin", "" ], [ "Chaparro", "Oscar", "" ], [ "Poshyvanyk", "Denys", "" ], [ "Marcus", "Andrian", "" ] ]
new_dataset
0.950884
2301.01234
Dmytro Humeniuk
Dmytro Humeniuk, Foutse Khomh and Giuliano Antoniol
AmbieGen: A Search-based Framework for Autonomous Systems Testing
17 pages, 10 figures
null
null
null
cs.RO cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Thorough testing of safety-critical autonomous systems, such as self-driving cars, autonomous robots, and drones, is essential for detecting potential failures before deployment. One crucial testing stage is model-in-the-loop testing, where the system model is evaluated by executing various scenarios in a simulator. However, the search space of possible parameters defining these test scenarios is vast, and simulating all combinations is computationally infeasible. To address this challenge, we introduce AmbieGen, a search-based test case generation framework for autonomous systems. AmbieGen uses evolutionary search to identify the most critical scenarios for a given system, and has a modular architecture that allows for the addition of new systems under test, algorithms, and search operators. Currently, AmbieGen supports test case generation for autonomous robots and autonomous car lane keeping assist systems. In this paper, we provide a high-level overview of the framework's architecture and demonstrate its practical use cases.
[ { "version": "v1", "created": "Sun, 1 Jan 2023 23:42:32 GMT" } ]
2023-01-04T00:00:00
[ [ "Humeniuk", "Dmytro", "" ], [ "Khomh", "Foutse", "" ], [ "Antoniol", "Giuliano", "" ] ]
new_dataset
0.997723
2301.01237
Brahim Tamadazte
Bassem Dahroug and Brahim Tamadazte and Nicolas Andreff
Safe Path following for Middle Ear Surgery
40 pages, 26 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This article formulates a generic representation of a path-following controller operating under constrained motion, which was developed in the context of surgical robotics. It reports two types of constrained motion: i) Bilateral Constrained Motion, also called Remote Center Motion (RCM), and ii) Unilaterally Constrained Motion (UCM). In the first case, the incision hole has almost the same diameter as the robotic tool. In contrast, in the second case, the diameter of the incision orifice is larger than the tool diameter. The second case offers more space, where the surgical instrument moves freely without constraints before touching the incision wall. The proposed method combines two tasks that must operate hierarchically: i) respect the RCM or UCM constraints, formulated by an equality or inequality, respectively, and ii) perform a surgical assignment, e.g., scanning or ablation, expressed as a 3D path-following task. The proposed methods and materials were tested first on our simulator, which mimics realistic conditions of middle ear surgery, and then on an experimental platform. Different validation scenarios were carried out experimentally to assess each developed approach quantitatively and qualitatively. Although ultimate precision was not the goal of this work, our concept is validated with sufficient accuracy (below 100 micrometres) for ear surgery.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 17:31:19 GMT" } ]
2023-01-04T00:00:00
[ [ "Dahroug", "Bassem", "" ], [ "Tamadazte", "Brahim", "" ], [ "Andreff", "Nicolas", "" ] ]
new_dataset
0.986776
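As a geometric illustration of the two constraint types in the abstract above: under one common reading, the RCM (bilateral) constraint requires the tool axis to pass through the incision point (an equality), while the UCM (unilateral) constraint only requires the axis to stay within the orifice clearance (an inequality). The formulation and all numeric values below are illustrative assumptions, not the paper's controller:

```python
import numpy as np

def axis_to_point_distance(p_on_axis, axis_dir, p_incision):
    """Distance from the incision point to the tool-axis line."""
    d = axis_dir / np.linalg.norm(axis_dir)
    v = p_incision - p_on_axis
    return np.linalg.norm(v - np.dot(v, d) * d)   # component orthogonal to the axis

# Bilateral (RCM): drive the distance to 0, so the tool pivots about the incision.
# Unilateral (UCM): keep distance <= orifice_radius - tool_radius.
p_on_axis  = np.array([0.0, 0.0, 0.0])            # a point on the tool axis
axis_dir   = np.array([0.0, 0.02, 1.0])           # tool direction
p_incision = np.array([0.0, 0.0, 0.05])           # incision (trocar) point
dist = axis_to_point_distance(p_on_axis, axis_dir, p_incision)
orifice_radius, tool_radius = 3e-3, 1e-3          # metres, illustrative values
print(f"axis-to-incision distance: {dist:.4f} m")
print("UCM satisfied:", dist <= orifice_radius - tool_radius)
```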
2301.01282
Krishnan Shankar
Krishnan Shankar
RSA+: An algorithm at least as secure as RSA
8 pages, no figures
null
null
null
cs.CR math.NT
http://creativecommons.org/licenses/by-nc-sa/4.0/
The RSA algorithm has been around for nearly five decades and remains one of the most studied public key cryptosystems. Many attempts have been made to break it or improve it, and questions remain about the equivalence of the strength of its security to well-known hard problems in computational number theory. In this note we propose a modified version, which we call RSA+, that is at least as secure as RSA, and show that breaking RSA+ is probably computationally equivalent to factoring $n$, the public modulus. The motivation came from wanting to obscure the encryption exponent in RSA.
[ { "version": "v1", "created": "Sat, 31 Dec 2022 02:48:17 GMT" } ]
2023-01-04T00:00:00
[ [ "Shankar", "Krishnan", "" ] ]
new_dataset
0.992265
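For context, the baseline scheme the note modifies is textbook RSA; the sketch below shows key generation, encryption and decryption with toy primes. RSA+ itself is not specified in the abstract, so it is not reproduced here:

```python
from math import gcd

# Textbook RSA with toy primes -- the baseline scheme the note builds on.
# (Real deployments use ~2048-bit moduli and padding; this is illustration only.)
p, q = 61, 53
n = p * q                      # public modulus, n = 3233
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public encryption exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: d = e^(-1) mod phi (Python 3.8+)

m = 65                         # message, 0 <= m < n
c = pow(m, e, n)               # encryption: c = m^e mod n
assert pow(c, d, n) == m       # decryption: m = c^d mod n
print(f"ciphertext {c}, recovered {pow(c, d, n)}")
```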
2011.03669
Gang Liu
Gang Liu, Kenli Li, Zheng Xiao and Rujia Wang
EHAP-ORAM: Efficient Hardware-Assisted Persistent ORAM System for Non-volatile Memory
In Proceedings of The 49th Annual International Symposium on Computer Architecture (ISCA' 22)
null
10.1145/3470496.3527425
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Oblivious RAM (ORAM) is a provably secure primitive to prevent access pattern leakage on the memory bus. It serves as the intermediate layer between the trusted on-chip components and the untrusted external memory systems to modulate the original memory access patterns into indistinguishable memory sequences. By randomly remapping the data blocks and accessing redundant blocks, ORAM prevents access pattern leakage through obfuscation. While there is much prior work focusing on improving ORAM's performance on the conventional DRAM-based memory system, when the memory technology shifts to non-volatile memory (NVM), new challenges arise as to how to efficiently support crash consistency for ORAM. In this work, we propose EHAP-ORAM, which studies how to persist ORAM construction with an NVM-based memory system. We first analyze the design requirements for a persistent ORAM system and discuss the need to preserve crash consistency and atomicity for both data and ORAM metadata. Next, we discuss some of the challenges in the design of a persistent ORAM system and propose solutions to those challenges. Then, we propose the modified on-chip ORAM controller architecture. Based on the improved hardware architecture of the on-chip ORAM controller, we propose different persistency protocols to ensure the crash consistency of the ORAM system and to ensure that the PosMap metadata remains safe when persisted to NVM, whether the off-chip memory is trusted or untrusted. The proposed architecture and persistency protocol steps minimize the overhead and leakage during the write-back process. Finally, we compare our persistent ORAM with a system without crash consistency support and show that, in the non-recursive and recursive cases, EHAP-ORAM incurs only 3.36% and 3.65% performance overhead, respectively. The results show that EHAP-ORAM can support efficient crash consistency with minimal performance and hardware overhead.
[ { "version": "v1", "created": "Sat, 7 Nov 2020 03:15:50 GMT" }, { "version": "v2", "created": "Fri, 13 Nov 2020 22:02:32 GMT" }, { "version": "v3", "created": "Sun, 15 May 2022 03:36:06 GMT" }, { "version": "v4", "created": "Thu, 19 May 2022 12:59:25 GMT" }, { "version": "v5", "created": "Sat, 31 Dec 2022 03:44:17 GMT" } ]
2023-01-03T00:00:00
[ [ "Liu", "Gang", "" ], [ "Li", "Kenli", "" ], [ "Xiao", "Zheng", "" ], [ "Wang", "Rujia", "" ] ]
new_dataset
0.997721
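The random-remapping idea behind ORAM, mentioned in the abstract above, can be shown with a toy position map: every access moves the block to a fresh random physical slot, so repeated accesses to one logical block reveal different bus addresses. This sketch omits redundant accesses, the stash/tree structure, and everything persistence-related; it illustrates the obfuscation principle only, not EHAP-ORAM:

```python
import random

class ToyORAM:
    """Toy illustration of ORAM's remapping idea (not EHAP-ORAM): each logical
    access reads a physical slot and then remaps the block to a random free
    slot, so the same block touches different addresses over time."""
    def __init__(self, n_blocks, n_slots, seed=0):
        self.rng = random.Random(seed)
        self.slots = [None] * n_slots
        free = list(range(n_slots))
        self.rng.shuffle(free)
        self.pos_map = {b: free.pop() for b in range(n_blocks)}  # block -> slot
        self.free = free

    def access(self, block):
        old = self.pos_map[block]
        value = self.slots[old]          # physical read visible on the memory bus
        new = self.free.pop(self.rng.randrange(len(self.free)))
        self.free.append(old)            # recycle the vacated slot
        self.pos_map[block] = new
        self.slots[new] = value
        return old                       # the address an adversary would observe

oram = ToyORAM(n_blocks=4, n_slots=8)
print([oram.access(0) for _ in range(5)])  # same block, changing addresses
```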
2101.00756
Brittany Reid
Brittany Reid, Marcelo d`Amorim, Markus Wagner, Christoph Treude
NCQ: Code reuse support for node.js developers
Submitted to IEEE Transactions on Software Engineering
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Code reuse is an important part of software development. The adoption of code reuse practices is especially common among Node.js developers. The Node.js package manager, NPM, indexes over 1 million packages and developers often seek out packages to solve programming tasks. Due to the vast number of packages, selecting the right package is difficult and time-consuming. With the goal of improving the productivity of developers that heavily reuse code through third-party packages, we present Node Code Query (NCQ), a Read-Eval-Print-Loop environment that allows developers to 1) search for NPM packages using natural language queries, 2) search for code snippets related to those packages, 3) automatically correct errors in these code snippets, 4) quickly set up new environments for testing those snippets, and 5) transition between search and editing modes. In two user studies with a total of 20 participants, we find that participants begin programming faster and conclude tasks faster with NCQ than with baseline approaches, and that they like, among other features, the search for code snippets and packages. Our results suggest that NCQ makes Node.js developers more efficient in reusing code.
[ { "version": "v1", "created": "Mon, 4 Jan 2021 03:54:02 GMT" }, { "version": "v2", "created": "Tue, 28 Jun 2022 09:32:59 GMT" }, { "version": "v3", "created": "Mon, 2 Jan 2023 06:02:37 GMT" } ]
2023-01-03T00:00:00
[ [ "Reid", "Brittany", "" ], [ "d`Amorim", "Marcelo", "" ], [ "Wagner", "Markus", "" ], [ "Treude", "Christoph", "" ] ]
new_dataset
0.999351
2109.02325
Ahmet Yavuz Uluslu
Ibrahim Faruk Ceylan and Necmettin Bera Calik
MyProfessors: Mining Turkish Student Reviews
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We introduce Hocalarim (MyProfessors), the largest student review dataset available for the Turkish language. It consists of over 5000 professor reviews left online by students, with different aspects of education rated on a scale of 1 to 5 stars. We investigate the properties of the dataset and present its statistics. We examine the impact of students' institution type on their ratings and the correlation of students' bias toward giving positive or negative feedback.
[ { "version": "v1", "created": "Mon, 6 Sep 2021 09:55:58 GMT" }, { "version": "v2", "created": "Wed, 29 Dec 2021 14:54:44 GMT" }, { "version": "v3", "created": "Sat, 5 Nov 2022 05:54:45 GMT" }, { "version": "v4", "created": "Sat, 31 Dec 2022 08:13:36 GMT" } ]
2023-01-03T00:00:00
[ [ "Ceylan", "Ibrahim Faruk", "" ], [ "Calik", "Necmettin Bera", "" ] ]
new_dataset
0.998491
2110.10067
Sam Powers
Sam Powers, Eliot Xing, Eric Kolve, Roozbeh Mottaghi, Abhinav Gupta
CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents
Repository available at https://github.com/AGI-Labs/continual_rl
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:705-743, 2022
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Progress in continual reinforcement learning has been limited due to several barriers to entry: missing code, high compute requirements, and a lack of suitable benchmarks. In this work, we present CORA, a platform for Continual Reinforcement Learning Agents that provides benchmarks, baselines, and metrics in a single code package. The benchmarks we provide are designed to evaluate different aspects of the continual RL challenge, such as catastrophic forgetting, plasticity, ability to generalize, and sample-efficient learning. Three of the benchmarks utilize video game environments (Atari, Procgen, NetHack). The fourth benchmark, CHORES, consists of four different task sequences in a visually realistic home simulator, drawn from a diverse set of task and scene parameters. To compare continual RL methods on these benchmarks, we prepare three metrics in CORA: Continual Evaluation, Isolated Forgetting, and Zero-Shot Forward Transfer. Finally, CORA includes a set of performant, open-source baselines of existing algorithms for researchers to use and expand on. We release CORA and hope that the continual RL community can benefit from our contributions, to accelerate the development of new continual RL algorithms.
[ { "version": "v1", "created": "Tue, 19 Oct 2021 15:48:26 GMT" }, { "version": "v2", "created": "Sat, 31 Dec 2022 07:10:45 GMT" } ]
2023-01-03T00:00:00
[ [ "Powers", "Sam", "" ], [ "Xing", "Eliot", "" ], [ "Kolve", "Eric", "" ], [ "Mottaghi", "Roozbeh", "" ], [ "Gupta", "Abhinav", "" ] ]
new_dataset
0.99936
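For intuition about the forgetting-style metrics listed in the CORA abstract above, here is the generic forgetting measure from the continual-learning literature; CORA's Isolated Forgetting is defined precisely in the paper, so this is an assumption-level stand-in:

```python
import numpy as np

# acc[t, i] = evaluation score on task i after training phase t (toy numbers).
acc = np.array([
    [0.9, 0.1, 0.0],   # after training on task 0
    [0.7, 0.8, 0.1],   # after training on task 1
    [0.5, 0.6, 0.9],   # after training on task 2
])
T = acc.shape[0] - 1   # final phase
# Generic forgetting: best earlier performance minus final performance, per task.
forgetting = [acc[:T, i].max() - acc[T, i] for i in range(acc.shape[1] - 1)]
print(forgetting)      # -> roughly [0.4, 0.2]; larger means more forgetting
```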
2112.01924
Wu Ran
Wu Ran, Bohong Yang, Peirong Ma, and Hong Lu
TRNR: Task-Driven Image Rain and Noise Removal with a Few Images Based on Patch Analysis
16 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent success of learning-based image rain and noise removal can be attributed primarily to well-designed neural network architectures and large labeled datasets. However, we discover that current image rain and noise removal methods result in low utilization of images. To alleviate the reliance of deep models on large labeled datasets, we propose the task-driven image rain and noise removal (TRNR) based on a patch analysis strategy. The patch analysis strategy samples image patches with various spatial and statistical properties for training and can increase image utilization. Furthermore, the patch analysis strategy encourages us to introduce the N-frequency-K-shot learning task for the task-driven approach TRNR. TRNR allows neural networks to learn from numerous N-frequency-K-shot learning tasks, rather than from a large amount of data. To verify the effectiveness of TRNR, we build a Multi-Scale Residual Network (MSResNet) for both image rain removal and Gaussian noise removal. Specifically, we train MSResNet for image rain removal and noise removal with a few images (for example, 20.0\% of the Rain100H train-set). Experimental results demonstrate that TRNR enables MSResNet to learn more effectively when data is scarce. TRNR has also been shown in experiments to improve the performance of existing methods. Furthermore, MSResNet trained with a few images using TRNR outperforms most recent deep learning methods trained in a data-driven manner on large labeled datasets. These experimental results confirm the effectiveness and superiority of the proposed TRNR. The source code is available on \url{https://github.com/Schizophreni/MSResNet-TRNR}.
[ { "version": "v1", "created": "Fri, 3 Dec 2021 14:12:15 GMT" }, { "version": "v2", "created": "Sun, 1 Jan 2023 09:20:40 GMT" } ]
2023-01-03T00:00:00
[ [ "Ran", "Wu", "" ], [ "Yang", "Bohong", "" ], [ "Ma", "Peirong", "" ], [ "Lu", "Hong", "" ] ]
new_dataset
0.999413
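The patch analysis strategy in the TRNR abstract above samples patches with varied spatial and statistical properties. A plausible minimal version crops random patches, scores them with a simple statistic, and assembles tasks that span the range; the scoring rule and task assembly here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def sample_patches(img, patch=64, n=32, rng=None):
    """Crop n random patches and tag each with a simple statistic (std. dev.),
    so training tasks can mix patches of varied statistical properties."""
    rng = rng or np.random.default_rng(0)
    H, W = img.shape[:2]
    patches = []
    for _ in range(n):
        y = int(rng.integers(0, H - patch + 1))
        x = int(rng.integers(0, W - patch + 1))
        crop = img[y:y + patch, x:x + patch]
        patches.append((crop, float(crop.std())))
    return patches

img = np.random.default_rng(1).random((256, 256))   # stand-in for a rainy image
patches = sample_patches(img)
patches.sort(key=lambda t: t[1])                    # order by texture richness
tasks = [patches[i::8] for i in range(8)]           # 8 small tasks spanning the range
print(len(tasks), "tasks of", len(tasks[0]), "patches each")
```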
2112.12042
Suthee Ruangwises
Suthee Ruangwises, Toshiya Itoh
Physical ZKP for Makaro Using a Standard Deck of Cards
This paper has appeared at TAMC 2022
null
10.1007/978-3-031-20350-3_5
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Makaro is a logic puzzle with an objective to fill numbers into a rectangular grid to satisfy certain conditions. In 2018, Bultel et al. developed a physical zero-knowledge proof (ZKP) protocol for Makaro using a deck of cards, which allows a prover to physically convince a verifier that he/she knows a solution of the puzzle without revealing it. However, their protocol requires several identical copies of some cards, making it impractical as a deck of playing cards found in everyday life typically consists of all different cards. In this paper, we propose a new ZKP protocol for Makaro that can be implemented using a standard deck (a deck consisting of all different cards). Our protocol also uses asymptotically less cards than the protocol of Bultel et al. Most importantly, we develop a general method to encode a number with a sequence of all different cards. This allows us to securely compute several numerical functions using a standard deck, such as verifying that two given numbers are different and verifying that a number is the largest one among the given numbers.
[ { "version": "v1", "created": "Wed, 22 Dec 2021 17:11:32 GMT" }, { "version": "v2", "created": "Wed, 29 Dec 2021 15:46:10 GMT" }, { "version": "v3", "created": "Tue, 25 Oct 2022 09:38:48 GMT" } ]
2023-01-03T00:00:00
[ [ "Ruangwises", "Suthee", "" ], [ "Itoh", "Toshiya", "" ] ]
new_dataset
0.999803
2201.07425
Li Liu
Chunhui Zhang, Guanjie Huang, Li Liu, Shan Huang, Yinan Yang, Xiang Wan, Shiming Ge, Dacheng Tao
WebUAV-3M: A Benchmark for Unveiling the Power of Million-Scale Deep UAV Tracking
25 pages
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Unmanned aerial vehicle (UAV) tracking is of great significance for a wide range of applications, such as delivery and agriculture. Previous benchmarks in this area mainly focused on small-scale tracking problems while ignoring the amounts of data, types of data modalities, diversities of target categories and scenarios, and evaluation protocols involved, greatly hiding the massive power of deep UAV tracking. In this work, we propose WebUAV-3M, the largest public UAV tracking benchmark to date, to facilitate both the development and evaluation of deep UAV trackers. WebUAV-3M contains over 3.3 million frames across 4,500 videos and offers 223 highly diverse target categories. Each video is densely annotated with bounding boxes by an efficient and scalable semiautomatic target annotation (SATA) pipeline. Importantly, to take advantage of the complementary superiority of language and audio, we enrich WebUAV-3M by innovatively providing both natural language specifications and audio descriptions. We believe that such additions will greatly boost future research in terms of exploring language features and audio cues for multimodal UAV tracking. In addition, a fine-grained UAV tracking-under-scenario constraint (UTUSC) evaluation protocol and seven challenging scenario subtest sets are constructed to enable the community to develop, adapt and evaluate various types of advanced trackers. We provide extensive evaluations and detailed analyses of 43 representative trackers and envision future research directions in the field of deep UAV tracking and beyond. The dataset, toolkits and baseline results are available at \url{https://github.com/983632847/WebUAV-3M}.
[ { "version": "v1", "created": "Wed, 19 Jan 2022 05:39:42 GMT" }, { "version": "v2", "created": "Mon, 24 Jan 2022 12:07:09 GMT" }, { "version": "v3", "created": "Sun, 7 Aug 2022 01:20:13 GMT" }, { "version": "v4", "created": "Sat, 31 Dec 2022 02:00:27 GMT" } ]
2023-01-03T00:00:00
[ [ "Zhang", "Chunhui", "" ], [ "Huang", "Guanjie", "" ], [ "Liu", "Li", "" ], [ "Huang", "Shan", "" ], [ "Yang", "Yinan", "" ], [ "Wan", "Xiang", "" ], [ "Ge", "Shiming", "" ], [ "Tao", "Dacheng", "" ] ]
new_dataset
0.995835
2203.05056
Ahmed Rida Sekkat
Ahmed Rida Sekkat, Yohan Dupuis, Varun Ravi Kumar, Hazem Rashed, Senthil Yogamani, Pascal Vasseur, Paul Honeine
SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving
IEEE Robotics and Automation Letters (RA-L) and IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022). An initial sample of the dataset is released in https://drive.google.com/drive/folders/1N5rrySiw1uh9kLeBuOblMbXJ09YsqO7I
IEEE Robotics and Automation Letters ( Volume: 7, Issue: 3, July 2022)
10.1109/LRA.2022.3188106
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Surround-view cameras are a primary sensor for automated driving, used for near-field perception. They are among the most commonly used sensors in commercial vehicles, primarily for parking visualization and automated parking. Four fisheye cameras with a 190{\deg} field of view cover the 360{\deg} around the vehicle. Due to its high radial distortion, the standard algorithms do not extend easily. Previously, we released the first public fisheye surround-view dataset named WoodScape. In this work, we release a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth. Secondly, WoodScape did not have all four cameras annotated simultaneously in order to sample diverse frames. However, this means that multi-camera algorithms cannot be designed to obtain a unified output in birds-eye space, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in CARLA Simulator matching WoodScape's configuration and created SynWoodScape. We release 80k images from the synthetic dataset with annotations for 10+ tasks. We also release the baseline code and supporting scripts.
[ { "version": "v1", "created": "Wed, 9 Mar 2022 21:30:52 GMT" }, { "version": "v2", "created": "Mon, 23 May 2022 08:44:28 GMT" }, { "version": "v3", "created": "Sun, 26 Jun 2022 10:09:16 GMT" }, { "version": "v4", "created": "Mon, 8 Aug 2022 05:14:19 GMT" }, { "version": "v5", "created": "Mon, 2 Jan 2023 08:31:35 GMT" } ]
2023-01-03T00:00:00
[ [ "Sekkat", "Ahmed Rida", "" ], [ "Dupuis", "Yohan", "" ], [ "Kumar", "Varun Ravi", "" ], [ "Rashed", "Hazem", "" ], [ "Yogamani", "Senthil", "" ], [ "Vasseur", "Pascal", "" ], [ "Honeine", "Paul", "" ] ]
new_dataset
0.999742
2205.05533
Azarakhsh Keipour
Mohammadreza Mousaei, Junyi Geng, Azarakhsh Keipour, Dongwei Bai, and Sebastian Scherer
Design, Modeling and Control for a Tilt-rotor VTOL UAV in the Presence of Actuator Failure
8 pages
Proceedings of 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4310-4317
10.1109/IROS47612.2022.9981806
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Enabling vertical take-off and landing while providing the ability to fly long ranges opens the door to a wide range of new real-world aircraft applications while improving many existing tasks. Tiltrotor vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) are a better choice than fixed-wing and multirotor aircraft for such applications. Prior works on these aircraft have addressed aerodynamic performance, design, modeling, and control. However, a less explored area is the study of their potential fault tolerance due to their inherent redundancy, which allows them to tolerate some degree of actuation failure. This paper introduces tolerance to several types of actuator failures in a tiltrotor VTOL aircraft. We discuss the design and modeling of a custom tiltrotor VTOL UAV, which is a combination of a fixed-wing aircraft and a quadrotor with tilting rotors, where the four propellers can be rotated individually. Then, we analyze the feasible wrench space the vehicle can generate and design the dynamic control allocation so that the system can adapt to actuator failures, benefiting from the configuration redundancy. The proposed approach is lightweight and is implemented as an extension to an already-existing flight control stack. Extensive experiments validate that the system can maintain the controlled flight under different actuator failures. To the best of our knowledge, this work is the first study of the tiltrotor VTOL's fault-tolerance that exploits the configuration redundancy. The source code and simulation can be accessed at https://theairlab.org/vtol.
[ { "version": "v1", "created": "Wed, 11 May 2022 14:23:18 GMT" }, { "version": "v2", "created": "Mon, 2 Jan 2023 15:23:24 GMT" } ]
2023-01-03T00:00:00
[ [ "Mousaei", "Mohammadreza", "" ], [ "Geng", "Junyi", "" ], [ "Keipour", "Azarakhsh", "" ], [ "Bai", "Dongwei", "" ], [ "Scherer", "Sebastian", "" ] ]
new_dataset
0.985614
2205.13973
Carole Porrier
Thomas Fernique and Carole Porrier
Ammann Bars for Octagonal Tilings
sagemath code as an ancillary file
null
null
null
cs.DM math.DS
http://creativecommons.org/licenses/by/4.0/
Ammann bars are formed by segments (decorations) on the tiles of a tiling such that forming straight lines with them while tiling forces non-periodicity. Only a few cases are known, starting with Robert Ammann's observations on Penrose tiles, but there is no general explanation or construction. In this article we propose a general method for cut and project tilings based on the notion of subperiods and we illustrate it with an aperiodic set of 36 decorated prototiles related to what we called Cyrenaic tilings.
[ { "version": "v1", "created": "Fri, 27 May 2022 13:42:56 GMT" }, { "version": "v2", "created": "Mon, 7 Nov 2022 05:20:11 GMT" }, { "version": "v3", "created": "Sat, 31 Dec 2022 17:11:10 GMT" } ]
2023-01-03T00:00:00
[ [ "Fernique", "Thomas", "" ], [ "Porrier", "Carole", "" ] ]
new_dataset
0.992445
2208.09787
Xue-Feng Zhu
Xue-Feng Zhu, Tianyang Xu, Zhangyong Tang, Zucheng Wu, Haodong Liu, Xiao Yang, Xiao-Jun Wu, Josef Kittler
RGBD1K: A Large-scale Dataset and Benchmark for RGB-D Object Tracking
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
RGB-D object tracking has attracted considerable attention recently, achieving promising performance thanks to the symbiosis between visual and depth channels. However, given a limited amount of annotated RGB-D tracking data, most state-of-the-art RGB-D trackers are simple extensions of high-performance RGB-only trackers, without fully exploiting the underlying potential of the depth channel in the offline training stage. To address the dataset deficiency issue, a new RGB-D dataset named RGBD1K is released in this paper. The RGBD1K contains 1,050 sequences with about 2.5M frames in total. To demonstrate the benefits of training on a larger RGB-D dataset in general, and RGBD1K in particular, we develop a transformer-based RGB-D tracker, named SPT, as a baseline for future visual object tracking studies using the new dataset. The results of extensive experiments using the SPT tracker demonstrate the potential of the RGBD1K dataset to improve the performance of RGB-D tracking, inspiring future developments of effective tracker designs. The dataset and codes will be available on the project homepage: https://github.com/xuefeng-zhu5/RGBD1K.
[ { "version": "v1", "created": "Sun, 21 Aug 2022 03:07:36 GMT" }, { "version": "v2", "created": "Tue, 13 Dec 2022 10:30:06 GMT" }, { "version": "v3", "created": "Fri, 30 Dec 2022 23:23:37 GMT" } ]
2023-01-03T00:00:00
[ [ "Zhu", "Xue-Feng", "" ], [ "Xu", "Tianyang", "" ], [ "Tang", "Zhangyong", "" ], [ "Wu", "Zucheng", "" ], [ "Liu", "Haodong", "" ], [ "Yang", "Xiao", "" ], [ "Wu", "Xiao-Jun", "" ], [ "Kittler", "Josef", "" ] ]
new_dataset
0.999623
2209.00349
Jihoon Kim
Jihoon Kim, Jiseob Kim, Sungjoon Choi
FLAME: Free-form Language-based Motion Synthesis & Editing
AAAI 2023
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Text-based motion generation models are drawing a surge of interest for their potential for automating the motion-making process in the game, animation, or robot industries. In this paper, we propose a diffusion-based motion synthesis and editing model named FLAME. Inspired by the recent successes in diffusion models, we integrate diffusion-based generative models into the motion domain. FLAME can generate high-fidelity motions well aligned with the given text. Also, it can edit parts of the motion, both frame-wise and joint-wise, without any fine-tuning. FLAME involves a new transformer-based architecture we devise to better handle motion data, which is found to be crucial to manage variable-length motions and attend well to free-form text. In experiments, we show that FLAME achieves state-of-the-art generation performance on three text-motion datasets: HumanML3D, BABEL, and KIT. We also demonstrate that the editing capability of FLAME can be extended to other tasks such as motion prediction or motion in-betweening, which have been previously covered by dedicated models.
[ { "version": "v1", "created": "Thu, 1 Sep 2022 10:34:57 GMT" }, { "version": "v2", "created": "Sun, 1 Jan 2023 11:46:43 GMT" } ]
2023-01-03T00:00:00
[ [ "Kim", "Jihoon", "" ], [ "Kim", "Jiseob", "" ], [ "Choi", "Sungjoon", "" ] ]
new_dataset
0.999278
2209.10941
Ruslan Shevchenko
Ruslan Shevchenko
Embedding generic monadic transformer into Scala
Accepted to publication into "Trends of Functional Programming 2022"
null
10.1007/978-3-031-21314-4_1
null
cs.PL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Dotty-cps-async is an open-source package that consists of a Scala macro, which implements generic async/await via monadic CPS transform, and a library, which provides monadic substitutions for higher-order functions from the standard library. It allows developers to use direct control flow constructions of the base language instead of a monadic DSL for various applications. Beyond the well-known async/await operations, the package provides options for transforming higher-order function applications, generating call-chain proxies, and automatic coloring.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 11:46:03 GMT" } ]
2023-01-03T00:00:00
[ [ "Shevchenko", "Ruslan", "" ] ]
new_dataset
0.995264
2210.12193
Jan Hohenheim
Jan Hohenheim, Zhaoyu Devon Liu, Tommaso Stecconi, Pietro Palopoli
A Trainable Sequence Learner that Learns and Recognizes Two-Input Sequence Patterns
Submitted to IEEE TENCON 2022
null
10.1109/TENCON55691.2022.9977663
null
cs.NE
http://creativecommons.org/licenses/by/4.0/
We present two designs for an analog circuit that can learn to detect a temporal sequence of two inputs. The training phase is done by feeding the circuit with the desired sequence; after the training is completed, each time the trained sequence is encountered again, the circuit will emit a signal of correct recognition. Sequences are on the order of tens of nanoseconds. The first design can reset the trained sequence at runtime but assumes very strict timing of the inputs. The second design can only be trained once but is lenient about the inputs' timing.
[ { "version": "v1", "created": "Fri, 21 Oct 2022 18:43:18 GMT" } ]
2023-01-03T00:00:00
[ [ "Hohenheim", "Jan", "" ], [ "Liu", "Zhaoyu Devon", "" ], [ "Stecconi", "Tommaso", "" ], [ "Palopoli", "Pietro", "" ] ]
new_dataset
0.998861
2212.02108
Ana Kotarcic
Ana Kotarcic, Dominik Hangartner, Fabrizio Gilardi, Selina Kurer, Karsten Donnay
Human-in-the-Loop Hate Speech Classification in a Multilingual Context
Findings of EMNLP 2022
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
The shift of public debate to the digital sphere has been accompanied by a rise in online hate speech. While many promising approaches for hate speech classification have been proposed, studies often focus only on a single language, usually English, and do not address three key concerns: post-deployment performance, classifier maintenance and infrastructural limitations. In this paper, we introduce a new human-in-the-loop BERT-based hate speech classification pipeline and trace its development from initial data collection and annotation all the way to post-deployment. Our classifier, trained using data from our original corpus of over 422k examples, is specifically developed for the inherently multilingual setting of Switzerland and outperforms, with its F1 score of 80.5, the currently best-performing BERT-based multilingual classifier by 5.8 F1 points in German and 3.6 F1 points in French. Our systematic evaluations over a 12-month period further highlight the vital importance of continuous, human-in-the-loop classifier maintenance to ensure robust hate speech classification post-deployment.
[ { "version": "v1", "created": "Mon, 5 Dec 2022 09:05:40 GMT" }, { "version": "v2", "created": "Sun, 1 Jan 2023 14:39:09 GMT" } ]
2023-01-03T00:00:00
[ [ "Kotarcic", "Ana", "" ], [ "Hangartner", "Dominik", "" ], [ "Gilardi", "Fabrizio", "" ], [ "Kurer", "Selina", "" ], [ "Donnay", "Karsten", "" ] ]
new_dataset
0.996903
2212.08996
Manuel Luis Delos Santos
Manuel Luis C. Delos Santos (1), Ronaldo S. Tinio (2), Darwin B. Diaz (3) and Karlene Emily I. Tolosa (4), ((1)(3)(4) Asian Institute of Computer Studies, Philippines, (2) Pamantasan ng Lungsod ng Valenzuela, Philippines)
Smart Face Shield: A Sensor-Based Wearable Face Shield Utilizing Computer Vision Algorithms
null
IJCSR Volume 6, October 2022, ISSN 2546-115X, pages 1-15
10.25147/ijcsr.2017.001.1.118
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The study aims to develop a wearable device to combat the onslaught of COVID-19, to enhance the regular face shield available in the market, and to raise awareness of the health and safety protocols initiated by the government and its affiliates in the enforcement of social distancing through the integration of computer vision algorithms. The wearable device is composed of various hardware and software components, such as a transparent polycarbonate face shield, a microprocessor, sensors, a camera, a thin-film-transistor on-screen display, jumper wires, a power bank, and the Python programming language. The algorithm incorporated in the study is object detection based on computer vision and machine learning. The front camera, using OpenCV, determines the distance of a person in front of the user. Utilizing TensorFlow, the system identifies the target object in the image or live feed and obtains its bounding boxes. Estimating the distance from the camera to the target object requires the focal length of the lens: to get the focal length, multiply the pixel width by the known distance and divide it by the known width (Rosebrock, 2020), as illustrated in the sketch following this record. The deployment of unit testing ensures that the parameters are valid in terms of design and specifications.
[ { "version": "v1", "created": "Sun, 18 Dec 2022 03:23:38 GMT" } ]
2023-01-03T00:00:00
[ [ "Santos", "Manuel Luis C. Delos", "" ], [ "Tinio", "Ronaldo S.", "" ], [ "Diaz", "Darwin B.", "" ], [ "Tolosa", "Karlene Emily I.", "" ] ]
new_dataset
0.993986
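The distance-estimation recipe in the abstract above is plain triangle similarity. Here is a minimal sketch of that arithmetic; the calibration constants and pixel widths below are invented for illustration and are not values from the paper.

```python
# Triangle-similarity distance estimation (after Rosebrock), as described
# in the abstract above. All calibration numbers here are made up for
# illustration; they are not taken from the paper.

KNOWN_DISTANCE_CM = 100.0  # calibration: object photographed 100 cm away
KNOWN_WIDTH_CM = 40.0      # calibration: known real-world object width

def focal_length(pixel_width: float) -> float:
    """F = (P * D) / W, computed from a single calibration image."""
    return (pixel_width * KNOWN_DISTANCE_CM) / KNOWN_WIDTH_CM

def distance_to_object(focal: float, pixel_width: float) -> float:
    """D' = (W * F) / P', for a new detection's bounding-box width."""
    return (KNOWN_WIDTH_CM * focal) / pixel_width

if __name__ == "__main__":
    f = focal_length(pixel_width=320.0)               # calibration frame
    print(distance_to_object(f, pixel_width=160.0))   # -> 200.0 cm
```

Halving the apparent pixel width doubles the estimated distance, which is exactly the proportionality that the unit testing mentioned in the abstract would need to validate.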
2212.09937
Emily Lines
Emily R. Lines, Matt Allen, Carlos Cabo, Kim Calders, Amandine Debus, Stuart W. D. Grieve, Milto Miltiadou, Adam Noach, Harry J. F. Owen and Stefano Puliti
AI applications in forest monitoring need remote sensing benchmark datasets
null
null
null
null
cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rise of high-resolution remote sensing technologies, there has been an explosion in the amount of data available for forest monitoring, and an accompanying growth in artificial intelligence applications to automatically derive forest properties of interest from these datasets. Many studies use their own data at small spatio-temporal scales, and demonstrate an application of an existing or adapted data science method for a particular task. This approach often involves intensive and time-consuming data collection and processing, but generates results restricted to specific ecosystems and sensor types. There is a lack of widespread acknowledgement of how the types and structures of data used affect the performance and accuracy of analysis algorithms. To accelerate progress in the field more efficiently, benchmarking datasets upon which methods can be tested and compared are sorely needed. Here, we discuss how lack of standardisation impacts confidence in estimation of key forest properties, and how considerations of data collection need to be accounted for in assessing method performance. We present pragmatic requirements and considerations for the creation of rigorous, useful benchmarking datasets for forest monitoring applications, and discuss how tools from modern data science can improve use of existing data. We list a set of example large-scale datasets that could contribute to benchmarking, and present a vision for how community-driven, representative benchmarking initiatives could benefit the field.
[ { "version": "v1", "created": "Tue, 20 Dec 2022 01:11:40 GMT" } ]
2023-01-03T00:00:00
[ [ "Lines", "Emily R.", "" ], [ "Allen", "Matt", "" ], [ "Cabo", "Carlos", "" ], [ "Calders", "Kim", "" ], [ "Debus", "Amandine", "" ], [ "Grieve", "Stuart W. D.", "" ], [ "Miltiadou", "Milto", "" ], [ "Noach", "Adam", "" ], [ "Owen", "Harry J. F.", "" ], [ "Puliti", "Stefano", "" ] ]
new_dataset
0.981803
2212.13742
Haiyue Yuan
Jamie Knott, Haiyue Yuan, Matthew Boakes, Shujun Li
Cyber Security and Online Safety Education for Schools in the UK: Looking through the Lens of Twitter Data
This is the full edition of a 4-page poster paper published in the Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing (SAC '23), which can be accessed via the following DOI link: https://doi.org/10.1145/3555776.3577805
null
null
null
cs.CY cs.SI
http://creativecommons.org/licenses/by/4.0/
In recent years, digital technologies have grown rapidly, and many school-aged children have become heavily exposed to the digital world. As children use more digital technologies, schools need to teach them more about cyber security and online safety. As a result, there are now more school programmes and projects that teach students about cyber security and online safety and help them learn and improve their skills. Still, despite the many programmes and projects, there is little evidence of how many schools have taken part in them or helped spread the word about them. This work shows how we can learn about the size and scope of cyber security and online safety education in schools in the UK, a country with a very active and advanced cyber security education profile, using nearly 200k public tweets from over 15k schools. By using simple techniques like descriptive statistics and visualisation as well as advanced natural language processing (NLP) techniques like sentiment analysis and topic modelling, we present new findings and insights into how UK schools as a sector have been doing on Twitter with their cyber security and online safety education activities. Our work provides a range of large-scale, real-world evidence that can help inform people and organisations interested in cyber security and online safety education in schools.
[ { "version": "v1", "created": "Wed, 28 Dec 2022 08:30:24 GMT" }, { "version": "v2", "created": "Fri, 30 Dec 2022 20:48:41 GMT" } ]
2023-01-03T00:00:00
[ [ "Knott", "Jamie", "" ], [ "Yuan", "Haiyue", "" ], [ "Boakes", "Matthew", "" ], [ "Li", "Shujun", "" ] ]
new_dataset
0.969717
2301.00001
Tauheed Khan Mohd
Jordan Thompson, Ryan Benac, Kidus Olana, Talha Hassan, Andrew Sward, Tauheed Khan Mohd
NFTrig
null
null
null
null
cs.HC
http://creativecommons.org/publicdomain/zero/1.0/
NFTrig is a web-based application created for use as an educational tool to teach trigonometry and blockchain technology. Creation of the application includes front- and back-end development as well as integration with other outside sources including MetaMask and OpenSea. The primary development languages include HTML, CSS (Bootstrap 5), and JavaScript, as well as Solidity for smart contract creation. The application itself is hosted on Moralis utilizing their Web3 API. This technical report describes how the application was created, what the application requires, and smart contract design with security considerations in mind. The NFTrig application has undergone significant testing and validation prior to and after deployment. Future suggestions and recommendations for further development, maintenance, and use in other fields for education are also described.
[ { "version": "v1", "created": "Wed, 21 Dec 2022 18:07:06 GMT" } ]
2023-01-03T00:00:00
[ [ "Thompson", "Jordan", "" ], [ "Benac", "Ryan", "" ], [ "Olana", "Kidus", "" ], [ "Hassan", "Talha", "" ], [ "Sward", "Andrew", "" ], [ "Mohd", "Tauheed Khan", "" ] ]
new_dataset
0.999741
2301.00023
Balamurugan Thambiraja
Balamurugan Thambiraja, Ikhsanul Habibie, Sadegh Aliakbarian, Darren Cosker, Christian Theobalt, Justus Thies
Imitator: Personalized Speech-driven 3D Facial Animation
https://youtu.be/JhXTdjiUCUw
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speech-driven 3D facial animation has been widely explored, with applications in gaming, character animation, virtual reality, and telepresence systems. State-of-the-art methods deform the face topology of the target actor to sync the input audio without considering the identity-specific speaking style and facial idiosyncrasies of the target actor, thus resulting in unrealistic and inaccurate lip movements. To address this, we present Imitator, a speech-driven facial expression synthesis method, which learns identity-specific details from a short input video and produces novel facial expressions matching the identity-specific speaking style and facial idiosyncrasies of the target actor. Specifically, we train a style-agnostic transformer on a large facial expression dataset which we use as a prior for audio-driven facial expressions. Based on this prior, we optimize for identity-specific speaking style based on a short reference video. To train the prior, we introduce a novel loss function based on detected bilabial consonants to ensure plausible lip closures and consequently improve the realism of the generated expressions. Through detailed experiments and a user study, we show that our approach produces temporally coherent facial expressions from input audio while preserving the speaking style of the target actors.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 19:00:02 GMT" } ]
2023-01-03T00:00:00
[ [ "Thambiraja", "Balamurugan", "" ], [ "Habibie", "Ikhsanul", "" ], [ "Aliakbarian", "Sadegh", "" ], [ "Cosker", "Darren", "" ], [ "Theobalt", "Christian", "" ], [ "Thies", "Justus", "" ] ]
new_dataset
0.993421
2301.00044
Hisham A. Kholidy
Thomas Grippo, Hisham A. Kholidy
Detecting Forged Kerberos Tickets in an Active Directory Environment
null
null
null
null
cs.CR cs.CY cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Active Directory is the most popular service to manage users and devices on the network. Its widespread deployment in the corporate world has made it a popular target for threat actors. While there are many attacks that target Active Directory and its authentication protocol Kerberos, ticket forgery attacks are among the most dangerous. By exploiting weaknesses in Kerberos, attackers can craft their own tickets that allow them to gain unauthorized access to services on the network. These types of attacks are both dangerous and hard to detect. Detecting them may require a powerful centralized log-collecting system to analyze Windows security logs across multiple services, which would give the additional visibility needed to find these forged tickets in the network.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 20:20:42 GMT" } ]
2023-01-03T00:00:00
[ [ "Grippo", "Thomas", "" ], [ "Kholidy", "Hisham A.", "" ] ]
new_dataset
0.998414
2301.00047
Alexander Rubtsov
Alexander Rubtsov
The Simplest Proof of Parikh's Theorem via Derivation Trees
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Parikh's theorem is a fundamental result of formal language theory. Many proofs have been published, and many papers claim to provide a simplified proof, but most of them are long and still complicated. We provide a proof that is really short and simple and that discloses the nature of this fundamental result. We follow a technique close to that of Parikh's original paper, and our proof is similar to the proof by Ryoma Sin'ya (2019), but we provide a more detailed exposition and aim at even greater simplicity. We achieve this simplicity via nonconstructiveness, which allows us to avoid many difficulties met by other proofs.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 20:27:09 GMT" } ]
2023-01-03T00:00:00
[ [ "Rubtsov", "Alexander", "" ] ]
new_dataset
0.994846
2301.00072
Shaobo Li
Jinghan Sun, Shaobo Li, Yunxin Sun, Chao Sun, Dejan Vucinic, and Jian Huang
LeaFTL: A Learning-Based Flash Translation Layer for Solid-State Drives
This paper is accepted at the 28th Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2023)
null
10.1145/3575693.3575744
null
cs.OS
http://creativecommons.org/licenses/by-nc-sa/4.0/
In modern solid-state drives (SSDs), the indexing of flash pages is a critical component in their storage controllers. It not only affects the data access performance, but also determines the efficiency of the precious in-device DRAM resource. A variety of address mapping schemes and optimization techniques have been proposed. However, most of them were developed with human-driven heuristics. They cannot automatically capture diverse data access patterns at runtime in SSD controllers, which leaves significant room for improvement. In this paper, we present a learning-based flash translation layer (FTL), named LeaFTL, which learns the address mapping to tolerate dynamic data access patterns via linear regression at runtime. By grouping a large set of mapping entries into a learned segment (see the sketch after this record), it significantly reduces the memory footprint of the address mapping table, which further benefits the data caching in SSD controllers. LeaFTL also employs various optimization techniques, including out-of-band metadata verification to tolerate mispredictions, optimized flash allocation, and dynamic compaction of learned index segments. We implement LeaFTL with an SSD simulator and evaluate it with various storage workloads. LeaFTL reduces the memory consumption of the mapping table by 2.9x on average and improves the storage performance by 1.4x on average, in comparison with state-of-the-art FTL schemes.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 23:37:39 GMT" } ]
2023-01-03T00:00:00
[ [ "Sun", "Jinghan", "" ], [ "Li", "Shaobo", "" ], [ "Sun", "Yunxin", "" ], [ "Sun", "Chao", "" ], [ "Vucinic", "Dejan", "" ], [ "Huang", "Jian", "" ] ]
new_dataset
0.996133
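The sketch below illustrates the learned-segment idea referenced in the LeaFTL abstract above: many (logical page, physical page) mapping entries are approximated by one linear segment, so only a few parameters are stored instead of every entry. The greedy fitting loop and the error bound `gamma` are illustrative assumptions, not LeaFTL's actual segment-construction algorithm (which, per the abstract, also uses out-of-band metadata to tolerate mispredictions).

```python
# Toy learned address mapping: compress sorted (lpn, ppn) entries into
# linear segments (start_lpn, end_lpn, slope, intercept), each accurate
# to within gamma pages. Assumes strictly increasing, duplicate-free lpns.

def build_segments(mapping, gamma=1.0):
    segments, start = [], 0
    while start < len(mapping):
        end = start + 1
        while end < len(mapping):
            x0, y0 = mapping[start]
            x1, y1 = mapping[end]
            slope = (y1 - y0) / (x1 - x0)
            # keep extending while every covered entry stays within gamma
            if all(abs(y - (y0 + slope * (x - x0))) <= gamma
                   for x, y in mapping[start:end + 1]):
                end += 1
            else:
                break
        x0, y0 = mapping[start]
        x1, y1 = mapping[end - 1]
        slope = (y1 - y0) / (x1 - x0) if end - 1 > start else 0.0
        segments.append((x0, x1, slope, y0 - slope * x0))
        start = end
    return segments

# five entries collapse into two segments of (start, end, slope, intercept)
print(build_segments([(0, 100), (1, 101), (2, 102), (8, 300), (9, 301)]))
```

Looking up a logical page then costs one multiply-add per segment instead of one table entry per page, which is where the memory saving comes from.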
2301.00153
Peter \v{S}vec
Peter \v{S}vec, \v{S}tefan Balogh, Martin Homola, J\'an K\v{l}uka
Knowledge-Based Dataset for Training PE Malware Detection Models
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ontologies are a standard for semantic schemata in many knowledge-intensive domains of human interest. They are now becoming increasingly important also in areas until very recently dominated by subsymbolic representations and machine-learning-based data processing. One such area is information security, and more specifically malware detection. We propose PE Malware Ontology that offers a reusable semantic schema for Portable Executable (PE, Windows binary format) malware files. The ontology was inspired by the structure of the data in the EMBER dataset and it currently covers the data intended for static malware analysis. With this proposal, we hope to achieve: (a) a unified semantic representation for PE malware datasets that are available or will be published in the future; (b) applicability of symbolic, neural-symbolic, or otherwise explainable approaches in the PE malware domain that may lead to improved interpretability of results, which may now be characterized by the terms defined in the ontology; and (c) improved reproducibility of experiments through the joint publishing of semantically treated EMBER data, including fractional datasets.
[ { "version": "v1", "created": "Sat, 31 Dec 2022 08:46:02 GMT" } ]
2023-01-03T00:00:00
[ [ "Švec", "Peter", "" ], [ "Balogh", "Štefan", "" ], [ "Homola", "Martin", "" ], [ "Kľuka", "Ján", "" ] ]
new_dataset
0.999789
2301.00200
Michael Rose PhD
Sebastian Erhardt, Mainak Ghosh, Erik Buunk, Michael E. Rose, Dietmar Harhoff
Logic Mill -- A Knowledge Navigation System
9 pages, 2 figures, 1 table
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
[ { "version": "v1", "created": "Sat, 31 Dec 2022 13:46:50 GMT" } ]
2023-01-03T00:00:00
[ [ "Erhardt", "Sebastian", "" ], [ "Ghosh", "Mainak", "" ], [ "Buunk", "Erik", "" ], [ "Rose", "Michael E.", "" ], [ "Harhoff", "Dietmar", "" ] ]
new_dataset
0.999721
2301.00301
Yuqing Zhu
Rachel Redberg, Yuqing Zhu, Yu-Xiang Wang
Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
The ''Propose-Test-Release'' (PTR) framework is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e. those that add less noise when the input dataset is nice. We extend PTR to a more general setting by privately testing data-dependent privacy losses rather than local sensitivity, hence making it applicable beyond the standard noise-adding mechanisms, e.g. to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from ''Private Aggregation of Teacher Ensembles (PATE)'' -- privately releasing the entire model with a delicate data-dependent analysis.
[ { "version": "v1", "created": "Sat, 31 Dec 2022 22:22:53 GMT" } ]
2023-01-03T00:00:00
[ [ "Redberg", "Rachel", "" ], [ "Zhu", "Yuqing", "" ], [ "Wang", "Yu-Xiang", "" ] ]
new_dataset
0.992758
2301.00395
Jiayi Geng
Ge Zhang, Yizhi Li, Yaoyao Wu, Linyuan Zhang, Chenghua Lin, Jiayi Geng, Shi Wang, Jie Fu
CORGI-PM: A Chinese Corpus For Gender Bias Probing and Mitigation
null
null
null
null
cs.CL cs.AI cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, the prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose a Chinese cOrpus foR Gender bIas Probing and Mitigation, CORGI-PM, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which requires the models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
[ { "version": "v1", "created": "Sun, 1 Jan 2023 12:48:12 GMT" } ]
2023-01-03T00:00:00
[ [ "Zhang", "Ge", "" ], [ "Li", "Yizhi", "" ], [ "Wu", "Yaoyao", "" ], [ "Zhang", "Linyuan", "" ], [ "Lin", "Chenghua", "" ], [ "Geng", "Jiayi", "" ], [ "Wang", "Shi", "" ], [ "Fu", "Jie", "" ] ]
new_dataset
0.987166
2301.00486
Joseph J. Boutros
Joseph J. Boutros and Emina Soljanin
Time-Entanglement QKD: Secret Key Rates and Information Reconciliation Coding
We intend to publish this manuscript in an IEEE journal. 33 pages, 2 tables, and 10 figures
null
null
null
cs.IT math.IT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In time entanglement-based quantum key distribution (QKD), Alice and Bob extract the raw key bits from the (identical) arrival times of entangled photon pairs by time-binning. Each of them individually discretizes time into bins and groups them into frames. They retain only the frames with a single occupied bin. Thus, Alice and Bob can use the position of the occupied bin within a frame to generate random key bits, as in PPM modulation (see the sketch after this record). Because of entanglement, their occupied bins and their keys should be identical. However, practical photon detectors suffer from time jitter errors. These errors cause discrepancies between Alice's and Bob's keys. Alice sends information to Bob through the public channel to reconcile the keys. The amount of information determines the secret key rate. This paper computes the secret key rates possible with detector jitter errors and constructs codes for information reconciliation to approach these rates.
[ { "version": "v1", "created": "Sun, 1 Jan 2023 22:38:35 GMT" } ]
2023-01-03T00:00:00
[ [ "Boutros", "Joseph J.", "" ], [ "Soljanin", "Emina", "" ] ]
new_dataset
0.963138
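A minimal sketch of the PPM-style key extraction the abstract describes: arrival times are discretized into bins, bins are grouped into frames, only single-occupancy frames are retained, and the position of the occupied bin yields the key bits. The bin width and frame size below are arbitrary illustrative choices, and `bins_per_frame` is assumed to be a power of two.

```python
# Time-binning key extraction in the spirit of the abstract above.
# Parameters are illustrative; real systems choose them from hardware
# timing resolution and jitter statistics.

def extract_key_bits(arrival_times, bin_width, bins_per_frame):
    frames = {}
    for t in arrival_times:
        b = int(t // bin_width)                       # discretize time
        frames.setdefault(b // bins_per_frame, []).append(b % bins_per_frame)
    bits = []
    width = bins_per_frame.bit_length() - 1           # log2(bins_per_frame)
    for frame in sorted(frames):
        occupied = set(frames[frame])
        if len(occupied) == 1:                        # keep single-occupancy frames
            pos = occupied.pop()
            bits.extend(int(c) for c in format(pos, f"0{width}b"))
    return bits

# 4 bins per frame -> 2 key bits per retained frame
print(extract_key_bits([0.4, 5.1, 9.7], bin_width=1.0, bins_per_frame=4))
```

Detector jitter would occasionally push an arrival time into a neighboring bin on one side only, producing exactly the Alice/Bob key discrepancies that the paper's reconciliation codes are designed to correct.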
2301.00493
Benjamin Wilson
Benjamin Wilson, William Qi, Tanmay Agarwal, John Lambert, Jagjeet Singh, Siddhesh Khandelwal, Bowen Pan, Ratnesh Kumar, Andrew Hartnett, Jhony Kaesemodel Pontes, Deva Ramanan, Peter Carr, James Hays
Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 00:36:22 GMT" } ]
2023-01-03T00:00:00
[ [ "Wilson", "Benjamin", "" ], [ "Qi", "William", "" ], [ "Agarwal", "Tanmay", "" ], [ "Lambert", "John", "" ], [ "Singh", "Jagjeet", "" ], [ "Khandelwal", "Siddhesh", "" ], [ "Pan", "Bowen", "" ], [ "Kumar", "Ratnesh", "" ], [ "Hartnett", "Andrew", "" ], [ "Pontes", "Jhony Kaesemodel", "" ], [ "Ramanan", "Deva", "" ], [ "Carr", "Peter", "" ], [ "Hays", "James", "" ] ]
new_dataset
0.999816
2301.00505
Adam Gamba
Adam Gamba and Andr\'es Monroy-Hern\'andez
PokAR: Facilitating Poker Play Through Augmented Reality
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
We introduce PokAR, an augmented reality (AR) application to facilitate poker play. PokAR aims to alleviate three difficulties of traditional poker by leveraging AR technology: (1) need to have physical poker chips, (2) complex rules of poker, (3) slow game pace caused by laborious tasks. Despite the potential benefits of AR in poker, not much research has been done in the field. In fact, PokAR is the first application to enable AR poker on a mobile device without requiring extra costly equipment. This has been done by creating a Snapchat Lens which can be used on most mobile devices. We evaluated this application by instructing 4 participant dyads to use PokAR to engage in poker play and respond to survey questions about their experience. We found that most PokAR features were positively received, AR did not significantly improve nor hinder socialization, PokAR slightly increased the game pace, and participants had an overall enjoyable experience with the Lens. These findings led to three major conclusions: (1) AR has the potential to augment and simplify traditional table games, (2) AR should not be used to replace traditional experiences, only augment them, (3) Future work includes additional features like increased tactility and statistical annotations.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 02:32:26 GMT" } ]
2023-01-03T00:00:00
[ [ "Gamba", "Adam", "" ], [ "Monroy-Hernández", "Andrés", "" ] ]
new_dataset
0.999874
2301.00633
Ver\'onica Becher
Ver\'onica Becher and Olivier Carton
Nested perfect toroidal arrays
null
null
null
null
cs.IT cs.DM math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce two-dimensional toroidal arrays that are a variant of the de Bruijn tori. We call them nested perfect toroidal arrays. Instead of asking that every array of a given size has exactly one occurrence, we partition the positions into congruence classes and ask for exactly one occurrence in each congruence class. We also ask that this property applies recursively to each of the subarrays. We give a method to construct nested perfect toroidal arrays based on the Pascal triangle matrix modulo 2 (see the sketch after this record). For the two-symbol alphabet, and for $n$ being a power of $2$, our method yields $2^{n^2+n-1}$ different nested perfect toroidal arrays allocating all the different $n\times n$ arrays in each congruence class that arises from taking the line number modulo $n$ and the column number modulo $n$.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 12:51:30 GMT" } ]
2023-01-03T00:00:00
[ [ "Becher", "Verónica", "" ], [ "Carton", "Olivier", "" ] ]
new_dataset
0.995817
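The construction mentioned in the abstract is based on the Pascal triangle matrix modulo 2. The snippet below generates only that ingredient (the binomial-coefficient matrix mod 2, a Sierpinski-like pattern); it is not the full nested perfect toroidal array construction from the paper.

```python
# The n x n matrix of binomial coefficients C(i, j) mod 2, one building
# block of the paper's construction. Requires Python 3.8+ for math.comb.

from math import comb

def pascal_mod2(n):
    return [[comb(i, j) % 2 for j in range(n)] for i in range(n)]

for row in pascal_mod2(8):
    print("".join(str(b) for b in row))
```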
2301.00704
Jarred Barber
Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, Dilip Krishnan
Muse: Text-To-Image Generation via Masked Generative Transformers
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens (see the sketch after this record). Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
[ { "version": "v1", "created": "Mon, 2 Jan 2023 14:43:38 GMT" } ]
2023-01-03T00:00:00
[ [ "Chang", "Huiwen", "" ], [ "Zhang", "Han", "" ], [ "Barber", "Jarred", "" ], [ "Maschinot", "AJ", "" ], [ "Lezama", "Jose", "" ], [ "Jiang", "Lu", "" ], [ "Yang", "Ming-Hsuan", "" ], [ "Murphy", "Kevin", "" ], [ "Freeman", "William T.", "" ], [ "Rubinstein", "Michael", "" ], [ "Li", "Yuanzhen", "" ], [ "Krishnan", "Dilip", "" ] ]
new_dataset
0.977761
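A sketch of the masked token modeling objective described above: randomly mask discrete image tokens and train the network to predict them, conditioned on a text embedding. The masking ratio, the `mask_id` convention, and the stand-in model below are assumptions for illustration only; they are not Muse's actual hyperparameters or architecture.

```python
# Masked token modeling loss in the spirit of the abstract above. Model
# internals are stubbed; only the "predict randomly masked image tokens"
# wiring is shown.

import torch
import torch.nn.functional as F

def masked_token_loss(model, text_emb, image_tokens, mask_id, mask_frac=0.5):
    # image_tokens: (B, L) integer codebook indices
    mask = torch.rand_like(image_tokens, dtype=torch.float) < mask_frac
    inputs = image_tokens.masked_fill(mask, mask_id)       # replace with [MASK]
    logits = model(inputs, text_emb)                       # (B, L, vocab)
    return F.cross_entropy(logits[mask], image_tokens[mask])  # masked positions only

# stand-in model: any callable producing (B, L, vocab) logits works here
B, L, V = 2, 16, 1024
dummy_model = lambda toks, txt: torch.randn(toks.shape[0], toks.shape[1], V)
tokens = torch.randint(0, V, (B, L))
print(masked_token_loss(dummy_model, None, tokens, mask_id=V))
```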
2301.00709
Ole-Christoffer Granmo
Bimal Bhattarai and Ole-Christoffer Granmo and Lei Jiao and Rohan Yadav and Jivitesh Sharma
Tsetlin Machine Embedding: Representing Words Using Logical Expressions
9 pages, 7 figures
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embedding words in vector space is a fundamental first step in state-of-the-art natural language processing (NLP). Typical NLP solutions employ pre-defined vector representations to improve generalization by co-locating similar words in vector space. For instance, Word2Vec is a self-supervised predictive model that captures the context of words using a neural network. Similarly, GLoVe is a popular unsupervised model incorporating corpus-wide word co-occurrence statistics. Such word embeddings have significantly boosted important NLP tasks, including sentiment analysis, document classification, and machine translation. However, the embeddings are dense floating-point vectors, making them expensive to compute and difficult to interpret. In this paper, we instead propose to represent the semantics of words with a few defining words that are related using propositional logic. To produce such logical embeddings, we introduce a Tsetlin Machine-based autoencoder that learns logical clauses in a self-supervised manner. The clauses consist of contextual words like "black," "cup," and "hot" to define other words like "coffee," thus being human-understandable (see the toy illustration after this record). We evaluate our embedding approach on several intrinsic and extrinsic benchmarks, outperforming GLoVe on six classification tasks. Furthermore, we investigate the interpretability of our embedding using the logical representations acquired during training. We also visualize word clusters in vector space, demonstrating how our logical embedding co-locates similar words.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 15:02:45 GMT" } ]
2023-01-03T00:00:00
[ [ "Bhattarai", "Bimal", "" ], [ "Granmo", "Ole-Christoffer", "" ], [ "Jiao", "Lei", "" ], [ "Yadav", "Rohan", "" ], [ "Sharma", "Jivitesh", "" ] ]
new_dataset
0.98237
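A toy illustration of what such a logical embedding looks like: a target word represented by a conjunctive clause over contextual words. The clause below is hand-written in the spirit of the abstract's "black"/"cup"/"hot" example; it is invented for illustration, not learned by a Tsetlin Machine.

```python
# An invented conjunctive clause "defining" a word via its context words.

def coffee_clause(context_words):
    literals = ("black", "cup", "hot")  # hand-picked, not learned
    return all(w in context_words for w in literals)

print(coffee_clause({"black", "cup", "hot", "morning"}))  # True
print(coffee_clause({"black", "tie"}))                    # False
```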
2301.00716
Felix Hamann
Felix Hamann, Adrian Ulges, Maurice Falk
IRT2: Inductive Linking and Ranking in Knowledge Graphs of Varying Scale
null
null
null
null
cs.LG cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that support knowledge engineers with exploring text collections and discovering and linking new (so-called open-world) entities to the knowledge graph. We argue that - though neural approaches to text mining have yielded impressive results in the past years - current benchmarks do not reflect the typical challenges encountered in the industrial wild properly. Therefore, our first contribution is an open benchmark coined IRT2 (inductive reasoning with text) that (1) covers knowledge graphs of varying sizes (including very small ones), (2) comes with incidental, low-quality text mentions, and (3) includes not only triple completion but also ranking, which is relevant for supporting experts with discovery tasks. We investigate two neural models for inductive link prediction, one based on end-to-end learning and one that learns from the knowledge graph and text data in separate steps. These models compete with a strong bag-of-words baseline. The results show a significant advance in performance for the neural approaches as soon as the available graph data decreases for linking. For ranking, the results are promising, and the neural approaches outperform the sparse retriever by a wide margin.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 15:19:21 GMT" } ]
2023-01-03T00:00:00
[ [ "Hamann", "Felix", "" ], [ "Ulges", "Adrian", "" ], [ "Falk", "Maurice", "" ] ]
new_dataset
0.976252
2301.00730
Quan Quan
Quan Quan, Wang Shuai, Gao Wenhan
Lifting-wing Quadcopter Modeling and Unified Control
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hybrid unmanned aerial vehicles (UAVs) integrate the efficient forward flight of fixed-wing UAVs and the vertical takeoff and landing (VTOL) capabilities of multicopter UAVs. This paper presents the modeling, control and simulation of a new type of hybrid micro-small UAV, coined the lifting-wing quadcopter. The airframe orientation of the lifting wing needs to tilt by a specific angle, often within $45$ degrees, neither nearly $90$ nor approximately $0$ degrees. Compared with some convertiplane and tail-sitter UAVs, the lifting-wing quadcopter has a highly reliable structure, robust wind resistance, low cruise speed and reliable transition flight, making it a candidate for fully autonomous operation outdoors or in some confined indoor airspace. In the modeling part, the forces and moments generated by both the lifting wing and the rotors are considered. Based on the established model, a unified controller for the full flight phase is designed. The controller is capable of uniformly treating hovering and forward flight, and enables a continuous transition between the two modes, depending on the velocity command. Moreover, by taking rotor thrust and aerodynamic force into consideration simultaneously, a control allocation based on optimization is utilized to realize cooperative control for energy saving. Finally, comprehensive Hardware-In-the-Loop (HIL) simulations are performed to verify the advantages of the designed aircraft and the proposed controller.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 15:48:45 GMT" } ]
2023-01-03T00:00:00
[ [ "Quan", "Quan", "" ], [ "Shuai", "Wang", "" ], [ "Wenhan", "Gao", "" ] ]
new_dataset
0.960081
2301.00764
Christian Lenz
Christian Lenz, Sven Behnke
Bimanual Telemanipulation with Force and Haptic Feedback through an Anthropomorphic Avatar System
Published in Robotics and Autonomous Systems, 2022 (https://doi.org/10.1016/j.robot.2022.104338). arXiv admin note: substantial text overlap with arXiv:2109.13382
null
10.1016/j.robot.2022.104338
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic teleoperation is a key technology for a wide variety of applications. It allows sending robots instead of humans to remote, possibly dangerous locations while still using the human brain with its enormous knowledge and creativity, especially for solving unexpected problems. A main challenge in teleoperation consists of providing enough feedback to the human operator for situation awareness, thus creating full immersion, as well as offering the operator suitable control interfaces to achieve efficient and robust task fulfillment. We present a bimanual telemanipulation system consisting of an anthropomorphic avatar robot and an operator station providing force and haptic feedback to the human operator. The avatar arms are controlled in Cartesian space with a direct mapping of the operator movements. The measured forces and torques on the avatar side are haptically displayed to the operator. We developed a predictive avatar model for limit avoidance which runs on the operator side, ensuring low latency. The system was successfully evaluated during the ANA Avatar XPRIZE competition semifinals. In addition, we performed in-lab experiments and carried out a small user study with mostly untrained operators.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 17:26:54 GMT" } ]
2023-01-03T00:00:00
[ [ "Lenz", "Christian", "" ], [ "Behnke", "Sven", "" ] ]
new_dataset
0.998834
2301.00808
Saining Xie
Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie
ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders
Code and models available at https://github.com/facebookresearch/ConvNeXt-V2
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and a performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition (see the sketch after this record). This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
[ { "version": "v1", "created": "Mon, 2 Jan 2023 18:59:31 GMT" } ]
2023-01-03T00:00:00
[ [ "Woo", "Sanghyun", "" ], [ "Debnath", "Shoubhik", "" ], [ "Hu", "Ronghang", "" ], [ "Chen", "Xinlei", "" ], [ "Liu", "Zhuang", "" ], [ "Kweon", "In So", "" ], [ "Xie", "Saining", "" ] ]
new_dataset
0.998629
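A sketch of the Global Response Normalization (GRN) idea named above: aggregate a global per-channel statistic, normalize it across channels, and use it to recalibrate features, which encourages inter-channel competition. The channels-last layout, epsilon, and residual form below follow common practice and should be treated as assumptions; consult the linked repository for the authors' exact implementation.

```python
# GRN-style layer: global aggregation, divisive normalization across
# channels, then feature calibration with a residual connection.

import torch
import torch.nn as nn

class GRN(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x is channels-last: (N, H, W, C)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)   # global per-channel stat
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)    # normalize across channels
        return self.gamma * (x * nx) + self.beta + x        # calibrate + residual

y = GRN(dim=64)(torch.randn(2, 14, 14, 64))  # shape preserved: (2, 14, 14, 64)
```

With gamma and beta initialized to zero, the layer starts as an identity mapping and learns how much channel competition to apply during training.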
2107.05851
Jun Mao
Jun Mao, Lilian Zhang, Xiaofeng He, Hao Qu, Xiaoping Hu
A 2D Georeferenced Map Aided Visual-Inertial System for Precise UAV Localization
null
2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
10.1109/IROS47612.2022.9982254
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precise geolocalization is crucial for unmanned aerial vehicles (UAVs). However, most currently deployed UAVs rely on global navigation satellite systems (GNSS) or high-precision inertial navigation systems (INS) for geolocalization. In this paper, we propose to use a lightweight visual-inertial system with a 2D georeferenced map to obtain accurate and consecutive geodetic positions for UAVs. The proposed system first integrates a micro inertial measurement unit (MIMU) and a monocular camera as odometry to consecutively estimate the navigation states and reconstruct the 3D positions of the observed visual features in the local world frame. To obtain the geolocation, the visual features tracked by the odometry are further registered to the 2D georeferenced map. While most conventional methods perform image-level aerial image registration, we propose to align the reconstructed points to the map points in the geodetic frame; this helps to filter out the large portion of outliers and decouples the negative effects from the horizontal angles. The registered points are then used to relocalize the vehicle in the geodetic frame. Finally, a pose graph is deployed to fuse the geolocation from the aerial image registration and the local navigation result from the visual-inertial odometry (VIO) to achieve consecutive and drift-free geolocalization performance. We have validated the proposed method by rigidly installing the sensors on a UAV body and have conducted two flights in different environments with unknown initial states. The results show that the proposed method can achieve less than 4 m position error in a flight at 100 m altitude and less than 9 m position error in a flight at about 300 m altitude.
[ { "version": "v1", "created": "Tue, 13 Jul 2021 05:10:02 GMT" }, { "version": "v2", "created": "Thu, 29 Dec 2022 03:17:16 GMT" } ]
2023-01-02T00:00:00
[ [ "Mao", "Jun", "" ], [ "Zhang", "Lilian", "" ], [ "He", "Xiaofeng", "" ], [ "Qu", "Hao", "" ], [ "Hu", "Xiaoping", "" ] ]
new_dataset
0.993287
2110.07276
Soobee Lee
Soobee Lee, Minindu Weerakoon, Jonghyun Choi, Minjia Zhang, Di Wang, Myeongjae Jeon
Carousel Memory: Rethinking the Design of Episodic Memory for Continual Learning
This paper is the extended version of 'CarM: Hierarchical Episodic Memory for Continual Learning' accepted at DAC 2022
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continual Learning (CL) is an emerging machine learning paradigm that aims to learn from a continuous stream of tasks without forgetting knowledge learned from the previous tasks. To avoid performance decrease caused by forgetting, prior studies exploit episodic memory (EM), which stores a subset of the past observed samples while learning from new non-i.i.d. data. Despite the promising results, since CL is often assumed to execute on mobile or IoT devices, the EM size is bounded by the small hardware memory capacity and makes it infeasible to meet the accuracy requirements for real-world applications. Specifically, all prior CL methods discard samples overflowed from the EM and can never retrieve them back for subsequent training steps, incurring loss of information that would exacerbate catastrophic forgetting. We explore a novel hierarchical EM management strategy to address the forgetting issue. In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAMs but in internal storage devices as well, which offer significantly larger capacity than the RAMs. Based on this insight, we propose to exploit the abundant storage to preserve past experiences and alleviate the forgetting by allowing CL to efficiently migrate samples between memory and storage without being hindered by the slow access speed of the storage. We call it Carousel Memory (CarM); a toy sketch of this two-tier idea follows this record. As CarM is complementary to existing CL methods, we conduct extensive evaluations of our method with seven popular CL methods and show that CarM significantly improves the accuracy of the methods across different settings by large margins in final average accuracy (up to 28.4%) while retaining the same training efficiency.
[ { "version": "v1", "created": "Thu, 14 Oct 2021 11:27:45 GMT" }, { "version": "v2", "created": "Fri, 15 Oct 2021 03:49:25 GMT" }, { "version": "v3", "created": "Thu, 29 Dec 2022 07:49:32 GMT" } ]
2023-01-02T00:00:00
[ [ "Lee", "Soobee", "" ], [ "Weerakoon", "Minindu", "" ], [ "Choi", "Jonghyun", "" ], [ "Zhang", "Minjia", "" ], [ "Wang", "Di", "" ], [ "Jeon", "Myeongjae", "" ] ]
new_dataset
0.997482
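A toy sketch of the hierarchical episodic-memory idea summarized above: a small, fast memory tier backed by a large storage tier, where samples evicted from memory are spilled to storage instead of being discarded, and can later migrate back. The FIFO eviction and swap policies below are placeholders, not CarM's actual management policies.

```python
# Two-tier episodic memory: evictions spill to storage rather than drop,
# so past samples remain retrievable for later training steps.

from collections import deque

class TwoTierEM:
    def __init__(self, mem_cap: int, sto_cap: int):
        self.memory = deque(maxlen=mem_cap)    # fast, small (RAM)
        self.storage = deque(maxlen=sto_cap)   # slow, large (flash)

    def insert(self, sample):
        if len(self.memory) == self.memory.maxlen:
            self.storage.append(self.memory.popleft())  # spill, don't discard
        self.memory.append(sample)

    def swap_in(self, k: int = 1):
        # migrate up to k of the oldest spilled samples back into memory
        for _ in range(min(k, len(self.storage))):
            self.insert(self.storage.popleft())

em = TwoTierEM(mem_cap=3, sto_cap=10)
for s in range(8):
    em.insert(s)          # memory holds {5,6,7}; storage holds {0..4}
em.swap_in(2)             # samples 0 and 1 return to the fast tier
print(list(em.memory), list(em.storage))
```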
2112.10374
Qi Tian
Qi Tian, Kun Kuang, Baoxiang Wang, Furui Liu, Fei Wu
CGIBNet: Bandwidth-constrained Communication with Graph Information Bottleneck in Multi-Agent Reinforcement Learning
null
null
null
null
cs.AI cs.MA
http://creativecommons.org/licenses/by/4.0/
Communication is one of the core components for cooperative multi-agent reinforcement learning (MARL). The communication bandwidth, in many real applications, is always subject to certain constraints. To improve communication efficiency, in this article, we propose to simultaneously optimize whom to communicate with and what to communicate for each agent in MARL. By initiating the communication between agents with a directed complete graph, we propose a novel communication model, named Communicative Graph Information Bottleneck Network (CGIBNet), to simultaneously compress the graph structure and the node information with the graph information bottleneck principle. The graph structure compression is designed to cut the redundant edges for determining whom to communicate with. The node information compression aims to address the problem of what to communicate via learning compact node representations. Moreover, CGIBNet is the first universal module for bandwidth-constrained communication, which can be applied to various training frameworks (i.e., policy-based and value-based MARL frameworks) and communication modes (i.e., single-round and multi-round communication). Extensive experiments are conducted in Traffic Control and StarCraft II environments. The results indicate that our method can achieve better performance in bandwidth-constrained settings compared with state-of-the-art algorithms.
[ { "version": "v1", "created": "Mon, 20 Dec 2021 07:53:44 GMT" }, { "version": "v2", "created": "Wed, 29 Dec 2021 17:25:01 GMT" }, { "version": "v3", "created": "Fri, 10 Jun 2022 07:26:00 GMT" }, { "version": "v4", "created": "Fri, 30 Dec 2022 10:24:54 GMT" } ]
2023-01-02T00:00:00
[ [ "Tian", "Qi", "" ], [ "Kuang", "Kun", "" ], [ "Wang", "Baoxiang", "" ], [ "Liu", "Furui", "" ], [ "Wu", "Fei", "" ] ]
new_dataset
0.997955
2112.11763
Sascha Kurz
Sascha Kurz
Divisible Codes
105 pages; typos corrected
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A linear code over $\mathbb{F}_q$ with the Hamming metric is called $\Delta$-divisible if the weights of all codewords are divisible by $\Delta$ (see the toy check after this record). Such codes were introduced by Harold Ward a few decades ago. Applications include subspace codes, partial spreads, vector space partitions, and distance-optimal codes. The determination of the possible lengths of projective divisible codes is an interesting and comprehensive challenge.
[ { "version": "v1", "created": "Wed, 22 Dec 2021 10:03:31 GMT" }, { "version": "v2", "created": "Thu, 29 Dec 2022 09:05:04 GMT" } ]
2023-01-02T00:00:00
[ [ "Kurz", "Sascha", "" ] ]
new_dataset
0.999423
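A tiny, self-contained illustration of the divisibility property defined above, restricted to binary linear codes: enumerate all codewords spanned by a generator matrix and check that every Hamming weight is divisible by Delta. The brute-force enumeration is only viable for toy dimensions.

```python
# Check Delta-divisibility of a binary linear code given by generator
# matrix G (a list of rows over F_2).

from itertools import product

def is_divisible(G, delta):
    k = len(G)
    for msg in product((0, 1), repeat=k):            # all 2^k messages
        word = [sum(m * g for m, g in zip(msg, col)) % 2
                for col in zip(*G)]                  # codeword = msg . G
        if sum(word) % delta != 0:                   # Hamming weight test
            return False
    return True

# [4,2] code {0000, 1100, 0011, 1111}: all weights even, so 2-divisible
print(is_divisible([[1, 1, 0, 0], [0, 0, 1, 1]], delta=2))  # True
```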
2201.05729
Zhecan Wang
Zhecan Wang, Noel Codella, Yen-Chun Chen, Luowei Zhou, Jianwei Yang, Xiyang Dai, Bin Xiao, Haoxuan You, Shih-Fu Chang, Lu Yuan
CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks
This paper is greatly modified and updated to be re-submitted to another conference. The new paper is under the name "Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks", https://doi.org/10.48550/arXiv.2204.10496
null
null
null
cs.CV cs.AI cs.CL cs.LG cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Contrastive language-image pretraining (CLIP) links vision and language modalities into a unified embedding space, yielding the tremendous potential for vision-language (VL) tasks. While early concurrent works have begun to study this potential on a subset of tasks, important questions remain: 1) What is the benefit of CLIP on unstudied VL tasks? 2) Does CLIP provide benefit in low-shot or domain-shifted scenarios? 3) Can CLIP improve existing approaches without impacting inference or pretraining complexity? In this work, we seek to answer these questions through two key contributions. First, we introduce an evaluation protocol that includes Visual Commonsense Reasoning (VCR), Visual Entailment (SNLI-VE), and Visual Question Answering (VQA), across a variety of data availability constraints and conditions of domain shift. Second, we propose an approach, named CLIP Targeted Distillation (CLIP-TD), to intelligently distill knowledge from CLIP into existing architectures using a dynamically weighted objective applied to adaptively selected tokens per instance. Experiments demonstrate that our proposed CLIP-TD leads to exceptional gains in the low-shot (up to 51.9%) and domain-shifted (up to 71.3%) conditions of VCR, while simultaneously improving performance under standard fully-supervised conditions (up to 2%), achieving state-of-art performance on VCR compared to other single models that are pretrained with image-text data only. On SNLI-VE, CLIP-TD produces significant gains in low-shot conditions (up to 6.6%) as well as fully supervised (up to 3%). On VQA, CLIP-TD provides improvement in low-shot (up to 9%), and in fully-supervised (up to 1.3%). Finally, CLIP-TD outperforms concurrent works utilizing CLIP for finetuning, as well as baseline naive distillation approaches. Code will be made available.
[ { "version": "v1", "created": "Sat, 15 Jan 2022 01:54:01 GMT" }, { "version": "v2", "created": "Mon, 16 May 2022 15:47:52 GMT" }, { "version": "v3", "created": "Wed, 28 Dec 2022 20:07:58 GMT" } ]
2023-01-02T00:00:00
[ [ "Wang", "Zhecan", "" ], [ "Codella", "Noel", "" ], [ "Chen", "Yen-Chun", "" ], [ "Zhou", "Luowei", "" ], [ "Yang", "Jianwei", "" ], [ "Dai", "Xiyang", "" ], [ "Xiao", "Bin", "" ], [ "You", "Haoxuan", "" ], [ "Chang", "Shih-Fu", "" ], [ "Yuan", "Lu", "" ] ]
new_dataset
0.998568
2202.01811
Chong Xiang
Chong Xiang, Alexander Valtchanov, Saeed Mahloujifar, Prateek Mittal
ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking
IEEE Symposium on Security and Privacy 2023; extended version
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object detectors, which are widely deployed in security-critical systems such as autonomous vehicles, have been found vulnerable to patch hiding attacks. An attacker can use a single physically-realizable adversarial patch to make the object detector miss the detection of victim objects and undermine the functionality of object detection applications. In this paper, we propose ObjectSeeker for certifiably robust object detection against patch hiding attacks. The key insight in ObjectSeeker is patch-agnostic masking: we aim to mask out the entire adversarial patch without knowing the shape, size, and location of the patch. This masking operation neutralizes the adversarial effect and allows any vanilla object detector to safely detect objects on the masked images. Remarkably, we can evaluate ObjectSeeker's robustness in a certifiable manner: we develop a certification procedure to formally determine if ObjectSeeker can detect certain objects against any white-box adaptive attack within the threat model, achieving certifiable robustness. Our experiments demonstrate a significant (~10%-40% absolute and ~2-6x relative) improvement in certifiable robustness over the prior work, as well as high clean performance (~1% drop compared with undefended models).
[ { "version": "v1", "created": "Thu, 3 Feb 2022 19:34:25 GMT" }, { "version": "v2", "created": "Wed, 28 Dec 2022 19:03:52 GMT" } ]
2023-01-02T00:00:00
[ [ "Xiang", "Chong", "" ], [ "Valtchanov", "Alexander", "" ], [ "Mahloujifar", "Saeed", "" ], [ "Mittal", "Prateek", "" ] ]
new_dataset
0.993836
2205.10019
Jiho Jin
Juhee Son, Jiho Jin, Haneul Yoo, JinYeong Bak, Kyunghyun Cho, Alice Oh
Translating Hanja Historical Documents to Contemporary Korean and English
2022 EMNLP Findings
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Annals of Joseon Dynasty (AJD) contain the daily records of the Kings of Joseon, the 500-year kingdom preceding the modern nation of Korea. The Annals were originally written in an archaic Korean writing system, `Hanja', and were translated into Korean from 1968 to 1993. The resulting translation was, however, too literal and contained many archaic Korean words; thus, a new expert translation effort began in 2012. Since then, the records of only one king have been completed in a decade. In parallel, expert translators are working on an English translation, also at a slow pace, and have produced only one king's records in English so far. Thus, we propose H2KE, a neural machine translation model that translates historical documents in Hanja to more easily understandable Korean and to English. Built on top of multilingual neural machine translation, H2KE learns to translate a historical document written in Hanja from both a full dataset of outdated Korean translations and a small dataset of more recently translated contemporary Korean and English. We compare our method against two baselines: a recent model that simultaneously learns to restore and translate Hanja historical documents, and a Transformer-based model trained only on newly translated corpora. The experiments reveal that our method significantly outperforms the baselines in terms of BLEU scores for both contemporary Korean and English translations. We further conduct extensive human evaluation which shows that our translation is preferred over the original expert translations by both experts and non-expert Korean speakers.
[ { "version": "v1", "created": "Fri, 20 May 2022 08:25:11 GMT" }, { "version": "v2", "created": "Fri, 7 Oct 2022 13:51:08 GMT" }, { "version": "v3", "created": "Fri, 28 Oct 2022 06:46:11 GMT" }, { "version": "v4", "created": "Fri, 30 Dec 2022 08:11:29 GMT" } ]
2023-01-02T00:00:00
[ [ "Son", "Juhee", "" ], [ "Jin", "Jiho", "" ], [ "Yoo", "Haneul", "" ], [ "Bak", "JinYeong", "" ], [ "Cho", "Kyunghyun", "" ], [ "Oh", "Alice", "" ] ]
new_dataset
0.999803
2206.07258
XiaoWen Wei
Xiaowen Wei, Xiuwen Gong, Yibing Zhan, Bo Du, Yong Luo, Wenbin Hu
CLNode: Curriculum Learning for Node Classification
null
null
null
null
cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Node classification is a fundamental graph-based task that aims to predict the classes of unlabeled nodes, for which Graph Neural Networks (GNNs) are the state-of-the-art methods. Current GNNs assume that nodes in the training set contribute equally during training. However, the quality of training nodes varies greatly, and the performance of GNNs could be harmed by two types of low-quality training nodes: (1) inter-class nodes situated near class boundaries that lack the typical characteristics of their corresponding classes. Because GNNs are data-driven approaches, training on these nodes could degrade the accuracy. (2) mislabeled nodes. In real-world graphs, nodes are often mislabeled, which can significantly degrade the robustness of GNNs. To mitigate the detrimental effect of the low-quality training nodes, we present CLNode, which employs a selective training strategy to train GNNs based on the quality of nodes. Specifically, we first design a multi-perspective difficulty measurer to accurately measure the quality of training nodes. Then, based on the measured qualities, we employ a training scheduler that selects appropriate training nodes to train the GNN in each epoch. To evaluate the effectiveness of CLNode, we conduct extensive experiments by incorporating it into six representative backbone GNNs. Experimental results on real-world networks demonstrate that CLNode is a general framework that can be combined with various GNNs to improve their accuracy and robustness.
[ { "version": "v1", "created": "Wed, 15 Jun 2022 02:43:36 GMT" }, { "version": "v2", "created": "Fri, 30 Dec 2022 12:20:56 GMT" } ]
2023-01-02T00:00:00
[ [ "Wei", "Xiaowen", "" ], [ "Gong", "Xiuwen", "" ], [ "Zhan", "Yibing", "" ], [ "Du", "Bo", "" ], [ "Luo", "Yong", "" ], [ "Hu", "Wenbin", "" ] ]
new_dataset
0.962747
2207.04690
Zhaohua Chen
Zhaohua Chen, Chang Wang, Qian Wang, Yuqi Pan, Zhuming Shi, Zheng Cai, Yukun Ren, Zhihua Zhu, Xiaotie Deng
Dynamic Budget Throttling in Repeated Second-Price Auctions
45 pages, 1 figure, 1 table
null
null
null
cs.GT cs.LG econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In today's online advertising markets, an important demand for an advertiser (buyer) is to control her total expenditure within a time span under some budget. Among all budget control approaches, throttling stands out as a popular one, in which the buyer participates in only a fraction of the auctions. This paper gives a theoretical panorama of a single buyer's dynamic budget throttling process in repeated second-price auctions, which is lacking in the literature. We first establish a lower bound on the regret and an upper bound on the asymptotic competitive ratio for any throttling algorithm, depending on whether the buyer's values are stochastic or adversarial. Second, on the algorithmic side, we consider two different information structures, with increasing difficulty in learning the stochastic distribution of the highest competing bid. We further propose the OGD-CB algorithm, which is oblivious to whether values are stochastic or adversarial and achieves asymptotically equal results under these two information structures. Specifically, with stochastic values, we demonstrate that this algorithm guarantees a near-optimal expected regret. When values are adversarial, we prove that the proposed algorithm reaches the upper bound on the asymptotic competitive ratio. Finally, we compare throttling with pacing, another widely adopted budget control method, in repeated second-price auctions. In the stochastic case, we illustrate that pacing is generally better than throttling for the buyer, which extends the known result that pacing is asymptotically optimal in this scenario. However, in the adversarial case, we give an exciting result indicating that throttling is the asymptotically optimal dynamic bidding strategy. Our results fill the gaps in the theoretical research of throttling in repeated auctions and comprehensively reveal the ability of this popular budget-smoothing strategy.
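The abstract does not spell out OGD-CB, so the following is only a generic illustration of dynamic throttling in repeated second-price auctions: a participation probability is nudged by a gradient-style step whenever the realized spend drifts from a per-round budget target. The value and bid distributions, step size, and update rule are all assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, B = 1000, 100.0          # rounds and total budget
target = B / T              # per-round target spend
theta, eta = 0.5, 0.01      # participation probability, step size

spend, utility = 0.0, 0.0
for t in range(T):
    v = rng.uniform(0, 2)       # buyer's value this round
    d = rng.uniform(0, 1.5)     # highest competing bid
    if rng.uniform() < theta and spend + d <= B and v >= d:
        spend += d              # truthful bid wins, pays second price
        utility += v - d
    # gradient-style update: throttle more when overspending
    pace = spend / (t + 1)
    theta = float(np.clip(theta - eta * (pace - target), 0.0, 1.0))
```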
[ { "version": "v1", "created": "Mon, 11 Jul 2022 08:12:02 GMT" }, { "version": "v2", "created": "Tue, 12 Jul 2022 08:46:34 GMT" }, { "version": "v3", "created": "Fri, 15 Jul 2022 02:04:52 GMT" }, { "version": "v4", "created": "Wed, 21 Dec 2022 15:33:58 GMT" }, { "version": "v5", "created": "Thu, 22 Dec 2022 05:01:10 GMT" }, { "version": "v6", "created": "Tue, 27 Dec 2022 04:53:36 GMT" } ]
2023-01-02T00:00:00
[ [ "Chen", "Zhaohua", "" ], [ "Wang", "Chang", "" ], [ "Wang", "Qian", "" ], [ "Pan", "Yuqi", "" ], [ "Shi", "Zhuming", "" ], [ "Cai", "Zheng", "" ], [ "Ren", "Yukun", "" ], [ "Zhu", "Zhihua", "" ], [ "Deng", "Xiaotie", "" ] ]
new_dataset
0.996335
2207.14556
Seyed Amir Tafrishi
Seyed Amir Tafrishi and Ankit A. Ravankar and Yasuhisa Hirata
PSM: A Predictive Safety Model for Body Motion Based On the Spring-Damper Pendulum
Accepted to 2022 International Conference on Intelligent Robots and Systems (IROS), 9 pages, 11 figures
null
10.1109/IROS47612.2022.9981274
null
cs.RO cs.SY eess.SY math.DS
http://creativecommons.org/licenses/by/4.0/
Quantifying the safety of human body orientation is an important issue in human-robot interaction. Knowing the changing physical constraints on human motion can improve the inspection of safe human motions and provide essential information about the stability and normality of human body orientations through real-time risk assessment. This information can also be used by cooperative robots and monitoring systems to evaluate and interact with the environment more freely. Furthermore, the workspace area can be made more deterministic given the known physical characteristics of safety. Based on this motivation, we propose a novel predictive safety model (PSM) that relies on the information of an inertial measurement unit on the human chest. The PSM encompasses a 3-DoF spring-damper pendulum model that predicts human motion based on a safe-motion dataset. The estimated safe orientation of humans is obtained by integrating a safety dataset and an elastic spring-damper model in a way that allows the proposed approach to realize complex motions at different safety levels. We conducted experiments in a real-world scenario to verify the proposed model. This approach can be used in different guidance/assistive robots and health monitoring systems to support and evaluate the human condition, particularly for elders.
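As a rough illustration of the spring-damper pendulum idea, the sketch below integrates a planar (1-DoF) analogue of the model with semi-implicit Euler steps; the paper's model is 3-DoF and data-driven, and every constant here is hypothetical.

```python
import numpy as np

# Minimal planar analogue of a spring-damper pendulum; parameter
# names and values are illustrative, not taken from the PSM paper.
g, L, k, c = 9.81, 0.9, 6.0, 1.5   # gravity, length, spring, damper
theta, omega = 0.35, 0.0           # chest pitch angle (rad) and rate
theta_safe, dt = 0.0, 0.01         # reference "safe" orientation, step

for _ in range(500):
    # spring-damper torque pulls the body toward the safe orientation
    alpha = -(g / L) * np.sin(theta) - k * (theta - theta_safe) - c * omega
    omega += dt * alpha            # semi-implicit Euler integration
    theta += dt * omega

risk = abs(theta - theta_safe)     # crude deviation-based safety proxy
print(f"final deviation: {risk:.4f} rad")
```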
[ { "version": "v1", "created": "Fri, 29 Jul 2022 09:11:36 GMT" } ]
2023-01-02T00:00:00
[ [ "Tafrishi", "Seyed Amir", "" ], [ "Ravankar", "Ankit A.", "" ], [ "Hirata", "Yasuhisa", "" ] ]
new_dataset
0.998094
2208.10657
Rayson Laroca
Rayson Laroca, Marcelo Santos, Valter Estevam, Eduardo Luz, David Menotti
A First Look at Dataset Bias in License Plate Recognition
Accepted for presentation at the Conference on Graphics, Patterns and Images (SIBGRAPI) 2022
null
10.1109/SIBGRAPI55357.2022.9991768
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Public datasets have played a key role in advancing the state of the art in License Plate Recognition (LPR). Although dataset bias has been recognized as a severe problem in the computer vision community, it has been largely overlooked in the LPR literature. LPR models are usually trained and evaluated separately on each dataset. In this scenario, they have often proven robust in the dataset they were trained in but showed limited performance in unseen ones. Therefore, this work investigates the dataset bias problem in the LPR context. We performed experiments on eight datasets, four collected in Brazil and four in mainland China, and observed that each dataset has a unique, identifiable "signature" since a lightweight classification model predicts the source dataset of a license plate (LP) image with more than 95% accuracy. In our discussion, we draw attention to the fact that most LPR models are probably exploiting such signatures to improve the results achieved in each dataset at the cost of losing generalization capability. These results emphasize the importance of evaluating LPR models in cross-dataset setups, as they provide a better indication of generalization (hence real-world performance) than within-dataset ones.
[ { "version": "v1", "created": "Tue, 23 Aug 2022 00:20:33 GMT" }, { "version": "v2", "created": "Fri, 30 Dec 2022 10:23:26 GMT" } ]
2023-01-02T00:00:00
[ [ "Laroca", "Rayson", "" ], [ "Santos", "Marcelo", "" ], [ "Estevam", "Valter", "" ], [ "Luz", "Eduardo", "" ], [ "Menotti", "David", "" ] ]
new_dataset
0.952012
2209.14350
Linghao Song
Linghao Song, Licheng Guo, Suhail Basalama, Yuze Chi, Robert F. Lucas, Jason Cong
Callipepla: Stream Centric Instruction Set and Mixed Precision for Accelerating Conjugate Gradient Solver
To appear in FPGA 2023
null
10.1145/3543622.3573182
null
cs.AR cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The continued growth in the processing power of FPGAs coupled with high bandwidth memories (HBM), makes systems like the Xilinx U280 credible platforms for linear solvers which often dominate the run time of scientific and engineering applications. In this paper, we present Callipepla, an accelerator for a preconditioned conjugate gradient linear solver (CG). FPGA acceleration of CG faces three challenges: (1) how to support an arbitrary problem and terminate acceleration processing on the fly, (2) how to coordinate long-vector data flow among processing modules, and (3) how to save off-chip memory bandwidth and maintain double (FP64) precision accuracy. To tackle the three challenges, we present (1) a stream-centric instruction set for efficient streaming processing and control, (2) vector streaming reuse (VSR) and decentralized vector flow scheduling to coordinate vector data flow among modules and further reduce off-chip memory accesses with a double memory channel design, and (3) a mixed precision scheme to save bandwidth yet still achieve effective double precision quality solutions. To the best of our knowledge, this is the first work to introduce the concept of VSR for data reusing between on-chip modules to reduce unnecessary off-chip accesses for FPGA accelerators. We prototype the accelerator on a Xilinx U280 HBM FPGA. Our evaluation shows that compared to the Xilinx HPC product, the XcgSolver, Callipepla achieves a speedup of 3.94x, 3.36x higher throughput, and 2.94x better energy efficiency. Compared to an NVIDIA A100 GPU which has 4x the memory bandwidth of Callipepla, we still achieve 77% of its throughput with 3.34x higher energy efficiency. The code is available at https://github.com/UCLA-VAST/Callipepla.
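To illustrate the mixed-precision flavor of such a solver, here is a minimal preconditioned CG sketch in which vectors are stored in FP32 (mimicking the bandwidth savings) while the dot-product reductions accumulate in FP64. It uses a simple Jacobi preconditioner and is only a software analogue, not Callipepla's FPGA design.

```python
import numpy as np

def pcg_mixed(A, b, tol=1e-8, iters=200):
    # Vectors kept in FP32 to mimic bandwidth savings; reductions in FP64.
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    Minv = (1.0 / np.diag(A)).astype(np.float32)   # Jacobi preconditioner
    x = np.zeros_like(b32)
    r = b32 - A32 @ x
    z = Minv * r
    p = z.copy()
    rz = np.dot(r.astype(np.float64), z.astype(np.float64))
    for _ in range(iters):
        Ap = A32 @ p
        alpha = rz / np.dot(p.astype(np.float64), Ap.astype(np.float64))
        x += np.float32(alpha) * p
        r -= np.float32(alpha) * Ap
        if np.linalg.norm(r.astype(np.float64)) < tol:
            break
        z = Minv * r
        rz_new = np.dot(r.astype(np.float64), z.astype(np.float64))
        p = z + np.float32(rz_new / rz) * p
        rz = rz_new
    return x

# Usage on a random SPD system:
n = 50
M = np.random.randn(n, n)
A = M @ M.T + n * np.eye(n)
b = np.random.randn(n)
x = pcg_mixed(A, b)
```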
[ { "version": "v1", "created": "Wed, 28 Sep 2022 18:26:30 GMT" }, { "version": "v2", "created": "Thu, 29 Dec 2022 06:43:44 GMT" } ]
2023-01-02T00:00:00
[ [ "Song", "Linghao", "" ], [ "Guo", "Licheng", "" ], [ "Basalama", "Suhail", "" ], [ "Chi", "Yuze", "" ], [ "Lucas", "Robert F.", "" ], [ "Cong", "Jason", "" ] ]
new_dataset
0.994391
2212.14125
Akash Mittal
Akash Mittal, Ragini Gupta
MuTable (Music Table): Turn any surface into musical instrument
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
With the rise of pervasive computing solutions, interactive surfaces have gained large popularity across multiple application domains, including smart boards for education, touch-enabled kiosks for smart retail, and smart mirrors for smart homes. Despite the increased popularity of such interactive surfaces, existing platforms are mostly limited to custom-built surfaces with attached sensors and hardware, which are expensive and require complicated design considerations. To address this, we design a low-cost, intuitive system called MuTable that repurposes any flat surface (such as a table top) into a live musical instrument. This provides the user with a unique, close-to-real-time experience of playing any type of musical instrument. It is achieved by projecting the instrument's shape on any tangible surface, sensor calibration, user tap detection, tap position identification, and associated sound generation. We demonstrate the performance of our working system by reporting an accuracy of 83% for detecting softer taps, 100% accuracy for detecting regular taps, and a precision of 95.7% for estimating hand location.
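As a toy illustration of the tap-detection step, the sketch below thresholds a 1-D sensor signal with a refractory window. The threshold, sampling rate, and synthetic signal are placeholders; MuTable's calibrated sensing pipeline is more involved.

```python
import numpy as np

def detect_taps(signal, fs, thresh=0.3, refractory=0.05):
    # Naive amplitude-threshold tap detector with a refractory window
    # so one physical tap is not reported multiple times.
    taps, last = [], -np.inf
    for i, s in enumerate(np.abs(signal)):
        t = i / fs
        if s > thresh and t - last > refractory:
            taps.append(t)
            last = t
    return taps

fs = 8000
sig = 0.05 * np.random.randn(fs)     # one second of sensor noise
sig[2000] = sig[5500] = 1.0          # two synthetic taps
print(detect_taps(sig, fs))          # -> approximately [0.25, 0.6875]
```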
[ { "version": "v1", "created": "Wed, 28 Dec 2022 23:42:10 GMT" } ]
2023-01-02T00:00:00
[ [ "Mittal", "Akash", "" ], [ "Gupta", "Ragini", "" ] ]
new_dataset
0.998954
2212.14143
Mai Nguyen
Siddhant Baldota, Shreyas Anantha Ramaprasad, Jaspreet Kaur Bhamra, Shane Luna, Ravi Ramachandra, Eugene Zen, Harrison Kim, Daniel Crawl, Ismael Perez, Ilkay Altintas, Garrison W. Cottrell, Mai H.Nguyen
Multimodal Wildland Fire Smoke Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research has shown that climate change creates warmer temperatures and drier conditions, leading to longer wildfire seasons and increased wildfire risks in the United States. These factors have in turn led to increases in the frequency, extent, and severity of wildfires in recent years. Given the danger posed by wildland fires to people, property, wildlife, and the environment, there is an urgency to provide tools for effective wildfire management. Early detection of wildfires is essential to minimizing potentially catastrophic destruction. In this paper, we present our work on integrating multiple data sources in SmokeyNet, a deep learning model using spatio-temporal information to detect smoke from wildland fires. Camera image data is integrated with weather sensor measurements and processed by SmokeyNet to create a multimodal wildland fire smoke detection system. We present our results comparing performance in terms of both accuracy and time-to-detection for multimodal data vs. a single data source. With a time-to-detection of only a few minutes, SmokeyNet can serve as an automated early notification system, providing a useful tool in the fight against destructive wildfires.
[ { "version": "v1", "created": "Thu, 29 Dec 2022 01:16:06 GMT" } ]
2023-01-02T00:00:00
[ [ "Baldota", "Siddhant", "" ], [ "Ramaprasad", "Shreyas Anantha", "" ], [ "Bhamra", "Jaspreet Kaur", "" ], [ "Luna", "Shane", "" ], [ "Ramachandra", "Ravi", "" ], [ "Zen", "Eugene", "" ], [ "Kim", "Harrison", "" ], [ "Crawl", "Daniel", "" ], [ "Perez", "Ismael", "" ], [ "Altintas", "Ilkay", "" ], [ "Cottrell", "Garrison W.", "" ], [ "Nguyen", "Mai H.", "" ] ]
new_dataset
0.999687
2212.14180
Juan Lagos
Juan Lagos, Esa Rahtu
PanDepth: Joint Panoptic Segmentation and Depth Completion
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Understanding 3D environments semantically is pivotal in autonomous driving applications, where multiple computer vision tasks are involved. Multi-task models provide different types of outputs for a given scene, yielding a more holistic representation while keeping the computational cost low. We propose a multi-task model for panoptic segmentation and depth completion using RGB images and sparse depth maps. Our model successfully predicts fully dense depth maps and performs semantic segmentation, instance segmentation, and panoptic segmentation for every input frame. Extensive experiments were conducted on the Virtual KITTI 2 dataset, and we demonstrate that our model solves multiple tasks, without a significant increase in computational cost, while maintaining high accuracy. Code is available at https://github.com/juanb09111/PanDepth.git
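A minimal sketch of the shared-encoder, multi-head layout described above, assuming RGB plus a sparse depth channel as input. The two toy heads stand in for PanDepth's panoptic and depth-completion branches, and all layer sizes are invented.

```python
import torch
import torch.nn as nn

class MultiTaskToy(nn.Module):
    # Toy shared-encoder/two-head layout; the paper's architecture
    # (panoptic heads, sparse-depth handling) is considerably richer.
    def __init__(self, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(            # RGB (3) + sparse depth (1)
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(64, n_classes, 1)   # semantic logits
        self.depth_head = nn.Conv2d(64, 1, 1)         # dense depth

    def forward(self, rgb, sparse_depth):
        f = self.encoder(torch.cat([rgb, sparse_depth], dim=1))
        return self.seg_head(f), self.depth_head(f)

model = MultiTaskToy()
seg, depth = model(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
```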
[ { "version": "v1", "created": "Thu, 29 Dec 2022 05:37:38 GMT" } ]
2023-01-02T00:00:00
[ [ "Lagos", "Juan", "" ], [ "Rahtu", "Esa", "" ] ]
new_dataset
0.994227
2212.14201
Yuan Fang
Menghan Dou, Tianrui Zou, Yuan Fang, Jing Wang, Dongyi Zhao, Lei Yu, Boying Chen, Wenbo Guo, Ye Li, Zhaoyun Chen, Guoping Guo
QPanda: high-performance quantum computing framework for multiple application scenarios
null
null
null
null
cs.PL quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of Noisy Intermediate-Scale Quantum (NISQ) devices and the verification of "quantum supremacy" in random number sampling and boson sampling, more and more fields hope to use quantum computers to solve specific problems, such as aerodynamic design, route allocation, financial option prediction, quantum chemical simulation for finding new materials, and the challenge that quantum cryptography poses to automotive industry security. However, these fields still need to continually explore quantum algorithms that suit current NISQ machines, so a quantum programming framework that can address multiple scenarios and application needs is required. Therefore, this paper proposes QPanda, an application-scenario-oriented quantum programming framework with high-performance simulation. It can be used, for example, to design quantum chemical simulation algorithms for exploring new materials, or to build a quantum machine learning framework to serve finance. The framework implements high-performance simulation of quantum circuits, a configurable fusion processing backend for quantum computers and supercomputers, and compilation and optimization methods for quantum programs on NISQ machines. Finally, experiments show that quantum jobs can be executed with high fidelity on a quantum processor using the quantum circuit compilation and optimization interface, and that the framework achieves better simulation performance.
[ { "version": "v1", "created": "Thu, 29 Dec 2022 07:38:50 GMT" } ]
2023-01-02T00:00:00
[ [ "Dou", "Menghan", "" ], [ "Zou", "Tianrui", "" ], [ "Fang", "Yuan", "" ], [ "Wang", "Jing", "" ], [ "Zhao", "Dongyi", "" ], [ "Yu", "Lei", "" ], [ "Chen", "Boying", "" ], [ "Guo", "Wenbo", "" ], [ "Li", "Ye", "" ], [ "Chen", "Zhaoyun", "" ], [ "Guo", "Guoping", "" ] ]
new_dataset
0.997678
2212.14209
Kangcheng Liu
Kangcheng Liu
An Enhanced LiDAR-Inertial SLAM System for Robotics Localization and Mapping
ICCA 2022 (Oral)
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
LiDAR- and inertial-sensor-based localization and mapping is of great significance for Unmanned Ground Vehicle related applications. In this work, we have developed an improved LiDAR-inertial localization and mapping system for unmanned ground vehicles that is appropriate for versatile search and rescue applications. Compared with existing LiDAR-based localization and mapping systems such as LOAM, we make two major contributions: the first is improving the robustness of particle-swarm-filter-based LiDAR SLAM, and the second is a loop closure method developed for global optimization to improve the localization accuracy of the whole system. We demonstrate by experiments that both the accuracy and the robustness of the LiDAR SLAM system are improved. Finally, we have carried out systematic experimental tests at the Hong Kong Science Park as well as in other complicated indoor and outdoor testing environments, which demonstrate the effectiveness and efficiency of our approach. Our system achieves high accuracy, robustness, and efficiency, and is of great importance for the localization and mapping of unmanned ground vehicles in unknown environments.
[ { "version": "v1", "created": "Thu, 29 Dec 2022 08:01:19 GMT" } ]
2023-01-02T00:00:00
[ [ "Liu", "Kangcheng", "" ] ]
new_dataset
0.999115
2212.14232
Xin Hu
Xin Hu, Lingling Zhang, Jun Liu, Jinfu Fan, Yang You, Yaqiang Wu
GPTR: Gestalt-Perception Transformer for Diagram Object Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diagram object detection is the key basis of practical applications such as textbook question answering. Because a diagram mainly consists of simple lines and color blocks, its visual features are sparser than those of natural images. In addition, diagrams usually express diverse knowledge, and many object categories in diagrams are low-frequency. As a result, traditional data-driven detection models are not suitable for diagrams. In this work, we propose a gestalt-perception transformer model for diagram object detection, which is based on an encoder-decoder architecture. Gestalt perception comprises a series of laws explaining human perception: the human visual system tends to perceive patches in an image that are similar, close, or connected without abrupt directional changes as a perceptual whole object. Inspired by these ideas, we build a gestalt-perception graph in the transformer encoder, which is composed of diagram patches as nodes and the relationships between patches as edges. This graph aims to group patches into objects via the laws of similarity, proximity, and smoothness implied in these edges, so that meaningful objects can be effectively detected. The experimental results demonstrate that the proposed GPTR achieves the best results on the diagram object detection task. Our model also obtains results comparable to the competitors in natural image object detection.
[ { "version": "v1", "created": "Thu, 29 Dec 2022 09:03:05 GMT" } ]
2023-01-02T00:00:00
[ [ "Hu", "Xin", "" ], [ "Zhang", "Lingling", "" ], [ "Liu", "Jun", "" ], [ "Fan", "Jinfu", "" ], [ "You", "Yang", "" ], [ "Wu", "Yaqiang", "" ] ]
new_dataset
0.999525
2212.14377
Barak Hoffer
Barak Hoffer, Nicol\'as Wainstein, Christopher M. Neumann, Eric Pop, Eilam Yalon, Shahar Kvatinsky
Stateful Logic using Phase Change Memory
null
IEEE Journal on Exploratory Solid-State Computational Devices and Circuits (Volume: 8, Issue: 2, December 2022)
10.1109/JXCDC.2022.3219731
null
cs.ET
http://creativecommons.org/licenses/by-nc-nd/4.0/
Stateful logic is a digital processing-in-memory technique that could address von Neumann memory bottleneck challenges while maintaining backward compatibility with standard von Neumann architectures. In stateful logic, memory cells are used to perform the logic operations without reading or moving any data outside the memory array. Stateful logic has previously been demonstrated using several resistive memory types, mostly resistive RAM (RRAM). Here we present a new method to design stateful logic using a different resistive memory - phase change memory (PCM). We propose and experimentally demonstrate four logic gate types (NOR, IMPLY, OR, NIMP) using commonly used PCM materials. Our stateful logic circuits differ from previously proposed circuits due to the different switching mechanism and functionality of PCM compared to RRAM. Since the proposed stateful logic gates form a functionally complete set, they enable sequential execution of any logic function within the memory, paving the way to PCM-based digital processing-in-memory systems.
[ { "version": "v1", "created": "Thu, 29 Dec 2022 17:20:35 GMT" } ]
2023-01-02T00:00:00
[ [ "Hoffer", "Barak", "" ], [ "Wainstein", "Nicolás", "" ], [ "Neumann", "Christopher M.", "" ], [ "Pop", "Eric", "" ], [ "Yalon", "Eilam", "" ], [ "Kvatinsky", "Shahar", "" ] ]
new_dataset
0.973457
2212.14397
Krzysztof Lis
Krzysztof Lis, Matthias Rottmann, Sina Honari, Pascal Fua, Mathieu Salzmann
AttEntropy: Segmenting Unknown Objects in Complex Scenes using the Spatial Attention Entropy of Semantic Segmentation Transformers
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision transformers have emerged as powerful tools for many computer vision tasks. It has been shown that their features and class tokens can be used for salient object segmentation. However, the properties of segmentation transformers remain largely unstudied. In this work we conduct an in-depth study of the spatial attentions of different backbone layers of semantic segmentation transformers and uncover interesting properties. The spatial attentions of a patch intersecting with an object tend to concentrate within the object, whereas the attentions of larger, more uniform image areas rather follow a diffusive behavior. In other words, vision transformers trained to segment a fixed set of object classes generalize to objects well beyond this set. We exploit this by extracting heatmaps that can be used to segment unknown objects within diverse backgrounds, such as obstacles in traffic scenes. Our method is training-free and its computational overhead negligible. We use off-the-shelf transformers trained for street-scene segmentation to process other scene types.
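The heatmap extraction can be illustrated with plain Shannon entropy: given softmaxed spatial attention maps, compute the entropy of each query patch's attention distribution and average over heads; low-entropy (concentrated) patches behave like objects. The shapes and the 14x14 patch grid below are assumptions, not the paper's exact configuration.

```python
import numpy as np

def attention_entropy(attn):
    # attn: (heads, N, N) softmaxed spatial attention over N patches.
    p = np.clip(attn, 1e-12, 1.0)
    h = -(p * np.log(p)).sum(-1)       # entropy of each query's attention
    return h.mean(0)                   # average over heads -> (N,) heatmap

heads, N = 8, 196                      # e.g. a 14x14 patch grid
logits = np.random.randn(heads, N, N)
attn = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
heat = attention_entropy(attn).reshape(14, 14)  # low entropy ~ object-like
```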
[ { "version": "v1", "created": "Thu, 29 Dec 2022 18:07:56 GMT" } ]
2023-01-02T00:00:00
[ [ "Lis", "Krzysztof", "" ], [ "Rottmann", "Matthias", "" ], [ "Honari", "Sina", "" ], [ "Fua", "Pascal", "" ], [ "Salzmann", "Mathieu", "" ] ]
new_dataset
0.998227
2212.14402
Michael Bommarito II
Michael Bommarito II, Daniel Martin Katz
GPT Takes the Bar Exam
Additional material available online at https://github.com/mjbommar/gpt-takes-the-bar-exam
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as "the Bar Exam," as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant completes at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score under the rate required to pass the exam on their first try. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in "AI?" In this research, we document our experimental evaluation of the performance of OpenAI's `text-davinci-003` model, often referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5's zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5's zero-shot performance. For the best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's ranking of responses is also highly correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.
[ { "version": "v1", "created": "Thu, 29 Dec 2022 18:19:43 GMT" } ]
2023-01-02T00:00:00
[ [ "Bommarito", "Michael", "II" ], [ "Katz", "Daniel Martin", "" ] ]
new_dataset
0.999577
2212.14410
Niladri Das
Niladri Das and B. Sundar Rajan
Shared Cache Coded Caching Schemes Using Designs and Circuits of Matrices
36 pages, the paper has been submitted to IEEE Transactions on Information Theory
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study shared cache coded caching (SC-CC): a set of caches serves a larger set of users; each user accesses one cache, and a cache may serve many users. For this problem, under uncoded placement, Parrinello, \"Unsal, and Elia showed an optimal SC-CC scheme, in which the subpacketization level depends upon the number of caches. We show an SC-CC scheme where the subpacketization level does not directly depend upon the number of users or caches; any number of caches and users can be accommodated for a fixed subpacketization level. Furthermore, new caches can be added without re-doing the placement of the existing caches. We show that, given an upper limit on the allowable subpacketization level, our SC-CC scheme may achieve a lower rate than other relevant SC-CC schemes. Our scheme is constructed using matrices and designs. A matroid can be obtained from a matrix over a finite field; the placement of our scheme is decided by a design constructed from a matrix; the circuits of the matroid obtained from the matrix and the design are used to decide the delivery.
[ { "version": "v1", "created": "Thu, 29 Dec 2022 18:35:54 GMT" } ]
2023-01-02T00:00:00
[ [ "Das", "Niladri", "" ], [ "Rajan", "B. Sundar", "" ] ]
new_dataset
0.987234
2212.14438
Edgar Martinez-Moro
G\"uls\"um G\"ozde Y{\i}lmazg\"u\c{c} and Javier de la Cruz and Edgar Mart\'inez-Moro
Abelian and consta-Abelian polyadic codes over affine algebras with a finite commutative chain coefficient ring
null
null
null
null
cs.IT cs.DM math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we define Abelian and consta-Abelian polyadic codes over rings defined as affine algebras over chain rings. For that aim, we use the classical construction via splittings and multipliers of the underlying Abelian group. We also derive some results on the structure of the associated polyadic codes and the number of codes under these conditions.
[ { "version": "v1", "created": "Thu, 29 Dec 2022 19:25:13 GMT" } ]
2023-01-02T00:00:00
[ [ "Yılmazgüç", "Gülsüm Gözde", "" ], [ "de la Cruz", "Javier", "" ], [ "Martínez-Moro", "Edgar", "" ] ]
new_dataset
0.998042
2212.14494
Mario Rom\'an
Elena Di Lavore, Giovanni de Felice, Mario Rom\'an
Coinductive Streams in Monoidal Categories
Expanded version of Monoidal Streams for Dataflow Programming, published in LiCS'22, arXiv:2202.02061. 57 pages, 33 figures
null
null
null
cs.LO math.CT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce monoidal streams. Monoidal streams are a generalization of causal stream functions, which can be defined in cartesian monoidal categories, to arbitrary symmetric monoidal categories. In the same way that streams provide semantics to dataflow programming with pure functions, monoidal streams provide semantics to dataflow programming with theories of processes represented by a symmetric monoidal category. Monoidal streams also form a feedback monoidal category. In the same way that we can use a coinductive stream calculus to reason about signal flow graphs, we can use coinductive string diagrams to reason about feedback monoidal categories. As an example, we study syntax for a stochastic dataflow language, with semantics in stochastic monoidal streams.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 00:25:12 GMT" } ]
2023-01-02T00:00:00
[ [ "Di Lavore", "Elena", "" ], [ "de Felice", "Giovanni", "" ], [ "Román", "Mario", "" ] ]
new_dataset
0.998401
2212.14521
Hiram H. L\'opez
Sarah E. Anderson, Eduardo Camps-Moreno, Hiram H. L\'opez, Gretchen L. Matthews, Diego Ruano, Ivan Soprunov
Relative hulls and quantum codes
null
null
null
null
cs.IT math.IT math.RA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relative hull of a code $C_1$ with respect to another code $C_2$ is the intersection $C_1\cap C_2^\perp$. We prove that the dimension of the relative hull can always be repeatedly reduced by one by replacing any of the two codes with an equivalent one, down to a specified lower bound. We show how to construct an equivalent code $C_1^\prime$ of $C_1$ (or $C_2^\prime$ of $C_2$) such that the dimension of $C_1^\prime \cap C_2^{\perp}$ (or $C_1 \cap C_2^{\prime\perp}$) is one less than the dimension of $C_1\cap C_2^\perp$. Given codes $C_1$ and $C_2$, we provide a method to specify a code equivalent to $C_2$ which gives a relative hull of any specified dimension, between the difference in dimensions of $C_1$ and $C_2$ and the dimension of the relative hull of $C_1$ with respect to $C_2$. These results apply to hulls taken with respect to the $e$-Galois inner product, which has as special cases both the Euclidean and Hermitian inner products. We also give conditions under which the dimension of the relative hull can be increased by one via equivalent codes. We study the consequences of the relative hull properties on quantum codes constructed via CSS construction. Finally, we use families of decreasing monomial-Cartesian codes to generate pure or impure quantum codes.
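For intuition, the dimension of the relative hull can be computed from ranks via $\dim(C_1\cap C_2^\perp) = \dim C_1 + \dim C_2^\perp - \dim(C_1 + C_2^\perp)$. The sketch below does exactly that over GF(2) with a small Gaussian-elimination helper; it covers only the binary Euclidean case, not the general $e$-Galois setting of the paper.

```python
import numpy as np

def rref2(M):
    # Reduced row echelon form over GF(2); returns (R, pivot_cols).
    M = M.copy() % 2
    pivots, r = [], 0
    for c in range(M.shape[1]):
        rows = np.nonzero(M[r:, c])[0]
        if rows.size == 0:
            continue
        M[[r, r + rows[0]]] = M[[r + rows[0], r]]
        for i in np.nonzero(M[:, c])[0]:
            if i != r:
                M[i] ^= M[r]
        pivots.append(c)
        r += 1
        if r == M.shape[0]:
            break
    return M, pivots

def nullspace2(G):
    # Basis of the dual code C^perp from a generator matrix G over GF(2).
    R, piv = rref2(G)
    n = G.shape[1]
    basis = []
    for f in (c for c in range(n) if c not in piv):
        v = np.zeros(n, dtype=np.uint8)
        v[f] = 1
        for i, c in enumerate(piv):
            v[c] = R[i, f]
        basis.append(v)
    return np.array(basis, dtype=np.uint8)

def hull_dim(G1, G2):
    H2 = nullspace2(G2)                        # generators of C2^perp
    r1, r2 = len(rref2(G1)[1]), len(rref2(H2)[1])
    rsum = len(rref2(np.vstack([G1, H2]))[1])  # dim(C1 + C2^perp)
    return r1 + r2 - rsum                      # dim(C1 ∩ C2^perp)

G1 = np.array([[1, 0, 0, 1], [0, 1, 0, 1]], dtype=np.uint8)
G2 = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=np.uint8)
print(hull_dim(G1, G2))  # -> 1
```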
[ { "version": "v1", "created": "Fri, 30 Dec 2022 02:49:32 GMT" } ]
2023-01-02T00:00:00
[ [ "Anderson", "Sarah E.", "" ], [ "Camps-Moreno", "Eduardo", "" ], [ "López", "Hiram H.", "" ], [ "Matthews", "Gretchen L.", "" ], [ "Ruano", "Diego", "" ], [ "Soprunov", "Ivan", "" ] ]
new_dataset
0.998575
2212.14569
Prafful Kumar Khoba
Prafful Kumar Khoba, Chirag Parikh, Rohit Saluja, Ravi Kiran Sarvadevabhatla, C. V. Jawahar
A Fine-Grained Vehicle Detection (FGVD) Dataset for Unconstrained Roads
null
null
10.1145/3571600.3571626
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors such as YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 06:50:15 GMT" } ]
2023-01-02T00:00:00
[ [ "Khoba", "Prafful Kumar", "" ], [ "Parikh", "Chirag", "" ], [ "Saluja", "Rohit", "" ], [ "Sarvadevabhatla", "Ravi Kiran", "" ], [ "Jawahar", "C. V.", "" ] ]
new_dataset
0.999894
2212.14574
DongKi Noh
DongKi Noh, Changki Sung, Teayoung Uhm, WooJu Lee, Hyungtae Lim, Jaeseok Choi, Kyuewang Lee, Dasol Hong, Daeho Um, Inseop Chung, Hochul Shin, MinJung Kim, Hyoung-Rock Kim, SeungMin Baek, and Hyun Myung
X-MAS: Extremely Large-Scale Multi-Modal Sensor Dataset for Outdoor Surveillance in Real Environments
8 pages, 13 figures, IEEE Robotics and Automation Letters
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In the robotics and computer vision communities, extensive studies have been conducted on surveillance tasks, including human detection, tracking, and motion recognition with a camera. Additionally, deep learning algorithms are widely utilized in the aforementioned tasks, as in other computer vision tasks. Existing public datasets are insufficient for developing learning-based methods that handle various surveillance tasks in outdoor and extreme situations, such as harsh weather and low-illuminance conditions. Therefore, we introduce a new large-scale outdoor surveillance dataset named eXtremely large-scale Multi-modAl Sensor dataset (X-MAS), containing more than 500,000 image pairs and first-person-view data annotated by well-trained annotators. Moreover, a single pair contains multi-modal data (e.g., an IR image, an RGB image, a thermal image, a depth image, and a LiDAR scan). To the best of our knowledge, this is the first large-scale first-person-view outdoor multi-modal dataset focusing on surveillance tasks. We present an overview of the proposed dataset with statistics and present methods for exploiting our dataset with deep learning-based algorithms. The latest information on the dataset and our study are available at https://github.com/lge-robot-navi, and the dataset will be available for download through a server.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 07:26:26 GMT" } ]
2023-01-02T00:00:00
[ [ "Noh", "DongKi", "" ], [ "Sung", "Changki", "" ], [ "Uhm", "Teayoung", "" ], [ "Lee", "WooJu", "" ], [ "Lim", "Hyungtae", "" ], [ "Choi", "Jaeseok", "" ], [ "Lee", "Kyuewang", "" ], [ "Hong", "Dasol", "" ], [ "Um", "Daeho", "" ], [ "Chung", "Inseop", "" ], [ "Shin", "Hochul", "" ], [ "Kim", "MinJung", "" ], [ "Kim", "Hyoung-Rock", "" ], [ "Baek", "SeungMin", "" ], [ "Myung", "Hyun", "" ] ]
new_dataset
0.999804
2212.14641
Juan-Pablo Ortega
Lukas Gonon, Lyudmila Grigoryeva, and Juan-Pablo Ortega
Reservoir kernels and Volterra series
10 pages, 2 figures, 1 table
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
A universal kernel is constructed whose sections approximate any causal and time-invariant filter in the fading memory category with inputs and outputs in a finite-dimensional Euclidean space. This kernel is built using the reservoir functional associated with a state-space representation of the Volterra series expansion available for any analytic fading memory filter. It is hence called the Volterra reservoir kernel. Even though the state-space representation and the corresponding reservoir feature map are defined on an infinite-dimensional tensor algebra space, the kernel map is characterized by explicit recursions that are readily computable for specific data sets when employed in estimation problems using the representer theorem. We showcase the performance of the Volterra reservoir kernel in a popular data science application in relation to bitcoin price prediction.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 11:33:20 GMT" } ]
2023-01-02T00:00:00
[ [ "Gonon", "Lukas", "" ], [ "Grigoryeva", "Lyudmila", "" ], [ "Ortega", "Juan-Pablo", "" ] ]
new_dataset
0.972246
2212.14649
Dmitry Yudin
Dmitry Yudin, Yaroslav Solomentsev, Ruslan Musaev, Aleksei Staroverov, Aleksandr I. Panov
HPointLoc: Point-based Indoor Place Recognition using Synthetic RGB-D Images
Accepted for publishing in proceedings of the 29th International Conference on Neural Information Processing (ICONIP 2022)
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
We present a novel dataset named HPointLoc, specially designed for exploring the capabilities of visual place recognition in indoor environments and loop detection in simultaneous localization and mapping. The loop detection sub-task is especially relevant when a robot with an on-board RGB-D camera can drive past the same place (``Point") at different angles. The dataset is based on the popular Habitat simulator, in which it is possible to generate photorealistic indoor scenes using both one's own sensor data and open datasets, such as Matterport3D. To study the main stages of solving the place recognition problem on the HPointLoc dataset, we propose a new modular approach named PNTR. It first performs image retrieval with the Patch-NetVLAD method, then extracts keypoints and matches them using R2D2, LoFTR or SuperPoint with SuperGlue, and finally performs a camera pose optimization step with TEASER++. Such a solution to the place recognition problem has not been previously studied in existing publications. The PNTR approach shows the best quality metrics on the HPointLoc dataset and has high potential for real use in localization systems for unmanned vehicles. The proposed dataset and framework are publicly available: https://github.com/metra4ok/HPointLoc.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 12:20:56 GMT" } ]
2023-01-02T00:00:00
[ [ "Yudin", "Dmitry", "" ], [ "Solomentsev", "Yaroslav", "" ], [ "Musaev", "Ruslan", "" ], [ "Staroverov", "Aleksei", "" ], [ "Panov", "Aleksandr I.", "" ] ]
new_dataset
0.99979
2212.14671
Collin Connors
Collin Connors and Dilip Sarkar
Novel Architecture to Create and Maintain Personal Blockchains
null
null
null
null
cs.CY cs.CR cs.DB cs.SE
http://creativecommons.org/licenses/by/4.0/
Blockchain has been touted as a revolutionary technology. However, despite the excitement, blockchain has not been adopted in many fields. Many are hesitant to adopt blockchain technology due to privacy concerns, barriers to use, or lack of practical use cases. In this work, we outline a potential blockchain use case for tracking financial transactions across multiple financial institutions. We show the downsides of traditional centralized approaches and that blockchain approaches fail to give all the privacy and accessibility required for this use case. Thus we propose a novel blockchain architecture to support our use case. This novel architecture combines the ease of use of public blockchains with the privacy of private blockchains by allowing users to create personal blockchains. We believe this novel personal blockchain architecture will lead to more blockchain adoption, particularly in use cases handling private data.
[ { "version": "v1", "created": "Mon, 12 Dec 2022 02:05:59 GMT" } ]
2023-01-02T00:00:00
[ [ "Connors", "Collin", "" ], [ "Sarkar", "Dilip", "" ] ]
new_dataset
0.964341
2212.14674
G\"urkan Soykan
G\"urkan Soykan, Deniz Yuret, Tevfik Metin Sezgin
A Comprehensive Gold Standard and Benchmark for Comics Text Detection and Recognition
33 pages, 10 figures, 16 tables
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
This study focuses on improving the optical character recognition (OCR) data for panels in the COMICS dataset, the largest dataset containing text and images from comic books. To do this, we developed a pipeline for OCR processing and labeling of comic books and created the first text detection and recognition datasets for western comics, called "COMICS Text+: Detection" and "COMICS Text+: Recognition". We evaluated the performance of state-of-the-art text detection and recognition models on these datasets and found significant improvement in word accuracy and normalized edit distance compared to the text in COMICS. We also created a new dataset called "COMICS Text+", which contains the extracted text from the textboxes in the COMICS dataset. Using the improved text data of COMICS Text+ in an existing comics processing model resulted in state-of-the-art performance on cloze-style tasks without changing the model architecture. The COMICS Text+ dataset can be a valuable resource for researchers working on tasks including text detection, recognition, and high-level processing of comics, such as narrative understanding, character relations, and story generation. All the data and inference instructions can be accessed at https://github.com/gsoykan/comics_text_plus.
[ { "version": "v1", "created": "Tue, 27 Dec 2022 12:05:23 GMT" } ]
2023-01-02T00:00:00
[ [ "Soykan", "Gürkan", "" ], [ "Yuret", "Deniz", "" ], [ "Sezgin", "Tevfik Metin", "" ] ]
new_dataset
0.999776
2212.14710
Pengwei Yin
Pengwei Yin, Jiawu Dai, Jingjing Wang, Di Xie and Shiliang Pu
NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation
10 pages, 8 figures, submitted to CVPR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gaze estimation is the fundamental basis for many visual tasks. Yet, the high cost of acquiring gaze datasets with 3D annotations hinders the optimization and application of gaze estimation models. In this work, we propose a novel Head-Eye redirection parametric model based on Neural Radiance Fields, which allows dense gaze data generation with view consistency and accurate gaze direction. Moreover, our head-eye redirection parametric model can decouple the face and eyes for separate neural rendering, so it can achieve the purpose of separately controlling the attributes of the face, identity, illumination, and eye gaze direction. Thus, diverse 3D-aware gaze datasets can be obtained by manipulating the latent codes belonging to different face attributes in an unsupervised manner. Extensive experiments on several benchmarks demonstrate the effectiveness of our method in domain generalization and domain adaptation for gaze estimation tasks.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 13:52:28 GMT" } ]
2023-01-02T00:00:00
[ [ "Yin", "Pengwei", "" ], [ "Dai", "Jiawu", "" ], [ "Wang", "Jingjing", "" ], [ "Xie", "Di", "" ], [ "Pu", "Shiliang", "" ] ]
new_dataset
0.982636
2212.14742
Xinyuan Chen
Xinyuan Chen, Yangchen Xie, Li Sun and Yue Lu
DGFont++: Robust Deformable Generative Networks for Unsupervised Font Generation
arXiv admin note: substantial text overlap with arXiv:2104.03064
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Automatic font generation without human experts is a practical and significant problem, especially for languages that consist of a large number of characters. Existing methods for font generation are often based on supervised learning. They require a large number of paired data, which are labor-intensive and expensive to collect. In contrast, common unsupervised image-to-image translation methods are not applicable to font generation, as they often define style as the set of textures and colors. In this work, we propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++). We introduce a feature deformation skip connection (FDSC) to learn local patterns and geometric transformations between fonts. The FDSC predicts pairs of displacement maps and employs the predicted maps to apply deformable convolution to the low-level content feature maps. The outputs of FDSC are fed into a mixer to generate the final results. Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by understanding the similarities and dissimilarities of fonts. To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently. In addition to the adversarial loss, two reconstruction losses are adopted to constrain the domain-invariant characteristics between generated images and content images. Taking advantage of FDSC and the adopted loss functions, our model is able to maintain spatial information and generate high-quality character images in an unsupervised manner. Experiments demonstrate that our model generates character images of higher quality than state-of-the-art methods.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 14:35:10 GMT" } ]
2023-01-02T00:00:00
[ [ "Chen", "Xinyuan", "" ], [ "Xie", "Yangchen", "" ], [ "Sun", "Li", "" ], [ "Lu", "Yue", "" ] ]
new_dataset
0.980703
2212.14814
R\'emi Pellerin
Christophe Crespelle, R\'emi Pellerin, St\'ephan Thomass\'e
A quasi-quadratic vertex Kernel for Cograph edge editing
null
null
null
null
cs.DS cs.CC
http://creativecommons.org/licenses/by-nc-sa/4.0/
We provide an $O(k^2 \log k)$ vertex kernel for cograph edge editing. This improves on the cubic kernel found by Guillemot, Havet, Paul and Perez [1], which involved four reduction rules. We generalize one of their rules, based on packings of induced paths of length four, by introducing t-modules, which are modules up to t edge modifications. The key fact is that large t-modules cannot be edited more than t times, and this allows us to obtain a near-quadratic kernel. The extra $\log k$ factor seems tricky to remove, as it is necessary in the combinatorial lemma on trees which is central in our proof. Nevertheless, we think that a quadratic bound should be reachable.
[ { "version": "v1", "created": "Fri, 30 Dec 2022 16:23:27 GMT" } ]
2023-01-02T00:00:00
[ [ "Crespelle", "Christophe", "" ], [ "Pellerin", "Rémi", "" ], [ "Thomassé", "Stéphan", "" ] ]
new_dataset
0.993814
2003.00982
Vijay Prakash Dwivedi
Vijay Prakash Dwivedi, Chaitanya K. Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson
Benchmarking Graph Neural Networks
Benchmarking framework on GitHub at https://github.com/graphdeeplearning/benchmarking-gnns
Journal of Machine Learning Research (JMLR), 2022
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. This emerging field has witnessed an extensive growth of promising techniques that have been applied with success to computer science, mathematics, biology, physics and chemistry. But for any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress. This led us in March 2020 to release a benchmark framework that i) comprises a diverse collection of mathematical and real-world graphs, ii) enables fair model comparison with the same parameter budget to identify key architectures, iii) has an open-source, easy-to-use and reproducible code infrastructure, and iv) is flexible for researchers to experiment with new theoretical ideas. As of December 2022, the GitHub repository has reached 2,000 stars and 380 forks, which demonstrates the utility of the proposed open-source framework through its wide usage by the GNN community. In this paper, we present an updated version of our benchmark with a concise presentation of the aforementioned framework characteristics, an additional medium-sized molecular dataset, AQSOL, similar to the popular ZINC but with a real-world measured chemical target, and discuss how this framework can be leveraged to explore new GNN designs and insights. As a proof of the value of our benchmark, we study the case of graph positional encoding (PE) in GNNs, which was introduced with this benchmark and has since spurred interest in exploring more powerful PE for Transformers and GNNs in a robust experimental setting.
[ { "version": "v1", "created": "Mon, 2 Mar 2020 15:58:46 GMT" }, { "version": "v2", "created": "Thu, 11 Jun 2020 16:45:15 GMT" }, { "version": "v3", "created": "Fri, 3 Jul 2020 16:38:28 GMT" }, { "version": "v4", "created": "Wed, 11 May 2022 17:07:03 GMT" }, { "version": "v5", "created": "Wed, 28 Dec 2022 04:57:24 GMT" } ]
2022-12-29T00:00:00
[ [ "Dwivedi", "Vijay Prakash", "" ], [ "Joshi", "Chaitanya K.", "" ], [ "Luu", "Anh Tuan", "" ], [ "Laurent", "Thomas", "" ], [ "Bengio", "Yoshua", "" ], [ "Bresson", "Xavier", "" ] ]
new_dataset
0.982519
2103.14972
Francielle Alves Vargas
Francielle Alves Vargas, Isabelle Carvalho, Fabiana Rodrigues de G\'oes, Fabr\'icio Benevenuto, Thiago Alexandre Salgueiro Pardo
HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection
Published at LREC 2022 Proceedings
https://aclanthology.org/2022.lrec-1.777/
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Due to the severity of the social media offensive and hateful comments in Brazil, and the lack of research in Portuguese, this paper provides the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection. The HateBR corpus was collected from the comment section of Brazilian politicians' accounts on Instagram and manually annotated by specialists, reaching a high inter-annotator agreement. The corpus consists of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level classification (highly, moderately, and slightly offensive), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). We also implemented baseline experiments for offensive language and hate speech detection and compared them with a literature baseline. Results show that the baseline experiments on our corpus outperform the current state-of-the-art for the Portuguese language.
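As a hedged illustration of the kind of baseline experiment mentioned above, here is a minimal TF-IDF plus logistic-regression pipeline for the binary (offensive vs. non-offensive) layer. The example texts and labels are placeholders, not HateBR data, and the paper's actual baselines may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder Portuguese-like texts; 1 = offensive, 0 = non-offensive.
texts = ["comentario ofensivo exemplo", "comentario neutro exemplo"] * 20
labels = [1, 0] * 20

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["comentario neutro exemplo"]))   # -> [0]
```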
[ { "version": "v1", "created": "Sat, 27 Mar 2021 19:43:16 GMT" }, { "version": "v2", "created": "Sat, 3 Apr 2021 22:15:40 GMT" }, { "version": "v3", "created": "Tue, 6 Apr 2021 10:02:52 GMT" }, { "version": "v4", "created": "Sun, 2 May 2021 20:58:41 GMT" }, { "version": "v5", "created": "Sun, 9 May 2021 16:41:18 GMT" }, { "version": "v6", "created": "Tue, 27 Dec 2022 12:24:13 GMT" } ]
2022-12-29T00:00:00
[ [ "Vargas", "Francielle Alves", "" ], [ "Carvalho", "Isabelle", "" ], [ "de Góes", "Fabiana Rodrigues", "" ], [ "Benevenuto", "Fabrício", "" ], [ "Pardo", "Thiago Alexandre Salgueiro", "" ] ]
new_dataset
0.993083
2203.10193
Tomoyuki Yamakami
Tomoyuki Yamakami
Between SC and LOGDCFL: Families of Languages Accepted by Logarithmic-Space Deterministic Auxiliary Depth-k Storage Automata
(A4, 10pt, p28) This exposition corrects and expands its preliminary report, which appeared in the Proceedings of the 27th International Conference on Computing and Combinatorics (COCOON 2021), Tainan, Taiwan, October 24--26, 2021, Lecture Notes in Computer Science, Springer, vol. 13025, pp. 164--175, 2021. An oral presentation was given online due to the coronavirus pandemic
null
null
null
cs.FL cs.CC
http://creativecommons.org/licenses/by/4.0/
The closure of deterministic context-free languages under logarithmic-space many-one reductions ($\mathrm{L}$-m-reductions), known as LOGDCFL, has been studied in depth from an aspect of parallel computability because it is nicely situated between $\mathrm{L}$ and $\mathrm{AC}^{1}\cap\mathrm{SC}^2$. By replacing the memory device from pushdown stacks with access-controlled storage tapes, we introduce a computational model of one-way deterministic depth-$k$ storage automata ($k$-sda's) whose tape cells are freely modified during the first $k$ accesses and then become blank forever. These $k$-sda's naturally induce the language family $k\mathrm{SDA}$. Similarly to $\mathrm{LOGDCFL}$, we study the closure $\mathrm{LOG}k\mathrm{SDA}$ of all languages in $k\mathrm{SDA}$ under $\mathrm{L}$-m-reductions. We demonstrate that $\mathrm{DCFL}\subseteq k\mathrm{SDA}\subseteq \mathrm{SC}^k$ by significantly extending Cook's early result (1979) of $\mathrm{DCFL}\subseteq \mathrm{SC}^2$. The entire hierarchy of $\mathrm{LOG}k\mathrm{SDA}$ for all $k\geq1$ therefore lies between $\mathrm{LOGDCFL}$ and $\mathrm{SC}$. As an immediate consequence, we obtain the same simulation bounds for Hibbard's limited automata. We further characterize $\mathrm{LOG}k\mathrm{SDA}$ in terms of a new machine model, called logarithmic-space deterministic auxiliary depth-$k$ storage automata that run in polynomial time. These machines are as powerful as polynomial-time two-way multi-head deterministic depth-$k$ storage automata. We also provide a ``generic'' $\mathrm{LOG}k\mathrm{SDA}$-complete language under $\mathrm{L}$-m-reductions by constructing a two-way universal simulator working for all $k$-sda's.
[ { "version": "v1", "created": "Fri, 18 Mar 2022 23:44:27 GMT" }, { "version": "v2", "created": "Tue, 27 Dec 2022 00:40:41 GMT" } ]
2022-12-29T00:00:00
[ [ "Yamakami", "Tomoyuki", "" ] ]
new_dataset
0.995498
2208.02494
Stefano Kalonaris
Stefano Kalonaris
Tokyo Kion-On: Query-Based Generative Sonification of Atmospheric Data
To appear in: Proceedings of the 27th International Conference on Auditory Display (ICAD 2022)
null
10.21785/icad2022.039
null
cs.SD cs.HC cs.LG eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Amid growing environmental concerns, interactive displays of data constitute an important tool for exploring and understanding the impact of climate change on the planet's ecosystemic integrity. This paper presents Tokyo kion-on, a query-based sonification model of Tokyo's air temperature from 1876 to 2021. The system uses a recurrent neural network architecture known as LSTM with attention trained on a small dataset of Japanese melodies and conditioned upon said atmospheric data. After describing the model's implementation, a brief comparative illustration of the musical results is presented, along with a discussion on how the exposed hyper-parameters can promote active and non-linear exploration of the data.
[ { "version": "v1", "created": "Thu, 4 Aug 2022 06:56:06 GMT" } ]
2022-12-29T00:00:00
[ [ "Kalonaris", "Stefano", "" ] ]
new_dataset
0.999445
2212.08448
Hadar Shavit
Hadar Shavit and Filip Jatelnicki and Pol Mor-Puigvent\'os and Wojtek Kowalczyk
From Xception to NEXcepTion: New Design Decisions and Neural Architecture Search
Accepted at ICPRAM 2023 for a 20 minutes oral presentation
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a modified Xception architecture, the NEXcepTion network. Our network has significantly better performance than the original Xception, achieving top-1 accuracy of 81.5% on the ImageNet validation dataset (an improvement of 2.5%) as well as a 28% higher throughput. Another variant of our model, NEXcepTion-TP, reaches 81.8% top-1 accuracy, similar to ConvNeXt (82.1%), while having a 27% higher throughput. Our model is the result of applying improved training procedures and new design decisions combined with an application of Neural Architecture Search (NAS) on a smaller dataset. These findings call for revisiting older architectures and reassessing their potential when combined with the latest enhancements.
[ { "version": "v1", "created": "Fri, 16 Dec 2022 12:46:21 GMT" }, { "version": "v2", "created": "Wed, 28 Dec 2022 13:43:14 GMT" } ]
2022-12-29T00:00:00
[ [ "Shavit", "Hadar", "" ], [ "Jatelnicki", "Filip", "" ], [ "Mor-Puigventós", "Pol", "" ], [ "Kowalczyk", "Wojtek", "" ] ]
new_dataset
0.998619
2212.13163
Wei Ji
Wei Ji, Long Chen, Yinwei Wei, Yiming Wu, Tat-Seng Chua
MRTNet: Multi-Resolution Temporal Network for Video Sentence Grounding
work in progress
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Given an untrimmed video and a natural language query, video sentence grounding aims to localize the target temporal moment in the video. Existing methods mainly tackle this task by matching and aligning the semantics of the descriptive sentence and video segments at a single temporal resolution, while neglecting the temporal consistency of video content across different resolutions. In this work, we propose a novel multi-resolution temporal video sentence grounding network, MRTNet, which consists of a multi-modal feature encoder, a Multi-Resolution Temporal (MRT) module, and a predictor module. The MRT module is an encoder-decoder network whose output features in the decoder part are combined with Transformers to predict the final start and end timestamps. Notably, our MRT module is hot-pluggable, which means it can be seamlessly incorporated into any anchor-free model. Besides, we utilize a hybrid loss to supervise cross-modal features in the MRT module for more accurate grounding at three scales: frame-level, clip-level and sequence-level. Extensive experiments on three prevalent datasets have shown the effectiveness of MRTNet.
[ { "version": "v1", "created": "Mon, 26 Dec 2022 13:48:05 GMT" }, { "version": "v2", "created": "Tue, 27 Dec 2022 05:14:51 GMT" } ]
2022-12-29T00:00:00
[ [ "Ji", "Wei", "" ], [ "Chen", "Long", "" ], [ "Wei", "Yinwei", "" ], [ "Wu", "Yiming", "" ], [ "Chua", "Tat-Seng", "" ] ]
new_dataset
0.999733
2212.13283
Yaniv Sadeh
Yaniv Sadeh
On Ranges and Partitions in Optimal TCAMs
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic splitting is a required functionality in networks, for example for load balancing over paths or servers, or for enforcing the source's access restrictions. The capacities of the servers (or the number of users with particular access restrictions) determine the sizes of the parts into which traffic should be split. A recent approach implements traffic splitting within the ternary content addressable memory (TCAM), which is often available in switches. It is important to reduce the amount of memory allocated for this task, since TCAMs are power-consuming and are often also required for other tasks such as classification and routing. In the longest-prefix match (LPM) model, Draves et al. (INFOCOM 1999) find a minimal representation of a function, and Sadeh et al. (INFOCOM 2019) find a minimal representation of a partition. In certain situations, range-functions, in which all the addresses with the same target, or action, are consecutive, are of special interest. In this paper we show that minimizing the number of TCAM entries needed to represent a partition comes at the cost of fragmentation, in the sense that for some partitions some actions must be assigned multiple ranges. We then also study the case where each target must have a single segment of addresses.
[ { "version": "v1", "created": "Mon, 26 Dec 2022 19:29:49 GMT" } ]
2022-12-29T00:00:00
[ [ "Sadeh", "Yaniv", "" ] ]
new_dataset
0.985647
2212.13312
Muhammad Lutfor Rahman
Muhammad Lutfor Rahman, Daniel Timko, Hamid Wali, and Ajaya Neupane
Users really do respond to smishing
CODASPY'23
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Text phishing messages, referred to as smishing, are a type of social engineering attack in which fake text messages are created and used to lure users into responding. These messages aim to obtain user credentials, install malware on users' phones, or launch further smishing attacks. They ask users to reply to the message, click on a URL that redirects them to a phishing website, or call the provided number. Thousands of mobile users are affected by smishing attacks daily. Drawing inspiration from the works of Tu et al. (USENIX Security, 2019) on robocalls and Tischer et al. (IEEE Symposium on Security and Privacy, 2016) on USB drives, this paper investigates why smishing works. Accordingly, we designed smishing experiments and sent phishing SMSes to 265 users to measure the efficacy of smishing attacks. We sent eight fake text messages to participants and recorded their CLICK, REPLY, and CALL responses along with their feedback in a post-test survey. Our results reveal that 16.92% of our participants had potentially fallen for our smishing attack. To test repeat phishing, we subjected a set of randomly selected participants to a second round of smishing attacks with a different message than the one they received in the first round. As a result, we observed that 12.82% potentially fell for the attack again. Using logistic regression, we observed that a combination of user REPLY and CLICK actions increased the odds that a user would respond to our smishing message when compared to CLICK alone. Additionally, we found a similar statistically significant increase when comparing the Facebook and Walmart entity scenarios to our IRS baseline.
[ { "version": "v1", "created": "Mon, 26 Dec 2022 22:29:12 GMT" } ]
2022-12-29T00:00:00
[ [ "Rahman", "Muhammad Lutfor", "" ], [ "Timko", "Daniel", "" ], [ "Wali", "Hamid", "" ], [ "Neupane", "Ajaya", "" ] ]
new_dataset
0.994591
2212.13367
Chonghe Zhao
Chonghe Zhao, Taotao Wang, Shengli Zhang and Soung Chang Liew
HCB: Enabling Compact Block in Ethereum Network with Secondary Pool and Transaction Prediction
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compact block, which replaces the transactions in a block with their hashes, is an effective means of speeding up block propagation in the Bitcoin network. The compact block mechanism in Bitcoin relies on the fact that many nodes may already have the transactions (or most of the transactions) in the block, making it unnecessary to send the complete block containing the full transactions. This fact, however, does not hold in the Ethereum network. Adopting compact block directly in Ethereum may degrade block propagation speed significantly, because the probability that a node lacks a transaction in the sending block is relatively high in Ethereum, and requesting the missing transactions after receiving the compact block takes much additional time. This paper proposes hybrid-compact block (HCB), an efficient compact block propagation scheme for Ethereum and other similar blockchains. First, we develop a Secondary Pool to store the low-fee transactions that are removed from the primary transaction pool, in order to conserve storage space. As simple auxiliary storage, the Secondary Pool does not affect the normal block processing of the primary pool in Ethereum. Second, we design a machine learning-based transaction prediction module to precisely predict the missing transactions caused by network latency and selfish behaviors. We implemented our HCB scheme and other compact-block-like schemes (as benchmarks) and deployed a number of nodes worldwide over the Ethereum MainNet to investigate them experimentally. Experimental results show that HCB performs best among the existing compact-block-like schemes and can reduce propagation time by more than half relative to the current block propagation scheme in Ethereum.
[ { "version": "v1", "created": "Tue, 27 Dec 2022 05:50:21 GMT" } ]
2022-12-29T00:00:00
[ [ "Zhao", "Chonghe", "" ], [ "Wang", "Taotao", "" ], [ "Zhang", "Shengli", "" ], [ "Liew", "Soung Chang", "" ] ]
new_dataset
0.998649