id (string, 9–10 chars) | submitter (string, 2–52 chars, nullable) | authors (string, 4–6.51k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable) | journal-ref (string, 4–345 chars, nullable) | doi (string, 11–120 chars, nullable) | report-no (string, 2–243 chars, nullable) | categories (string, 5–98 chars) | license (string, 9 classes) | abstract (string, 33–3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2112.04481
|
Nilesh Kulkarni
|
Nilesh Kulkarni, Justin Johnson, David F. Fouhey
|
What's Behind the Couch? Directed Ray Distance Functions (DRDF) for 3D
Scene Reconstruction
|
Updated illustrations for method section. Project Page see
https://nileshkulkarni.github.io/scene_drdf
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We present an approach for full 3D scene reconstruction from a single unseen
image. We train on a dataset of realistic non-watertight scans of scenes. Our
approach predicts a distance function, since these have shown promise in
handling complex topologies and large spaces. We identify and analyze two key
challenges for predicting such image-conditioned distance functions that have
prevented their success on real 3D scene data. First, we show that predicting a
conventional scene distance from an image requires reasoning over a large
receptive field. Second, we analytically show that the optimal output of a
network trained to predict these distance functions does not obey all the
distance function properties. We propose an alternate distance function, the
Directed Ray Distance Function (DRDF), that tackles both challenges. We show
that a deep network trained to predict DRDFs outperforms all other methods
quantitatively and qualitatively on 3D reconstruction from a single image on
Matterport3D, 3DFront, and ScanNet.
|
[
{
"version": "v1",
"created": "Wed, 8 Dec 2021 18:59:04 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 04:40:19 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Kulkarni",
"Nilesh",
""
],
[
"Johnson",
"Justin",
""
],
[
"Fouhey",
"David F.",
""
]
] |
new_dataset
| 0.999336 |
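One simplified reading of the ray-distance idea in the abstract above can be sketched in a few lines. This is an illustration under assumptions, not the paper's exact DRDF formulation: each sample depth along a camera ray stores the signed distance to its nearest surface intersection along that same ray, positive when the surface lies ahead of the sample.

```python
# Simplified directed ray distance sketch. The intersection depths and the
# sign convention here are illustrative assumptions, not the paper's exact
# DRDF definition.
def directed_ray_distance(sample_t, intersections):
    """Signed distance from a sample depth to its nearest intersection
    along the ray; positive means the surface lies ahead of the sample."""
    nearest = min(intersections, key=lambda d: abs(d - sample_t))
    return nearest - sample_t

# A ray that crosses surfaces at depths 2.0 and 5.0:
print(directed_ray_distance(1.0, [2.0, 5.0]))  # 1.0 (first surface is ahead)
print(directed_ray_distance(4.0, [2.0, 5.0]))  # 1.0 (nearest surface, at 5.0, is ahead)
print(directed_ray_distance(2.5, [2.0, 5.0]))  # -0.5 (nearest surface is behind)
```

Unlike a full scene distance, this quantity depends only on geometry along one ray, which is what makes it predictable from local image evidence.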
2112.05923
|
Xiao-Yang Liu
|
Xiao-Yang Liu and Zechu Li and Zhuoran Yang and Jiahao Zheng and
Zhaoran Wang and Anwar Walid and Jian Guo and Michael I. Jordan
|
ElegantRL-Podracer: Scalable and Elastic Library for Cloud-Native Deep
Reinforcement Learning
|
9 pages, 7 figures
|
Deep Reinforcement Learning Workshop, NeurIPS 2021
| null | null |
cs.LG cs.AI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep reinforcement learning (DRL) has revolutionized learning and actuation
in applications such as game playing and robotic control. The cost of data
collection, i.e., generating transitions from agent-environment interactions,
remains a major challenge for wider DRL adoption in complex real-world
problems. Following a cloud-native paradigm to train DRL agents on a GPU cloud
platform is a promising solution. In this paper, we present a scalable and
elastic library ElegantRL-podracer for cloud-native deep reinforcement
learning, which efficiently supports millions of GPU cores to carry out
massively parallel training at multiple levels. At a high level,
ElegantRL-podracer employs a tournament-based ensemble scheme to orchestrate
the training process on hundreds or even thousands of GPUs, scheduling the
interactions between a leaderboard and a training pool with hundreds of pods.
At a low level, each pod simulates agent-environment interactions in parallel
by fully utilizing nearly 7,000 CUDA cores in a single GPU. Our
ElegantRL-podracer library features high scalability, elasticity and
accessibility by following the development principles of containerization,
microservices and MLOps. Using an NVIDIA DGX SuperPOD cloud, we conduct
extensive experiments on various tasks in locomotion and stock trading and show
that ElegantRL-podracer substantially outperforms RLlib. Our code is
available on GitHub.
|
[
{
"version": "v1",
"created": "Sat, 11 Dec 2021 06:31:21 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 01:58:52 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Liu",
"Xiao-Yang",
""
],
[
"Li",
"Zechu",
""
],
[
"Yang",
"Zhuoran",
""
],
[
"Zheng",
"Jiahao",
""
],
[
"Wang",
"Zhaoran",
""
],
[
"Walid",
"Anwar",
""
],
[
"Guo",
"Jian",
""
],
[
"Jordan",
"Michael I.",
""
]
] |
new_dataset
| 0.993259 |
2112.10194
|
Dima Damen
|
Will Price, Carl Vondrick, Dima Damen
|
UnweaveNet: Unweaving Activity Stories
|
Accepted at IEEE/CVF Computer Vision and Pattern Recognition (CVPR)
2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Our lives can be seen as a complex weaving of activities; we switch from one
activity to another, to maximise our achievements or in reaction to demands
placed upon us. Observing a video of unscripted daily activities, we parse the
video into its constituent activity threads through a process we call
unweaving. To accomplish this, we introduce a video representation explicitly
capturing activity threads, called a thread bank, along with a neural
controller capable of detecting goal changes and resuming past activities;
together these form UnweaveNet. We train and evaluate UnweaveNet on sequences from the
unscripted egocentric dataset EPIC-KITCHENS. We propose and showcase the
efficacy of pretraining UnweaveNet in a self-supervised manner.
|
[
{
"version": "v1",
"created": "Sun, 19 Dec 2021 17:07:37 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 11:33:49 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Price",
"Will",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Damen",
"Dima",
""
]
] |
new_dataset
| 0.999428 |
2201.06293
|
Francesco Pierri
|
Marco Di Giovanni, Francesco Pierri, Christopher Torres-Lugo and Marco
Brambilla
|
VaccinEU: COVID-19 vaccine conversations on Twitter in French, German
and Italian
|
9 pages, 6 figures, 3 tables. Data can be fully accessed in a
Dataverse (https://doi.org/10.7910/DVN/NZUMZG) and a GitHub repository
(https://github.com/DataSciencePolimi/VaccinEU)
|
Proc. Intl. AAAI Conf. on Web and Social Media (ICWSM), 2022
| null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite the increasing limitations for unvaccinated people, in many European
countries there is still a non-negligible fraction of individuals who refuse to
get vaccinated against SARS-CoV-2, undermining governmental efforts to
eradicate the virus. We study the role of online social media in influencing
individuals' opinion towards getting vaccinated by designing a large-scale
collection of Twitter messages in three different languages -- French, German
and Italian -- and providing public access to the data collected. Focusing on
the European context, our VaccinEU dataset aims to help researchers to better
understand the impact of online (mis)information about vaccines and design more
accurate communication strategies to maximize vaccination coverage.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 09:16:51 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 10:34:56 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Di Giovanni",
"Marco",
""
],
[
"Pierri",
"Francesco",
""
],
[
"Torres-Lugo",
"Christopher",
""
],
[
"Brambilla",
"Marco",
""
]
] |
new_dataset
| 0.997258 |
2202.06554
|
Paul Staat
|
Paul Staat, Kai Jansen, Christian Zenger, Harald Elders-Boll, Christof
Paar
|
Analog Physical-Layer Relay Attacks with Application to Bluetooth and
Phase-Based Ranging
|
Accepted for presentation at WiSec '22
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today, we use smartphones as multi-purpose devices that communicate with
their environment to implement context-aware services, including asset
tracking, indoor localization, contact tracing, or access control. As a
de-facto standard, Bluetooth is available in virtually every smartphone to
provide short-range wireless communication. Importantly, many Bluetooth-driven
applications such as Phone as a Key (PaaK) for vehicles and buildings require
proximity of legitimate devices, which must be protected against unauthorized
access. In earlier access control systems, attackers were able to violate
proximity verification through relay station attacks. However, the
vulnerability of Bluetooth to such attacks had remained unclear, as existing
relay attack strategies are either not applicable or can be defeated through
wireless distance measurement. In this paper, we design and implement an analog
physical-layer relay attack based on low-cost off-the-shelf radio hardware to
simultaneously increase the wireless communication range and manipulate
distance measurements. Using our setup, we successfully demonstrate relay
attacks against Bluetooth-based access control of a car and a smart lock.
Further, we show that our attack can arbitrarily manipulate Multi-Carrier
Phase-based Ranging (MCPR) while relaying signals over 90 m.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 08:46:09 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 11:28:36 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Staat",
"Paul",
""
],
[
"Jansen",
"Kai",
""
],
[
"Zenger",
"Christian",
""
],
[
"Elders-Boll",
"Harald",
""
],
[
"Paar",
"Christof",
""
]
] |
new_dataset
| 0.999449 |
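The Multi-Carrier Phase-based Ranging mentioned in the abstract above rests on a simple relation: the phase a carrier accumulates over a round trip grows linearly with carrier frequency, so the phase slope across two carriers encodes distance. A minimal sketch of this textbook principle (the carrier frequencies and distances below are hypothetical, not the paper's attack setup):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def round_trip_phase(freq_hz, dist_m):
    """Carrier phase (mod 2*pi) accumulated over a 2*dist round trip."""
    return (2 * math.pi * freq_hz * (2 * dist_m / C)) % (2 * math.pi)

def mcpr_distance(f1_hz, f2_hz, dist_m):
    """Recover distance from the phase difference between two carriers."""
    dphi = (round_trip_phase(f2_hz, dist_m)
            - round_trip_phase(f1_hz, dist_m)) % (2 * math.pi)
    return C * dphi / (4 * math.pi * (f2_hz - f1_hz))

# Two Bluetooth-band carriers 1 MHz apart resolve ranges up to c/(2*df) ~ 150 m:
d = mcpr_distance(2.402e9, 2.403e9, 42.0)
print(round(d, 3))  # ~42.0
```

A relay that injects extra path delay shifts every carrier's phase in proportion to frequency, which is why an analog physical-layer relay can manipulate the inferred distance rather than merely extend range.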
2204.00325
|
Yanan Zhang
|
Yanan Zhang, Jiaxin Chen, Di Huang
|
CAT-Det: Contrastively Augmented Transformer for Multi-modal 3D Object
Detection
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In autonomous driving, LiDAR point-clouds and RGB images are two major data
modalities with complementary cues for 3D object detection. However,
exploiting them jointly is quite difficult due to large inter-modal
discrepancies. To address this issue, we propose a novel framework, namely
Contrastively Augmented Transformer for multi-modal 3D object Detection
(CAT-Det). Specifically, CAT-Det adopts a two-stream structure consisting of a
Pointformer (PT) branch, an Imageformer (IT) branch along with a Cross-Modal
Transformer (CMT) module. PT, IT and CMT jointly encode intra-modal and
inter-modal long-range contexts for representing an object, thus fully
exploring multi-modal information for detection. Furthermore, we propose an
effective One-way Multi-modal Data Augmentation (OMDA) approach via
hierarchical contrastive learning at both the point and object levels,
significantly improving the accuracy only by augmenting point-clouds, which is
free from complex generation of paired samples of the two modalities. Extensive
experiments on the KITTI benchmark show that CAT-Det achieves a new
state-of-the-art, highlighting its effectiveness.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 10:07:25 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2022 04:45:36 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Zhang",
"Yanan",
""
],
[
"Chen",
"Jiaxin",
""
],
[
"Huang",
"Di",
""
]
] |
new_dataset
| 0.993101 |
2204.00645
|
Young-Ho Kim
|
Christian DeBuy, Florin Ghesu, Reza Langari, Young-Ho Kim
|
Design and validation of zero-slack separable manipulator for
Intracardiac Echocardiography
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clinicians require substantial training and experience to become comfortable
with steering an intracardiac echocardiography (ICE) catheter to localize and
measure the treatment area and watch for complications while device catheters
are deployed through another access. Thus, a robotic-assist system that holds
and actively manipulates the ICE catheter could ease the physician's workload.
Existing commercial robotic systems and research prototypes all use
commercially available ICE catheters based on a multiple tendon-sheath
mechanism (TSM). To motorize a TSM-based ICE catheter, actuators interface with
the outer handle knobs to manipulate four internal tendons. In practice,
however, the actuators are located in a sterile, safe place far away from the
ICE handle. Interfacing with the knobs therefore requires multiple coupled gear
structures between the two, leading to highly nonlinear behavior (e.g., varying
slack and elasticity) alongside hysteresis phenomena in the TSM.
Since ICE catheters are designed for single use, the expensive actuators need
to be located in a safe place so as to be reusable. Moreover, these actuators
should interface as directly as possible with the tendons for accurate tip
controls. In this paper, we introduce a separable ICE catheter robot with four
tendon actuation: one part reusable and another disposable. Moreover, we
propose a practical model and calibration method for our proposed mechanism so
that four tendons are actuated simultaneously allowing for precise tip control
and mitigating issues with conventional devices such as dead-zone and
hysteresis with simple linear compensation. We consider an open-loop
controller, since many available ICE catheters are used without
position-tracking sensors at the tip due to cost and single-use constraints.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 18:17:21 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"DeBuy",
"Christian",
""
],
[
"Ghesu",
"Florin",
""
],
[
"Langari",
"Reza",
""
],
[
"Kim",
"Young-Ho",
""
]
] |
new_dataset
| 0.99914 |
2204.00655
|
Jacqueline Hausmann
|
Jacqueline Hausmann, Md Sirajus Salekin, Ghada Zamzmi, Dmitry Goldgof,
Yu Sun
|
Robust Neonatal Face Detection in Real-world Clinical Settings
|
Accepted at IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR Workshops 2021)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Current face detection algorithms are extremely generalized and obtain decent
accuracy when detecting adult faces. These approaches are insufficient for
outlier cases, for example when trying to detect the face of a neonate, whose
facial composition and expressions differ considerably from those of an adult.
Detection is even more difficult in a complicated setting such as the Neonatal
Intensive Care Unit. By training a state-of-the-art face detection model,
You-Only-Look-Once, on a proprietary dataset containing labelled neonate faces
in a clinical setting, this work achieves near-real-time neonate face
detection. Our preliminary findings show an accuracy of 68.7%, compared to the
off-the-shelf solution, which detected neonate faces with an accuracy of 7.37%.
Although further experiments are needed to validate our model, our results are
promising and demonstrate the feasibility of detecting neonatal faces in
challenging real-world settings. Robust, real-time detection of neonatal faces
would benefit a wide range of automated systems (e.g., pain recognition and
surveillance) that currently require time-consuming manual annotation. To
benefit the research community, we make our trained weights publicly available
at https://github.com/ja05haus/trained_neonate_face.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 18:50:47 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Hausmann",
"Jacqueline",
""
],
[
"Salekin",
"Md Sirajus",
""
],
[
"Zamzmi",
"Ghada",
""
],
[
"Goldgof",
"Dmitry",
""
],
[
"Sun",
"Yu",
""
]
] |
new_dataset
| 0.998582 |
2204.00674
|
Roshan Sah
|
Roshan Sah, Raunak Srivastava, Kaushik Das
|
Design of Low Thrust Controlled Maneuvers to Chase and De-orbit the
Space Debris
|
23 Pages, 21 Figures, Presented & Published at ASET 2022 Conference
on "Artificial Intelligence(AI) Enabled Aerobots and Hydrobots" Organized by
ISRO Inertial Systems Unit & IIST at Vikram Sarabhai Space Center,
Thiruvananthapuram, India on 17-18, March, 2022,
https://aset2022.vssc.gov.in/proceedings.php
| null | null | null |
cs.RO astro-ph.EP astro-ph.IM physics.space-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Over the past several decades, space debris in LEO has grown rapidly, posing a
serious threat to operating satellites. To avoid the risk of collision and
protect the LEO environment, space-robotics active debris removal (ADR)
concepts have been developed for over a decade to chase, capture, and deorbit
space debris. This paper presents a small satellite designed with dual robotic
manipulators. The small satellite is designed to CubeSat standards using
commercially available products. We detail an approach for designing controlled
chase and deorbit maneuvers for a small satellite equipped with an RCS
thruster. The maneuvers comprise two phases: (a) bringing the chaser satellite
to the debris orbit and accelerating it to within 1 m proximity of the debris
object using the low-thrust RCS thruster, and (b) once the debris is captured,
performing a controlled deorbit to an altitude of 250 km. A Hohmann transfer is
used to move the chaser satellite from the lower orbit to the debris orbit with
two impulsive burns. A number of scenarios are simulated in which one or more
orbital elements are adjusted. When more than one orbital element is adjusted,
the DAG law and the Q law are utilized; these laws synthesize the
three-direction thrusts into a single thrust force for the controlled maneuver.
The $\Delta$V requirement for each maneuver is determined using the performance
parameters of an RCS thruster intended for a small satellite. The results show
that, for long-term simulation of a chaser satellite's maneuver to a debris
object, the optimum DAG law is more suitable than the Q law, as it handles more
efficiently the singular behavior of the orbital elements caused by the
adjustment of one or more elements.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 19:33:11 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Sah",
"Roshan",
""
],
[
"Srivastava",
"Raunak",
""
],
[
"Das",
"Kaushik",
""
]
] |
new_dataset
| 0.991879 |
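The two-impulse Hohmann transfer in the abstract above has a closed-form delta-V that follows from the vis-viva equation. A sketch with hypothetical circular-orbit altitudes (a 400 km chaser raised to an 800 km debris orbit; these numbers are illustrative, not the paper's scenario):

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def hohmann_dv(r1, r2):
    """Total delta-V (m/s) for a two-impulse Hohmann transfer between
    circular orbits of radii r1 and r2, from the vis-viva equation."""
    dv1 = math.sqrt(MU / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(MU / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1 + dv2

# Raising a chaser from a 400 km orbit to an 800 km debris orbit:
dv = hohmann_dv(R_EARTH + 400e3, R_EARTH + 800e3)
print(round(dv, 1))  # roughly 217 m/s split across the two burns
```

The same relation, with r2 < r1, gives the magnitude of the burns needed for the controlled deorbit phase.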
2204.00679
|
Arsha Nagrani
|
Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago
Manen, Chen Sun and Cordelia Schmid
|
Learning Audio-Video Modalities from Image Captions
| null | null | null | null |
cs.CV cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major challenge in text-video and text-audio retrieval is the lack of
large-scale training data. This is unlike image-captioning, where datasets are
in the order of millions of samples. To close this gap we propose a new video
mining pipeline which involves transferring captions from image captioning
datasets to video clips with no additional manual effort. Using this pipeline,
we create a new large-scale, weakly labelled audio-video captioning dataset
consisting of millions of paired clips and captions. We show that training a
multimodal transformer-based model on this data achieves competitive
performance on video retrieval and video captioning, matching or even
outperforming HowTo100M pretraining with 20x fewer clips. We also show that our
mined clips are suitable for text-audio pretraining, and achieve
state-of-the-art results for the task of audio retrieval.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 19:48:18 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Nagrani",
"Arsha",
""
],
[
"Seo",
"Paul Hongsuck",
""
],
[
"Seybold",
"Bryan",
""
],
[
"Hauth",
"Anja",
""
],
[
"Manen",
"Santiago",
""
],
[
"Sun",
"Chen",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.955152 |
2204.00686
|
James Haley
|
James D. Haley
|
Assimilation of Satellite Active Fires Data
| null | null | null | null |
cs.LG physics.ao-ph stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Wildland fires pose an increasingly serious problem in our society. The
number and severity of these fires has been rising for many years. Wildfires
pose direct threats to life and property as well as threats through ancillary
effects like reduced air quality. The aim of this thesis is to develop
techniques to help combat the impacts of wildfires by improving wildfire
modeling capabilities by using satellite fire observations. Already much work
has been done in this direction by other researchers. Our work seeks to expand
the body of knowledge using mathematically sound methods to utilize information
about wildfires that considers the uncertainties inherent in the satellite
data.
In this thesis we explore methods for using satellite data to help initialize
and steer wildfire simulations. In particular, we develop a method for
constructing the history of a fire, a new technique for assimilating wildfire
data, and a method for modifying the behavior of a modeled fire by inferring
information about the fuels in the fire domain. These goals rely on being able
to estimate the time a fire first arrived at every location in a geographic
region of interest. Because detailed knowledge of real wildfires is typically
unavailable, the basic procedure for developing and testing the methods in this
thesis will be to first work with simulated data so that the estimates produced
can be compared with known solutions. The methods thus developed are then
applied to real-world scenarios. Analysis of these scenarios shows that the
work on constructing fire histories and on data assimilation improves fire
modeling capabilities. The research is significant because it gives us a
better understanding of the capabilities and limitations of using
satellite data to inform wildfire models and it points the way towards new
avenues for modeling fire behavior.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 20:11:28 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Haley",
"James D.",
""
]
] |
new_dataset
| 0.986159 |
2204.00806
|
Ashutosh Modi
|
Arnav Kapoor and Mudit Dhawan and Anmol Goel and T.H. Arjun and
Akshala Bhatnagar and Vibhu Agrawal and Amul Agrawal and Arnab Bhattacharya
and Ponnurangam Kumaraguru and Ashutosh Modi
|
HLDC: Hindi Legal Documents Corpus
|
16 Pages, Accepted at ACL 2022 Findings
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Many populous countries including India are burdened with a considerable
backlog of legal cases. Development of automated systems that could process
legal documents and augment legal practitioners can mitigate this. However,
there is a dearth of the high-quality corpora needed to develop such
data-driven systems. The problem is even more pronounced for low-resource
languages such as Hindi. In this resource paper, we introduce the
Hindi Legal Documents Corpus (HLDC), a corpus of more than 900K legal documents
in Hindi. Documents are cleaned and structured to enable the development of
downstream applications. Further, as a use-case for the corpus, we introduce
the task of bail prediction. We experiment with a battery of models and propose
a Multi-Task Learning (MTL) based model for the same. MTL models use
summarization as an auxiliary task along with bail prediction as the main task.
Experiments with different models are indicative of the need for further
research in this area. We release the corpus and model implementation code with
this paper: https://github.com/Exploration-Lab/HLDC
|
[
{
"version": "v1",
"created": "Sat, 2 Apr 2022 08:22:52 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Kapoor",
"Arnav",
""
],
[
"Dhawan",
"Mudit",
""
],
[
"Goel",
"Anmol",
""
],
[
"Arjun",
"T. H.",
""
],
[
"Bhatnagar",
"Akshala",
""
],
[
"Agrawal",
"Vibhu",
""
],
[
"Agrawal",
"Amul",
""
],
[
"Bhattacharya",
"Arnab",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
],
[
"Modi",
"Ashutosh",
""
]
] |
new_dataset
| 0.999445 |
2204.00889
|
Shintaro Ishikawa
|
Shintaro Ishikawa, Komei Sugiura
|
Moment-based Adversarial Training for Embodied Language Comprehension
|
Accepted for presentation at ICPR2022
| null | null | null |
cs.RO cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we focus on a vision-and-language task in which a robot is
instructed to execute household tasks. Given an instruction such as "Rinse off
a mug and place it in the coffee maker," the robot is required to locate the
mug, wash it, and put it in the coffee maker. This is challenging because the
robot needs to break down the instruction sentences into subgoals and execute
them in the correct order. On the ALFRED benchmark, the performance of
state-of-the-art methods is still far lower than that of humans. This is
partially because existing methods sometimes fail to infer subgoals that are
not explicitly specified in the instruction sentences. We propose Moment-based
Adversarial Training (MAT), which uses two types of moments for perturbation
updates in adversarial training. We introduce MAT to the embedding spaces of
the instruction, subgoals, and state representations to handle their varieties.
We validated our method on the ALFRED benchmark, and the results demonstrated
that our method outperformed the baseline method for all the metrics on the
benchmark.
|
[
{
"version": "v1",
"created": "Sat, 2 Apr 2022 16:07:24 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Ishikawa",
"Shintaro",
""
],
[
"Sugiura",
"Komei",
""
]
] |
new_dataset
| 0.997692 |
2204.01018
|
Sixun Dong
|
Huazhang Hu, Sixun Dong, Yiqun Zhao, Dongze Lian, Zhengxin Li,
Shenghua Gao
|
TransRAC: Encoding Multi-scale Temporal Correlation with Transformers
for Repetitive Action Counting
|
(Revised) CVPR 2022 Oral. RepCount dataset:
https://svip-lab.github.io/dataset/RepCount_dataset.html , Code:
https://github.com/SvipRepetitionCounting/TransRAC
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Repetitive actions are widely seen in human activities such as physical
exercise. Existing methods focus on repetitive action counting in short videos,
which makes it difficult to handle the longer videos of more realistic
scenarios. In the data-driven era, this degradation of generalization
capability is mainly attributed to the lack of long-video datasets. To fill
this gap, we introduce a new large-scale repetitive action counting dataset
covering a wide variety of video lengths, along with
more realistic situations where action interruption or action inconsistencies
occur in the video. Besides, we provide fine-grained annotations of the
action cycles rather than just a single numerical count.
Such a dataset contains 1,451 videos with about 20,000 annotations, which is
more challenging. For repetitive action counting towards more realistic
scenarios, we further propose encoding multi-scale temporal correlation with
transformers that can take into account both performance and efficiency.
Furthermore, with the help of fine-grained annotation of action cycles, we
propose a density map regression-based method to predict the action period,
which yields better performance with sufficient interpretability. Our proposed
method outperforms state-of-the-art methods on all datasets and also achieves
better performance on the unseen dataset without fine-tuning. The dataset and
code are available.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 07:50:18 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Hu",
"Huazhang",
""
],
[
"Dong",
"Sixun",
""
],
[
"Zhao",
"Yiqun",
""
],
[
"Lian",
"Dongze",
""
],
[
"Li",
"Zhengxin",
""
],
[
"Gao",
"Shenghua",
""
]
] |
new_dataset
| 0.999796 |
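The density-map regression idea in the abstract above can be illustrated in a few lines: each annotated action cycle contributes unit mass, spread as a Gaussian over nearby frames, and the count is recovered by integrating (summing) the map. The frame count, cycle midpoints, and sigma below are made-up illustrations, not RepCount parameters or the TransRAC architecture:

```python
import numpy as np

def density_map(num_frames, cycle_midpoints, sigma=2.0):
    """Per-frame density: one unit-mass Gaussian per annotated action cycle."""
    t = np.arange(num_frames, dtype=float)[:, None]
    mids = np.asarray(cycle_midpoints, dtype=float)[None, :]
    g = np.exp(-0.5 * ((t - mids) / sigma) ** 2)
    g /= g.sum(axis=0)      # normalise each cycle's mass to exactly 1
    return g.sum(axis=1)    # superimpose all cycles into one map

dm = density_map(100, [10, 35, 60, 85])
print(round(float(dm.sum()), 6))  # 4.0 -- integrating the map recovers the count
```

Regressing such a map instead of a single scalar gives the model per-frame supervision, which is where the interpretability mentioned in the abstract comes from.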
2204.01026
|
Peishan Cong
|
Peishan Cong and Xinge Zhu and Feng Qiao and Yiming Ren and Xidong
Peng and Yuenan Hou and Lan Xu and Ruigang Yang and Dinesh Manocha and Yuexin
Ma
|
STCrowd: A Multimodal Dataset for Pedestrian Perception in Crowded
Scenes
|
accepted at CVPR2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Accurately detecting and tracking pedestrians in 3D space is challenging due
to large variations in rotations, poses and scales. The situation becomes even
worse for dense crowds with severe occlusions. However, existing benchmarks
either only provide 2D annotations, or have limited 3D annotations with
low-density pedestrian distribution, making it difficult to build a reliable
pedestrian perception system especially in crowded scenes. To better evaluate
pedestrian perception algorithms in crowded scenarios, we introduce a
large-scale multimodal dataset, STCrowd. Specifically, STCrowd contains a
total of 219K pedestrian instances, with 20 persons per frame on average and
various levels of occlusion. We provide synchronized LiDAR point clouds and
camera images as well as their corresponding 3D labels and joint IDs. STCrowd
can be used for various tasks, including LiDAR-only, image-only, and
sensor-fusion based pedestrian detection and tracking. We provide baselines for
most of the tasks. In addition, considering the property of sparse global
distribution and density-varying local distribution of pedestrians, we further
propose a novel method, Density-aware Hierarchical heatmap Aggregation (DHA),
to enhance pedestrian perception in crowded scenes. Extensive experiments show
that our new method achieves state-of-the-art performance for pedestrian
detection on various datasets.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 08:26:07 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Cong",
"Peishan",
""
],
[
"Zhu",
"Xinge",
""
],
[
"Qiao",
"Feng",
""
],
[
"Ren",
"Yiming",
""
],
[
"Peng",
"Xidong",
""
],
[
"Hou",
"Yuenan",
""
],
[
"Xu",
"Lan",
""
],
[
"Yang",
"Ruigang",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Ma",
"Yuexin",
""
]
] |
new_dataset
| 0.9998 |
2204.01061
|
Dimitra Gkatzia
|
Carl Strathearn and Dimitra Gkatzia
|
Task2Dial: A Novel Task and Dataset for Commonsense enhanced Task-based
Dialogue Grounded in Documents
| null |
Proceedings of The Fourth International Conference on Natural
Language and Speech Processing (ICNLSP 2021)
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a novel task on commonsense-enhanced task-based dialogue
grounded in documents and describes the Task2Dial dataset, a novel dataset of
document-grounded task-based dialogues, where an Information Giver (IG)
provides instructions (by consulting a document) to an Information Follower
(IF), so that the latter can successfully complete the task. In this unique
setting, the IF can ask clarification questions which may not be grounded in
the underlying document and require commonsense knowledge to be answered. The
Task2Dial dataset poses new challenges: (1) its human reference texts show more
lexical richness and variation than other document-grounded dialogue datasets;
(2) generating from this set requires paraphrasing as instructional responses
might have been modified from the underlying document; (3) requires commonsense
knowledge, since questions might not necessarily be grounded in the document;
(4) generating requires planning based on context, as task steps need to be
provided in order. The Task2Dial dataset contains dialogues with an average of
$18.15$ turns and $19.79$ tokens per turn, compared to 12.94 and 12
respectively in existing datasets. As such, learning from this dataset promises
more natural, varied and less template-like system utterances.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 12:15:56 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Strathearn",
"Carl",
""
],
[
"Gkatzia",
"Dimitra",
""
]
] |
new_dataset
| 0.999728 |
2204.01081
|
Andrew Melnik
|
Andrew Melnik, Eren Akbulut, Jannik Sheikh, Kira Loos, Michael
Buettner, Tobias Lenze
|
Faces: AI Blitz XIII Solutions
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The AI Blitz XIII Faces challenge, hosted on the www.aicrowd.com platform,
consisted of five problems: Sentiment Classification, Age Prediction, Mask
Prediction, Face Recognition, and Face De-Blurring. Our team, GLaDOS, took
second place. Here we present our solutions and results. Code implementation:
https://github.com/ndrwmlnk/ai-blitz-xiii
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 14:28:16 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Melnik",
"Andrew",
""
],
[
"Akbulut",
"Eren",
""
],
[
"Sheikh",
"Jannik",
""
],
[
"Loos",
"Kira",
""
],
[
"Buettner",
"Michael",
""
],
[
"Lenze",
"Tobias",
""
]
] |
new_dataset
| 0.999328 |
2204.01095
|
Brandon Foggo
|
Brandon Foggo, Koji Yamashita, Nanpeng Yu
|
pmuBAGE: The Benchmarking Assortment of Generated PMU Data for Power
System Events -- Part I: Overview and Results
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present pmuGE (phasor measurement unit Generator of Events), one of the
first data-driven generative models for power system event data. We have trained
this model on thousands of actual events and created a dataset denoted pmuBAGE
(the Benchmarking Assortment of Generated PMU Events). The dataset consists of
almost 1000 instances of labeled event data to encourage benchmark evaluations
on phasor measurement unit (PMU) data analytics. The dataset is available
online for use by any researcher or practitioner in the field. PMU data are
challenging to obtain, especially those covering event periods. Nevertheless,
power system problems have recently seen phenomenal advancements via
data-driven machine learning solutions - solutions created by researchers who
were fortunate enough to obtain such PMU data. A highly accessible standard
benchmarking dataset would enable a drastic acceleration of the development of
successful machine learning techniques in this field. We propose a novel
learning method based on the Event Participation Decomposition of Power System
Events, which makes it possible to learn a generative model of PMU data during
system anomalies. The model can create highly realistic event data without
compromising the differential privacy of the PMUs used to train it. The dataset
is available online for any researcher to use at the pmuBAGE Github Repository
- https://github.com/NanpengYu/pmuBAGE.
Part I - This is part I of a two part paper. In part I, we describe a high
level overview of pmuBAGE, its creation, and the experiments used to test it.
Part II will discuss the exact models used in its generation in far more
detail.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 15:30:08 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Foggo",
"Brandon",
""
],
[
"Yamashita",
"Koji",
""
],
[
"Yu",
"Nanpeng",
""
]
] |
new_dataset
| 0.997354 |
2204.01139
|
Kejie Li
|
Kejie Li, Yansong Tang, Victor Adrian Prisacariu, Philip H.S. Torr
|
BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion
|
Accepted at CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Dense 3D reconstruction from a stream of depth images is the key to many
mixed reality and robotic applications. Although methods based on Truncated
Signed Distance Function (TSDF) Fusion have advanced the field over the years,
the TSDF volume representation is confronted with striking a balance between
the robustness to noisy measurements and maintaining the level of detail. We
present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent
advances in neural implicit representations and neural rendering for dense 3D
reconstruction. In order to incrementally integrate new depth maps into a
global neural implicit representation, we propose a novel bi-level fusion
strategy that considers both efficiency and reconstruction quality by design.
We evaluate the proposed method on multiple datasets quantitatively and
qualitatively, demonstrating a significant improvement over existing methods.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 19:33:09 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Li",
"Kejie",
""
],
[
"Tang",
"Yansong",
""
],
[
"Prisacariu",
"Victor Adrian",
""
],
[
"Torr",
"Philip H. S.",
""
]
] |
new_dataset
| 0.975701 |
2204.01159
|
Oren Katzir
|
Oren Katzir, Dani Lischinski, Daniel Cohen-Or
|
Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce an unsupervised technique for encoding point clouds into a
canonical shape representation, by disentangling shape and pose. Our encoder is
stable and consistent, meaning that the shape encoding is purely
pose-invariant, while the extracted rotation and translation are able to
semantically align different input shapes of the same class to a common
canonical pose. Specifically, we design an auto-encoder based on Vector Neuron
Networks, a rotation-equivariant neural network, whose layers we extend to
provide translation-equivariance in addition to rotation-equivariance only. The
resulting encoder produces pose-invariant shape encoding by construction,
enabling our approach to focus on learning a consistent canonical pose for a
class of objects. Quantitative and qualitative experiments validate the
superior stability and consistency of our approach.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 21:00:44 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Katzir",
"Oren",
""
],
[
"Lischinski",
"Dani",
""
],
[
"Cohen-Or",
"Daniel",
""
]
] |
new_dataset
| 0.997549 |
2204.01233
|
Siyuan Tang
|
Siyuan Tang, Xianghang Mi, Ying Li, XiaoFeng Wang, Kai Chen
|
Clues in Tweets: Twitter-Guided Discovery and Analysis of SMS Spam
|
CCS 2022
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With its critical role in business and service delivery through mobile
devices, SMS (Short Message Service) has long been abused for spamming, which
is still on the rise today possibly due to the emergence of A2P bulk messaging.
The effort to control SMS spam has been hampered by the lack of up-to-date
information about illicit activities. In our research, we proposed a novel
solution to collect recent SMS spam data, at a large scale, from Twitter, where
users voluntarily report the spam messages they receive. For this purpose, we
designed and implemented SpamHunter, an automated pipeline to discover SMS spam
reporting tweets and extract message content from the attached screenshots.
Leveraging SpamHunter, we collected from Twitter a dataset of 21,918 SMS spam
messages in 75 languages, spanning over four years. To our best knowledge, this
is the largest SMS spam dataset ever made public. More importantly, SpamHunter
enables us to continuously monitor emerging SMS spam messages, which
facilitates the ongoing effort to mitigate SMS spamming. We also performed an
in-depth measurement study that sheds light on the new trends in the spammer's
strategies, infrastructure and spam campaigns. We also utilized our spam SMS
data to evaluate the robustness of the spam countermeasures put in place by the
SMS ecosystem, including anti-spam services, bulk SMS services, and text
messaging apps. Our evaluation shows that such protection cannot effectively
handle those spam samples: either introducing significant false positives or
missing a large number of newly reported spam messages.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 04:22:45 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Tang",
"Siyuan",
""
],
[
"Mi",
"Xianghang",
""
],
[
"Li",
"Ying",
""
],
[
"Wang",
"XiaoFeng",
""
],
[
"Chen",
"Kai",
""
]
] |
new_dataset
| 0.998418 |
2204.01343
|
Federico Quin
|
Federico Quin, Danny Weyns
|
SEAByTE: A Self-adaptive Micro-service System Artifact for Automating
A/B Testing
|
SEAMS'22 artifact paper
| null |
10.1145/3524844.3528081
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Micro-services are a common architectural approach to software development
today. An indispensable tool for evolving micro-service systems is A/B testing.
In A/B testing, two variants, A and B, are applied in an experimental setting.
By measuring the outcome of an evaluation criterion, developers can make
evidence-based decisions to guide the evolution of their software. Recent
studies highlight the need for enhancing the automation when such experiments
are conducted in iterations. To that end, we contribute a novel artifact that
aims at enhancing the automation of an experimentation pipeline of a
micro-service system relying on the principles of self-adaptation. Concretely,
we propose SEAByTE, an experimental framework for testing novel self-adaptation
solutions to enhance the automation of continuous A/B testing of a
micro-service based system. We illustrate the use of the SEAByTE artifact with
a concrete example.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 09:36:03 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Quin",
"Federico",
""
],
[
"Weyns",
"Danny",
""
]
] |
new_dataset
| 0.998607 |
2204.01386
|
Yusuke Takimoto
|
Yusuke Takimoto, Hiroyuki Sato, Hikari Takehara, Keishiro Uragaki,
Takehiro Tawara, Xiao Liang, Kentaro Oku, Wataru Kishimoto, Bo Zheng
|
Dressi: A Hardware-Agnostic Differentiable Renderer with Reactive Shader
Packing and Soft Rasterization
|
13 pages, 17 figures, EUROGRAPHICS 2022
| null | null | null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Differentiable rendering (DR) enables various computer graphics and computer
vision applications through gradient-based optimization with derivatives of the
rendering equation. Most rasterization-based approaches are built on
general-purpose automatic differentiation (AD) libraries and DR-specific
modules handcrafted using CUDA. Such a system design mixes DR algorithm
implementation and algorithm building blocks, resulting in hardware dependency
and limited performance. In this paper, we present a practical
hardware-agnostic differentiable renderer called Dressi, which is based on a
new full AD design. The DR algorithms of Dressi are fully written in our
Vulkan-based AD for DR, Dressi-AD, which supports all primitive operations for
DR. Dressi-AD and our inverse UV technique inside it bring hardware
independence and acceleration by graphics hardware. Stage packing, our runtime
optimization technique, can adapt hardware constraints and efficiently execute
complex computational graphs of DR with reactive cache considering the render
pass hierarchy of Vulkan. HardSoftRas, our novel rendering process, is designed
for inverse rendering with a graphics pipeline. Under the limited
functionalities of the graphics pipeline, HardSoftRas can propagate the
gradients of pixels from the screen space to far-range triangle attributes. Our
experiments and applications demonstrate that Dressi establishes hardware
independence, high-quality and robust optimization with fast speed, and
photorealistic rendering.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 11:07:03 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Takimoto",
"Yusuke",
""
],
[
"Sato",
"Hiroyuki",
""
],
[
"Takehara",
"Hikari",
""
],
[
"Uragaki",
"Keishiro",
""
],
[
"Tawara",
"Takehiro",
""
],
[
"Liang",
"Xiao",
""
],
[
"Oku",
"Kentaro",
""
],
[
"Kishimoto",
"Wataru",
""
],
[
"Zheng",
"Bo",
""
]
] |
new_dataset
| 0.996676 |
2204.01433
|
Ofer Amrani
|
Itay Shrem, Ben Grinboim, and Ofer Amrani
|
Dynamic Network-Code Design for Satellite Networks
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Internet access from space is enjoying a renaissance, as satellite
Mega-Constellations are no longer fictitious. Network capacity, subject to power
and computational complexity constraints among other challenges, is a major
goal in this type of networks. This work studies Network Coding in the presence
of dynamically changing network conditions. The notion of a generalized acyclic
network is introduced and employed to promote the generation of a
linear-multicast network code for what is considered to be a cyclic network.
The performance of several network coding schemes, among these is the known
static network code, is evaluated by an STK simulation for a swarm of
communicating satellites, conceptually based on the Iridium system. Exploiting
the prior knowledge of the network's topology over time, new network coding
approaches are described, whose aim is to better cope with the time-varying,
dynamic behavior of the network. It is demonstrated that in all cases,
pertaining to our example network, static network codes under-perform compared
to the presented approach. In addition, an efficient test for identifying the
most appropriate coding approach is presented.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 12:34:45 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Shrem",
"Itay",
""
],
[
"Grinboim",
"Ben",
""
],
[
"Amrani",
"OFer",
""
]
] |
new_dataset
| 0.99579 |
2204.01436
|
Andr\'e Artelt
|
Jonathan Jakob, Andr\'e Artelt, Martina Hasenj\"ager, Barbara Hammer
|
SAM-kNN Regressor for Online Learning in Water Distribution Networks
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Water distribution networks are a key component of modern infrastructure for
housing and industry. They transport and distribute water via widely branched
networks from sources to consumers. In order to guarantee a working network at
all times, the water supply company continuously monitors the network and takes
actions when necessary -- e.g. reacting to leakages, sensor faults and drops in
water quality. Since real world networks are too large and complex to be
monitored by a human, algorithmic monitoring systems have been developed. A
popular type of such systems are residual based anomaly detection systems that
can detect events such as leakages and sensor faults. For a continuous high
quality monitoring, it is necessary for these systems to adapt to changed
demands and presence of various anomalies.
In this work, we propose an adaption of the incremental SAM-kNN classifier
for regression to build a residual based anomaly detection system for water
distribution networks that is able to adapt to any kind of change.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 12:40:05 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Jakob",
"Jonathan",
""
],
[
"Artelt",
"André",
""
],
[
"Hasenjäger",
"Martina",
""
],
[
"Hammer",
"Barbara",
""
]
] |
new_dataset
| 0.990936 |
2204.01439
|
Jona Cappelle
|
Jona Cappelle, Geoffrey Ottoy, Sarah Goossens, Hanne Deprez, Jarne Van
Mulders, Guus Leenders, Gilles Callebaut
|
IoT with a Soft Touch: A Modular Remote Sensing Platform for STE(A)M
Applications
| null | null | null | null |
cs.CY cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Besides wide attraction in the industry, IoT is being used to advance STEM
and STEAM education across a range of education levels. This work presents a
remote sensing platform, named IoT with a Soft Touch, developed to achieve two
goals. First, it aims to lower the technicality, stimulating the students to do
STE(A)M. Second, the technology is to be used in `softer' applications (e.g.,
environmental and health care), thereby aiming to attract a more diverse set of
student profiles. Students can easily build a wireless sensing device, with a
specific application in mind. The modular design of the platform and an
intuitive graphical configurator tool allows them to tailor the device's
functionality to their needs. The sensor's data is transmitted wirelessly with
LoRaWAN. The data can be viewed and analyzed on a dashboard, or the raw data
can be extracted for further processing, e.g., as part of the school's STE(A)M
curriculum. This work elaborates on the low-power and modular design
challenges, and how the platform is used in education.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 11:41:39 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Cappelle",
"Jona",
""
],
[
"Ottoy",
"Geoffrey",
""
],
[
"Goossens",
"Sarah",
""
],
[
"Deprez",
"Hanne",
""
],
[
"Van Mulders",
"Jarne",
""
],
[
"Leenders",
"Guus",
""
],
[
"Callebaut",
"Gilles",
""
]
] |
new_dataset
| 0.999435 |
2204.01564
|
Shakeel Ahmad Sheikh
|
Shakeel Ahmad Sheikh, Md Sahidullah, Fabrice Hirsch, Slim Ouni
|
Introducing ECAPA-TDNN and Wav2Vec2.0 Embeddings to Stuttering Detection
|
Submitted to Interspeech 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The adoption of advanced deep learning (DL) architecture in stuttering
detection (SD) tasks is challenging due to the limited size of the available
datasets. To this end, this work introduces the application of speech
embeddings extracted with pre-trained deep models trained on massive audio
datasets for different tasks. In particular, we explore audio representations
obtained using emphasized channel attention, propagation, and
aggregation-time-delay neural network (ECAPA-TDNN) and Wav2Vec2.0 model trained
on VoxCeleb and LibriSpeech datasets respectively. After extracting the
embeddings, we benchmark with several traditional classifiers, such as a
k-nearest neighbor, Gaussian naive Bayes, and neural network, for the
stuttering detection tasks. In comparison to the standard SD system trained
only on the limited SEP-28k dataset, we obtain a relative improvement of 16.74%
in terms of overall accuracy over the baseline. Finally, we have shown that
combining two embeddings and concatenating multiple layers of Wav2Vec2.0 can
further improve SD performance by up to 1% and 2.64% respectively.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 15:12:25 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Sheikh",
"Shakeel Ahmad",
""
],
[
"Sahidullah",
"Md",
""
],
[
"Hirsch",
"Fabrice",
""
],
[
"Ouni",
"Slim",
""
]
] |
new_dataset
| 0.990844 |
2204.01565
|
Xiaoyu Bie
|
Xiaoyu Bie, Wen Guo, Simon Leglaive, Lauren Girin, Francesc
Moreno-Noguer, Xavier Alameda-Pineda
|
HiT-DVAE: Human Motion Generation via Hierarchical Transformer Dynamical
VAE
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Studies on the automatic processing of 3D human pose data have flourished in
the recent past. In this paper, we are interested in the generation of
plausible and diverse future human poses following an observed 3D pose
sequence. Current methods address this problem by injecting random variables
from a single latent space into a deterministic motion prediction framework,
which precludes capturing the inherent multi-modality of human motion
generation. In
addition, to our knowledge, previous works rarely explore the use of attention
to select which frames should inform the generation process. To
overcome these limitations, we propose Hierarchical Transformer Dynamical
Variational Autoencoder, HiT-DVAE, which implements auto-regressive generation
with transformer-like attention mechanisms. HiT-DVAE simultaneously learns the
evolution of data and latent space distribution with time correlated
probabilistic dependencies, thus enabling the generative model to learn a more
complex and time-varying latent space as well as diverse and realistic human
motions. Furthermore, the auto-regressive generation brings more flexibility on
observation and prediction, i.e. one can have any length of observation and
predict arbitrary large sequences of poses with a single pre-trained model. We
evaluate the proposed method on HumanEva-I and Human3.6M with various
evaluation methods, and outperform the state-of-the-art methods on most of the
metrics.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 15:12:34 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Bie",
"Xiaoyu",
""
],
[
"Guo",
"Wen",
""
],
[
"Leglaive",
"Simon",
""
],
[
"Girin",
"Lauren",
""
],
[
"Moreno-Noguer",
"Francesc",
""
],
[
"Alameda-Pineda",
"Xavier",
""
]
] |
new_dataset
| 0.969601 |
2204.01590
|
Se-Hang Cheong
|
Se-Hang Cheong, Yain-Whar Si
|
CWBound: boundary node detection algorithm for complex non-convex mobile
ad hoc networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient message forwarding in mobile ad hoc network in disaster scenarios
is challenging because location information on the boundary and interior nodes
is often unavailable. Information related to boundary nodes can be used to
design efficient routing protocols as well as to prolong the battery power of
devices along the boundary of an ad hoc network. In this article, we developed
an algorithm, CWBound, which discovers boundary nodes in a complex non-convex
mobile ad hoc (CNCAH) networks. Experiments show that the CWBound algorithm is
at least three times faster than other state-of-the-art algorithms, and up to
400 times faster than classical force-directed algorithms. The experiments also
confirmed that the CWBound algorithm achieved the highest accuracy (above 97%
for 3 out of the 4 types of CNCAH networks) and sensitivity (90%) among the
algorithms evaluated.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 08:14:43 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Cheong",
"Se-Hang",
""
],
[
"Si",
"Yain-Whar",
""
]
] |
new_dataset
| 0.996901 |
2204.01611
|
Taewoon Kim
|
Taewoon Kim, Michael Cochez, Vincent Francois-Lavet, Mark Neerincx,
and Piek Vossen
|
A Machine With Human-Like Memory Systems
|
Submitted to Human-Centered Design of Symbiotic Hybrid Intelligence
2022 (https://ii.tudelft.nl/humancenteredsymbioticHI/)
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by the cognitive science theory, we explicitly model an agent with
both semantic and episodic memory systems, and show that it is better than
having just one of the two memory systems. In order to show this, we have
designed and released our own challenging environment, "the Room", compatible
with OpenAI Gym, where an agent has to properly learn how to encode, store, and
retrieve memories to maximize its rewards. The Room environment allows for a
hybrid intelligence setup where machines and humans can collaborate. We show
that two agents collaborating with each other results in better performance
than one agent acting alone. We have open-sourced our code and models at
https://github.com/tae898/explicit-memory.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 16:05:53 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Kim",
"Taewoon",
""
],
[
"Cochez",
"Michael",
""
],
[
"Francois-Lavet",
"Vincent",
""
],
[
"Neerincx",
"Mark",
""
],
[
"Vossen",
"Piek",
""
]
] |
new_dataset
| 0.981405 |
2204.01626
|
Benjamin Maschler
|
Benjamin Maschler, Angel Iliev, Thi Thu Huong Pham, Michael Weyrich
|
Stuttgart Open Relay Degradation Dataset (SOReDD)
|
Dataset description (8 pages, 4 figures, 8 tables)
| null |
10.18419/darus-2785
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-life industrial use cases for machine learning oftentimes involve
heterogeneous and dynamic assets, processes and data, resulting in a need to
continuously adapt the learning algorithm accordingly. Industrial transfer
learning offers to lower the effort of such adaptation by allowing the
utilization of previously acquired knowledge in solving new (variants of)
tasks. Being data-driven methods, the development of industrial transfer
learning algorithms naturally requires appropriate datasets for training.
However, open-source datasets suitable for transfer learning training, i.e.
spanning different assets, processes and data (variants), are rare. With the
Stuttgart Open Relay Degradation Dataset (SOReDD) we want to offer such a
dataset. It provides data on the degradation of different electromechanical
relays under different operating conditions, allowing for a large number of
different transfer scenarios. Although such relays themselves are usually
inexpensive standard components, their failure often leads to the failure of a
machine as a whole due to their role as the central power switching element of
a machine. The main cost factor in the event of a relay defect is therefore not
the relay itself, but the reduced machine availability. It is therefore
desirable to predict relay degradation as accurately as possible for specific
applications in order to be able to replace relays in good time and avoid
unplanned machine downtimes. Nevertheless, data-driven failure prediction for
electromechanical relays faces the challenge that relay degradation behavior is
highly dependent on the operating conditions, high-resolution measurement data
on relay degradation behavior is only collected in rare cases, and such data
can then only cover a fraction of the possible operating environments. Relays
are thus representative of many other central standard components in automation
technology.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 16:16:04 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Maschler",
"Benjamin",
""
],
[
"Iliev",
"Angel",
""
],
[
"Pham",
"Thi Thu Huong",
""
],
[
"Weyrich",
"Michael",
""
]
] |
new_dataset
| 0.999845 |
2204.01672
|
Li Liu
|
Jianrong Wang, Zixuan Wang, Xiaosheng Hu, Xuewei Li, Qiang Fang, Li
Liu
|
Residual-guided Personalized Speech Synthesis based on Face Image
|
ICASSP 2022
| null | null | null |
cs.SD cs.CV eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Previous works derive personalized speech features by training the model on a
large dataset composed of his/her audio sounds. It was reported that face
information has a strong link with the speech sound. Thus in this work, we
innovatively extract personalized speech features from human faces to
synthesize personalized speech using neural vocoder. A Face-based Residual
Personalized Speech Synthesis Model (FR-PSS) containing a speech encoder, a
speech synthesizer and a face encoder is designed for PSS. In this model, by
designing two speech priors, a residual-guided strategy is introduced to guide
the face feature to approach the true speech feature in the training. Moreover,
considering the error of feature's absolute values and their directional bias,
we formulate a novel tri-item loss function for face encoder. Experimental
results show that the speech synthesized by our model is comparable to the
personalized speech synthesized by training a large amount of audio data in
previous works.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 15:27:14 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Wang",
"Jianrong",
""
],
[
"Wang",
"Zixuan",
""
],
[
"Hu",
"Xiaosheng",
""
],
[
"Li",
"Xuewei",
""
],
[
"Fang",
"Qiang",
""
],
[
"Liu",
"Li",
""
]
] |
new_dataset
| 0.981844 |
2204.01695
|
Enric Corona
|
Enric Corona, Tomas Hodan, Minh Vo, Francesc Moreno-Noguer, Chris
Sweeney, Richard Newcombe, Lingni Ma
|
LISA: Learning Implicit Shape and Appearance of Hands
|
Published at CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a do-it-all neural model of human hands, named LISA. The
model can capture accurate hand shape and appearance, generalize to arbitrary
hand subjects, provide dense surface correspondences, be reconstructed from
images in the wild and easily animated. We train LISA by minimizing the shape
and appearance losses on a large set of multi-view RGB image sequences
annotated with coarse 3D poses of the hand skeleton. For a 3D point in the hand
local coordinate, our model predicts the color and the signed distance with
respect to each hand bone independently, and then combines the per-bone
predictions using predicted skinning weights. The shape, color and pose
representations are disentangled by design, allowing to estimate or animate
only selected parameters. We experimentally demonstrate that LISA can
accurately reconstruct a dynamic hand from monocular or multi-view sequences,
achieving a noticeably higher quality of reconstructed hand shapes compared to
baseline approaches. Project page:
https://www.iri.upc.edu/people/ecorona/lisa/.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 17:59:03 GMT"
}
] | 2022-04-05T00:00:00 |
[
[
"Corona",
"Enric",
""
],
[
"Hodan",
"Tomas",
""
],
[
"Vo",
"Minh",
""
],
[
"Moreno-Noguer",
"Francesc",
""
],
[
"Sweeney",
"Chris",
""
],
[
"Newcombe",
"Richard",
""
],
[
"Ma",
"Lingni",
""
]
] |
new_dataset
| 0.980414 |
1612.04350
|
Xuebin Ren Dr
|
Xuebin Ren, Chia-Mu Yu, Weiren Yu, Shusen Yang, Xinyu Yang, Julie A.
McCann, Philip S. Yu
|
LoPub: High-Dimensional Crowdsourced Data Publication with Local
Differential Privacy
| null | null |
10.1109/TIFS.2018.2812146
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-dimensional crowdsourced data collected from a large number of users
produces rich knowledge for our society. However, it also brings unprecedented
privacy threats to participants. Local privacy, a variant of differential
privacy, is proposed as a means to eliminate the privacy concern.
Unfortunately, achieving local privacy on high-dimensional crowdsourced data
raises great challenges on both efficiency and effectiveness. Here, based on EM
and Lasso regression, we propose efficient multi-dimensional joint distribution
estimation algorithms with local privacy. Then, we develop a Locally
privacy-preserving high-dimensional data Publication algorithm, LoPub, by
taking advantage of our distribution estimation techniques. In particular, both
correlations and joint distribution among multiple attributes can be identified
to reduce the dimension of crowdsourced data, thus achieving both efficiency
and effectiveness in locally private high-dimensional data publication.
Extensive experiments on real-world datasets demonstrate the efficiency
of our multivariate distribution estimation scheme and confirm the
effectiveness of our LoPub scheme in generating approximate datasets with local
privacy.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2016 20:34:13 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Aug 2017 14:12:10 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Ren",
"Xuebin",
""
],
[
"Yu",
"Chia-Mu",
""
],
[
"Yu",
"Weiren",
""
],
[
"Yang",
"Shusen",
""
],
[
"Yang",
"Xinyu",
""
],
[
"McCann",
"Julie A.",
""
],
[
"Yu",
"Philip S.",
""
]
] |
new_dataset
| 0.966824 |
2009.11501
|
Ehsan Aghaei
|
Ehsan Aghaei, Waseem Shadid, Ehab Al-Shaer
|
ThreatZoom: CVE2CWE using Hierarchical Neural Network
|
This is accepted paper in EAI SecureComm 2020, 16th EAI International
Conference on Security and Privacy in Communication Networks
|
EAI SecureComm 2020, 16th EAI International Conference on Security
and Privacy in Communication Networks
|
10.1007/978-3-030-63086-7_2
| null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Common Vulnerabilities and Exposures (CVE) represent standard means for
sharing publicly known information security vulnerabilities. One or more CVEs
are grouped into the Common Weakness Enumeration (CWE) classes for the purpose
of understanding the software or configuration flaws and potential impacts
enabled by these vulnerabilities and identifying means to detect or prevent
exploitation. As the CVE-to-CWE classification is mostly performed manually by
domain experts, thousands of critical and new CVEs remain unclassified, yet
they are unpatchable. This significantly limits the utility of CVEs and slows
down proactive threat mitigation. This paper presents the first automatic tool
to classify CVEs to CWEs. ThreatZoom uses a novel learning algorithm that
employs an adaptive hierarchical neural network which adjusts its weights based
on text analytic scores and classification errors. It automatically estimates
the CWE classes corresponding to a CVE instance using both statistical and
semantic features extracted from the description of a CVE. This tool is
rigorously tested by various datasets provided by MITRE and the National
Vulnerability Database (NVD). The accuracy of classifying CVE instances to
their correct CWE classes are 92% (fine-grain) and 94% (coarse-grain) for NVD
dataset, and 75% (fine-grain) and 90% (coarse-grain) for MITRE dataset, despite
the small corpus.
|
[
{
"version": "v1",
"created": "Thu, 24 Sep 2020 06:04:56 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Aghaei",
"Ehsan",
""
],
[
"Shadid",
"Waseem",
""
],
[
"Al-Shaer",
"Ehab",
""
]
] |
new_dataset
| 0.986788 |
2011.06075
|
Joseph Friedman
|
Wesley H. Brigner, Naimul Hassan, Xuan Hu, Christopher H. Bennett,
Felipe Garcia-Sanchez, Can Cui, Alvaro Velasquez, Matthew J. Marinella, Jean
Anne C. Incorvia, Joseph S. Friedman
|
Domain Wall Leaky Integrate-and-Fire Neurons with Shape-Based
Configurable Activation Functions
| null | null |
10.1109/TED.2022.3159508
| null |
cs.NE cond-mat.mes-hall cs.ET physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Complementary metal oxide semiconductor (CMOS) devices display volatile
characteristics, and are not well suited for analog applications such as
neuromorphic computing. Spintronic devices, on the other hand, exhibit both
non-volatile and analog features, which are well-suited to neuromorphic
computing. Consequently, these novel devices are at the forefront of
beyond-CMOS artificial intelligence applications. However, a large quantity of
these artificial neuromorphic devices still require the use of CMOS, which
decreases the efficiency of the system. To resolve this, we have previously
proposed a number of artificial neurons and synapses that do not require CMOS
for operation. Although these devices are a significant improvement over
previous renditions, their ability to enable neural network learning and
recognition is limited by their intrinsic activation functions. This work
proposes modifications to these spintronic neurons that enable configuration of
the activation functions through control of the shape of a magnetic domain wall
track. Linear and sigmoidal activation functions are demonstrated in this work,
which can be extended through a similar approach to enable a wide variety of
activation functions.
|
[
{
"version": "v1",
"created": "Wed, 11 Nov 2020 21:07:02 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Brigner",
"Wesley H.",
""
],
[
"Hassan",
"Naimul",
""
],
[
"Hu",
"Xuan",
""
],
[
"Bennett",
"Christopher H.",
""
],
[
"Garcia-Sanchez",
"Felipe",
""
],
[
"Cui",
"Can",
""
],
[
"Velasquez",
"Alvaro",
""
],
[
"Marinella",
"Matthew J.",
""
],
[
"Incorvia",
"Jean Anne C.",
""
],
[
"Friedman",
"Joseph S.",
""
]
] |
new_dataset
| 0.98332 |
2012.04631
|
Didac Sur\'is Coll-Vinent
|
D\'idac Sur\'is, Dave Epstein, Carl Vondrick
|
Globetrotter: Connecting Languages by Connecting Images
|
CVPR 2022 (Oral)
| null | null | null |
cs.CL cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine translation between many languages at once is highly challenging,
since training with ground truth requires supervision between all language
pairs, which is difficult to obtain. Our key insight is that, while languages
may vary drastically, the underlying visual appearance of the world remains
consistent. We introduce a method that uses visual observations to bridge the
gap between languages, rather than relying on parallel corpora or topological
properties of the representations. We train a model that aligns segments of
text from different languages if and only if the images associated with them
are similar and each image in turn is well-aligned with its textual
description. We train our model from scratch on a new dataset of text in over
fifty languages with accompanying images. Experiments show that our method
outperforms previous work on unsupervised word and sentence translation using
retrieval. Code, models and data are available on globetrotter.cs.columbia.edu.
|
[
{
"version": "v1",
"created": "Tue, 8 Dec 2020 18:50:40 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 22:37:07 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Mar 2022 20:19:44 GMT"
},
{
"version": "v4",
"created": "Fri, 1 Apr 2022 03:41:40 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Surís",
"Dídac",
""
],
[
"Epstein",
"Dave",
""
],
[
"Vondrick",
"Carl",
""
]
] |
new_dataset
| 0.998835 |
2102.11585
|
Gurkirt Singh
|
Gurkirt Singh, Stephen Akrigg, Manuele Di Maio, Valentina Fontana,
Reza Javanmard Alitappeh, Suman Saha, Kossar Jeddisaravi, Farzad Yousefi,
Jacob Culley, Tom Nicholson, Jordan Omokeowa, Salman Khan, Stanislao
Grazioso, Andrew Bradley, Giuseppe Di Gironimo, Fabio Cuzzolin
|
ROAD: The ROad event Awareness Dataset for Autonomous Driving
|
29 pages, accepted at TPAMI
|
TPAMI.2022.3150906
|
10.1109/TPAMI.2022.3150906
| null |
cs.CV cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Humans drive in a holistic fashion which entails, in particular,
understanding dynamic road events and their evolution. Injecting these
capabilities in autonomous vehicles can thus take situational awareness and
decision making closer to human-level performance. To this purpose, we
introduce the ROad event Awareness Dataset (ROAD) for Autonomous Driving, to
our knowledge the first of its kind. ROAD is designed to test an autonomous
vehicle's ability to detect road events, defined as triplets composed by an
active agent, the action(s) it performs and the corresponding scene locations.
ROAD comprises videos originally from the Oxford RobotCar Dataset annotated
with bounding boxes showing the location in the image plane of each road event.
We benchmark various detection tasks, proposing as a baseline a new incremental
algorithm for online road event awareness termed 3D-RetinaNet. We also report
the performance on the ROAD tasks of Slowfast and YOLOv5 detectors, as well as
that of the winners of the ICCV2021 ROAD challenge, which highlight the
challenges faced by situation awareness in autonomous driving. ROAD is designed
to allow scholars to investigate exciting tasks such as complex (road) activity
detection, future event anticipation and continual learning. The dataset is
available at https://github.com/gurkirt/road-dataset; the baseline can be found
at https://github.com/gurkirt/3D-RetinaNet.
|
[
{
"version": "v1",
"created": "Tue, 23 Feb 2021 09:48:56 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Feb 2021 10:07:31 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Apr 2022 12:19:51 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Singh",
"Gurkirt",
""
],
[
"Akrigg",
"Stephen",
""
],
[
"Di Maio",
"Manuele",
""
],
[
"Fontana",
"Valentina",
""
],
[
"Alitappeh",
"Reza Javanmard",
""
],
[
"Saha",
"Suman",
""
],
[
"Jeddisaravi",
"Kossar",
""
],
[
"Yousefi",
"Farzad",
""
],
[
"Culley",
"Jacob",
""
],
[
"Nicholson",
"Tom",
""
],
[
"Omokeowa",
"Jordan",
""
],
[
"Khan",
"Salman",
""
],
[
"Grazioso",
"Stanislao",
""
],
[
"Bradley",
"Andrew",
""
],
[
"Di Gironimo",
"Giuseppe",
""
],
[
"Cuzzolin",
"Fabio",
""
]
] |
new_dataset
| 0.99976 |
2106.15715
|
Hans Hanley
|
Hans W. A. Hanley, Deepak Kumar, Zakir Durumeric
|
No Calm in The Storm: Investigating QAnon Website Relationships
| null | null | null | null |
cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
QAnon is a far-right conspiracy theory whose followers largely organize
online. In this work, we use web crawls seeded from two of the largest QAnon
hotbeds on the Internet, Voat and 8kun, to build a QAnon-centered domain-based
hyperlink graph. We use this graph to identify, understand, and learn about the
set of websites that spread QAnon content online. Specifically, we curate the
largest list of QAnon-centered websites to date, from which we document the
types of QAnon sites, their hosting providers, as well as their popularity. We
further analyze QAnon websites' connection to mainstream news and
misinformation online, highlighting the outsized role misinformation websites
play in spreading the conspiracy. Finally, we leverage the observed
relationship between QAnon and misinformation sites to build a highly accurate
random forest classifier that distinguishes between misinformation and
authentic news sites. Our results demonstrate new and effective ways to study
the growing presence of conspiracy theories and misinformation on the Internet.
|
[
{
"version": "v1",
"created": "Tue, 29 Jun 2021 20:39:17 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Aug 2021 23:43:28 GMT"
},
{
"version": "v3",
"created": "Wed, 24 Nov 2021 15:10:54 GMT"
},
{
"version": "v4",
"created": "Sat, 12 Mar 2022 22:13:44 GMT"
},
{
"version": "v5",
"created": "Thu, 31 Mar 2022 18:09:44 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Hanley",
"Hans W. A.",
""
],
[
"Kumar",
"Deepak",
""
],
[
"Durumeric",
"Zakir",
""
]
] |
new_dataset
| 0.991988 |
2107.10388
|
Omar Peracha
|
Omar Peracha
|
JS Fake Chorales: a Synthetic Dataset of Polyphonic Music with Human
Annotation
| null |
Proceedings of the 2022 Sound and Music Computing Conference, SMC
2022
| null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
High-quality datasets for learning-based modelling of polyphonic symbolic
music remain less readily accessible at scale than in other domains, such as
language modelling or image classification. Deep learning algorithms show great
potential for enabling the widespread use of interactive music generation
technology in consumer applications, but the lack of large-scale datasets
remains a bottleneck for the development of algorithms that can consistently
generate high-quality outputs. We propose that models with narrow expertise can
serve as a source of high-quality scalable synthetic data, and open-source the
JS Fake Chorales, a dataset of 500 pieces generated by a new learning-based
algorithm, provided in MIDI form.
We take consecutive outputs from the algorithm and avoid cherry-picking in
order to validate the potential to further scale this dataset on-demand. We
conduct an online experiment for human evaluation, designed to be as fair to
the listener as possible, and find that respondents were on average only 7%
better than random guessing at distinguishing JS Fake Chorales from real
chorales composed by JS Bach. Furthermore, we make anonymised data collected
from experiments available along with the MIDI samples. Finally, we conduct
ablation studies to demonstrate the effectiveness of using the synthetic pieces
for research in polyphonic music modelling, and find that we can improve on
state-of-the-art validation set loss for the canonical JSB Chorales dataset,
using a known algorithm, by simply augmenting the training set with the JS Fake
Chorales.
|
[
{
"version": "v1",
"created": "Wed, 21 Jul 2021 23:07:22 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Aug 2021 00:00:25 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Oct 2021 07:58:46 GMT"
},
{
"version": "v4",
"created": "Thu, 31 Mar 2022 18:27:45 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Peracha",
"Omar",
""
]
] |
new_dataset
| 0.999527 |
2108.03004
|
Zhiqing Wei
|
Zhiqing Wei, Fengkai Zhang, Shuo Chang, Yangyang Liu, Huici Wu,
Zhiyong Feng
|
MmWave Radar and Vision Fusion for Object Detection in Autonomous
Driving: A Review
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With autonomous driving developing in a booming stage, accurate object
detection in complex scenarios attracts wide attention to ensure the safety of
autonomous driving. Millimeter wave (mmWave) radar and vision fusion is a
mainstream solution for accurate obstacle detection. This article presents a
detailed survey on mmWave radar and vision fusion based obstacle detection
methods. First, we introduce the tasks, evaluation criteria, and datasets of
object detection for autonomous driving. The process of mmWave radar and vision
fusion is then divided into three parts: sensor deployment, sensor calibration,
and sensor fusion, which are reviewed comprehensively. Specifically, we
classify the fusion methods into data level, decision level, and feature level
fusion methods. In addition, we introduce three-dimensional (3D) object
detection, the fusion of lidar and vision in autonomous driving and multimodal
information fusion, which are promising for the future. Finally, we summarize
this article.
|
[
{
"version": "v1",
"created": "Fri, 6 Aug 2021 08:38:42 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Aug 2021 02:48:24 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Apr 2022 07:44:14 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Wei",
"Zhiqing",
""
],
[
"Zhang",
"Fengkai",
""
],
[
"Chang",
"Shuo",
""
],
[
"Liu",
"Yangyang",
""
],
[
"Wu",
"Huici",
""
],
[
"Feng",
"Zhiyong",
""
]
] |
new_dataset
| 0.999799 |
2109.10575
|
Koshi Oishi
|
Koshi Oishi and Tomohiko Jimbo
|
Autonomous Cooperative Transportation System involving Multi-Aerial
Robots with Variable Attachment Mechanism
|
2021 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2021)(in press)
| null |
10.1109/IROS51168.2021.9636145
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cooperative transportation by multi-aerial robots has the potential to
support various payloads and improve fail-safety against dropping. Furthermore,
changing the attachment positions of robots according to payload characteristics
increases the stability of transportation. However, there are almost no
transportation systems capable of scaling to the payload weight and size and
changing the optimal attachment positions. To address this issue, we propose a
cooperative transportation system comprising autonomously executable software
and suitable hardware, covering the entire process, from pre-takeoff setting to
controlled flight. The proposed system decides the formation of the attachment
positions by prioritizing controllability based on the center of gravity
obtained from Bayesian estimations with robot pairs. We investigated the
cooperative transportation of an unknown payload larger than that of whole
carrier robots through numerical simulations. Furthermore, we performed
cooperative transportation of an unknown payload (with a weight of about 3.2 kg
and maximum length of 1.76 m) using eight robots. The proposed system was found
to be versatile with regard to handling unknown payloads with different shapes
and center-of-gravity positions.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 08:13:32 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Oishi",
"Koshi",
""
],
[
"Jimbo",
"Tomohiko",
""
]
] |
new_dataset
| 0.994154 |
2110.03290
|
Gaojian Wang
|
Gaojian Wang, Qian Jiang, Xin Jin, Wei Li and Xiaohui Cui
|
MC-LCR: Multi-modal contrastive classification by locally correlated
representations for effective face forgery detection
|
20 pages, 12 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the remarkable development of facial manipulation technologies is
accompanied by severe security concerns, face forgery detection has become a
recent research hotspot. Most existing detection methods train a binary
classifier under global supervision to judge real or fake. However, advanced
manipulations only perform small-scale tampering, posing challenges to
comprehensively capture subtle and local forgery artifacts, especially in high
compression settings and cross-dataset scenarios. To address such limitations,
we propose a novel framework named Multi-modal Contrastive Classification by
Locally Correlated Representations (MC-LCR), for effective face forgery
detection. Instead of specific appearance features, our MC-LCR aims to amplify
implicit local discrepancies between authentic and forged faces from both
spatial and frequency domains. Specifically, we design the shallow style
representation block that measures the pairwise correlation of shallow feature
maps, which encodes local style information to extract more discriminative
features in the spatial domain. Moreover, we make a key observation that subtle
forgery artifacts can be further exposed in the patch-wise phase and amplitude
spectrum and exhibit different clues. According to the complementarity of
amplitude and phase information, we develop a patch-wise amplitude and phase
dual attention module to capture locally correlated inconsistencies with each
other in the frequency domain. Besides the above two modules, we further
introduce the collaboration of supervised contrastive loss with cross-entropy
loss. It helps the network learn more discriminative and generalized
representations. Through extensive experiments and comprehensive studies, we
achieve state-of-the-art performance and demonstrate the robustness and
generalization of our method.
|
[
{
"version": "v1",
"created": "Thu, 7 Oct 2021 09:24:12 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 07:03:46 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Wang",
"Gaojian",
""
],
[
"Jiang",
"Qian",
""
],
[
"Jin",
"Xin",
""
],
[
"Li",
"Wei",
""
],
[
"Cui",
"Xiaohui",
""
]
] |
new_dataset
| 0.999067 |
2110.15721
|
Christian Wallraven
|
Daehyun Cho, Christian Wallraven
|
Paperswithtopic: Topic Identification from Paper Title Only
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The deep learning field is growing rapidly as witnessed by the exponential
growth of papers submitted to journals, conferences, and pre-print servers. To
cope with the sheer number of papers, several text mining tools from natural
language processing (NLP) have been proposed that enable researchers to keep
track of recent findings. In this context, our paper makes two main
contributions: first, we collected and annotated a dataset of papers paired by
title and sub-field from the field of artificial intelligence (AI), and,
second, we present results on how to predict a paper's AI sub-field from a
given paper title only. Importantly, for the latter, short-text classification
task we compare several algorithms from conventional machine learning all the
way up to recent, larger transformer architectures. Finally, for the
transformer models, we also present gradient-based attention visualizations to
further explain the model's classification process. All code can be found at
\url{https://github.com/1pha/paperswithtopic}
|
[
{
"version": "v1",
"created": "Sat, 9 Oct 2021 06:32:09 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 03:57:16 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Cho",
"Daehyun",
""
],
[
"Wallraven",
"Christian",
""
]
] |
new_dataset
| 0.999721 |
2202.10753
|
Carlos Granero Belinchon
|
Binh Minh Nguyen (IMT Atlantique), Ganglin Tian (IMT Atlantique),
Minh-Triet Vo (IMT Atlantique), Aur\'elie Michel, Thomas Corpetti (CNRS,
LETG), Carlos Granero-Belinchon (Lab-STICC\_OSE, IMT Atlantique - MEE)
|
Convolutional Neural Network Modelling for MODIS Land Surface
Temperature Super-Resolution
| null | null | null | null |
cs.CV cs.AI cs.LG eess.IV physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Nowadays, thermal infrared satellite remote sensors enable the extraction of very
interesting information at large scale, in particular Land Surface Temperature
(LST). However, such data are limited in spatial and/or temporal resolution,
which prevents analysis at fine scales. For example, the MODIS satellite
provides daily acquisitions at 1 km spatial resolution, which is not
sufficient to deal with highly heterogeneous environments such as agricultural
parcels. Therefore, image super-resolution is a crucial task to better exploit
MODIS LSTs. This issue is tackled in this paper. We introduce a deep
learning-based algorithm, named Multi-residual U-Net, for super-resolution of
MODIS LST single-images. Our proposed network is a modified version of U-Net
architecture, which aims at super-resolving the input LST image from 1 km to
250 m per pixel. The results show that our Multi-residual U-Net outperforms
other state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 09:12:40 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 07:50:44 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Nguyen",
"Binh Minh",
"",
"IMT Atlantique"
],
[
"Tian",
"Ganglin",
"",
"IMT Atlantique"
],
[
"Vo",
"Minh-Triet",
"",
"IMT Atlantique"
],
[
"Michel",
"Aurélie",
"",
"CNRS,\n LETG"
],
[
"Corpetti",
"Thomas",
"",
"CNRS,\n LETG"
],
[
"Granero-Belinchon",
"Carlos",
"",
"Lab-STICC\\_OSE, IMT Atlantique - MEE"
]
] |
new_dataset
| 0.997314 |
2202.11367
|
Tong Zhang
|
Tong Zhang
|
The DoF Region of Two-User MIMO Broadcast Channel with Delayed
Imperfect-Quality CSIT
|
Accepted by Electronics Letters
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  The channel state information at the transmitter (CSIT) plays an important
role in the performance of wireless networks. The CSIT can be delayed and of
imperfect quality, since the feedback link has a delay and the channel state
information (CSI) feedback is subject to distortion. In this paper, we thus characterize
the degrees-of-freedom (DoF) region of the two-user multiple-input
multiple-output (MIMO) broadcast channel with delayed imperfect-quality CSIT,
where the antenna configurations can be arbitrary. The converse proof of DoF
region is based on the enhancement of physically degraded channel. The
achievability proof of DoF region is through a novel transmission scheme
design, where the duration of each phase and the amount of transmitted symbols
are configured based on the imperfect quality of the delayed CSIT. As a result, we
show that the DoF region with delayed imperfect-quality CSIT is located between
the DoF region with no CSIT and the DoF region with delayed CSIT.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 09:19:43 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 06:03:17 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Zhang",
"Tong",
""
]
] |
new_dataset
| 0.997058 |
2203.15349
|
Yaman Kumar Singla
|
Debanjan Mahata, Navneet Agarwal, Dibya Gautam, Amardeep Kumar,
Swapnil Parekh, Yaman Kumar Singla, Anish Acharya, Rajiv Ratn Shah
|
LDKP: A Dataset for Identifying Keyphrases from Long Scientific
Documents
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying keyphrases (KPs) from text documents is a fundamental task in
natural language processing and information retrieval. The vast majority of the
benchmark datasets for this task are from the scientific domain containing only
the document title and abstract information. This limits keyphrase extraction
(KPE) and keyphrase generation (KPG) algorithms to identify keyphrases from
human-written summaries that are often very short (approx 8 sentences). This
presents three challenges for real-world applications: human-written summaries
are unavailable for most documents, the documents are almost always long, and a
high percentage of KPs are directly found beyond the limited context of title
and abstract. Therefore, we release two extensive corpora mapping KPs of ~1.3M
and ~100K scientific articles with their fully extracted text and additional
metadata including publication venue, year, author, field of study, and
citations for facilitating research on this real-world problem.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 08:44:57 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 08:24:39 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Mahata",
"Debanjan",
""
],
[
"Agarwal",
"Navneet",
""
],
[
"Gautam",
"Dibya",
""
],
[
"Kumar",
"Amardeep",
""
],
[
"Parekh",
"Swapnil",
""
],
[
"Singla",
"Yaman Kumar",
""
],
[
"Acharya",
"Anish",
""
],
[
"Shah",
"Rajiv Ratn",
""
]
] |
new_dataset
| 0.999853 |
2204.00080
|
Varun Aggarwal
|
Varun Aggarwal, Aanis Ahmad, Aaron Etienne, Dharmendra Saraswat
|
4Weed Dataset: Annotated Imagery Weeds Dataset
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Weeds are a major threat to crops and are responsible for reducing crop yield
worldwide. To mitigate their negative effect, it is advantageous to accurately
identify them early in the season to prevent their spread throughout the field.
Traditionally, farmers rely on manually scouting fields and applying herbicides
for different weeds. However, it is easy to confuse crops with weeds
during the early growth stages. Recently, deep learning-based weed
identification has become popular as deep learning relies on convolutional
neural networks that are capable of learning important distinguishable features
between weeds and crops. However, training robust deep learning models requires
access to large imagery datasets. Therefore, an early-season weeds dataset was
acquired under field conditions. The dataset consists of 159 Cocklebur images,
139 Foxtail images, 170 Redroot Pigweed images and 150 Giant Ragweed images
corresponding to four common weed species found in corn and soybean production
systems. Bounding box annotations were created for each image to prepare the
dataset for training both image classification and object detection deep
learning networks capable of accurately locating and identifying weeds within
corn and soybean fields. (https://osf.io/w9v3j/)
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 03:10:54 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Aggarwal",
"Varun",
""
],
[
"Ahmad",
"Aanis",
""
],
[
"Etienne",
"Aaron",
""
],
[
"Saraswat",
"Dharmendra",
""
]
] |
new_dataset
| 0.998766 |
2204.00121
|
Alejandro Linares-Barranco A. Linares-Barranco
|
Enrique Pinero-Fuentes, Salvador Canas-Moreno, Antonio Rios-Navarro,
Daniel Cascado-Caballero, Angel Jimenez-Fernandez, Alejandro Linares-Barranco
|
An MPSoC-based on-line Edge Infrastructure for Embedded Neuromorphic
Robotic Controllers
|
4 pages plus references, 5 figures, submitted to ISCAS2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, an all-in-one neuromorphic controller system with reduced
latency and power consumption for a robotic arm is presented. Biological muscle
movement consists of stretching and shrinking fibres via spike-commanded
signals that come from motor neurons, which in turn are connected to a central
pattern generator neural structure. In addition, biological systems are able to
respond to diverse stimuli rather fast and efficiently, and this is based on
the way information is coded within neural processes. As opposed to
human-created encoding systems, neural ones use neurons and spikes to process
the information and make weighted decisions based on a continuous learning
process. The Event-Driven Scorbot platform (ED-Scorbot) consists of a 6 Degrees
of Freedom (DoF) robotic arm whose controller implements a Spiking
Proportional-Integrative-Derivative algorithm, mimicking in this way the
previously commented biological systems. In this paper, we present an
infrastructure upgrade to the ED-Scorbot platform, replacing the controller
hardware, which was comprised of two Spartan Field Programmable Gate Arrays
(FPGAs) and a barebone computer, with an edge device, the Xilinx Zynq-7000 SoC
(System on Chip) which reduces the response time, power consumption and overall
complexity.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 22:11:46 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Pinero-Fuentes",
"Enrique",
""
],
[
"Canas-Moreno",
"Salvador",
""
],
[
"Rios-Navarro",
"Antonio",
""
],
[
"Cascado-Caballero",
"Daniel",
""
],
[
"Jimenez-Fernandez",
"Angel",
""
],
[
"Linares-Barranco",
"Alejandro",
""
]
] |
new_dataset
| 0.997254 |
2204.00155
|
Isabella Ferreira
|
Isabella Ferreira, Bram Adams, Jinghui Cheng
|
How heated is it? Understanding GitHub locked issues
| null |
In 19th International Conference on Mining Software Repositories
(MSR'22), May 23-24, 2022, Pittsburgh, PA, USA
|
10.1145/3524842.3527957
| null |
cs.SE cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although issues of open source software are created to discuss and solve
technical problems, conversations can become heated, with discussants getting
angry and/or agitated for a variety of reasons, such as poor suggestions or
violation of community conventions. To prevent and mitigate discussions from
getting heated, tools like GitHub have introduced the ability to lock issue
discussions that violate the code of conduct or other community guidelines.
Despite some early research on locked issues, there is a lack of understanding
of how communities use this feature and of potential threats to validity for
researchers relying on a dataset of locked issues as an oracle for heated
discussions. To address this gap, we (i) quantitatively analyzed 79 GitHub
projects that have at least one issue locked as too heated, and (ii)
qualitatively analyzed all issues locked as too heated of the 79 projects, a
total of 205 issues comprising 5,511 comments. We found that projects have
different behaviors when locking issues: while 54 locked less than 10% of their
closed issues, 14 projects locked more than 90% of their closed issues.
Additionally, locked issues tend to have a similar number of comments,
participants, and emoji reactions to non-locked issues. For the 205 issues
locked as too heated, we found that one-third do not contain any uncivil
discourse, and only 8.82% of the analyzed comments are actually uncivil.
Finally, we found that the locking justifications provided by maintainers do
not always match the label used to lock the issue. Based on our results, we
identified three pitfalls to avoid when using the GitHub locked issues data and
we provide recommendations for researchers and practitioners.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 01:39:19 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Ferreira",
"Isabella",
""
],
[
"Adams",
"Bram",
""
],
[
"Cheng",
"Jinghui",
""
]
] |
new_dataset
| 0.957064 |
2204.00207
|
Jiawei Xu
|
Brian Zhu, Jiawei Xu, Andrew Charway, David Salda\~na
|
PogoDrone: Design, Model, and Control of a Jumping Quadrotor
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a design, model, and control for a novel jumping-flying robot that
is called PogoDrone. The robot is composed of a quadrotor with a passive
mechanism for jumping. The robot can continuously jump in place or fly like a
normal quadrotor. Jumping in place allows the robot to quickly move and operate
very close to the ground. For instance, in agricultural applications, the
jumping mechanism allows the robot to take samples of soil. We propose a hybrid
controller that switches from attitude to position control to allow the robot
to fall horizontally and recover to the original position. We compare the
jumping mode with the hovering mode to analyze the energy consumption. In
simulations, we evaluate the effect of different factors on energy consumption.
In real experiments, we show that our robot can repeatedly impact the ground,
jump, and fly in a physical environment.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 04:59:55 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Zhu",
"Brian",
""
],
[
"Xu",
"Jiawei",
""
],
[
"Charway",
"Andrew",
""
],
[
"Saldaña",
"David",
""
]
] |
new_dataset
| 0.99928 |
2204.00256
|
Stefano Zacchiroli
|
Stefano Zacchiroli (IP Paris, LTCI)
|
A Large-scale Dataset of (Open Source) License Text Variants
|
The 2022 Mining Software Repositories Conference, May 2022,
Pittsburgh, Pennsylvania, United States
| null |
10.1145/3524842.3528491
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a large-scale dataset of the complete texts of free/open source
software (FOSS) license variants. To assemble it we have collected from the
Software Heritage archive-the largest publicly available archive of FOSS source
code with accompanying development history-all versions of files whose names
are commonly used to convey licensing terms to software users and
developers.The dataset consists of 6.5 million unique license files that can be
used to conduct empirical studies on open source licensing, training of
automated license classifiers, natural language processing (NLP) analyses of
legal texts, as well as historical and phylogenetic studies on FOSS licensing.
Additional metadata about shipped license files are also provided, making the
dataset ready to use in various contexts; they include: file length measures,
detected MIME type, detected SPDX license (using ScanCode), example origin
(e.g., GitHub repository), oldest public commit in which the license
appeared. The dataset is released as open data as an archive file containing all
deduplicated license files, plus several portable CSV files for metadata,
referencing files via cryptographic checksums.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 07:31:02 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Zacchiroli",
"Stefano",
"",
"IP Paris, LTCI"
]
] |
new_dataset
| 0.999855 |
2204.00301
|
Florian Euchner
|
Florian Euchner and Christian Senger
|
PERIDOT Codes: Replacing Identifiers, Sequence Numbers and Nonces with
Permutations
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Identifiers and sequence numbers make up a large part of the protocol
overhead in certain low-power wide-area networks. The requirement for
cryptographic nonces in authentication and encryption schemes often demands
excessively long sequence numbers, which leads to an increase in energy
consumption per transmitted packet. In this paper, the novel PERIDOT coding
scheme is proposed. It replaces identifiers and sequence numbers with a code,
based on which receivers can identify transmitters with high confidence.
PERIDOT is based on specially constructed integer permutations assigned to
transmitters. An upper bound on the performance of PERIDOT codes is provided
and methods for constructing particularly suitable permutations are presented.
In practice, PERIDOT can significantly increase intervals between nonce reuses
and, at the same time, reduce power consumption.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 09:18:38 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Euchner",
"Florian",
""
],
[
"Senger",
"Christian",
""
]
] |
new_dataset
| 0.998576 |
2204.00333
|
Francesco Moramarco
|
Alex Papadopoulos Korfiatis, Francesco Moramarco, Radmila Sarac,
Aleksandar Savkov
|
PriMock57: A Dataset Of Primary Care Mock Consultations
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in Automatic Speech Recognition (ASR) have made it possible
to reliably produce automatic transcripts of clinician-patient conversations.
However, access to clinical datasets is heavily restricted due to patient
privacy, thus slowing down normal research practices. We detail the development
of a public access, high quality dataset comprising 57 mocked primary care
consultations, including audio recordings, their manual utterance-level
transcriptions, and the associated consultation notes. Our work illustrates how
the dataset can be used as a benchmark for conversational medical ASR as well
as consultation note generation from transcripts.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 10:18:28 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Korfiatis",
"Alex Papadopoulos",
""
],
[
"Moramarco",
"Francesco",
""
],
[
"Sarac",
"Radmila",
""
],
[
"Savkov",
"Aleksandar",
""
]
] |
new_dataset
| 0.99978 |
2204.00411
|
Jens Schreiber
|
Stephan Vogt and Jens Schreiber and Bernhard Sick
|
Synthetic Photovoltaic and Wind Power Forecasting Data
|
12 pages, 8 figures, and 2 tables
| null | null | null |
cs.LG cs.AI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Photovoltaic and wind power forecasts in power systems with a high share of
renewable energy are essential in several applications. These include stable
grid operation, profitable power trading, and forward-looking system planning.
However, there is a lack of publicly available datasets for research on machine
learning based prediction methods. This paper provides an openly accessible
time series dataset with realistic synthetic power data. Other publicly and
non-publicly available datasets often lack precise geographic coordinates,
timestamps, or static power plant information, e.g., to protect business
secrets. On the opposite, this dataset provides these. The dataset comprises
120 photovoltaic and 273 wind power plants at distinct sites all over Germany
from 500 days in hourly resolution. This large number of available sites allows
forecasting experiments to include spatial correlations and run experiments in
transfer and multi-task learning. It includes site-specific, power
source-dependent, non-synthetic input features from the ICON-EU weather model.
A simulation of virtual power plants with physical models and actual
meteorological measurements provides realistic synthetic power measurement time
series. These time series correspond to the power output of virtual power
plants at the location of the respective weather measurements. Since the
synthetic time series are based exclusively on weather measurements, possible
errors in the weather forecast are comparable to those in actual power data. In
addition to the data description, we evaluate the quality of
weather-prediction-based power forecasts by comparing simplified physical
models and a machine learning model. This experiment shows that forecast
errors on the synthetic power data are comparable to real-world historical
power measurements.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 13:20:05 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Vogt",
"Stephan",
""
],
[
"Schreiber",
"Jens",
""
],
[
"Sick",
"Bernhard",
""
]
] |
new_dataset
| 0.999739 |
2204.00423
|
Mehdi Miah
|
Duc Minh Dimitri Nguyen, Mehdi Miah, Guillaume-Alexandre Bilodeau,
Wassim Bouachir
|
Transformers for 1D Signals in Parkinson's Disease Detection from Gait
|
International Conference on Pattern Recognition (ICPR 2022)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper focuses on the detection of Parkinson's disease based on the
analysis of a patient's gait. The growing popularity and success of Transformer
networks in natural language processing and image recognition motivated us to
develop a novel method for this problem based on an automatic features
extraction via Transformers. The use of Transformers on 1D signals is not really
widespread yet, but we show in this paper that they are effective in extracting
relevant features from 1D signals. As Transformers require a lot of memory, we
decoupled temporal and spatial information to make the model smaller. Our
architecture used temporal Transformers, dimension reduction layers to reduce
the dimension of the data, a spatial Transformer, two fully connected layers
and an output layer for the final prediction. Our model outperforms the current
state-of-the-art algorithm with 95.2\% accuracy in distinguishing a
Parkinsonian patient from a healthy one on the Physionet dataset. A key
learning from this work is that Transformers allow for greater stability in
results. The source code and pre-trained models are released in
https://github.com/DucMinhDimitriNguyen/Transformers-for-1D-signals-in-Parkinson-s-disease-detection-from-gait.git
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 13:30:52 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Nguyen",
"Duc Minh Dimitri",
""
],
[
"Miah",
"Mehdi",
""
],
[
"Bilodeau",
"Guillaume-Alexandre",
""
],
[
"Bouachir",
"Wassim",
""
]
] |
new_dataset
| 0.997968 |
2204.00448
|
Manos Plitsis
|
Gerasimos Chatzoudis, Manos Plitsis, Spyridoula Stamouli,
Athanasia-Lida Dimou, Athanasios Katsamanis, Vassilis Katsouros
|
Zero-Shot Cross-lingual Aphasia Detection using Automatic Speech
Recognition
|
5 pages, 1 figure, submitted to INTERSPEECH 2022
| null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Aphasia is a common speech and language disorder, typically caused by a brain
injury or a stroke, that affects millions of people worldwide. Detecting and
assessing Aphasia in patients is a difficult, time-consuming process, and
numerous attempts to automate it have been made, the most successful using
machine learning models trained on aphasic speech data. Like in many medical
applications, aphasic speech data is scarce and the problem is exacerbated in
so-called "low resource" languages, which are, for this task, most languages
excluding English. We attempt to leverage available data in English and achieve
zero-shot aphasia detection in low-resource languages such as Greek and French,
by using language-agnostic linguistic features. Current cross-lingual aphasia
detection approaches rely on manually extracted transcripts. We propose an
end-to-end pipeline using pre-trained Automatic Speech Recognition (ASR) models
that share cross-lingual speech representations and are fine-tuned for our
desired low-resource languages. To further boost our ASR model's performance,
we also combine it with a language model. We show that our ASR-based end-to-end
pipeline offers comparable results to previous setups using human-annotated
transcripts.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 14:05:02 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Chatzoudis",
"Gerasimos",
""
],
[
"Plitsis",
"Manos",
""
],
[
"Stamouli",
"Spyridoula",
""
],
[
"Dimou",
"Athanasia-Lida",
""
],
[
"Katsamanis",
"Athanasios",
""
],
[
"Katsouros",
"Vassilis",
""
]
] |
new_dataset
| 0.998952 |
2204.00488
|
Bruno Jos\'e Olivieri de Souza
|
Breno Perricone, Thiago Lamenza, Marcelo Paulon, Bruno Jose Olivieri
de Souza, Markus Endler
|
GrADyS-GS -- A ground station for managing field experiments with
Autonomous Vehicles and Wireless Sensor Networks
| null | null | null | null |
cs.RO cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In many kinds of research, collecting data is tailored to individual
research. It is common to use dedicated, non-reusable software to collect
data. GrADyS Ground Station framework (GrADyS-GS) aims to collect data in a
reusable manner with dynamic background tools. This technical report describes
GrADyS-GS, a ground station software designed to connect with various
technologies to control, monitor, and store results of Mobile Internet of
Things field experiments with Autonomous Vehicles (UAV) and Sensor Networks
(WSN). In the GrADyS project GrADyS-GS is used with ESP32-based IoT devices on
the ground and Unmanned Aerial Vehicles (quad-copters) in the air. The
GrADyS-GS tool was created to support the design, development and testing of
simulated movement coordination algorithms for the AVs, testing of customized
Bluetooth Mesh variations, and overall communication, coordination, and
context-awareness field experiments planned in the GrADyS project. Nevertheless,
GrADyS-GS is also a general purpose tool, as it relies on a dynamic and
easy-to-use Python and JavaScript framework that allows easy customization and
(re)utilization in other projects and field experiments with other kinds of
IoT devices, other WSN types and protocols, and other kinds of mobile connected
flying or ground vehicles. So far, GrADyS-GS has been used to start UAV flights
and collect their data in a centralized manner within the GrADyS project.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 14:48:02 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Perricone",
"Breno",
""
],
[
"Lamenza",
"Thiago",
""
],
[
"Paulon",
"Marcelo",
""
],
[
"de Souza",
"Bruno Jose Olivieri",
""
],
[
"Endler",
"Markus",
""
]
] |
new_dataset
| 0.998513 |
2204.00536
|
Chuhan Wu
|
Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
|
Semi-FairVAE: Semi-supervised Fair Representation Learning with
Adversarial Variational Autoencoder
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Adversarial learning is a widely used technique in fair representation
learning to remove the biases on sensitive attributes from data
representations. It usually requires to incorporate the sensitive attribute
labels as prediction targets. However, in many scenarios the sensitive
attribute labels of many samples can be unknown, and it is difficult to train a
strong discriminator based on the scarce data with observed attribute labels,
which may lead to unfair representations. In this paper, we propose a
semi-supervised fair representation learning approach based on adversarial
variational autoencoder, which can reduce the dependency of adversarial fair
models on data with labeled sensitive attributes. More specifically, we use a
bias-aware model to capture inherent bias information on sensitive attributes by
accurately predicting sensitive attributes from input data, and we use a
bias-free model to learn debiased fair representations by using adversarial
learning to remove bias information from them. The hidden representations
learned by the two models are regularized to be orthogonal. In addition, the
soft labels predicted by the two models are further integrated into a
semi-supervised variational autoencoder to reconstruct the input data, and we
apply an additional entropy regularization to encourage the attribute labels
inferred from the bias-free model to be high-entropy. In this way, the
bias-aware model can better capture attribute information while the bias-free
model is less discriminative on sensitive attributes if the input data is well
reconstructed. Extensive experiments on two datasets for different tasks
validate that our approach can achieve good representation learning fairness
under limited data with sensitive attribute labels.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 15:57:47 GMT"
}
] | 2022-04-04T00:00:00 |
[
[
"Wu",
"Chuhan",
""
],
[
"Wu",
"Fangzhao",
""
],
[
"Qi",
"Tao",
""
],
[
"Huang",
"Yongfeng",
""
]
] |
new_dataset
| 0.987893 |
2106.02689
|
Chun-Fu (Richard) Chen
|
Chun-Fu Chen, Rameswar Panda, Quanfu Fan
|
RegionViT: Regional-to-Local Attention for Vision Transformers
|
add more results and link to codes and models.
https://github.com/ibm/regionvit, formatted with ICLR style
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformer (ViT) has recently shown its strong capability in
achieving comparable results to convolutional neural networks (CNNs) on image
classification. However, vanilla ViT simply inherits the same architecture from
the natural language processing directly, which is often not optimized for
vision applications. Motivated by this, in this paper, we propose a new
architecture that adopts a pyramid structure and employs a novel
regional-to-local attention rather than global self-attention in vision
transformers. More specifically, our model first generates regional tokens and
local tokens from an image with different patch sizes, where each regional
token is associated with a set of local tokens based on the spatial location.
The regional-to-local attention includes two steps: first, the regional
self-attention extracts global information among all regional tokens, and then
the local self-attention exchanges the information among one regional token and
the associated local tokens via self-attention. Therefore, even though local
self-attention confines its scope to a local region, it can still receive
global information. Extensive experiments on four vision tasks, including image
classification, object and keypoint detection, semantics segmentation and
action recognition, show that our approach outperforms or is on par with
state-of-the-art ViT variants including many concurrent works. Our source codes
and models are available at https://github.com/ibm/regionvit.
|
[
{
"version": "v1",
"created": "Fri, 4 Jun 2021 19:57:11 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Dec 2021 22:16:46 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Mar 2022 03:20:15 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Chen",
"Chun-Fu",
""
],
[
"Panda",
"Rameswar",
""
],
[
"Fan",
"Quanfu",
""
]
] |
new_dataset
| 0.999307 |
2106.06278
|
Yann Hamdaoui
|
Teodoro Freund, Yann Hamdaoui and Arnaud Spiwack
|
Union and intersection contracts are hard, actually
| null |
DLS 2021: Proceedings of the 17th ACM SIGPLAN International
Symposium on Dynamic Languages
|
10.1145/3486602.3486767
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Union and intersection types are a staple of gradually typed languages such as
TypeScript. While it's long been recognized that union and intersection types
are difficult to verify statically, it may appear at first that the dynamic
part of gradual typing is actually pretty simple.
It turns out, however, that in the presence of higher-order contracts, union and
intersection are deceptively difficult. The literature on higher-order
contracts with union and intersection, while keenly aware of the fact, doesn't
really explain why. We point out and illustrate the problems and trade-offs
inherent to union and intersection contracts, via examples and a survey of the
literature.
|
[
{
"version": "v1",
"created": "Fri, 11 Jun 2021 09:48:19 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 15:55:18 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Freund",
"Teodoro",
""
],
[
"Hamdaoui",
"Yann",
""
],
[
"Spiwack",
"Arnaud",
""
]
] |
new_dataset
| 0.995901 |
2110.09977
|
Hongda Wu
|
Shufeng Li, Mingyu Cai, Libiao Jin, Yao Sun, Hongda Wu, Ping Wang
|
An Ultra-Reliable Low-Latency Non-Binary Polar Coded SCMA Scheme
| null | null | null | null |
cs.IT cs.NI math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The joint transmission scheme of polar codes and sparse code multiple access
(SCMA) has been regarded as a promising technology for future wireless
communication systems. However, most of the existing polar-coded SCMA (PC-SCMA)
systems suffer from high latency caused by the feedback iteration and list
decoding. In addition, the error performance of PC-SCMA systems is
unsatisfactory for ultra-reliable transmission. Inspired by the compelling
benefits of non-binary polar codes, in this paper, we design a non-binary
polar-coded SCMA (NB-PC-SCMA) system with a free order matching strategy to
address the issues of delay and reliability. Specifically, we first formulate a
joint factor graph for NB-PC-SCMA and propose a non-binary successive
cancellation list (NB-SCL) and damping based joint iterative detection and
decoding (NSD-JIDD) multiuser receiver to improve the BER and latency
performance. Then, a lazy-search based NB-SCL (L-NB-SCL) decoding is proposed
to reduce the computational complexity by simplifying the path search pattern
of the list decoder. After that, we modify the update of user nodes for SCMA
detection to improve the convergence error and finally propose the improved
NSD-JIDD (ISD-JIDD) algorithm, which can avoid redundant operations by
exploiting L-NB-SCL decoding. Simulation results show that the proposed
NB-PC-SCMA system achieves better bit error rate (BER) performance and
considerable latency gain when compared to its counterparts. In particular, the
proposed ISD-JIDD can achieve similar BER performance of NSD-JIDD with less
complexity.
|
[
{
"version": "v1",
"created": "Tue, 19 Oct 2021 13:51:18 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 14:27:36 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Li",
"Shufeng",
""
],
[
"Cai",
"Mingyu",
""
],
[
"Jin",
"Libiao",
""
],
[
"Sun",
"Yao",
""
],
[
"Wu",
"Hongda",
""
],
[
"Wang",
"Ping",
""
]
] |
new_dataset
| 0.999521 |
2112.02753
|
Xingyu Chen
|
Xingyu Chen, Yufeng Liu, Yajiao Dong, Xiong Zhang, Chongyang Ma,
Yanmin Xiong, Yuan Zhang, and Xiaoyan Guo
|
MobRecon: Mobile-Friendly Hand Mesh Reconstruction from Monocular Image
| null |
CVPR2022
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this work, we propose a framework for single-view hand mesh
reconstruction, which can simultaneously achieve high reconstruction accuracy,
fast inference speed, and temporal coherence. Specifically, for 2D encoding, we
propose lightweight yet effective stacked structures. Regarding 3D decoding, we
provide an efficient graph operator, namely depth-separable spiral convolution.
Moreover, we present a novel feature lifting module for bridging the gap
between 2D and 3D representations. This module begins with a map-based position
regression (MapReg) block to integrate the merits of both heatmap encoding and
position regression paradigms for improved 2D accuracy and temporal coherence.
Furthermore, MapReg is followed by pose pooling and pose-to-vertex lifting
approaches, which transform 2D pose encodings to semantic features of 3D
vertices. Overall, our hand reconstruction framework, called MobRecon,
comprises affordable computational costs and miniature model size, which
reaches a high inference speed of 83 FPS on Apple A14 CPU. Extensive experiments
on popular datasets such as FreiHAND, RHD, and HO3Dv2 demonstrate that our
MobRecon achieves superior performance on reconstruction accuracy and temporal
coherence. Our code is publicly available at
https://github.com/SeanChenxy/HandMesh.
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 03:01:24 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 03:30:50 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Chen",
"Xingyu",
""
],
[
"Liu",
"Yufeng",
""
],
[
"Dong",
"Yajiao",
""
],
[
"Zhang",
"Xiong",
""
],
[
"Ma",
"Chongyang",
""
],
[
"Xiong",
"Yanmin",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Guo",
"Xiaoyan",
""
]
] |
new_dataset
| 0.980625 |
2203.01515
|
Zheng Yuan
|
Zheng Yuan, Chuanqi Tan, Songfang Huang
|
Code Synonyms Do Matter: Multiple Synonyms Matching Network for
Automatic ICD Coding
|
Accepted by ACL 2022 Main Conference, Short Paper
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Automatic ICD coding is defined as assigning disease codes to electronic
medical records (EMRs). Existing methods usually apply label attention with
code representations to match related text snippets. Unlike these works that
model the label with the code hierarchy or description, we argue that the code
synonyms can provide more comprehensive knowledge based on the observation that
the code expressions in EMRs vary from their descriptions in ICD. By aligning
codes to concepts in UMLS, we collect synonyms of every code. Then, we propose
a multiple synonyms matching network to leverage synonyms for better code
representation learning, and finally help the code classification. Experiments
on the MIMIC-III dataset show that our proposed method outperforms previous
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 04:57:08 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 03:10:44 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Yuan",
"Zheng",
""
],
[
"Tan",
"Chuanqi",
""
],
[
"Huang",
"Songfang",
""
]
] |
new_dataset
| 0.986348 |
2203.08537
|
Ozan Unal
|
Ozan Unal and Dengxin Dai and Luc Van Gool
|
Scribble-Supervised LiDAR Semantic Segmentation
|
Accepted at CVPR 2022 (ORAL)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Densely annotating LiDAR point clouds remains too expensive and
time-consuming to keep up with the ever growing volume of data. While current
literature focuses on fully-supervised performance, developing efficient
methods that take advantage of realistic weak supervision has yet to be
explored. In this paper, we propose using scribbles to annotate LiDAR point
clouds and release ScribbleKITTI, the first scribble-annotated dataset for
LiDAR semantic segmentation. Furthermore, we present a pipeline to reduce the
performance gap that arises when using such weak annotations. Our pipeline
comprises three stand-alone contributions that can be combined with any
LiDAR semantic segmentation model to achieve up to 95.7% of the
fully-supervised performance while using only 8% labeled points. Our scribble
annotations and code are available at github.com/ouenal/scribblekitti.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 11:01:23 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 10:14:29 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Unal",
"Ozan",
""
],
[
"Dai",
"Dengxin",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.997315 |
2203.10289
|
Christian Haase
|
Christian Haase, Timo R\"oseler, Mattias Seidel
|
METL: a modern ETL pipeline with a dynamic mapping matrix
|
version 6: clean up
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern ETL streaming pipelines extract data from various sources and forward
it to multiple consumers, such as data warehouses (DW) and analytical systems
that leverage machine learning (ML). However, the increasing number of systems
that are connected to such pipelines requires new solutions for data
integration. The canonical (or common) data model (CDM) offers such an
integration. It is particularly useful for integrating microservice systems into
ETL pipelines. (Villaca et al 2020, Oliveira et al 2019) However, a mapping to
a CDM is complex. (Lemcke et al 2012) There are three complexity problems,
namely the size of the required mapping matrix, the automation of updates of
the matrix in response to changes in the extraction sources and the time
efficiency of the mapping. In this paper, we present a new solution for these
problems. More precisely, we present a new dynamic mapping matrix (DMM), which
is based on permutation matrices that are obtained by block-partitioning the
full mapping matrix. We show that the DMM can be used for automated updates in
response to schema changes, for parallel computation in near real-time and for
highly efficient compacting. For the solution, we draw on research into matrix
partitioning (Quinn 2004) and dynamic networks (Haase et al 2021). The DMM has
been implemented into an app called Message ETL (METL). METL is the key part of
a new ETL streaming pipeline at EOS that conducts the transformation to a CDM.
The ETL pipeline is based on Kafka-streams. It extracts data from more than 80
microservices with log-based Change Data Capture (CDC) with Debezium and loads
the data to a DW and an ML platform. EOS is part of the Otto-Group, the
second-largest e-commerce provider in Europe.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 10:18:51 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 17:28:58 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Mar 2022 11:24:43 GMT"
},
{
"version": "v4",
"created": "Tue, 29 Mar 2022 13:54:22 GMT"
},
{
"version": "v5",
"created": "Wed, 30 Mar 2022 14:45:26 GMT"
},
{
"version": "v6",
"created": "Thu, 31 Mar 2022 11:16:25 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Haase",
"Christian",
""
],
[
"Röseler",
"Timo",
""
],
[
"Seidel",
"Mattias",
""
]
] |
new_dataset
| 0.990729 |
2203.12692
|
Puneet Kumar
|
Puneet Kumar, Gaurav Bhat, Omkar Ingle, Daksh Goyal and
Balasubramanian Raman
|
Affective Feedback Synthesis Towards Multimodal Text and Image Data
|
Submitted to ACM Transactions on Multimedia Computing,
Communications, and Applications
| null | null | null |
cs.MM cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we have defined a novel task of affective feedback synthesis
that deals with generating feedback for input text & corresponding image in a
similar way as humans respond towards the multimodal data. A feedback synthesis
system has been proposed and trained using ground-truth human comments along
with image-text input. We have also constructed a large-scale dataset
consisting of image, text, Twitter user comments, and the number of likes for
the comments by crawling the news articles through Twitter feeds. The proposed
system extracts textual features using a transformer-based textual encoder
while the visual features have been extracted using a Faster region-based
convolutional neural networks model. The textual and visual features have been
concatenated to construct the multimodal features using which the decoder
synthesizes the feedback. We have compared the results of the proposed system
with the baseline models using quantitative and qualitative measures. The
generated feedbacks have been analyzed using automatic and human evaluation.
They have been found to be semantically similar to the ground-truth comments
and relevant to the given text-image input.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 19:28:20 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 05:20:40 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Kumar",
"Puneet",
""
],
[
"Bhat",
"Gaurav",
""
],
[
"Ingle",
"Omkar",
""
],
[
"Goyal",
"Daksh",
""
],
[
"Raman",
"Balasubramanian",
""
]
] |
new_dataset
| 0.9992 |
2203.15173
|
Chun-Hsien Lin
|
Chun-Hsien Lin and Pu-Jen Cheng
|
An Evaluation Dataset for Legal Word Embedding: A Case Study On Chinese
Codex
|
16 pages, 9 figures, 3rd International Conference on Natural Language
Computing and AI (NLCAI 2022)
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Word embedding is a modern distributed word representation approach widely
used in many natural language processing tasks. Converting the vocabulary in a
legal document into a word embedding model facilitates subjecting legal
documents to machine learning, deep learning, and other algorithms and
subsequently performing the downstream tasks of natural language processing
vis-\`a-vis, for instance, document classification, contract review, and
machine translation. The most common and practical approach of accuracy
evaluation with the word embedding model uses a benchmark set with linguistic
rules or the relationship between words to perform analogy reasoning via
algebraic calculation. This paper proposes establishing a 1,134 Legal
Analogical Reasoning Questions Set (LARQS) from the 2,388 Chinese Codex corpus
using five kinds of legal relations, which are then used to evaluate the
accuracy of the Chinese word embedding model. Moreover, we discovered that
legal relations might be ubiquitous in the word embedding model.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 01:26:26 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Lin",
"Chun-Hsien",
""
],
[
"Cheng",
"Pu-Jen",
""
]
] |
new_dataset
| 0.999785 |
2203.15640
|
Linda Lastrico
|
Luca Garello, Linda Lastrico, Alessandra Sciutti, Nicoletta Noceti,
Fulvio Mastrogiovanni and Francesco Rea
|
Synthesis and Execution of Communicative Robotic Movements with
Generative Adversarial Networks
|
Submitted to the Special Issue on Emerging Topics on Development and
Learning, IEEE TCDS. Unpublished, review process ongoing. Luca Garello and
Linda Lastrico contributed equally to this work, hence they share the first
name
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Object manipulation is a natural activity we perform every day. How humans
handle objects can communicate not only the willfulness of the acting, or key
aspects of the context where we operate, but also the properties of the objects
involved, without any need for explicit verbal description. Since human
intelligence comprises the ability to read the context, allowing robots to
perform actions that intuitively convey this kind of information would greatly
facilitate collaboration. In this work, we focus on how to transfer on two
different robotic platforms the same kinematics modulation that humans adopt
when manipulating delicate objects, aiming to endow robots with the capability
to show carefulness in their movements. We choose to modulate the velocity
profile adopted by the robots' end-effector, inspired by what humans do when
transporting objects with different characteristics. We exploit a novel
Generative Adversarial Network architecture, trained with human kinematics
examples, to generalize over them and generate new and meaningful velocity
profiles, either associated with careful or not careful attitudes. This
approach would allow next generation robots to select the most appropriate
style of movement, depending on the perceived context, and autonomously
generate their motor action execution.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 15:03:05 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 11:22:50 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Garello",
"Luca",
""
],
[
"Lastrico",
"Linda",
""
],
[
"Sciutti",
"Alessandra",
""
],
[
"Noceti",
"Nicoletta",
""
],
[
"Mastrogiovanni",
"Fulvio",
""
],
[
"Rea",
"Francesco",
""
]
] |
new_dataset
| 0.964649 |
2203.16349
|
Niusen Chen
|
Niusen Chen, Bo Chen, Weisong Shi
|
The Block-based Mobile PDE Systems Are Not Secure -- Experimental
Attacks
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, mobile devices have been used broadly to store and process
sensitive data. To ensure confidentiality of the sensitive data, Full Disk
Encryption (FDE) is often integrated in mainstream mobile operating systems
like Android and iOS. FDE however cannot defend against coercive attacks in
which the adversary can force the device owner to disclose the decryption key.
To combat the coercive attacks, Plausibly Deniable Encryption (PDE) is
leveraged to plausibly deny the very existence of sensitive data. However, most
of the existing PDE systems for mobile devices are deployed at the block layer
and suffer from deniability compromises.
Having observed that none of existing works in the literature have
experimentally demonstrated the aforementioned compromises, our work bridges
this gap by experimentally confirming the deniability compromises of the
block-layer mobile PDE systems. We have built a mobile device testbed, which
consists of a host computing device and a flash storage device. Additionally,
we have deployed both the hidden volume PDE and the steganographic file system
at the block layer of the testbed and performed disk forensics to assess
potential compromises on the raw NAND flash. Our experimental results confirm
it is indeed possible for the adversary to compromise the block-layer PDE
systems by accessing the raw NAND flash in practice. We also discuss potential
issues when performing such attacks in real world.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 14:24:50 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 01:26:53 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Chen",
"Niusen",
""
],
[
"Chen",
"Bo",
""
],
[
"Shi",
"Weisong",
""
]
] |
new_dataset
| 0.983644 |
2203.16621
|
Mingfei Chen
|
Mingfei Chen, Yue Liao, Si Liu, Fei Wang, Jenq-Neng Hwang
|
TR-MOT: Multi-Object Tracking by Reference
|
10 pages, 3 figures, 2 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-object Tracking (MOT) generally can be split into two sub-tasks, i.e.,
detection and association. Many previous methods follow the tracking by
detection paradigm, which first obtains detections at each frame and then
associates them between adjacent frames. Though such methods achieve impressive
performance by utilizing a strong detector, their detection and association
performance degrades in scenes with many occlusions and large motions if
temporal information is not used. In this paper, we propose a novel Reference Search (RS)
module to provide a more reliable association based on the deformable
transformer structure, which naturally learns the feature alignment for each
object across frames. RS takes previously detected results as references to
aggregate the corresponding features from the combined features of the adjacent
frames and makes a one-to-one track state prediction for each reference in
parallel. Therefore, RS can attain a reliable association coping with
unexpected motions by leveraging visual temporal features while maintaining the
strong detection performance by decoupling from the detector. Our RS module can
also be compatible with the structure of the other tracking by detection
frameworks. Furthermore, we propose a joint training strategy and an effective
matching pipeline for our online MOT framework with the RS module. Our method
achieves competitive results on MOT17 and MOT20 datasets.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 19:07:26 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Chen",
"Mingfei",
""
],
[
"Liao",
"Yue",
""
],
[
"Liu",
"Si",
""
],
[
"Wang",
"Fei",
""
],
[
"Hwang",
"Jenq-Neng",
""
]
] |
new_dataset
| 0.999065 |
2203.16682
|
Yiran Luo
|
Yiran Luo, Pratyay Banerjee, Tejas Gokhale, Yezhou Yang, Chitta Baral
|
To Find Waldo You Need Contextual Cues: Debiasing Who's Waldo
|
Accepted at ACL 2022 (Short Paper)
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a debiased dataset for the Person-centric Visual Grounding (PCVG)
task first proposed by Cui et al. (2021) in the Who's Waldo dataset. Given an
image and a caption, PCVG requires pairing up a person's name mentioned in a
caption with a bounding box that points to the person in the image. We find
that the original Who's Waldo dataset compiled for this task contains a large
number of biased samples that are solvable simply by heuristic methods; for
instance, in many cases the first name in the sentence corresponds to the
largest bounding box, or the sequence of names in the sentence corresponds to
an exact left-to-right order in the image. Naturally, models trained on these
biased data lead to over-estimation of performance on the benchmark. To enforce
models being correct for the correct reasons, we design automated tools to
filter and debias the original dataset by ruling out all examples of
insufficient context, such as those with no verb or with a long chain of
conjunct names in their captions. Our experiments show that our new sub-sampled
dataset contains less bias with much lowered heuristic performances and widened
gaps between heuristic and supervised methods. We also demonstrate the same
benchmark model trained on our debiased training set outperforms that trained
on the original biased (and larger) training set on our debiased test set. We
argue our debiased dataset offers the PCVG task a more practical baseline for
reliable benchmarking and future improvements.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 21:35:53 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Luo",
"Yiran",
""
],
[
"Banerjee",
"Pratyay",
""
],
[
"Gokhale",
"Tejas",
""
],
[
"Yang",
"Yezhou",
""
],
[
"Baral",
"Chitta",
""
]
] |
new_dataset
| 0.999867 |
2203.16718
|
Yaroslav Golubev
|
Konstantin Grotov, Sergey Titov, Vladimir Sotnikov, Yaroslav Golubev,
Timofey Bryksin
|
A Large-Scale Comparison of Python Code in Jupyter Notebooks and Scripts
|
12 pages, 3 figures
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, Jupyter notebooks have grown in popularity in several
domains of software engineering, such as data science, machine learning, and
computer science education. Their popularity has to do with their rich features
for presenting and visualizing data; however, recent studies show that
notebooks also have a number of drawbacks: a high number of code clones, low
reproducibility, etc.
In this work, we carry out a comparison between Python code written in
Jupyter Notebooks and in traditional Python scripts. We compare the code from
two perspectives: structural and stylistic. In the first part of the analysis,
we report the difference in the number of lines, the usage of functions, as
well as various complexity metrics. In the second part, we show the difference
in the number of stylistic issues and provide an extensive overview of the 15
most frequent stylistic issues in the studied mediums. Overall, we demonstrate
that notebooks are characterized by lower code complexity; however, their
code could be perceived as more entangled than that of scripts. As for the
style, notebooks tend to have 1.4 times more stylistic issues, but at the same
time, some of them are caused by specific coding practices in notebooks and
should be considered as false positives. With this research, we want to pave
the way to studying specific problems of notebooks that should be addressed by
the development of notebook-specific tools, and provide various insights that
can be useful in this regard.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 23:59:23 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Grotov",
"Konstantin",
""
],
[
"Titov",
"Sergey",
""
],
[
"Sotnikov",
"Vladimir",
""
],
[
"Golubev",
"Yaroslav",
""
],
[
"Bryksin",
"Timofey",
""
]
] |
new_dataset
| 0.976146 |
2203.16761
|
Mingze Xu
|
Jiarui Cai, Mingze Xu, Wei Li, Yuanjun Xiong, Wei Xia, Zhuowen Tu,
Stefano Soatto
|
MeMOT: Multi-Object Tracking with Memory
|
CVPR 2022 Oral
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose an online tracking algorithm that performs the object detection
and data association under a common framework, capable of linking objects after
a long time span. This is realized by preserving a large spatio-temporal memory
to store the identity embeddings of the tracked objects, and by adaptively
referencing and aggregating useful information from the memory as needed. Our
model, called MeMOT, consists of three main modules that are all
Transformer-based: 1) Hypothesis Generation that produces object proposals in
the current video frame; 2) Memory Encoding that extracts the core information
from the memory for each tracked object; and 3) Memory Decoding that solves the
object detection and data association tasks simultaneously for multi-object
tracking. When evaluated on widely adopted MOT benchmark datasets, MeMOT
observes very competitive performance.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 02:33:20 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Cai",
"Jiarui",
""
],
[
"Xu",
"Mingze",
""
],
[
"Li",
"Wei",
""
],
[
"Xiong",
"Yuanjun",
""
],
[
"Xia",
"Wei",
""
],
[
"Tu",
"Zhuowen",
""
],
[
"Soatto",
"Stefano",
""
]
] |
new_dataset
| 0.998943 |
2203.16763
|
Ziqi Zhang
|
Ziqi Zhang, Yuxin Chen, Zongyang Ma, Zhongang Qi, Chunfeng Yuan, Bing
Li, Ying Shan, Weiming Hu
|
CREATE: A Benchmark for Chinese Short Video Retrieval and Title
Generation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Previous works of video captioning aim to objectively describe the video's
actual content, which lacks subjective and attractive expression, limiting its
practical application scenarios. Video titling is intended to achieve this
goal, but there is a lack of a proper benchmark. In this paper, we propose
CREATE, the first large-scale Chinese shoRt vidEo retrievAl and Title
gEneration benchmark, to facilitate research and application in video titling
and video retrieval in Chinese. CREATE consists of a high-quality labeled 210K
dataset and two large-scale 3M/10M pre-training datasets, covering 51
categories, 50K+ tags, 537K manually annotated titles and captions, and 10M+
short videos. Based on CREATE, we propose a novel model ALWIG which combines
video retrieval and video titling tasks to achieve the purpose of multi-modal
ALignment WIth Generation with the help of video tags and a GPT pre-trained
model. CREATE opens new directions for facilitating future research and
applications on video titling and video retrieval in the field of Chinese short
videos.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 02:39:18 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Zhang",
"Ziqi",
""
],
[
"Chen",
"Yuxin",
""
],
[
"Ma",
"Zongyang",
""
],
[
"Qi",
"Zhongang",
""
],
[
"Yuan",
"Chunfeng",
""
],
[
"Li",
"Bing",
""
],
[
"Shan",
"Ying",
""
],
[
"Hu",
"Weiming",
""
]
] |
new_dataset
| 0.999887 |
2203.16768
|
Namyup Kim
|
Namyup Kim, Dongwon Kim, Cuiling Lan, Wenjun Zeng, Suha Kwak
|
ReSTR: Convolution-free Referring Image Segmentation Using Transformers
|
CVPR 2022 accepted
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Referring image segmentation is an advanced semantic segmentation task where
the target is not a predefined class but is described in natural language. Most
existing methods for this task rely heavily on convolutional neural networks,
which however have trouble capturing long-range dependencies between entities
in the language expression and are not flexible enough for modeling
interactions between the two different modalities. To address these issues, we
present the first convolution-free model for referring image segmentation using
transformers, dubbed ReSTR. Since it extracts features of both modalities
through transformer encoders, it can capture long-range dependencies between
entities within each modality. Also, ReSTR fuses features of the two modalities
by a self-attention encoder, which enables flexible and adaptive interactions
between the two modalities in the fusion process. The fused features are fed to
a segmentation module, which works adaptively according to the image and
language expression in hand. ReSTR is evaluated and compared with previous work
on all public benchmarks, where it outperforms all existing models.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 02:55:39 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Kim",
"Namyup",
""
],
[
"Kim",
"Dongwon",
""
],
[
"Lan",
"Cuiling",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Kwak",
"Suha",
""
]
] |
new_dataset
| 0.99236 |
2203.16775
|
Abdullah Al Asif
|
Amit Kumar Das, Abdullah Al Asif, Anik Paul, and Md. Nur Hossain
|
Bangla hate speech detection on social media using attention-based
recurrent neural network
| null |
Journal published by De Gruyter (English; first published September 1, 1991;
1 issue per year; audience: researchers in the field of intelligent systems)
|
10.1515/jisys-2020-0060
| null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Hate speech has spread more rapidly through the daily use of technology and,
most notably, by sharing your opinions or feelings on social media in a
negative aspect. Although numerous works have been carried out in detecting
hate speeches in English, German, and other languages, very few works have been
carried out in the context of the Bengali language. In contrast, millions of
people communicate on social media in Bengali. The few existing works that have
been carried out need improvements in both accuracy and interpretability. This
article proposes an encoder-decoder-based machine learning model, a popular tool
in NLP, to classify users' Bengali comments on Facebook pages. A dataset of
7,425 Bengali comments, consisting of seven distinct categories of hate
speeches, was used to train and evaluate our model. For extracting and encoding
local features from the comments, 1D convolutional layers were used. Finally,
the attention mechanism, LSTM, and GRU based decoders have been used for
predicting hate speech categories. Among the three encoder decoder algorithms,
the attention-based decoder obtained the best accuracy (77%).
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 03:31:53 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Das",
"Amit Kumar",
""
],
[
"Asif",
"Abdullah Al",
""
],
[
"Paul",
"Anik",
""
],
[
"Hossain",
"Md. Nur",
""
]
] |
new_dataset
| 0.999367 |
2203.16777
|
Yang Shao
|
Yang Shao, Quan Kong, Tadayuki Matsumura, Taiki Fuji, Kiyoto Ito and
Hiroyuki Mizuno
|
Mask Atari for Deep Reinforcement Learning as POMDP Benchmarks
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present Mask Atari, a new benchmark to help solve partially observable
Markov decision process (POMDP) problems with Deep Reinforcement Learning
(DRL)-based approaches. To achieve a simulation environment for the POMDP
problems, Mask Atari is constructed based on Atari 2600 games with
controllable, moveable, and learnable masks as the observation area for the
target agent, especially with the active information gathering (AIG) setting in
POMDPs. Given that one does not yet exist, Mask Atari provides a challenging,
efficient benchmark for evaluating the methods that focus on the above problem.
Moreover, the mask operation is an attempt to introduce the receptive field of
the human vision system into a simulation environment for an agent, which means
the evaluations are not biased from the sensing ability and purely focus on the
cognitive performance of the methods when compared with the human baseline. We
describe the challenges and features of our benchmark and evaluate several
baselines with Mask Atari.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 03:34:02 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Shao",
"Yang",
""
],
[
"Kong",
"Quan",
""
],
[
"Matsumura",
"Tadayuki",
""
],
[
"Fuji",
"Taiki",
""
],
[
"Ito",
"Kiyoto",
""
],
[
"Mizuno",
"Hiroyuki",
""
]
] |
new_dataset
| 0.972493 |
2203.16788
|
Srishti Mehra
|
Srishti Mehra, Robert Louka, Yixun Zhang
|
ESGBERT: Language Model to Help with Classification Tasks Related to
Companies Environmental, Social, and Governance Practices
| null |
pp. 183-190, 2022. CS & IT - CSCP 2022
|
10.5121/csit.2022.120616
| null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Environmental, Social, and Governance (ESG) are non-financial factors that
are garnering attention from investors as they increasingly look to apply these
as part of their analysis to identify material risks and growth opportunities.
Some of this attention is also driven by clients who, now more aware than ever,
are demanding for their money to be managed and invested responsibly. As the
interest in ESG grows, so does the need for investors to have access to
consumable ESG information. Since most of it is in text form in reports,
disclosures, press releases, and 10-Q filings, we see a need for sophisticated
NLP techniques for classification tasks for ESG text. We hypothesize that an
ESG domain-specific pre-trained model will help with such tasks, and we study
building one in this paper. We explored doing this by fine-tuning BERT's pre-trained
weights using ESG specific text and then further fine-tuning the model for a
classification task. We were able to achieve accuracy better than the original
BERT and baseline models in environment-specific classification tasks.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 04:22:44 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Mehra",
"Srishti",
""
],
[
"Louka",
"Robert",
""
],
[
"Zhang",
"Yixun",
""
]
] |
new_dataset
| 0.995933 |
2203.16792
|
Yinfeng Gao
|
Qichao Zhang, Yinfeng Gao, Yikang Zhang, Youtian Guo, Dawei Ding,
Yunpeng Wang, Peng Sun, Dongbin Zhao
|
TrajGen: Generating Realistic and Diverse Trajectories with Reactive and
Feasible Agent Behaviors for Autonomous Driving
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Realistic and diverse simulation scenarios with reactive and feasible agent
behaviors can be used for validation and verification of self-driving system
performance without relying on expensive and time-consuming real-world testing.
Existing simulators rely on heuristic-based behavior models for background
vehicles, which cannot capture the complex interactive behaviors in real-world
scenarios. To bridge the gap between simulation and the real world, we propose
TrajGen, a two-stage trajectory generation framework, which can capture more
realistic behaviors directly from human demonstration. In particular, TrajGen
consists of the multi-modal trajectory prediction stage and the reinforcement
learning based trajectory modification stage. In the first stage, we propose a
novel auxiliary RouteLoss for the trajectory prediction model to generate
multi-modal diverse trajectories in the drivable area. In the second stage,
reinforcement learning is used to track the predicted trajectories while
avoiding collisions, which can improve the feasibility of generated
trajectories. In addition, we develop a data-driven simulator I-Sim that can be
used to train reinforcement learning models in parallel based on naturalistic
driving data. The vehicle model in I-Sim can guarantee that the generated
trajectories by TrajGen satisfy vehicle kinematic constraints. Finally, we give
comprehensive metrics to evaluate generated trajectories for simulation
scenarios, which shows that TrajGen outperforms either trajectory prediction or
inverse reinforcement learning in terms of fidelity, reactivity, feasibility,
and diversity.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 04:48:29 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Zhang",
"Qichao",
""
],
[
"Gao",
"Yinfeng",
""
],
[
"Zhang",
"Yikang",
""
],
[
"Guo",
"Youtian",
""
],
[
"Ding",
"Dawei",
""
],
[
"Wang",
"Yunpeng",
""
],
[
"Sun",
"Peng",
""
],
[
"Zhao",
"Dongbin",
""
]
] |
new_dataset
| 0.995906 |
2203.16838
|
Jingbei Li
|
Jingbei Li, Yi Meng, Zhiyong Wu, Helen Meng, Qiao Tian, Yuping Wang,
Yuxuan Wang
|
NeuFA: Neural Network Based End-to-End Forced Alignment with
Bidirectional Attention Mechanism
|
Accepted by ICASSP 2022
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Although deep learning and end-to-end models have been widely used and shown
superiority in automatic speech recognition (ASR) and text-to-speech (TTS)
synthesis, state-of-the-art forced alignment (FA) models are still based on
hidden Markov model (HMM). HMM has limited view of contextual information and
is developed with long pipelines, leading to error accumulation and
unsatisfactory performance. Inspired by the capability of attention mechanism
in capturing long term contextual information and learning alignments in ASR
and TTS, we propose a neural network based end-to-end forced aligner called
NeuFA, in which a novel bidirectional attention mechanism plays an essential
role. NeuFA integrates the alignment learning of both ASR and TTS tasks in a
unified framework by learning bidirectional alignment information from a shared
attention matrix in the proposed bidirectional attention mechanism. Alignments
are extracted from the learnt attention weights and optimized by the ASR, TTS
and FA tasks in a multi-task learning manner. Experimental results demonstrate
the effectiveness of our proposed model, with mean absolute error on test set
drops from 25.8 ms to 23.7 ms at word level, and from 17.0 ms to 15.7 ms at
phoneme level compared with state-of-the-art HMM based model.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 06:45:39 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Li",
"Jingbei",
""
],
[
"Meng",
"Yi",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Meng",
"Helen",
""
],
[
"Tian",
"Qiao",
""
],
[
"Wang",
"Yuping",
""
],
[
"Wang",
"Yuxuan",
""
]
] |
new_dataset
| 0.99845 |
2203.16844
|
Zehui Yang
|
Zehui Yang, Yifan Chen, Lei Luo, Runyan Yang, Lingxuan Ye, Gaofeng
Cheng, Ji Xu, Yaohui Jin, Qingqing Zhang, Pengyuan Zhang, Lei Xie, Yonghong
Yan
|
Open Source MagicData-RAMC: A Rich Annotated Mandarin
Conversational(RAMC) Speech Dataset
|
Paper on submission to Interspeech2022
| null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a high-quality rich annotated Mandarin conversational
(RAMC) speech dataset called MagicData-RAMC. The MagicData-RAMC corpus contains
180 hours of conversational speech data recorded from native speakers of
Mandarin Chinese over mobile phones with a sampling rate of 16 kHz. The dialogs
in MagicData-RAMC are classified into 15 diversified domains and tagged with
topic labels, ranging from science and technology to ordinary life. Accurate
transcription and precise speaker voice activity timestamps are manually
labeled for each sample. Speakers' detailed information is also provided. As a
Mandarin speech dataset designed for dialog scenarios with high quality and
rich annotations, MagicData-RAMC enriches the data diversity in the Mandarin
speech community and allows extensive research on a series of speech-related
tasks, including automatic speech recognition, speaker diarization, topic
detection, keyword search, text-to-speech, etc. We also conduct several
relevant tasks and provide experimental results to help evaluate the dataset.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 07:01:06 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Yang",
"Zehui",
""
],
[
"Chen",
"Yifan",
""
],
[
"Luo",
"Lei",
""
],
[
"Yang",
"Runyan",
""
],
[
"Ye",
"Lingxuan",
""
],
[
"Cheng",
"Gaofeng",
""
],
[
"Xu",
"Ji",
""
],
[
"Jin",
"Yaohui",
""
],
[
"Zhang",
"Qingqing",
""
],
[
"Zhang",
"Pengyuan",
""
],
[
"Xie",
"Lei",
""
],
[
"Yan",
"Yonghong",
""
]
] |
new_dataset
| 0.999796 |
2203.16857
|
Se-Hang Cheong
|
Se-Hang Cheong, Kai-Ip Lee, Yain-Whar Si, Leong-Hou U
|
Lifeline: Emergency Ad Hoc Network
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lifeline is a group of systems designed for mobile phones and battery powered
wireless routers for forming emergency Ad hoc networks. Devices installed with
Lifeline program can automatically form Ad hoc networks when cellular signal is
unavailable or disrupted during natural disasters. For instance, large scale
earthquakes can cause extensive damages to land-based telecommunication
infrastructures. In such circumstances, mobile phones installed with Lifeline
program can be used to send emergency messages by the victims who are trapped
under collapsed buildings. In addition, Lifeline also provides a function for
the rescuers to estimate the positions of the victims based on network
propagation techniques. Lifeline also has the ability to recover from partial
network crashes and lost nodes.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 07:34:27 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Cheong",
"Se-Hang",
""
],
[
"Lee",
"Kai-Ip",
""
],
[
"Si",
"Yain-Whar",
""
],
[
"U",
"Leong-Hou",
""
]
] |
new_dataset
| 0.99984 |
2203.16859
|
Se-Hang Cheong
|
Se-Hang Cheong, Yain-Whar Si
|
Boundary Node Detection and Unfolding of Complex Non-Convex Ad Hoc
Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Complex non-convex ad hoc networks (CNCAH) contain intersecting polygons and
edges. In many instances, the layouts of these networks are not entirely convex
in shape. In this article, we propose a Kamada-Kawai-based algorithm called
W-KK-MS for boundary node detection problems, which is capable of aligning node
positions while achieving high sensitivity, specificity, and accuracy in
producing a visual drawing from the input network topology. The algorithm put
forward in this article selects and assigns weights to top-k nodes in each
iteration to speed up the updating process of nodes. We also propose a novel
approach to detect and unfold stacked regions in CNCAH networks. Experimental
results show that the proposed algorithms can achieve fast convergence on
boundary node detection in CNCAH networks and are able to successfully unfold
stacked regions. The design and implementation of a prototype system called
ELnet for analyzing CNCAH networks is also described in this article. The ELnet
system is capable of generating synthetic networks for testing, integrating
with force-directed algorithms, and visualizing and analyzing algorithms'
outcomes.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 07:41:57 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Cheong",
"Se-Hang",
""
],
[
"Si",
"Yain-Whar",
""
]
] |
new_dataset
| 0.998625 |
2203.16864
|
Se-Hang Cheong
|
Se-Hang Cheong, Yain-Whar Si, Leong-Hou U
|
Saving lives: design and implementation of lifeline emergency ad hoc
network
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims to propose a system for automatically forming ad hoc networks
using mobile phones and battery-powered wireless routers for emergency
situations. The system also provides functions to send emergency messages and
identify the location of victims based on the network topology information.
The Optimized Link State Routing (OLSR) protocol is used to instantly form an ad hoc
emergency network based on WiFi signals from mobile phones of the victims,
backup battery-powered wireless routers preinstalled in buildings and mobile
devices deployed by search and rescue teams. The proposed system is also
designed to recover from partial network crashes and lost nodes.
Experimental results demonstrate the effectiveness of the proposed system in
terms of battery life, transmission distance, and noise.
A novel message routing schedule is proposed for conserving battery life. A
novel function to estimate the location of a mobile device which sent an
emergency message is proposed in this paper.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 07:45:33 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Cheong",
"Se-Hang",
""
],
[
"Si",
"Yain-Whar",
""
],
[
"U",
"Leong-Hou",
""
]
] |
new_dataset
| 0.999045 |
2203.16871
|
Richard Adeyemi Ikuesan Dr.
|
Avinash Singh, Richard Adeyemi Ikuesan, and Hein Venter
|
Ransomware Detection using Process Memory
|
11 Pages, 3 Figures, and 11 Tables
|
17th International Conference on Cyber Warfare and Security,
03/2022
| null | null |
cs.CR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Ransomware attacks have increased significantly in recent years, causing
great destruction and damage to critical systems and business operations.
Attackers are unfailingly finding innovative ways to bypass detection
mechanisms, which encouraged the adoption of artificial intelligence. However,
most research summarizes the general features of AI and induces many false
positives, as the behavior of ransomware constantly differs to bypass
detection. Focusing on the key indicating features of ransomware becomes vital
as this guides the investigator to the inner workings and main function of
ransomware itself. By utilizing access privileges in process memory, the main
function of the ransomware can be detected more easily and accurately.
Furthermore, new signatures and fingerprints of ransomware families can be
identified to classify novel ransomware attacks correctly. The current research
used the process memory access privileges of the different memory regions of
an executable's behavior to quickly determine its intent before serious
harm can occur. To achieve this aim, several well-known machine learning
algorithms were explored with an accuracy range of 81.38 to 96.28 percent. The
study thus confirms the feasibility of utilizing process memory as a detection
mechanism for ransomware.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 08:03:48 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Singh",
"Avinash",
""
],
[
"Ikuesan",
"Richard Adeyemi",
""
],
[
"Venter",
"Hein",
""
]
] |
new_dataset
| 0.968398 |
2203.16920
|
Afonso Fontes
|
Afonso Henriques Fontes Neto Segundo, Joel Sotero da Cunha Neto,
Halisson Alves de Oliveira, \'Atila Gir\~ao de Oliveira, Reginaldo Florencio
da Silva
|
Desenvolvimento de ferramenta de simula\c{c}\~ao para aux\'ilio no
ensino da disciplina de rob\'otica industrial
|
COBENGE 2019, in Portuguese language
| null | null | null |
cs.RO cs.GT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Currently, robotics is one of the fastest growing areas not only in the
industrial sector but also in the consumer and service sectors. Several areas
benefit from the technological advancement of robotics, especially the
industrial sector, which benefits from gains in productivity and quality. However,
to meet this growing demand, it is necessary for newly graduated
professionals to have a deeper understanding of how to design and control a
robotic manipulator. It is logical that in order to obtain this more in-depth
knowledge of robotics, it is necessary to have experience with a real
robotic manipulator, since practice is a much more efficient way of
learning than theory. However, it is known that a robotic arm is not a cheap
investment, and its maintenance is not cheap either. Therefore, many
educational institutions are not able to provide this type of experience to
their students. With this in mind, and through the use of Unity 3D, which is a
game development software, a robotic arm simulator has been developed to
correlate classroom theory with what actually happens in practice. The robotic
manipulators implemented on this simulator can be controlled by both inverse
kinematics (which is the industry standard) and direct kinematics.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 09:44:40 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Segundo",
"Afonso Henriques Fontes Neto",
""
],
[
"Neto",
"Joel Sotero da Cunha",
""
],
[
"de Oliveira",
"Halisson Alves",
""
],
[
"de Oliveira",
"Átila Girão",
""
],
[
"da Silva",
"Reginaldo Florencio",
""
]
] |
new_dataset
| 0.999788 |
2203.16923
|
Afonso Fontes
|
Daniel Maia Evangelista, Pedro Benevides Cavalcante, Afonso Henriques
Fontes Neto Segundo
|
Aplica\c{c}\~ao de ros como ferramenta de ensino a rob\'otica / using
ros as a robotics teaching tool
|
in Portuguese language
| null | null | null |
cs.RO cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The study of robotic manipulators is the main goal of the Industrial Robotics
class, part of the Control Engineering training course. There is a difficulty in
preparing academic practices and projects in the area of robotics due to the
high cost of specific educational equipment. The practical classes and the
development of projects are very important for engineers training, it is
proposed to use simulation software in order to provide practical experience
for the students of the discipline. In this context, the present article aims
to expose the use of the Robot Operation System (ROS) as a tool to develop a
robotic arm and implement the functionality of forward and inverse kinematics.
Such development could be used as an educational tool to increase the interest
and learning of students in the robotics discipline and to expand research
areas for the discipline.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 09:48:21 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Evangelista",
"Daniel Maia",
""
],
[
"Cavalcante",
"Pedro Benevides",
""
],
[
"Segundo",
"Afonso Henriques Fontes Neto",
""
]
] |
new_dataset
| 0.979726 |
2203.16981
|
Damien Chablat
|
Damien Chablat (ReV, LS2N), Riccardo Mattacchione, Erika Ottaviano
|
Design of a robot for the automatic charging of an electric car
| null |
ROMANSY 24 - Robot Design, Dynamics and Control, Springer, 2022
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a robot with a parallel architecture is proposed for charging
an electric vehicle with the charging socket on its front side. Kinematic
models are developed to design the robot for a given workspace corresponding
to the car's plug placements. A demonstrator composed of commercial components
is shown.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 12:08:55 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Chablat",
"Damien",
"",
"ReV, LS2N"
],
[
"Mattacchione",
"Riccardo",
""
],
[
"Ottaviano",
"Erika",
""
]
] |
new_dataset
| 0.998735 |
2203.16997
|
Natarajan Chidambaram
|
Natarajan Chidambaram, Pooya Rostami Mazrae
|
Bot Detection in GitHub Repositories
|
3 pages, 3 figures
| null |
10.1145/3524842.3528520
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Contemporary social coding platforms like GitHub promote collaborative
development. Many open-source software repositories hosted in these platforms
use machine accounts (bots) to automate and facilitate a wide range of
effort-intensive and repetitive activities. Determining if an account
corresponds to a bot or a human contributor is important for socio-technical
development analytics, for example, to understand how humans collaborate and
interact in the presence of bots, to assess the positive and negative impact of
using bots, to identify the top project contributors, to identify potential bus
factors, and so on. Our project aims to include the trained machine learning
(ML) classifier from the BoDeGHa bot detection tool as a plugin to the
GrimoireLab software development analytics platform. In this work, we present
the procedure to form a pipeline for retrieving contribution and contributor
data using Perceval, distinguishing bots from humans using BoDeGHa, and
visualising the results using Kibana.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 12:43:50 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Chidambaram",
"Natarajan",
""
],
[
"Mazrae",
"Pooya Rostami",
""
]
] |
new_dataset
| 0.993684 |
2203.17023
|
Chengxin Chen
|
Chengxin Chen, Pengyuan Zhang
|
CTA-RNN: Channel and Temporal-wise Attention RNN Leveraging Pre-trained
ASR Embeddings for Speech Emotion Recognition
|
5 pages, 2 figures, submitted to INTERSPEECH 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous research has looked into ways to improve speech emotion recognition
(SER) by utilizing both acoustic and linguistic cues of speech. However, the
potential association between state-of-the-art ASR models and the SER task has
yet to be investigated. In this paper, we propose a novel channel and
temporal-wise attention RNN (CTA-RNN) architecture based on the intermediate
representations of pre-trained ASR models. Specifically, the embeddings of a
large-scale pre-trained end-to-end ASR encoder contain both acoustic and
linguistic information, as well as the ability to generalize to different
speakers, making them well suited for the downstream SER task. To further exploit
the embeddings from different layers of the ASR encoder, we propose a novel
CTA-RNN architecture to capture the emotional salient parts of embeddings in
both the channel and temporal directions. We evaluate our approach on two
popular benchmark datasets, IEMOCAP and MSP-IMPROV, using both within-corpus
and cross-corpus settings. Experimental results show that our proposed method
can achieve excellent performance in terms of accuracy and robustness.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 13:32:51 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Chen",
"Chengxin",
""
],
[
"Zhang",
"Pengyuan",
""
]
] |
new_dataset
| 0.985656 |
2203.17042
|
Shivani Choudhary
|
Shivani Choudhary
|
IITD-DBAI: Multi-Stage Retrieval with Pseudo-Relevance Feedback and
Query Reformulation
| null | null | null | null |
cs.IR cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Resolving contextual dependency is one of the most challenging tasks in
conversational systems. Our submission to CAsT-2021 aimed to preserve the key
terms and the context in all subsequent turns and to use classical information
retrieval methods, with the goal of retrieving documents from the corpus that
are as relevant as possible. We participated in the automatic track and
submitted two runs to CAsT-2021. Our submission produced a mean NDCG@3
performance better than the median model.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 14:07:47 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Choudhary",
"Shivani",
""
]
] |
new_dataset
| 0.950517 |
2203.17112
|
Sohil Lal Shrestha
|
Sohil Lal Shrestha, Shafiul Azam Chowdhury and Christoph Csallner
|
SLNET: A Redistributable Corpus of 3rd-party Simulink Models
|
Published in Mining Software Repositories 2022 - Data and Tool
Showcase Track
| null |
10.1145/3524842.3528001
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
MATLAB/Simulink is widely used for model-based design. Engineers create
Simulink models and compile them to embedded code, often to control
safety-critical cyber-physical systems in automotive, aerospace, and healthcare
applications. Despite Simulink's importance, there are few large-scale
empirical Simulink studies, perhaps because there is no large readily available
corpus of third-party open-source Simulink models. To enable empirical Simulink
studies, this paper introduces SLNET, the largest corpus of freely available
third-party Simulink models. SLNET has several advantages over earlier
collections. Specifically, SLNET is 8 times larger than the largest previous
corpus of Simulink models, includes fine-grained metadata, is constructed
automatically, is self-contained, and allows redistribution. SLNET is available
under permissive open-source licenses and contains all of its collection and
analysis tools.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 15:33:39 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Shrestha",
"Sohil Lal",
""
],
[
"Chowdhury",
"Shafiul Azam",
""
],
[
"Csallner",
"Christoph",
""
]
] |
new_dataset
| 0.998411 |
2203.17178
|
Yunlu Chen
|
Yunlu Chen, Basura Fernando, Hakan Bilen, Matthias Nie{\ss}ner,
Efstratios Gavves
|
3D Equivariant Graph Implicit Functions
|
Video: https://youtu.be/W7goOzZP2Kc
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, neural implicit representations have made remarkable
progress in modeling 3D shapes with arbitrary topology. In this work, we
address two key limitations of such representations: their failure to capture
local 3D geometric fine details and to learn from and generalize to shapes
with unseen 3D transformations. To this end, we introduce a novel family of
graph implicit functions with equivariant layers that facilitates modeling of
fine local details and guarantees robustness to various groups of geometric
transformations, through local $k$-NN graph embeddings with sparse point set
observations at multiple resolutions. Our method improves over the existing
rotation-equivariant implicit function from 0.69 to 0.89 (IoU) on the ShapeNet
reconstruction task. We also show that our equivariant implicit function can be
extended to other types of similarity transformations and generalizes to unseen
translations and scaling.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 16:51:25 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Chen",
"Yunlu",
""
],
[
"Fernando",
"Basura",
""
],
[
"Bilen",
"Hakan",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Gavves",
"Efstratios",
""
]
] |
new_dataset
| 0.961309 |
2203.17194
|
Irene M\'arquez-Corbella
|
Ignacio Garc\'ia-Marco, Irene M\'arquez-Corbella, Edgar
Mart\'inez-Moro, and Yuriko Pitones
|
Free Resolutions and Generalized Hamming Weights of binary linear codes
| null | null | null | null |
cs.IT math.AC math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we explore the relationship between free resolution of some
monomial ideals and Generalized Hamming Weights (GHWs) of binary codes. More
precisely, we look for a structure smaller than the set of codewords of minimal
support that provides us with some information about the GHWs. We prove that the
first and second generalized Hamming weight of a binary linear code can be
computed (by means of a graded free resolution) from a set of monomials
associated with a binomial ideal related to the code. Moreover, the remaining
weights are bounded by the Betti numbers for that set.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 17:18:18 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"García-Marco",
"Ignacio",
""
],
[
"Márquez-Corbella",
"Irene",
""
],
[
"Martínez-Moro",
"Edgar",
""
],
[
"Pitones",
"Yuriko",
""
]
] |
new_dataset
| 0.996144 |
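For context on the record above: the $r$-th generalized Hamming weight $d_r(C)$ is the minimum support size over all $r$-dimensional subcodes of $C$, and for small codes it can be brute-forced directly from a generator matrix. A sketch for illustration only, entirely unrelated to the paper's free-resolution method (codewords are encoded as $n$-bit Python ints, an encoding chosen here for convenience):

```python
from itertools import combinations

def gf2_rank(vecs):
    """Rank over GF(2) of bit-vectors encoded as Python ints."""
    basis = {}  # highest set bit -> reduced vector
    rank = 0
    for v in vecs:
        while v:
            h = v.bit_length() - 1
            if h in basis:
                v ^= basis[h]
            else:
                basis[h] = v
                rank += 1
                break
    return rank

def generalized_hamming_weight(generators, n, r):
    """d_r(C): minimum support size over all r-dimensional subcodes of the
    binary [n, k] code spanned by `generators`. Brute force -- practical
    only for small codes."""
    k = len(generators)
    codewords = set()
    for mask in range(1, 1 << k):  # all nonzero linear combinations
        w = 0
        for i in range(k):
            if (mask >> i) & 1:
                w ^= generators[i]
        codewords.add(w)
    codewords.discard(0)
    best = n
    for combo in combinations(codewords, r):
        if gf2_rank(list(combo)) == r:  # combo spans an r-dimensional subcode
            support = 0
            for w in combo:
                support |= w  # support of a subcode = union over any basis
            best = min(best, bin(support).count("1"))
    return best
```

For the $[7,4]$ Hamming code this recovers the known weight hierarchy $d_1 = 3$, $d_2 = 5$, $d_3 = 6$, $d_4 = 7$.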
2203.17256
|
Manoj Gulati
|
Manoj Gulati and Pandarasamy Arjunan
|
LEAD1.0: A Large-scale Annotated Dataset for Energy Anomaly Detection in
Commercial Buildings
| null | null | null | null |
cs.LG cs.AI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Modern buildings are densely equipped with smart energy meters, which
periodically generate a massive amount of time-series data yielding a few million
data points every day. This data can be leveraged to discover the underlying
loads, infer their energy consumption patterns, inter-dependencies on
environmental factors, and the building's operational properties. Furthermore,
it allows us to simultaneously identify anomalies present in the electricity
consumption profiles, which is a big step towards saving energy and achieving
global sustainability. However, to date, the lack of large-scale annotated
energy consumption datasets hinders the ongoing research in anomaly detection.
We contribute to this effort by releasing a well-annotated version of a
publicly available ASHRAE Great Energy Predictor III data set containing 1,413
smart electricity meter time series spanning over one year. In addition, we
benchmark the performance of eight state-of-the-art anomaly detection methods
on our dataset and compare their performance.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 07:30:59 GMT"
}
] | 2022-04-01T00:00:00 |
[
[
"Gulati",
"Manoj",
""
],
[
"Arjunan",
"Pandarasamy",
""
]
] |
new_dataset
| 0.999125 |
1811.12369
|
Johannes Bund
|
Johannes Bund, Christoph Lenzen, Moti Medina
|
Small Hazard-free Transducers
|
This work has been accepted for publication at the 13th Innovations
in Theoretical Computer Science Conference (ITCS 2022)
| null | null | null |
cs.DS cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
Ikenmeyer et al. (JACM'19) proved an unconditional exponential separation
between the hazard-free complexity and (standard) circuit complexity of
explicit functions. This raises the question: which classes of functions permit
efficient hazard-free circuits?
In this work, we prove that circuit implementations of transducers with small
state space are such a class. A transducer is a finite state machine that
transcribes, symbol by symbol, an input string of length $n$ into an output
string of length $n$. We present a construction that transforms any function
arising from a transducer into an efficient circuit of size $\mathcal{O}(n)$
computing the hazard-free extension of the function. More precisely, given a
transducer with $s$ states, receiving $n$ input symbols encoded by $\ell$ bits,
and computing $n$ output symbols encoded by $m$ bits, the transducer has a
hazard-free circuit of size $2^{\mathcal{O}(s+\ell)} m n$ and depth
$\mathcal{O}(s\log n + \ell)$; in particular, if $s, \ell,m\in \mathcal{O}(1)$,
size and depth are asymptotically optimal. In light of the strong hardness
results by Ikenmeyer et al. (JACM'19), we consider this a surprising result.
|
[
{
"version": "v1",
"created": "Thu, 29 Nov 2018 18:27:25 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Dec 2018 10:16:58 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Nov 2021 09:38:48 GMT"
},
{
"version": "v4",
"created": "Thu, 18 Nov 2021 11:53:51 GMT"
},
{
"version": "v5",
"created": "Wed, 30 Mar 2022 08:32:11 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Bund",
"Johannes",
""
],
[
"Lenzen",
"Christoph",
""
],
[
"Medina",
"Moti",
""
]
] |
new_dataset
| 0.997699 |
2104.09486
|
Gianira N. Alfarano
|
Gianira N. Alfarano, Anina Gruica, Julia Lieb, Joachim Rosenthal
|
Convolutional codes over finite chain rings, MDP codes and their
characterization
|
19 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop the theory of convolutional codes over finite
commutative chain rings. In particular, we focus on maximum distance profile
(MDP) convolutional codes and we provide a characterization of these codes,
generalizing the one known for fields. Moreover, we relate (reverse) MDP
convolutional codes over a finite chain ring with (reverse) MDP convolutional
codes over its residue field. Finally, we provide a construction of (reverse)
MDP convolutional codes over finite chain rings generalizing the notion of
(reverse) superregular matrices.
|
[
{
"version": "v1",
"created": "Mon, 19 Apr 2021 17:46:28 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 11:01:34 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Alfarano",
"Gianira N.",
""
],
[
"Gruica",
"Anina",
""
],
[
"Lieb",
"Julia",
""
],
[
"Rosenthal",
"Joachim",
""
]
] |
new_dataset
| 0.998327 |
2105.06561
|
Niclas Boehmer
|
Luca Kreisel, Niclas Boehmer, Vincent Froese, Rolf Niedermeier
|
Equilibria in Schelling Games: Computational Hardness and Robustness
|
Accepted to AAMAS'22
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the simplest game-theoretic formulation of Schelling's model of
segregation on graphs, agents of two different types each select their own
vertex in a given graph so as to maximize the fraction of agents of their type
in their occupied neighborhood. Two ways of modeling agent movement here are
either to allow two agents to swap their vertices or to allow an agent to jump
to a free vertex. The contributions of this paper are twofold. First, we prove
that deciding the existence of a swap-equilibrium and a jump-equilibrium in
this simplest model of Schelling games is NP-hard, thereby answering questions
left open by Agarwal et al. [AAAI '20] and Elkind et al. [IJCAI '19]. Second,
we introduce two measures for the robustness of equilibria in Schelling games
in terms of the minimum number of edges or the minimum number of vertices that
need to be deleted to make an equilibrium unstable. We prove tight lower and
upper bounds on the edge- and vertex-robustness of swap-equilibria in Schelling
games on different graph classes.
|
[
{
"version": "v1",
"created": "Thu, 13 May 2021 21:32:50 GMT"
},
{
"version": "v2",
"created": "Thu, 20 May 2021 10:56:10 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Mar 2022 09:09:49 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Kreisel",
"Luca",
""
],
[
"Boehmer",
"Niclas",
""
],
[
"Froese",
"Vincent",
""
],
[
"Niedermeier",
"Rolf",
""
]
] |
new_dataset
| 0.959167 |
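The swap dynamics described in the abstract above can be checked directly on a small instance. A minimal sketch of the model (placement as a vertex-to-type dict; the zero-utility convention for agents with no occupied neighbour is an assumption, and none of this reflects the paper's hardness constructions):

```python
def utility(adj, placement, v):
    """Fraction of same-type agents among the occupied neighbours of v
    (taken to be 0 when no neighbour is occupied)."""
    occupied = [u for u in adj[v] if u in placement]
    if not occupied:
        return 0.0
    same = sum(1 for u in occupied if placement[u] == placement[v])
    return same / len(occupied)

def is_swap_equilibrium(adj, placement):
    """True iff no two agents of different types can both strictly increase
    their utility by exchanging vertices."""
    verts = list(placement)
    for i, v in enumerate(verts):
        for w in verts[i + 1:]:
            if placement[v] == placement[w]:
                continue  # same-type swaps change nothing
            swapped = dict(placement)
            swapped[v], swapped[w] = placement[w], placement[v]
            # the agent formerly at v now sits at w, and vice versa
            if (utility(adj, swapped, w) > utility(adj, placement, v) and
                    utility(adj, swapped, v) > utility(adj, placement, w)):
                return False
    return True
```

On a path 1-2-3-4, the segregated placement A,A,B,B is a swap-equilibrium, while the alternating placement A,B,A,B is not: swapping the agents on vertices 2 and 3 strictly improves both.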
2110.04667
|
Shivam Bajaj
|
Shivam Bajaj, Eric Torng, Shaunak D. Bopardikar, Alexander Von Moll,
Isaac Weintraub, Eloy Garcia, David W. Casbeer
|
Competitive Perimeter Defense of Conical Environments
|
Version 2 has additional images
| null | null | null |
cs.DS cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
We consider a perimeter defense problem in a planar conical environment in
which a single vehicle, having a finite capture radius, aims to defend a
concentric perimeter from mobile intruders. The intruders are arbitrarily
released at the circumference of the environment and they move radially toward
the perimeter with fixed speed. We present a competitive analysis approach to
this problem by measuring the performance of multiple online algorithms for the
vehicle against arbitrary inputs, relative to an optimal offline algorithm that
has information about the entire input sequence in advance. In particular, we
establish two necessary conditions on the parameter space to guarantee (i)
finite competitiveness of any algorithm and (ii) a competitive ratio of at
least 2 for any algorithm. We then design and analyze three online algorithms
and characterize parameter regimes in which they have finite competitive
ratios. Specifically, our first two algorithms are provably 1- and
2-competitive, respectively, whereas our third algorithm exhibits different
competitive ratios in different regimes of problem parameters. Finally, we
provide a numerical plot in the parameter space to reveal additional insights
into the relative performance of our algorithms.
|
[
{
"version": "v1",
"created": "Sun, 10 Oct 2021 00:19:46 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 03:55:25 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Bajaj",
"Shivam",
""
],
[
"Torng",
"Eric",
""
],
[
"Bopardikar",
"Shaunak D.",
""
],
[
"Von Moll",
"Alexander",
""
],
[
"Weintraub",
"Isaac",
""
],
[
"Garcia",
"Eloy",
""
],
[
"Casbeer",
"David W.",
""
]
] |
new_dataset
| 0.982546 |