id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2203.01193
|
Takato Yasuno
|
Takato Yasuno, Junichiro Fujii, Riku Ogata, Masahiro Okano
|
VAE-iForest: Auto-encoding Reconstruction and Isolation-based Anomalies
Detecting Fallen Objects on Road Surface
|
5 pages, 9 figures, 3 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In road monitoring, detecting changes in the road surface at an early stage
is important to prevent damage to third parties. The fallen objects of interest
include trees brought down by the external force of a flood or an earthquake,
and rocks falling from a slope. Generative deep learning makes it possible to
flexibly detect anomalies caused by objects fallen on the road surface. We
prototype a method that combines auto-encoding reconstruction with an
isolation-based anomaly detector for road surface monitoring. We apply our
method to a set of test images in which fallen objects (a fallen stone and
plywood) are added to the raw inputs, and in which snow covers the winter road.
Finally, we discuss future work toward practical application.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 15:47:36 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Yasuno",
"Takato",
""
],
[
"Fujii",
"Junichiro",
""
],
[
"Ogata",
"Riku",
""
],
[
"Okano",
"Masahiro",
""
]
] |
new_dataset
| 0.997971 |
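The record above carries only the abstract, so the following is a rough, self-contained sketch of the general idea it describes, scoring samples by reconstruction error and then isolating outliers, not the paper's actual VAE-iForest pipeline: the prototype "autoencoder" (reconstruction as a mean vector), the toy data, and the scalar isolation trees are all stand-ins.

```python
import random
import statistics

def recon_error(x, prototype):
    # Stand-in for an autoencoder: "reconstruct" every sample as the
    # training prototype and score it by squared reconstruction error.
    return sum((a - b) ** 2 for a, b in zip(x, prototype))

def build_itree(values, depth=0, max_depth=8):
    # One isolation tree over scalar scores: outliers tend to be split
    # off near the root, so they end up with short path lengths.
    if depth >= max_depth or len(values) <= 1 or min(values) == max(values):
        return ("leaf",)
    split = random.uniform(min(values), max(values))
    left = [v for v in values if v < split]
    right = [v for v in values if v >= split]
    return ("node", split,
            build_itree(left, depth + 1, max_depth),
            build_itree(right, depth + 1, max_depth))

def path_length(tree, v, depth=0):
    if tree[0] == "leaf":
        return depth
    _, split, left, right = tree
    return path_length(left if v < split else right, v, depth + 1)

def anomaly_score(trees, v):
    # Shorter average path => more easily isolated => more anomalous.
    return -statistics.mean(path_length(t, v) for t in trees)

random.seed(0)
normal = [[random.gauss(0.0, 0.1) for _ in range(4)] for _ in range(200)]
proto = [statistics.mean(col) for col in zip(*normal)]
errors = [recon_error(x, proto) for x in normal]
trees = [build_itree(errors) for _ in range(50)]

fallen_object = [2.0, 2.0, 2.0, 2.0]      # far off the training manifold
e_anom = recon_error(fallen_object, proto)
e_norm = statistics.median(errors)
print(anomaly_score(trees, e_anom) > anomaly_score(trees, e_norm))
```

The two-stage structure mirrors the abstract: reconstruction turns images into scalar anomaly evidence, and the isolation-based detector turns that evidence into a ranking.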
2203.01198
|
Aritra Mitra
|
Aritra Mitra, Hamed Hassani and George J. Pappas
|
Linear Stochastic Bandits over a Bit-Constrained Channel
| null | null | null | null |
cs.LG cs.IT cs.SY eess.SY math.IT math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
One of the primary challenges in large-scale distributed learning stems from
stringent communication constraints. While several recent works address this
challenge for static optimization problems, sequential decision-making under
uncertainty has remained much less explored in this regard. Motivated by this
gap, we introduce a new linear stochastic bandit formulation over a
bit-constrained channel. Specifically, in our setup, an agent interacting with
an environment transmits encoded estimates of an unknown model parameter to a
server over a communication channel of finite capacity. The goal of the server
is to take actions based on these estimates to minimize cumulative regret. To
this end, we develop a novel and general algorithmic framework that hinges on
two main components: (i) an adaptive encoding mechanism that exploits
statistical concentration bounds, and (ii) a decision-making principle based on
confidence sets that account for encoding errors. As our main result, we prove
that when the unknown model is $d$-dimensional, a channel capacity of $O(d)$
bits suffices to achieve order-optimal regret. To demonstrate the generality of
our approach, we then show that the same result continues to hold for
non-linear observation models satisfying standard regularity conditions.
Finally, we establish that for the simpler unstructured multi-armed bandit
problem, $1$ bit of channel capacity is sufficient for achieving optimal regret
bounds. Overall, our work takes a significant first step towards paving the way
for statistical decision-making over finite-capacity channels.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 15:54:03 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Mitra",
"Aritra",
""
],
[
"Hassani",
"Hamed",
""
],
[
"Pappas",
"George J.",
""
]
] |
new_dataset
| 0.996701 |
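The adaptive encoding mechanism is only named in the abstract above; as a minimal sketch of the fixed-bit-budget uniform quantization such schemes build on (function names, the range, and the bit budget are illustrative, not the paper's algorithm):

```python
def encode(x, lo, hi, bits):
    # Map x in [lo, hi] to one of 2**bits uniformly spaced bins.
    levels = 2 ** bits
    step = (hi - lo) / levels
    return max(0, min(int((x - lo) / step), levels - 1))

def decode(idx, lo, hi, bits):
    # Reconstruct at the bin midpoint, so the error is at most step/2.
    step = (hi - lo) / 2 ** bits
    return lo + (idx + 0.5) * step

# With b bits the worst-case error is (hi - lo) / 2**(b + 1); shrinking
# [lo, hi] as statistical confidence intervals tighten keeps the error
# controlled under a fixed per-round bit budget.
lo, hi, bits = -1.0, 1.0, 4
x = 0.3
x_hat = decode(encode(x, lo, hi, bits), lo, hi, bits)
print(abs(x - x_hat))
```

The "adaptive" part in the paper's framework corresponds to choosing `[lo, hi]` from concentration bounds each round, so that a capacity of $O(d)$ bits per parameter estimate suffices.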
2203.01285
|
Cl\'ement Tamines
|
V\'eronique Bruy\`ere, Baptiste Fievet, Jean-Fran\c{c}ois Raskin,
Cl\'ement Tamines
|
Stackelberg-Pareto Synthesis (Extended Version)
|
47 pages, 9 figures. arXiv admin note: substantial text overlap with
arXiv:2102.08925
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the framework of two-player Stackelberg games played on graphs in
which Player 0 announces a strategy and Player 1 responds rationally with a
strategy that is an optimal response. While it is usually assumed that Player 1
has a single objective, we consider here the new setting where he has several.
In this context, after responding with his strategy, Player 1 gets a payoff in
the form of a vector of Booleans corresponding to his satisfied objectives.
Rationality of Player 1 is encoded by the fact that his response must produce a
Pareto-optimal payoff given the strategy of Player 0. We study for several
kinds of $\omega$-regular objectives the Stackelberg-Pareto Synthesis problem
which asks whether Player 0 can announce a strategy which satisfies his
objective, whatever the rational response of Player 1. We show that this
problem is fixed-parameter tractable for games in which objectives are all
reachability, safety, B\"uchi, co-B\"uchi, Boolean B\"uchi, parity, Muller,
Streett or Rabin objectives. We also show that this problem is
NEXPTIME-complete except for the cases of B\"uchi objectives for which it is
NP-complete and co-B\"uchi objectives for which it is in NEXPTIME and NP-hard.
The problem is already NP-complete in the simple case of reachability
objectives and graphs that are trees.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 18:11:06 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Bruyère",
"Véronique",
""
],
[
"Fievet",
"Baptiste",
""
],
[
"Raskin",
"Jean-François",
""
],
[
"Tamines",
"Clément",
""
]
] |
new_dataset
| 0.996375 |
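As a small illustration of the payoff model in the abstract above, where Player 1's payoffs are vectors of Booleans and rationality selects Pareto-optimal responses, here is a sketch of dominance filtering over such vectors (the example payoff set is made up):

```python
def dominates(p, q):
    # p dominates q if p satisfies every objective q does, plus at least one more.
    return all(a >= b for a, b in zip(p, q)) and p != q

def pareto_optimal(payoffs):
    # Keep the payoff vectors not strictly dominated by any other.
    return [p for p in payoffs if not any(dominates(q, p) for q in payoffs)]

# Boolean payoff vectors over three objectives of Player 1.
payoffs = [(1, 0, 0), (0, 1, 1), (1, 1, 0), (0, 0, 1)]
print(pareto_optimal(payoffs))
```

In the synthesis problem itself, Player 0's strategy must satisfy his objective against every response of Player 1 whose payoff survives this kind of filter.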
2203.01286
|
Ishaan Mehta
|
Ishaan Mehta, Hao-Ya Hsueh, Nikolaos Kourtzanidis, Mateusz Brylka and
Sajad Saeedi
|
Far-UVC Disinfection with Robotic Mobile Manipulator
|
Paper accepted at ISMR 2022
|
ISMR 2022
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The COVID-19 pandemic has demonstrated the need for a more effective and
efficient disinfection approach to combat infectious diseases. Ultraviolet
germicidal irradiation (UVGI) is a proven means of disinfection and
sterilization and has been integrated into handheld devices and autonomous
mobile robots. Existing UVGI robots, which are commonly equipped with uncovered
lamps that emit intense ultraviolet radiation, suffer from an inability to be
used in human presence, shadowing of objects, and long disinfection times. These
robots also have a high operational cost. This paper introduces a
cost-effective germicidal system that utilizes UVGI to disinfect pathogens,
such as viruses, bacteria, and fungi, on high contact surfaces (e.g. doors and
tables). This system is composed of a team of 5-DOF mobile manipulators with
end-effectors that are equipped with far-UVC excimer lamps. The design of the
system is discussed with emphasis on path planning, coverage planning, and
scene understanding. Evaluations of the UVGI system using simulations and
irradiance models are also included.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 18:12:24 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Mehta",
"Ishaan",
""
],
[
"Hsueh",
"Hao-Ya",
""
],
[
"Kourtzanidis",
"Nikolaos",
""
],
[
"Brylka",
"Mateusz",
""
],
[
"Saeedi",
"Sajad",
""
]
] |
new_dataset
| 0.995179 |
1207.3146
|
Arun Padakandla
|
Arun Padakandla, S. Sandeep Pradhan
|
Achievable rate region for three user discrete broadcast channel based
on coset codes
|
A non-additive 3-user discrete broadcast channel is identified for
which the achievable rate region based on coset codes is analytically proven to
be strictly larger than that achievable using unstructured iid codes. This
version is submitted to IEEE Transactions on Information Theory
| null |
10.1109/ISIT.2013.6620432
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an achievable rate region for the general three user discrete
memoryless broadcast channel, based on nested coset codes. We characterize
3-to-1 discrete broadcast channels, a class of broadcast channels for which the
best known coding technique\footnote{We henceforth refer to this as Marton's
coding for three user discrete broadcast channel.}, which is obtained by a
natural generalization of that proposed by Marton for the general two user
discrete broadcast channel, is strictly sub-optimal. In particular, we identify
a novel 3-to-1 discrete broadcast channel for which Marton's coding is
\textit{analytically} proved to be strictly suboptimal. We present achievable
rate regions for the general 3-to-1 discrete broadcast channels, based on
nested coset codes, that strictly enlarge Marton's rate region for the
aforementioned channel. We generalize this to present an achievable rate region
for the general three user discrete broadcast channel. Combining Marton's
coding with that proposed herein, we obtain the best known coding technique for
a general three user discrete broadcast channel.
|
[
{
"version": "v1",
"created": "Fri, 13 Jul 2012 05:21:06 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Aug 2012 17:25:31 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Mar 2013 20:59:41 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Mar 2013 21:30:44 GMT"
},
{
"version": "v5",
"created": "Sat, 18 May 2013 23:18:33 GMT"
},
{
"version": "v6",
"created": "Tue, 13 Jan 2015 04:56:43 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Padakandla",
"Arun",
""
],
[
"Pradhan",
"S. Sandeep",
""
]
] |
new_dataset
| 0.989214 |
1502.04367
|
Arun Padakandla
|
Arun Padakandla and S. Sandeep Pradhan
|
Coset codes for communicating over non-additive channels
| null | null |
10.1109/ISIT.2015.7282820
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a case for the use of codes possessing algebraic closure
properties - coset codes - in developing coding techniques and characterizing
achievable rate regions for generic multi-terminal channels. In particular, we
consider three diverse communication scenarios - $3-$user interference channel
(many-to-many), $3-$user broadcast channel (one-to-many), and multiple access
with distributed states (many-to-one) - and identify non-additive examples for
which coset codes are analytically proven to yield strictly larger achievable
rate regions than those achievable using iid codes. On the one hand, our
findings motivate the need for multi-terminal information theory to step beyond
iid codes. On the other, they encourage current research on linear code-based
techniques to go beyond particular additive communication channels. Detailed
proofs of our results are available in [1]-[3].
|
[
{
"version": "v1",
"created": "Sun, 15 Feb 2015 21:37:25 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Padakandla",
"Arun",
""
],
[
"Pradhan",
"S. Sandeep",
""
]
] |
new_dataset
| 0.999804 |
2012.04293
|
Aykut Erdem
|
Tayfun Ates, M. Samil Atesoglu, Cagatay Yigit, Ilker Kesen, Mert
Kobas, Erkut Erdem, Aykut Erdem, Tilbe Goksun, Deniz Yuret
|
CRAFT: A Benchmark for Causal Reasoning About Forces and inTeractions
|
Accepted to Findings of ACL 2022
| null | null | null |
cs.AI cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Humans are able to perceive, understand and reason about causal events.
Developing models with similar physical and causal understanding capabilities
is a long-standing goal of artificial intelligence. As a step towards this
direction, we introduce CRAFT, a new video question answering dataset that
requires causal reasoning about physical forces and object interactions. It
contains 58K video and question pairs that are generated from 10K videos from
20 different virtual environments, containing various objects in motion that
interact with each other and the scene. Two question categories in CRAFT
include previously studied descriptive and counterfactual questions.
Additionally, inspired by the Force Dynamics Theory in cognitive linguistics,
we introduce a new causal question category that involves understanding the
causal interactions between objects through notions like cause, enable, and
prevent. Our results show that even though the questions in CRAFT are easy for
humans, the tested baseline models, including existing state-of-the-art
methods, do not yet deal with the challenges posed in our benchmark.
|
[
{
"version": "v1",
"created": "Tue, 8 Dec 2020 09:11:32 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Jun 2021 10:55:23 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Mar 2022 10:02:21 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Ates",
"Tayfun",
""
],
[
"Atesoglu",
"M. Samil",
""
],
[
"Yigit",
"Cagatay",
""
],
[
"Kesen",
"Ilker",
""
],
[
"Kobas",
"Mert",
""
],
[
"Erdem",
"Erkut",
""
],
[
"Erdem",
"Aykut",
""
],
[
"Goksun",
"Tilbe",
""
],
[
"Yuret",
"Deniz",
""
]
] |
new_dataset
| 0.999774 |
2102.02437
|
Weina Jin
|
Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Ghassan
Hamarneh
|
EUCA: the End-User-Centered Explainable AI Framework
|
EUCA Framework, EUCA dataset (and accompanying code), and
Supplementary Materials are available at:
https://github.com/weinajin/end-user-xai
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to explain decisions to end-users is a necessity to deploy AI as
critical decision support. Yet making AI explainable to non-technical end-users
is a relatively ignored and challenging problem. To bridge the gap, we first
identify twelve end-user-friendly explanatory forms that do not require
technical knowledge to comprehend, including feature-, example-, and rule-based
explanations. We then instantiate the explanatory forms as prototyping cards in
four AI-assisted critical decision-making tasks, and conduct a user study to
co-design low-fidelity prototypes with 32 layperson participants. The results
confirm the relevance of using explanatory forms as building blocks of
explanations, and identify their properties - pros, cons, applicable
explanation goals, and design implications. The explanatory forms, their
properties, and prototyping supports (including a suggested prototyping
process, design templates and exemplars, and associated algorithms to actualize
explanatory forms) constitute the End-User-Centered explainable AI framework
EUCA, which is available at http://weinajin.github.io/end-user-xai . It serves as
a practical prototyping toolkit for HCI/AI practitioners and researchers to
understand user requirements and build end-user-centered explainable AI.
|
[
{
"version": "v1",
"created": "Thu, 4 Feb 2021 06:39:31 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 14:13:19 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Jin",
"Weina",
""
],
[
"Fan",
"Jianyu",
""
],
[
"Gromala",
"Diane",
""
],
[
"Pasquier",
"Philippe",
""
],
[
"Hamarneh",
"Ghassan",
""
]
] |
new_dataset
| 0.995019 |
2105.00819
|
Schyan Zafar
|
Schyan Zafar and Geoff Nicholls
|
Measuring diachronic sense change: new models and Monte Carlo methods
for Bayesian inference
|
Additional results included in the appendix
| null | null | null |
cs.CL stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a bag-of-words model, the senses of a word with multiple meanings, e.g.
"bank" (used either in a river-bank or an institution sense), are represented
as probability distributions over context words, and sense prevalence is
represented as a probability distribution over senses. Both of these may change
with time. Modelling and measuring this kind of sense change is challenging due
to the typically high-dimensional parameter space and sparse datasets. A
recently published corpus of ancient Greek texts contains expert-annotated
sense labels for selected target words. Automatic sense-annotation for the word
"kosmos" (meaning decoration, order or world) has been used as a test case in
recent work with related generative models and Monte Carlo methods. We adapt an
existing generative sense change model to develop a simpler model for the main
effects of sense and time, and give MCMC methods for Bayesian inference on all
these models that are more efficient than existing methods. We carry out
automatic sense-annotation of snippets containing "kosmos" using our model, and
measure the time-evolution of its three senses and their prevalence. As far as
we are aware, ours is the first analysis of this data, within the class of
generative models we consider, that quantifies uncertainty and returns credible
sets for evolving sense prevalence in good agreement with those given by expert
annotation.
|
[
{
"version": "v1",
"created": "Wed, 14 Apr 2021 11:40:21 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 17:42:47 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Zafar",
"Schyan",
""
],
[
"Nicholls",
"Geoff",
""
]
] |
new_dataset
| 0.99419 |
2106.12188
|
Onel Luis Alcaraz Lopez
|
Onel L. A. L\'opez, Dileep Kumar, Richard Demo Souza, Petar Popovski,
Antti T\"olli, Matti Latva-aho
|
Massive MIMO with Radio Stripes for Indoor Wireless Energy Transfer
|
Accepted at IEEE TWC. 16 pags, 14 figures, 3 algorithms
| null |
10.1109/TWC.2022.3154428
| null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radio frequency wireless energy transfer (WET) is a promising solution for
powering autonomous Internet of Things (IoT) deployments. In this work, we
leverage energy beamforming for powering multiple user equipments (UEs) with
stringent energy harvesting (EH) demands in an indoor distributed massive
multiple-input multiple-output system. Based on semi-definite programming,
successive convex approximation (SCA), and maximum ratio transmission (MRT)
techniques, we derive optimal and sub-optimal precoders aimed at minimizing the
radio stripes' transmit power while exploiting information of the power
transfer efficiency of the EH circuits at the UEs. Moreover, we propose an
analytical framework to assess and control the electromagnetic field (EMF)
radiation exposure in the considered indoor scenario. Numerical results show
that i) the EMF radiation exposure can be more easily controlled at higher
frequencies at the cost of a higher transmit power consumption, ii) training is
not a very critical factor for the considered indoor system, iii) MRT/SCA-based
precoders are particularly appealing when serving a small number of UEs, thus,
especially suitable for implementation in a time domain multiple access (TDMA)
scheduling framework, and iv) TDMA is more efficient than spatial domain
multiple access (SDMA) when serving a relatively small number of UEs. Results
suggest that additional boosting performance strategies are needed to increase
the overall system efficiency, thus making the technology viable in practice.
|
[
{
"version": "v1",
"created": "Wed, 23 Jun 2021 06:25:15 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Feb 2022 19:52:48 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"López",
"Onel L. A.",
""
],
[
"Kumar",
"Dileep",
""
],
[
"Souza",
"Richard Demo",
""
],
[
"Popovski",
"Petar",
""
],
[
"Tölli",
"Antti",
""
],
[
"Latva-aho",
"Matti",
""
]
] |
new_dataset
| 0.983997 |
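The MRT precoding mentioned in the abstract above reduces, for a single user, to beamforming along the conjugate of the channel vector; a minimal sketch with an illustrative channel (the paper's actual multi-user SDP/SCA precoders are not reproduced here):

```python
import math

def mrt_precoder(h):
    # Maximum ratio transmission: beamform along the conjugate of the
    # channel vector, normalized to unit transmit power.
    norm = math.sqrt(sum(abs(c) ** 2 for c in h))
    return [c.conjugate() / norm for c in h]

h = [1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j]     # illustrative 3-antenna channel
w = mrt_precoder(h)

# The effective gain h^T w equals the channel norm, which is the best
# any unit-power precoder can achieve for a single user.
gain = sum(a * b for a, b in zip(h, w))
print(abs(gain))
```

This single-user optimality is one reason the abstract finds MRT/SCA precoders attractive when each time slot serves few UEs, e.g. under TDMA scheduling.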
2108.05080
|
Shahroz Tariq
|
Hasam Khalid and Shahroz Tariq and Minha Kim and Simon S. Woo
|
FakeAVCeleb: A Novel Audio-Video Multimodal Deepfake Dataset
|
Part of Proceedings of the Neural Information Processing Systems
Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)
| null | null | null |
cs.CV cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While significant advancements have been made in the generation of deepfakes
using deep learning technologies, their misuse is now a well-known issue.
Deepfakes can cause severe security and privacy issues as they can be used to
impersonate a person's identity in a video by replacing his/her face with
another person's face. Recently, a new problem of generating a synthesized
human voice has emerged, where AI-based deep learning models can synthesize any
person's voice from just a few seconds of audio. With the
emerging threat of impersonation attacks using deepfake audios and videos, a
new generation of deepfake detectors is needed to focus on both video and audio
collectively. To develop a competent deepfake detector, a large amount of
high-quality data is typically required to capture real-world (or practical)
scenarios. Existing deepfake datasets either contain deepfake videos or audios,
which are racially biased as well. As a result, it is critical to develop a
high-quality video and audio deepfake dataset that can be used to detect both
audio and video deepfakes simultaneously. To fill this gap, we propose a novel
Audio-Video Deepfake dataset, FakeAVCeleb, which contains not only deepfake
videos but also respective synthesized lip-synced fake audios. We generate this
dataset using the most popular deepfake generation methods. We selected real
YouTube videos of celebrities with four ethnic backgrounds to develop a more
realistic multimodal dataset that addresses racial bias, and further help
develop multimodal deepfake detectors. We performed several experiments using
state-of-the-art detection methods to evaluate our deepfake dataset and
demonstrate the challenges and usefulness of our multimodal Audio-Video
deepfake dataset.
|
[
{
"version": "v1",
"created": "Wed, 11 Aug 2021 07:49:36 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Aug 2021 03:26:20 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Sep 2021 04:15:53 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Mar 2022 10:38:07 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Khalid",
"Hasam",
""
],
[
"Tariq",
"Shahroz",
""
],
[
"Kim",
"Minha",
""
],
[
"Woo",
"Simon S.",
""
]
] |
new_dataset
| 0.988113 |
2108.07223
|
Keita Iwabuchi
|
Keita Iwabuchi (1), Karim Youssef (1 and 2), Kaushik Velusamy (3),
Maya Gokhale (1), Roger Pearce (1) ((1) Center for Applied Scientific
Computing, Livermore National Laboratory, (2) Department of Computer Science,
Virginia Polytechnic Institute and State University, Blacksburg, (3)
Department of Computer Science, University of Maryland, Baltimore County)
|
Metall: A Persistent Memory Allocator For Data-Centric Analytics
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data analytics applications transform raw input data into analytics-specific
data structures before performing analytics. Unfortunately, this data ingestion
step is often more expensive than the analytics itself. In addition, various types of
NVRAM devices are already used in many HPC systems today. Such devices will be
useful for storing and reusing data structures beyond a single process life
cycle.
We developed Metall, a persistent memory allocator built on top of the
memory-mapped file mechanism. Metall enables applications to transparently
allocate custom C++ data structures into various types of persistent memories.
Metall incorporates a concise and high-performance memory management algorithm
inspired by Supermalloc and the rich C++ interface developed by
the Boost.Interprocess library.
On a dynamic graph construction workload, Metall achieved up to 11.7x and
48.3x performance improvements over Boost.Interprocess and memkind (PMEM kind),
respectively. We also demonstrate Metall's high adaptability by integrating
Metall into a graph processing framework, GraphBLAS Template Library. This
study's outcomes indicate that Metall will be a strong tool for accelerating
future large-scale data analytics by allowing applications to leverage
persistent memory efficiently.
|
[
{
"version": "v1",
"created": "Tue, 10 Aug 2021 18:04:10 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Aug 2021 17:12:55 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Mar 2022 03:25:58 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Iwabuchi",
"Keita",
"",
"1 and 2"
],
[
"Youssef",
"Karim",
"",
"1 and 2"
],
[
"Velusamy",
"Kaushik",
""
],
[
"Gokhale",
"Maya",
""
],
[
"Pearce",
"Roger",
""
]
] |
new_dataset
| 0.997867 |
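Metall's core mechanism, as described above, is allocation on top of the memory-mapped file mechanism; here is a toy Python sketch of the underlying idea, persisting a value through an mmap and reattaching to it later (Metall itself is a C++ allocator, and the file path and layout here are made up):

```python
import mmap
import os
import struct
import tempfile

# A toy "persistent heap": a value written through a memory-mapped file
# survives closing the mapping (standing in for a process lifetime) and
# can be reattached later.
path = os.path.join(tempfile.mkdtemp(), "heap.bin")
with open(path, "wb") as f:
    f.truncate(mmap.PAGESIZE)          # reserve one page of backing storage

# "First process": attach, store a long at offset 0, flush, detach.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)
    m[0:8] = struct.pack("<q", 42)
    m.flush()
    m.close()

# "Next process": reattach the same file and read the value back.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)
    value, = struct.unpack("<q", m[0:8])
    m.close()
print(value)
```

A real allocator additionally manages free lists and object offsets inside the mapped region, which is where Metall's Supermalloc-inspired algorithm and Boost.Interprocess-style C++ interface come in.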
2109.00430
|
Guojun Yan
|
Guojun Yan and Jiahuan Pei and Pengjie Ren and Zhaochun Ren and Xin
Xin and Huasheng Liang and Maarten de Rijke and Zhumin Chen
|
ReMeDi: Resources for Multi-domain, Multi-service, Medical Dialogues
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medical dialogue systems (MDSs) aim to assist doctors and patients with a
range of professional medical services, i.e., diagnosis, treatment and
consultation. The development of MDSs is hindered because of a lack of
resources. In particular: (1) there is no dataset with large-scale medical
dialogues that covers multiple medical services and contains fine-grained
medical labels (i.e., intents, actions, slots, values), and (2) there is no set
of established benchmarks for MDSs for multi-domain, multi-service medical
dialogues. In this paper, we present ReMeDi, a set of resources for medical
dialogues. ReMeDi consists of two parts, the ReMeDi dataset and the ReMeDi
benchmarks. The ReMeDi dataset contains 96,965 conversations between doctors
and patients, including 1,557 conversations with fine-grained labels. It covers
843 types of diseases, 5,228 medical entities, and 3 specialties of medical
services across 40 domains. To the best of our knowledge, the ReMeDi dataset is
the only medical dialogue dataset that covers multiple domains and services,
and has fine-grained medical labels. The second part of the ReMeDi resources
consists of a set of state-of-the-art models for (medical) dialogue generation.
The ReMeDi benchmark has the following methods: (1) pretrained models (i.e.,
BERT-WWM, BERT-MED, GPT2, and MT5) trained, validated, and tested on the ReMeDi
dataset, and (2) a self-supervised contrastive learning (SCL) method to expand
the ReMeDi dataset and enhance the training of the state-of-the-art pretrained
models. We describe the creation of the ReMeDi dataset, the ReMeDi benchmarking
methods, and establish experimental results using the ReMeDi benchmarking
methods on the ReMeDi dataset for future research to compare against. With this
paper, we share the dataset, implementations of the benchmarks, and evaluation
scripts.
|
[
{
"version": "v1",
"created": "Wed, 1 Sep 2021 15:24:54 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Sep 2021 14:11:29 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Sep 2021 13:31:00 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Mar 2022 14:36:56 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Yan",
"Guojun",
""
],
[
"Pei",
"Jiahuan",
""
],
[
"Ren",
"Pengjie",
""
],
[
"Ren",
"Zhaochun",
""
],
[
"Xin",
"Xin",
""
],
[
"Liang",
"Huasheng",
""
],
[
"de Rijke",
"Maarten",
""
],
[
"Chen",
"Zhumin",
""
]
] |
new_dataset
| 0.999567 |
2110.06648
|
Anxing Xiao
|
Anxing Xiao, Hao Luan, Ziqi Zhao, Yue Hong, Jieting Zhao, Weinan Chen,
Jiankun Wang, Max Q.-H. Meng
|
Robotic Autonomous Trolley Collection with Progressive Perception and
Nonlinear Model Predictive Control
|
Accepted to the 2022 International Conference on Robotics and
Automation (ICRA 2022)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous mobile manipulation robots that can collect trolleys are widely
used to liberate human resources and fight epidemics. Most prior robotic
trolley collection solutions only detect trolleys with 2D poses or are merely
based on specific marks and lack the formal design of planning algorithms. In
this paper, we present a novel mobile manipulation system with applications in
luggage trolley collection. The proposed system integrates a compact hardware
design and a progressive perception and planning framework, enabling the system
to efficiently and robustly collect trolleys in dynamic and complex
environments. For the perception, we first develop a 3D trolley detection
method that combines object detection and keypoint estimation. Then, a docking
process in a short distance is achieved with an accurate point cloud plane
detection method and a novel manipulator design. On the planning side, we
formulate the robot's motion planning under a nonlinear model predictive
control framework with control barrier functions to improve obstacle avoidance
capabilities while maintaining the target in the sensors' field of view at
close distances. We demonstrate our design and framework by deploying the
system on actual trolley collection tasks, and their effectiveness and
robustness are experimentally validated.
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 11:20:54 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 09:45:28 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Xiao",
"Anxing",
""
],
[
"Luan",
"Hao",
""
],
[
"Zhao",
"Ziqi",
""
],
[
"Hong",
"Yue",
""
],
[
"Zhao",
"Jieting",
""
],
[
"Chen",
"Weinan",
""
],
[
"Wang",
"Jiankun",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] |
new_dataset
| 0.966681 |
2111.03133
|
Peter Schaldenbrand
|
Peter Schaldenbrand, Zhixuan Liu and Jean Oh
|
StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis
|
Superseded by arXiv:2202.12362
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Generating images that fit a given text description using machine learning
has improved greatly with the release of technologies such as the CLIP
image-text encoder model; however, current methods lack artistic control of the
style of image to be generated. We introduce StyleCLIPDraw which adds a style
loss to the CLIPDraw text-to-drawing synthesis model to allow artistic control
of the synthesized drawings in addition to control of the content via text.
Whereas performing decoupled style transfer on a generated image only affects
the texture, our proposed coupled approach is able to capture a style in both
texture and shape, suggesting that the style of the drawing is coupled with the
drawing process itself. More results and our code are available at
https://github.com/pschaldenbrand/StyleCLIPDraw
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 19:57:17 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 02:31:48 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Schaldenbrand",
"Peter",
""
],
[
"Liu",
"Zhixuan",
""
],
[
"Oh",
"Jean",
""
]
] |
new_dataset
| 0.998538 |
2201.10936
|
Dimitri von R\"utte
|
Dimitri von R\"utte, Luca Biggio, Yannic Kilcher, Thomas Hofmann
|
FIGARO: Generating Symbolic Music with Fine-Grained Artistic Control
|
14 pages, 9 figures
| null | null | null |
cs.SD cs.LG eess.AS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating music with deep neural networks has been an area of active
research in recent years. While the quality of generated samples has been
steadily increasing, most methods are only able to exert minimal control over
the generated sequence, if any. We propose the self-supervised
description-to-sequence task, which allows for fine-grained controllable
generation on a global level. We do so by extracting high-level features about
the target sequence and learning the conditional distribution of sequences
given the corresponding high-level description in a sequence-to-sequence
modelling setup. We train FIGARO (FIne-grained music Generation via
Attention-based, RObust control) by applying description-to-sequence modelling
to symbolic music. By combining learned high level features with domain
knowledge, which acts as a strong inductive bias, the model achieves
state-of-the-art results in controllable symbolic music generation and
generalizes well beyond the training distribution.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 13:51:19 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2022 12:33:01 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Mar 2022 09:36:11 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"von Rütte",
"Dimitri",
""
],
[
"Biggio",
"Luca",
""
],
[
"Kilcher",
"Yannic",
""
],
[
"Hofmann",
"Thomas",
""
]
] |
new_dataset
| 0.996874 |
2202.12450
|
Wenrui Zhang
|
Wenrui Zhang, Shijia Geng, Zhaoji Fu, Linlin Zheng, Chenyang Jiang,
Shenda Hong
|
MetaVA: Curriculum Meta-learning and Pre-fine-tuning of Deep Neural
Networks for Detecting Ventricular Arrhythmias based on ECGs
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ventricular arrhythmias (VA) are the main causes of sudden cardiac death.
Developing machine learning methods for detecting VA based on
electrocardiograms (ECGs) can help save people's lives. However, developing
such machine learning models for ECGs is challenging because of the following:
1) group-level diversity from different subjects and 2) individual-level
diversity from different moments of a single subject. In this study, we aim to
solve these problems in the pre-training and fine-tuning stages. For the
pre-training stage, we propose a novel model agnostic meta-learning (MAML) with
curriculum learning (CL) method to solve group-level diversity. MAML is
expected to better transfer the knowledge from a large dataset and use only a
few recordings to quickly adapt the model to a new person. CL is supposed to
further improve MAML by meta-learning from easy to difficult tasks. For the
fine-tuning stage, we propose improved pre-fine-tuning to solve
individual-level diversity. We conduct experiments using a combination of three
publicly available ECG datasets. The results show that our method outperforms
the compared methods in terms of all evaluation metrics. Ablation studies show
that MAML and CL could help perform more evenly, and pre-fine-tuning could
better fit the model to training data.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 01:26:19 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 02:05:59 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Zhang",
"Wenrui",
""
],
[
"Geng",
"Shijia",
""
],
[
"Fu",
"Zhaoji",
""
],
[
"Zheng",
"Linlin",
""
],
[
"Jiang",
"Chenyang",
""
],
[
"Hong",
"Shenda",
""
]
] |
new_dataset
| 0.98678 |
2202.13976
|
Andr\'as Strausz
|
Andr\'as Strausz, Flavio Vella, Salvatore Di Girolamo, Maciej Besta
and Torsten Hoefler
|
Asynchronous Distributed-Memory Triangle Counting and LCC with RMA
Caching
|
11 pages, 10 figures, to be published at IPDPS'22
| null | null | null |
cs.DC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Triangle count and local clustering coefficient are two core metrics for
graph analysis. They find broad application in analyses such as community
detection and link recommendation. Current state-of-the-art solutions suffer
from synchronization overheads or expensive pre-computations needed to
distribute the graph, achieving limited scaling capabilities. We propose a
fully asynchronous implementation for triangle counting and local clustering
coefficient based on 1D partitioning, using remote memory accesses to
transfer data and avoid synchronization. Additionally, we show how these
algorithms exhibit data reuse on remote memory accesses and how the overall
communication time can be improved by caching these accesses. Finally, we
extend CLaMPI, a software-layer caching system for MPI RMA, to include
application-specific scores for cached entries and influence the eviction
procedure to improve caching efficiency. Our results show improvements on
shared memory, and we achieve 14x speedup from 4 to 64 nodes for the
LiveJournal1 graph on distributed memory. Moreover, we demonstrate how caching
remote accesses reduces total running time by up to 73% with respect to a
non-cached version. Finally, we compare our implementation to TriC, the 2020
graph champion paper, and achieve up to 100x faster results for scale-free
graphs.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 17:26:15 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 16:51:01 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Strausz",
"András",
""
],
[
"Vella",
"Flavio",
""
],
[
"Di Girolamo",
"Salvatore",
""
],
[
"Besta",
"Maciej",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
new_dataset
| 0.994525 |
2203.00002
|
Yee Sin Ang
|
Tianning Zhang, Yee Sin Ang, Erping Li, Chun Yun Kee, L. K. Ang
|
SUTD-PRCM Dataset and Neural Architecture Search Approach for Complex
Metasurface Design
|
20 pages, 7 figures, 2 tables
| null | null | null |
cs.LG physics.app-ph physics.comp-ph physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Metasurfaces have received a lot of attention recently due to their
versatile capability in manipulating electromagnetic waves. Advanced designs to
satisfy multiple objectives with non-linear constraints have motivated
researchers in using machine learning (ML) techniques like deep learning (DL)
for accelerated design of metasurfaces. For metasurfaces, it is difficult to
make quantitative comparisons between different ML models without having a
common and yet complex dataset used in many disciplines like image
classification. Many studies were directed to relatively constrained datasets
that are limited to specified patterns or shapes in metasurfaces. In this
paper, we present our SUTD polarized reflection of complex metasurfaces
(SUTD-PRCM) dataset, which contains approximately 260,000 samples of complex
metasurfaces created from electromagnetic simulation, and it has been used to
benchmark our DL models. The metasurface patterns are divided into different
classes to facilitate different degrees of complexity, which involves
identifying and exploiting the relationship between the patterns and the
electromagnetic responses that can be compared in using different DL models.
With the release of this SUTD-PRCM dataset, we hope that it will be useful for
benchmarking existing or future DL models developed in the ML community. We
also propose a classification problem that is less encountered and apply neural
architecture search to have a preliminary understanding of potential
modification to the neural architecture that will improve the prediction by DL
models. Our finding shows that convolution stacking is no longer the dominant
element of the neural architecture, which implies that low-level features are
preferred over the traditional deep hierarchical high-level features, thus
explaining why deep convolutional neural network based models do not perform
well on our dataset.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 16:15:13 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Zhang",
"Tianning",
""
],
[
"Ang",
"Yee Sin",
""
],
[
"Li",
"Erping",
""
],
[
"Kee",
"Chun Yun",
""
],
[
"Ang",
"L. K.",
""
]
] |
new_dataset
| 0.999846 |
2203.00046
|
Mattias Heinrich
|
Mattias P. Heinrich and Lasse Hansen
|
Voxelmorph++ Going beyond the cranial vault with keypoint supervision
and multi-channel instance optimisation
|
10 pages, accepted at WBIR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The majority of current research in deep learning based image registration
addresses inter-patient brain registration with moderate deformation
magnitudes. The recent Learn2Reg medical registration benchmark has
demonstrated that single-scale U-Net architectures, such as VoxelMorph that
directly employ a spatial transformer loss, often do not generalise well beyond
the cranial vault and fall short of state-of-the-art performance for abdominal
or intra-patient lung registration. Here, we propose two straightforward steps
that greatly reduce this gap in accuracy. First, we employ keypoint
self-supervision with a novel network head that predicts a discretised heatmap
and reduces large deformations for better robustness. Second, we
replace multiple learned fine-tuning steps by a single instance optimisation
with hand-crafted features and the Adam optimiser. Different from other related
work, including FlowNet or PDD-Net, our approach does not require a fully
discretised architecture with a correlation layer. Our ablation study
demonstrates the importance of keypoints in both self-supervised and
unsupervised (using only a MIND metric) settings. On a multi-centric
inspiration-exhale lung CT dataset, including very challenging COPD scans, our
method outperforms VoxelMorph by improving nonlinear alignment by 77% compared
to 19% - reaching target registration errors of 2 mm that outperform all but
one learning method published to date. Extending the method to semantic
features sets new state-of-the-art performance on inter-subject abdominal CT
registration.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 19:23:29 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Heinrich",
"Mattias P.",
""
],
[
"Hansen",
"Lasse",
""
]
] |
new_dataset
| 0.968928 |
2203.00069
|
Xin Tian UoB
|
Xin Tian, Nantheera Anantrasirichai, Lindsay Nicholson, Alin Achim
|
Optimal Transport-based Graph Matching for 3D retinal OCT image
registration
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Registration of longitudinal optical coherence tomography (OCT) images
assists disease monitoring and is essential in image fusion applications. Mouse
retinal OCT images are often collected for longitudinal study of eye disease
models such as uveitis, but their quality is often poor compared with human
imaging. This paper presents a novel but efficient framework involving an
optimal transport based graph matching (OT-GM) method for 3D mouse OCT image
registration. We first perform registration of fundus-like images obtained by
projecting all b-scans of a volume on a plane orthogonal to them, hereafter
referred to as the x-y plane. We introduce Adaptive Weighted Vessel Graph
Descriptors (AWVGD) and 3D Cube Descriptors (CD) to identify the correspondence
between nodes of graphs extracted from segmented vessels within the OCT
projection images. The AWVGD comprises scaling, translation and rotation, which
are computationally efficient, whereas CD exploits 3D spatial and frequency
domain information. The OT-GM method subsequently performs the correct
alignment in the x-y plane. Finally, registration along the direction
orthogonal to the x-y plane (the z-direction) is guided by the segmentation of
two important anatomical features peculiar to mouse b-scans, the Internal
Limiting Membrane (ILM) and the hyaloid remnant (HR). Both subjective and
objective evaluation results demonstrate that our framework outperforms other
well-established methods on mouse OCT images within a reasonable execution
time.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 20:15:12 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Tian",
"Xin",
""
],
[
"Anantrasirichai",
"Nantheera",
""
],
[
"Nicholson",
"Lindsay",
""
],
[
"Achim",
"Alin",
""
]
] |
new_dataset
| 0.952888 |
2203.00271
|
Firoj Alam
|
Hamdy Mubarak, Shammur Absar Chowdhury, Firoj Alam
|
ArabGend: Gender Analysis and Inference on Arabic Twitter
|
Gender Analysis Dataset, Demography, Arabic Twitter Accounts, Arabic
Social Media Content
| null | null | null |
cs.CL cs.CY cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Gender analysis of Twitter can reveal important socio-cultural differences
between male and female users. There has been a significant effort to analyze
and automatically infer gender in the past for most widely spoken languages'
content, however, to our knowledge very limited work has been done for Arabic.
In this paper, we perform an extensive analysis of differences between male and
female users on the Arabic Twitter-sphere. We study differences in user
engagement, topics of interest, and the gender gap in professions. Along with
gender analysis, we also propose a method to infer gender by utilizing
usernames, profile pictures, tweets, and networks of friends. In order to do
so, we manually annotated gender and locations for ~166K Twitter accounts
associated with ~92K user locations, which we plan to make publicly available
at http://anonymous.com. Our proposed gender inference method achieves an F1
score of 82.1%, which is 47.3% higher than the majority baseline. In addition,
we also developed a demo and made it publicly available.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 07:13:09 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Mubarak",
"Hamdy",
""
],
[
"Chowdhury",
"Shammur Absar",
""
],
[
"Alam",
"Firoj",
""
]
] |
new_dataset
| 0.990892 |
2203.00285
|
Joan Boyar
|
Joan Boyar, Lene M. Favrholdt, Kim S. Larsen
|
Online Unit Profit Knapsack with Untrusted Predictions
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A variant of the online knapsack problem is considered in the settings of
trusted and untrusted predictions. In Unit Profit Knapsack, the items have unit
profit, and it is easy to find an optimal solution offline: Pack as many of the
smallest items as possible into the knapsack. For Online Unit Profit Knapsack,
the competitive ratio is unbounded. In contrast, previous work on online
algorithms with untrusted predictions generally studied problems where an
online algorithm with a constant competitive ratio is known. The prediction,
possibly obtained from a machine learning source, that our algorithm uses is
the average size of those smallest items that fit in the knapsack. For the
prediction error in this hard online problem, we use the ratio
$r=\frac{a}{\hat{a}}$ where $a$ is the actual value for this average size and
$\hat{a}$ is the prediction. The algorithm presented achieves a competitive
ratio of $\frac{1}{2r}$ for $r\geq 1$ and $\frac{r}{2}$ for $r\leq 1$. Using an
adversary technique, we show that this is optimal in some sense, giving a
trade-off in the competitive ratio attainable for different values of $r$. Note
that the result for accurate advice, $r=1$, is only $\frac{1}{2}$, but we show
that no algorithm knowing the value $a$ can achieve a competitive ratio better
than $\frac{e-1}{e}\approx 0.6321$ and present an algorithm with a matching
upper bound. We also show that this latter algorithm attains a competitive
ratio of $r\frac{e-1}{e}$ for $r \leq 1$ and $\frac{e-r}{e}$ for $1 \leq r <
e$, and no algorithm can be better for both $r<1$ and $1\leq r<e$.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 08:17:04 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Boyar",
"Joan",
""
],
[
"Favrholdt",
"Lene M.",
""
],
[
"Larsen",
"Kim S.",
""
]
] |
new_dataset
| 0.970848 |
2203.00403
|
Nikolaos Passalis
|
N. Passalis, S. Pedrazzi, R. Babuska, W. Burgard, D. Dias, F. Ferro,
M. Gabbouj, O. Green, A. Iosifidis, E. Kayacan, J. Kober, O. Michel, N.
Nikolaidis, P. Nousi, R. Pieters, M. Tzelepi, A. Valada, and A. Tefas
|
OpenDR: An Open Toolkit for Enabling High Performance, Low Footprint
Deep Learning for Robotics
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing Deep Learning (DL) frameworks typically do not provide ready-to-use
solutions for robotics, where very specific learning, reasoning, and embodiment
problems exist. Their relatively steep learning curve and the different
methodologies employed by DL compared to traditional approaches, along with the
high complexity of DL models, which often leads to the need to employ
specialized hardware accelerators, further increase the effort and cost needed
to employ DL models in robotics. Also, most of the existing DL methods follow a
static inference paradigm, as inherited from the traditional computer vision
pipelines, ignoring active perception, which can be employed to actively
interact with the environment in order to increase perception accuracy. In this
paper, we present the Open Deep Learning Toolkit for Robotics (OpenDR). OpenDR
aims at developing an open, non-proprietary, efficient, and modular toolkit
that can be easily used by robotics companies and research institutions to
efficiently develop and deploy AI and cognition technologies to robotics
applications, providing a solid step towards addressing the aforementioned
challenges. We also detail the design choices, along with an abstract interface
that was created to overcome these challenges. This interface can describe
various robotic tasks, spanning beyond traditional DL cognition and inference,
as known by existing frameworks, incorporating openness, homogeneity and
robotics-oriented perception e.g., through active perception, as its core
design principles.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 12:59:59 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Passalis",
"N.",
""
],
[
"Pedrazzi",
"S.",
""
],
[
"Babuska",
"R.",
""
],
[
"Burgard",
"W.",
""
],
[
"Dias",
"D.",
""
],
[
"Ferro",
"F.",
""
],
[
"Gabbouj",
"M.",
""
],
[
"Green",
"O.",
""
],
[
"Iosifidis",
"A.",
""
],
[
"Kayacan",
"E.",
""
],
[
"Kober",
"J.",
""
],
[
"Michel",
"O.",
""
],
[
"Nikolaidis",
"N.",
""
],
[
"Nousi",
"P.",
""
],
[
"Pieters",
"R.",
""
],
[
"Tzelepi",
"M.",
""
],
[
"Valada",
"A.",
""
],
[
"Tefas",
"A.",
""
]
] |
new_dataset
| 0.999503 |
2203.00435
|
Negar Rostamzadeh
|
Lindiwe Brigitte Malobola, Negar Rostamzadeh, Shakir Mohamed
|
se-Shweshwe Inspired Fashion Generation
|
CVPR 2021 Beyond Fairness workshop
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Fashion is one of the ways in which we show ourselves to the world. It is a
reflection of our personal decisions and one of the ways in which people
distinguish and represent themselves. In this paper, we focus on the fashion
design process and expand computer vision for fashion beyond its current focus
on western fashion. We discuss the history of Southern African se-Shweshwe
fabric fashion, the collection of a se-Shweshwe dataset, and the application of
sketch-to-design image generation for affordable fashion-design. The
application to fashion raises both technical questions of training with small
amounts of data, and also important questions for computer vision beyond
fairness, in particular ethical considerations on creating and employing
fashion datasets, and how computer vision supports cultural representation and
might avoid algorithmic cultural appropriation.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 22:10:23 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Malobola",
"Lindiwe Brigitte",
""
],
[
"Rostamzadeh",
"Negar",
""
],
[
"Mohamed",
"Shakir",
""
]
] |
new_dataset
| 0.999714 |
2203.00501
|
Xiaofeng Wang
|
Ilenia Fronza, Luis Corral, Xiaofeng Wang, Claus Pahl
|
Keeping Fun Alive: an Experience Report on Running Online Coding Camps
| null | null |
10.1145/3510456.3514153
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The outbreak of the COVID-19 pandemic radically prohibited the collocation
and face-to-face interactions of participants in coding bootcamps and similar
experiences, which are key characteristics that help participants to advance
technical work. Several specific issues must be solved so that online coding
camps can achieve the same level of positive outcomes for participants. One
such issue is how to keep the same level of fun that participants obtained
through physical activities and interactions in
the face-to-face settings. In this paper, we report on our experience and
insights gained from designing and running a fully remote coding camp that
exposes high school students to Agile-based Software Engineering practices to
enhance their ability to develop high-quality software. To design the online
coding camp, we adapted the face-to-face version of the coding camp to keep the
same "level of fun", i.e., adaptations aimed at increasing communication,
engaging participants, and introducing fun items to reduce fatigue due to
prolonged computer use, while preserving the technical curriculum that enables
students to attain the learning goals originally planned. The comparison with
the results of the face-to-face coding camp shows that we succeeded in keeping
the fun alive in the online edition, and the participants of online camp were
able to produce the results at the same level of quality in terms of product
and process as in the face-to-face edition. From our experience, we synthesize
lessons learned, and we sketch some guidelines for educators.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 14:50:50 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Fronza",
"Ilenia",
""
],
[
"Corral",
"Luis",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Pahl",
"Claus",
""
]
] |
new_dataset
| 0.994632 |
2203.00579
|
Manesh Thankappan
|
Manesh Thankappan, Helena Rif\`a-Pous, Carles Garrigues
|
Multi-Channel Man-in-the-Middle Attacks Against Protected Wi-Fi
Networks: A State of the Art Review
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multi-Channel Man-in-the-Middle (MC-MitM) attacks are special MitM attacks
capable of manipulating encrypted Wi-Fi wireless frames between two legitimate
endpoints. Since their inception in 2014, attackers have been targeting WPA
Wi-Fi networks to perform different attacks, such as cipher downgrades, denial
of service, key reinstallation attacks (KRACK) in 2017, and recently
FragAttacks in 2021, which widely impacted millions of Wi-Fi devices,
especially IoT devices. To the best of our knowledge, there are no studies in
the literature that holistically review the different types of MC-MitM enabled
attacks and analyze their potential Internet of Things (IoT) impact. To this
end, we evaluate the capabilities of MC-MitM attacks and review every reported
attack in the state of the art. We examine practical issues that hamper the
total adoption of protection mechanisms, i.e., security patches and Protected
Management Frames (PMF), and review available defense mechanisms confronting
MC-MitM enabled attacks in the IoT context. Finally, we highlight the
potential research problems and identify future research lines in this field.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 16:03:25 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Thankappan",
"Manesh",
""
],
[
"Rifà-Pous",
"Helena",
""
],
[
"Garrigues",
"Carles",
""
]
] |
new_dataset
| 0.999339 |
2203.00591
|
Maria Waheed
|
Maria Waheed, Michael Milford, Klaus McDonald-Maier and Shoaib Ehsan
|
SwitchHit: A Probabilistic, Complementarity-Based Switching System for
Improved Visual Place Recognition in Changing Environments
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual place recognition (VPR), a fundamental task in computer vision and
robotics, is the problem of identifying a place mainly based on visual
information. Viewpoint and appearance changes, such as due to weather and
seasonal variations, make this task challenging. Currently, there is no
universal VPR technique that can work in all types of environments, on a
variety of robotic platforms, and under a wide range of viewpoint and
appearance changes. Recent work has shown the potential of combining different
VPR methods intelligently by evaluating complementarity for some specific VPR
datasets to achieve better performance. This, however, requires ground truth
information (correct matches) which is not available when a robot is deployed
in a real-world scenario. Moreover, running multiple VPR techniques in parallel
may be prohibitive for resource-constrained embedded platforms. To overcome
these limitations, this paper presents a probabilistic, complementarity-based
switching VPR system, SwitchHit. Our proposed system consists of multiple VPR
techniques; however, it does not simply run all techniques at once, but rather
predicts the probability of a correct match for an incoming query image and
dynamically switches to another complementary technique if the probability of
correctly matching the query is below a certain threshold. This innovative use
of multiple VPR techniques allows our system to be more efficient and robust
than other combined VPR approaches that employ brute force and run multiple
VPR techniques at once, thus making it more suitable for resource-constrained
embedded systems and achieving overall superior performance to what any
individual VPR method in the system could have achieved by running
independently.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 16:23:22 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Waheed",
"Maria",
""
],
[
"Milford",
"Michael",
""
],
[
"McDonald-Maier",
"Klaus",
""
],
[
"Ehsan",
"Shoaib",
""
]
] |
new_dataset
| 0.997957 |
2203.00600
|
Daniel T Chang
|
Daniel T. Chang
|
Dual Embodied-Symbolic Concept Representations for Deep Learning
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by recent findings from cognitive neural science, we advocate the
use of a dual-level model for concept representations: the embodied level
consists of concept-oriented feature representations, and the symbolic level
consists of concept graphs. Embodied concept representations are modality
specific and exist in the form of feature vectors in a feature space. Symbolic
concept representations, on the other hand, are amodal and language specific,
and exist in the form of word / knowledge-graph embeddings in a concept /
knowledge space. The human conceptual system comprises both embodied
representations and symbolic representations, which typically interact to drive
conceptual processing. As such, we further advocate the use of dual
embodied-symbolic concept representations for deep learning. To demonstrate
their usage and value, we discuss two important use cases: embodied-symbolic
knowledge distillation for few-shot class incremental learning, and
embodied-symbolic fused representation for image-text matching. Dual
embodied-symbolic concept representations are the foundation for deep learning
and symbolic AI integration. We discuss two important examples of such
integration: scene graph generation with knowledge graph bridging, and
multimodal knowledge graphs.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 16:40:12 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Chang",
"Daniel T.",
""
]
] |
new_dataset
| 0.95324 |
2203.00637
|
Saad Islam
|
Saad Islam, Koksal Mus, Richa Singh, Patrick Schaumont and Berk Sunar
|
Signature Correction Attack on Dilithium Signature Scheme
| null | null | null | null |
cs.CR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Motivated by the rise of quantum computers, existing public-key cryptosystems
are expected to be replaced by post-quantum schemes in the next decade in
billions of devices. To facilitate the transition, NIST is running a
standardization process which is currently in its final Round. Only three
digital signature schemes are left in the competition, among which Dilithium
and Falcon are the ones based on lattices. Classical fault attacks on signature
schemes make use of pairs of faulty and correct signatures to recover the
secret key which only works on deterministic schemes. To counter such attacks,
Dilithium offers a randomized version which makes each signature unique, even
when signing identical messages.
In this work, we introduce a novel Signature Correction Attack which not only
applies to the deterministic version but also to the randomized version of
Dilithium and is effective even on constant-time implementations using AVX2
instructions. The Signature Correction Attack exploits the mathematical
structure of Dilithium to recover the secret key bits by using faulty
signatures and the public-key. It can work for any fault mechanism which can
induce single bit-flips. For demonstration, we are using Rowhammer induced
faults. Thus, our attack does not require any physical access or special
privileges, and hence could be also implemented on shared cloud servers. We
perform a thorough classical and quantum security analysis of Dilithium and
successfully recover 1,851 bits out of 3,072 bits of secret key $s_1$ for
security level 2. The lattice strength against quantum attackers is reduced
from $2^{128}$ to $2^{81}$ while the strength against classical attackers is
reduced from $2^{141}$ to $2^{89}$. Hence, the Signature Correction Attack may
be employed to achieve a practical attack on Dilithium (security level 2) as
proposed in Round 3 of the NIST post-quantum standardization process.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 17:26:18 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Islam",
"Saad",
""
],
[
"Mus",
"Koksal",
""
],
[
"Singh",
"Richa",
""
],
[
"Schaumont",
"Patrick",
""
],
[
"Sunar",
"Berk",
""
]
] |
new_dataset
| 0.995656 |
2203.00642
|
Peter Sewell
|
Ben Simner, Alasdair Armstrong, Jean Pichon-Pharabod, Christopher
Pulte, Richard Grisenthwaite, Peter Sewell
|
Relaxed virtual memory in Armv8-A (extended version)
| null | null | null | null |
cs.AR cs.OS cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Virtual memory is an essential mechanism for enforcing security boundaries,
but its relaxed-memory concurrency semantics has not previously been
investigated in detail. The concurrent systems code managing virtual memory has
been left on an entirely informal basis, and OS and hypervisor verification has
had to make major simplifying assumptions.
We explore the design space for relaxed virtual memory semantics in the
Armv8-A architecture, to support future system-software verification. We
identify many design questions, in discussion with Arm; develop a test suite,
including use cases from the pKVM production hypervisor under development by
Google; delimit the design space with axiomatic-style concurrency models; prove
that under simple stable configurations our architectural model collapses to
previous "user" models; develop tooling to compute allowed behaviours in the
model integrated with the full Armv8-A ISA semantics; and develop a hardware
test harness.
This lays out some of the main issues in relaxed virtual memory, bringing
these security-critical systems phenomena into the domain of
programming-language semantics and verification with foundational architecture
semantics.
This document is an extended version of a paper in ESOP 2022, with additional
explanation and examples in the main body, and appendices detailing our litmus
tests, models, proofs, and test results.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 17:34:36 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"Simner",
"Ben",
""
],
[
"Armstrong",
"Alasdair",
""
],
[
"Pichon-Pharabod",
"Jean",
""
],
[
"Pulte",
"Christopher",
""
],
[
"Grisenthwaite",
"Richard",
""
],
[
"Sewell",
"Peter",
""
]
] |
new_dataset
| 0.998567 |
2203.00649
|
Hamza El-Kebir
|
Hamza El-Kebir, Joseph Bentsman, Melkior Ornik
|
Lodestar: An Integrated Embedded Real-Time Control Engine
|
8 pages, 7 figures. Submitted to IROS22. More info, including source
code, at https://ldstr.dev
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
In this work we present Lodestar, an integrated engine for rapid real-time
control system development. Using a functional block diagram paradigm, Lodestar
allows for complex multi-disciplinary control software design, while
automatically resolving execution order, circular data-dependencies, and
networking. In particular, Lodestar presents a unified set of control, signal
processing, and computer vision routines to users, which may be interfaced with
external hardware and software packages using interoperable user-defined
wrappers. Lodestar allows for user-defined block diagrams to be directly
executed, or for them to be translated to overhead-free source code for
integration in other programs. We demonstrate how our framework departs from
approaches used in state-of-the-art simulation frameworks to enable real-time
performance, and compare its capabilities to existing solutions in the realm of
control software. To demonstrate the utility of Lodestar in real-time control
systems design, we have applied Lodestar to implement two real-time
torque-based controllers for a robotic arm. In addition, we have developed a
novel autofocus algorithm for use in thermography-based localization and
parameter estimation in electrosurgery and other areas of robot-assisted
surgery. We compare our algorithm design approach in Lodestar to a classical
ground-up approach, showing that Lodestar considerably eases the design
process. We also show how Lodestar can seamlessly interface with existing
simulation and networking framework in a number of simulation examples.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 17:45:21 GMT"
}
] | 2022-03-02T00:00:00 |
[
[
"El-Kebir",
"Hamza",
""
],
[
"Bentsman",
"Joseph",
""
],
[
"Ornik",
"Melkior",
""
]
] |
new_dataset
| 0.999687 |
2007.12404
|
Murdoch Gabbay
|
Murdoch J. Gabbay
|
Algebras of UTxO blockchains
| null | null |
10.1017/S0960129521000438
| null |
cs.LO math.RA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We condense the theory of blockchains down to a simple and compact set of
four type equations (Idealised EUTxO), and to an algebraic characterisation
(abstract chunk systems), and exhibit an adjoint pair of functors between them.
This gives a novel account of the essential mathematical structures underlying
blockchain technology, such as Bitcoin.
|
[
{
"version": "v1",
"created": "Fri, 24 Jul 2020 08:20:16 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Sep 2021 14:11:33 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Gabbay",
"Murdoch J.",
""
]
] |
new_dataset
| 0.983515 |
2102.13253
|
Javier Gonzalez-Trejo
|
Javier Gonz\'alez-Trejo, Diego Mercado-Ravell, Israel Becerra and
Rafael Murrieta-Cid
|
On the Visual-based Safe Landing of UAVs in Populated Areas: a Crucial
Aspect for Urban Deployment
|
Video: https://youtu.be/yKSvNFzdDog
|
IEEE Robotics and Automation Letters, 6(4), 7901 7908 (2021)
|
10.1109/LRA.2021.3101861
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous landing of Unmanned Aerial Vehicles (UAVs) in crowded scenarios is
crucial for successful deployment of UAVs in populated areas, particularly in
emergency landing situations where the highest priority is to avoid hurting
people. In this work, a new visual-based algorithm for identifying Safe Landing
Zones (SLZ) in crowded scenarios is proposed, considering a camera mounted on
a UAV, where the people in the scene move with unknown dynamics. To do so, a
density map is generated for each image frame using a Deep Neural Network, from
where a binary occupancy map is obtained aiming to overestimate the people's
location for security reasons. Then, the occupancy map is projected to the
head's plane, and the SLZ candidates are obtained as circular regions in the
head's plane with a minimum security radius. Finally, to keep track of the SLZ
candidates, a multiple instance tracking algorithm is implemented using Kalman
Filters along with the Hungarian algorithm for data association. Several
scenarios were studied to prove the validity of the proposed strategy,
including public datasets and real uncontrolled scenarios with people moving in
public squares, taken from a UAV in flight. The study showed promising results
in preventing the UAV from hurting people during emergency
landing.
|
[
{
"version": "v1",
"created": "Fri, 26 Feb 2021 01:31:28 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"González-Trejo",
"Javier",
""
],
[
"Mercado-Ravell",
"Diego",
""
],
[
"Becerra",
"Israel",
""
],
[
"Murrieta-Cid",
"Rafael",
""
]
] |
new_dataset
| 0.996133 |
2103.10698
|
Alessandro Saviolo
|
Antonio Loquercio, Alessandro Saviolo, Davide Scaramuzza
|
AutoTune: Controller Tuning for High-Speed Flight
|
Video: https://youtu.be/m2q_y7C01So; Code:
https://github.com/uzh-rpg/mh_autotune
|
IEEE Robotics and Automation Letters 2022
|
10.1109/LRA.2022.3146897
| null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Due to noisy actuation and external disturbances, tuning controllers for
high-speed flight is very challenging. In this paper, we ask the following
questions: How sensitive are controllers to tuning when tracking high-speed
maneuvers? What algorithms can we use to automatically tune them? To answer the
first question, we study the relationship between parameters and performance
and find out that the faster the maneuver, the more sensitive a controller
becomes to its parameters. To answer the second question, we review existing
methods for controller tuning and discover that prior works often perform
poorly on the task of high-speed flight. Therefore, we propose AutoTune, a
sampling-based tuning algorithm specifically tailored to high-speed flight. In
contrast to previous work, our algorithm does not assume any prior knowledge of
the drone or its optimization function and can deal with the multi-modal
characteristics of the parameters' optimization space. We thoroughly evaluate
AutoTune both in simulation and in the physical world. In our experiments, we
outperform existing tuning algorithms by up to 90% in trajectory completion.
The resulting controllers are tested in the AirSim Game of Drones competition,
where we outperform the winner by up to 25% in lap-time. Finally, we show that
AutoTune improves tracking error when flying a physical platform with respect
to parameters tuned by a human expert.
|
[
{
"version": "v1",
"created": "Fri, 19 Mar 2021 09:12:51 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Feb 2022 21:28:23 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Loquercio",
"Antonio",
""
],
[
"Saviolo",
"Alessandro",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.992861 |
2106.06147
|
Giampiero Salvi
|
Jerome Abdelnour, Jean Rouat, Giampiero Salvi
|
NAAQA: A Neural Architecture for Acoustic Question Answering
|
Submitted to IEEE Transactions on Pattern Analysis and Machine
Intelligence (PAMI) in April 2021 (first revision February 2022)
| null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of the Acoustic Question Answering (AQA) task is to answer a
free-form text question about the content of an acoustic scene. It was inspired
by the Visual Question Answering (VQA) task. In this paper, based on the
previously introduced CLEAR dataset, we propose a new benchmark for AQA, namely
CLEAR2, that emphasizes the specific challenges of acoustic inputs. These
include handling of variable duration scenes, and scenes built with elementary
sounds that differ between training and test set. We also introduce NAAQA, a
neural architecture that leverages specific properties of acoustic inputs. The
use of 1D convolutions in time and frequency to process 2D spectro-temporal
representations of acoustic content shows promising results and enables
reductions in model complexity. We show that time coordinate maps augment
temporal localization capabilities which enhance performance of the network by
~17 percentage points. On the other hand, frequency coordinate maps have little
influence on this task. NAAQA achieves 79.5% accuracy on the AQA task with
~4 times fewer parameters than the previously explored VQA model. We evaluate
the performance of NAAQA on an independent data set reconstructed from DAQA. We
also test the addition of a MALiMo module in our model on both CLEAR2 and DAQA.
We provide a detailed analysis of the results for the different question types.
We release the code to produce CLEAR2 as well as NAAQA to foster research in
this newly emerging machine learning task.
|
[
{
"version": "v1",
"created": "Fri, 11 Jun 2021 03:05:48 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 20:20:10 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Abdelnour",
"Jerome",
""
],
[
"Rouat",
"Jean",
""
],
[
"Salvi",
"Giampiero",
""
]
] |
new_dataset
| 0.999686 |
2106.07487
|
Nuri Cingillioglu
|
Nuri Cingillioglu, Alessandra Russo
|
pix2rule: End-to-end Neuro-symbolic Rule Learning
|
IJCLR-NeSy, 41 pages. Minor correction to Lukasiewicz logic
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Humans have the ability to seamlessly combine low-level visual input with
high-level symbolic reasoning often in the form of recognising objects,
learning relations between them and applying rules. Neuro-symbolic systems aim
to bring a unifying approach to connectionist and logic-based principles for
visual processing and abstract reasoning respectively. This paper presents a
complete neuro-symbolic method for processing images into objects, learning
relations and logical rules in an end-to-end fashion. The main contribution is
a differentiable layer in a deep learning architecture from which symbolic
relations and rules can be extracted by pruning and thresholding. We evaluate
our model using two datasets: subgraph isomorphism task for symbolic rule
learning and an image classification domain with compound relations for
learning objects, relations and rules. We demonstrate that our model scales
beyond state-of-the-art symbolic learners and outperforms deep relational
neural network architectures.
|
[
{
"version": "v1",
"created": "Mon, 14 Jun 2021 15:19:06 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Sep 2021 20:15:43 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Feb 2022 12:47:30 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Cingillioglu",
"Nuri",
""
],
[
"Russo",
"Alessandra",
""
]
] |
new_dataset
| 0.996523 |
2107.12986
|
Khushraj Madnani
|
Shankara Narayanan Krishna, Khushraj Nanik Madnani, Manuel Mazo Jr.,
Paritosh K. Pandya
|
Logics Meet 2-Way 1-Clock Alternating Timed Automata
|
arXiv admin note: text overlap with arXiv:2105.09534
| null | null | null |
cs.FL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we study the extension of 1-clock Alternating Timed Automata
(1-ATA) with the ability to read in both forward and backward direction, the
2-Way 1-clock Alternating Timed Automata (2-Way 1-ATA). We show that subclass
of 2-Way 1-ATA with reset free loops (2-Way 1-ATA-rfl) is expressively
equivalent to MSO[<] extended with Guarded Metric Quantifiers (GQMSO).
Emptiness Checking problem for 2-Way 1-ATA-rfl (and hence GQMSO) is
undecidable, in general. We propose a "non-punctuality" like restriction,
called non-adjacency, for 2-Way 1-ATA-rfl, and also for GQMSO, for which the
emptiness (respectively, satisfiability) checking becomes decidable.
Non-Adjacent 2-Way 1-ATA is the first such class of Timed Automata with
alternations and 2-wayness for which the emptiness checking is decidable (and
that too with elementary complexity). We also show that 2-Way 1-ATA-rfl, even
with the non-adjacent restrictions, can express properties that are not
recognizable using 1-ATA.
|
[
{
"version": "v1",
"created": "Tue, 27 Jul 2021 17:55:36 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Feb 2022 08:25:24 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Krishna",
"Shankara Narayanan",
""
],
[
"Madnani",
"Khushraj Nanik",
""
],
[
"Mazo",
"Manuel",
"Jr."
],
[
"Pandya",
"Paritosh K.",
""
]
] |
new_dataset
| 0.977168 |
2109.00110
|
Kunhao Zheng
|
Kunhao Zheng, Jesse Michael Han, Stanislas Polu
|
MiniF2F: a cross-system benchmark for formal Olympiad-level mathematics
|
Published as a conference paper at ICLR 2022
| null | null | null |
cs.AI cs.FL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present miniF2F, a dataset of formal Olympiad-level mathematics problem
statements intended to provide a unified cross-system benchmark for neural
theorem proving. The miniF2F benchmark currently targets Metamath, Lean,
Isabelle (partially) and HOL Light (partially) and consists of 488 problem
statements drawn from the AIME, AMC, and the International Mathematical
Olympiad (IMO), as well as material from high-school and undergraduate
mathematics courses. We report baseline results using GPT-f, a neural theorem
prover based on GPT-3 and provide an analysis of its performance. We intend for
miniF2F to be a community-driven effort and hope that our benchmark will help
spur advances in neural theorem proving.
|
[
{
"version": "v1",
"created": "Tue, 31 Aug 2021 23:21:12 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Feb 2022 06:03:23 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Zheng",
"Kunhao",
""
],
[
"Han",
"Jesse Michael",
""
],
[
"Polu",
"Stanislas",
""
]
] |
new_dataset
| 0.999835 |
2109.02763
|
Dengxin Dai
|
Dengxin Dai, Arun Balajee Vasudevan, Jiri Matas, and Luc Van Gool
|
Binaural SoundNet: Predicting Semantics, Depth and Motion with Binaural
Sounds
|
Accepted by TPAMI. arXiv admin note: substantial text overlap with
arXiv:2003.04210
| null | null | null |
cs.SD cs.CV eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans can robustly recognize and localize objects by using visual and/or
auditory cues. While machines are able to do the same with visual data already,
less work has been done with sounds. This work develops an approach for scene
understanding purely based on binaural sounds. The considered tasks include
predicting the semantic masks of sound-making objects, the motion of
sound-making objects, and the depth map of the scene. To this aim, we propose a
novel sensor setup and record a new audio-visual dataset of street scenes with
eight professional binaural microphones and a 360-degree camera. The
co-existence of visual and audio cues is leveraged for supervision transfer. In
particular, we employ a cross-modal distillation framework that consists of
multiple vision teacher methods and a sound student method -- the student
method is trained to generate the same results as the teacher methods do. This
way, the auditory system can be trained without using human annotations. To
further boost the performance, we propose another novel auxiliary task, coined
Spatial Sound Super-Resolution, to increase the directional resolution of
sounds. We then formulate the four tasks into one end-to-end trainable
multi-tasking network aiming to boost the overall performance. Experimental
results show that 1) our method achieves good results for all four tasks, 2)
the four tasks are mutually beneficial -- training them together achieves the
best performance, 3) the number and orientation of microphones are both
important, and 4) features learned from the standard spectrogram and features
obtained by the classic signal processing pipeline are complementary for
auditory perception tasks. The data and code are released.
|
[
{
"version": "v1",
"created": "Mon, 6 Sep 2021 22:24:00 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Feb 2022 13:30:29 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Dai",
"Dengxin",
""
],
[
"Vasudevan",
"Arun Balajee",
""
],
[
"Matas",
"Jiri",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.999346 |
2109.09163
|
Bowen Wen
|
Bowen Wen and Wenzhao Lian and Kostas Bekris and Stefan Schaal
|
CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from
Simulation
|
IEEE International Conference on Robotics and Automation (ICRA) 2022
| null | null | null |
cs.RO cs.AI cs.CV cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Task-relevant grasping is critical for industrial assembly, where downstream
manipulation tasks constrain the set of valid grasps. Learning how to perform
this task, however, is challenging, since task-relevant grasp labels are hard
to define and annotate. There is also no consensus yet on proper
representations for modeling or off-the-shelf tools for performing
task-relevant grasps. This work proposes a framework to learn task-relevant
grasping for industrial objects without the need of time-consuming real-world
data collection or manual annotation. To achieve this, the entire framework is
trained solely in simulation, including supervised training with synthetic
label generation and self-supervised, hand-object interaction. In the context
of this framework, this paper proposes a novel, object-centric canonical
representation at the category level, which allows establishing dense
correspondence across object instances and transferring task-relevant grasps to
novel instances. Extensive experiments on task-relevant grasping of
densely-cluttered industrial objects are conducted in both simulation and
real-world setups, demonstrating the effectiveness of the proposed framework.
Code and data are available at https://sites.google.com/view/catgrasp.
|
[
{
"version": "v1",
"created": "Sun, 19 Sep 2021 16:48:33 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 20:09:44 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Wen",
"Bowen",
""
],
[
"Lian",
"Wenzhao",
""
],
[
"Bekris",
"Kostas",
""
],
[
"Schaal",
"Stefan",
""
]
] |
new_dataset
| 0.994186 |
2109.09227
|
Turab Iqbal
|
Turab Iqbal, Yin Cao, Andrew Bailey, Mark D. Plumbley, Wenwu Wang
|
ARCA23K: An audio dataset for investigating open-set label noise
|
Accepted to the Detection and Classification of Acoustic Scenes and
Events 2021 Workshop (DCASE2021)
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The availability of audio data on sound sharing platforms such as Freesound
gives users access to large amounts of annotated audio. Utilising such data for
training is becoming increasingly popular, but the problem of label noise that
is often prevalent in such datasets requires further investigation. This paper
introduces ARCA23K, an Automatically Retrieved and Curated Audio dataset
comprised of over 23000 labelled Freesound clips. Unlike past datasets such as
FSDKaggle2018 and FSDnoisy18K, ARCA23K facilitates the study of label noise in
a more controlled manner. We describe the entire process of creating the
dataset such that it is fully reproducible, meaning researchers can extend our
work with little effort. We show that the majority of labelling errors in
ARCA23K are due to out-of-vocabulary audio clips, and we refer to this type of
label noise as open-set label noise. Experiments are carried out in which we
study the impact of label noise in terms of classification performance and
representation learning.
|
[
{
"version": "v1",
"created": "Sun, 19 Sep 2021 21:10:25 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Feb 2022 09:35:05 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Iqbal",
"Turab",
""
],
[
"Cao",
"Yin",
""
],
[
"Bailey",
"Andrew",
""
],
[
"Plumbley",
"Mark D.",
""
],
[
"Wang",
"Wenwu",
""
]
] |
new_dataset
| 0.999742 |
2110.12715
|
Manuel Stoiber
|
Manuel Stoiber, Martin Pfanne, Klaus H. Strobl, Rudolph Triebel, Alin
Albu-Sch\"affer
|
SRT3D: A Sparse Region-Based 3D Object Tracking Approach for the Real
World
|
Submitted to the International Journal of Computer Vision
| null |
10.1007/s11263-022-01579-8
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Region-based methods have become increasingly popular for model-based,
monocular 3D tracking of texture-less objects in cluttered scenes. However,
while they achieve state-of-the-art results, most methods are computationally
expensive, requiring significant resources to run in real-time. In the
following, we build on our previous work and develop SRT3D, a sparse
region-based approach to 3D object tracking that bridges this gap in
efficiency. Our method considers image information sparsely along so-called
correspondence lines that model the probability of the object's contour
location. We thereby improve on the current state of the art and introduce
smoothed step functions that consider a defined global and local uncertainty.
For the resulting probabilistic formulation, a thorough analysis is provided.
Finally, we use a pre-rendered sparse viewpoint model to create a joint
posterior probability for the object pose. The function is maximized using
second-order Newton optimization with Tikhonov regularization. During the pose
estimation, we differentiate between global and local optimization, using a
novel approximation for the first-order derivative employed in the Newton
method. In multiple experiments, we demonstrate that the resulting algorithm
improves the current state of the art both in terms of runtime and quality,
performing particularly well for noisy and cluttered images encountered in the
real world.
|
[
{
"version": "v1",
"created": "Mon, 25 Oct 2021 07:58:18 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Stoiber",
"Manuel",
""
],
[
"Pfanne",
"Martin",
""
],
[
"Strobl",
"Klaus H.",
""
],
[
"Triebel",
"Rudolph",
""
],
[
"Albu-Schäffer",
"Alin",
""
]
] |
new_dataset
| 0.987108 |
2111.00440
|
Fabio Poiesi
|
Youjie Zhou, Yiming Wang, Fabio Poiesi, Qi Qin and Yi Wan
|
Loop closure detection using local 3D deep descriptors
|
This work is accepted for publication in IEEE Robotics and Automation
Letters
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a simple yet effective method to address loop closure detection in
simultaneous localisation and mapping using local 3D deep descriptors (L3Ds).
L3Ds are emerging compact representations of patches extracted from point
clouds that are learnt from data using a deep learning algorithm. We propose a
novel overlap measure for loop detection by computing the metric error between
points that correspond to mutually-nearest-neighbour descriptors after
registering the loop candidate point cloud by its estimated relative pose. This
novel approach enables us to accurately detect loops and estimate six
degrees-of-freedom poses in the case of small overlaps. We compare our
L3D-based loop closure approach with recent approaches on LiDAR data and
achieve state-of-the-art loop closure detection accuracy. Additionally, we
embed our loop closure approach in RESLAM, a recent edge-based SLAM system, and
perform the evaluation on real-world RGBD-TUM and synthetic ICL datasets. Our
approach enables RESLAM to achieve a better localisation accuracy compared to
its original loop closure strategy. Our project page is available at
github.com/yiming107/l3d_loop_closure.
|
[
{
"version": "v1",
"created": "Sun, 31 Oct 2021 09:18:38 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Feb 2022 14:48:13 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Zhou",
"Youjie",
""
],
[
"Wang",
"Yiming",
""
],
[
"Poiesi",
"Fabio",
""
],
[
"Qin",
"Qi",
""
],
[
"Wan",
"Yi",
""
]
] |
new_dataset
| 0.998769 |
2111.03913
|
Katerina Papantoniou
|
Katerina Papantoniou, Panagiotis Papadakos, Giorgos Flouris, Dimitris
Plexousakis
|
Linguistic Cues of Deception in a Multilingual April Fools' Day Context
|
Accepted for publication in the proceedings of the Eighth Italian
Conference on Computational Linguistics (CLIC-it 2021)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this work we consider the collection of deceptive April Fools' Day(AFD)
news articles as a useful addition in existing datasets for deception detection
tasks. Such collections have an established ground truth and are relatively
easy to construct across languages. As a result, we introduce a corpus that
includes diachronic AFD and normal articles from Greek newspapers and news
websites. On top of that, we build a rich linguistic feature set, and analyze
and compare its deception cues with the only AFD collection currently
available, which is in English. Following a current research thread, we also
discuss the individualism/collectivism dimension in deception with respect to
these two datasets. Lastly, we build classifiers by testing various monolingual
and crosslingual settings. The results showcase that AFD datasets can be
helpful in deception detection studies, and are in alignment with the
observations of other deception detection works.
|
[
{
"version": "v1",
"created": "Sat, 6 Nov 2021 16:28:12 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Nov 2021 09:44:03 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Feb 2022 06:50:12 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Papantoniou",
"Katerina",
""
],
[
"Papadakos",
"Panagiotis",
""
],
[
"Flouris",
"Giorgos",
""
],
[
"Plexousakis",
"Dimitris",
""
]
] |
new_dataset
| 0.967838 |
2111.12116
|
Smail Kourta
|
Smail Kourta, Adel Namani, Fatima Benbouzid-Si Tayeb, Kim Hazelwood,
Chris Cummins, Hugh Leather, and Riyadh Baghdadi
|
Caviar: An E-graph Based TRS for Automatic Code Optimization
|
Accepted in the 31st Conference on Compiler Construction (CC 2022)
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Term Rewriting Systems (TRSs) are used in compilers to simplify and prove
expressions. State-of-the-art TRSs in compilers use a greedy algorithm that
applies a set of rewriting rules in a predefined order (where some of the rules
are not axiomatic). This leads to a loss of the ability to simplify certain
expressions. E-graphs and equality saturation sidestep this issue by
representing the different equivalent expressions in a compact manner from
which the optimal expression can be extracted. While an e-graph-based TRS can
be more powerful than a TRS that uses a greedy algorithm, it is slower because
expressions may have a large or sometimes infinite number of equivalent
expressions. Accelerating e-graph construction is crucial for making the use of
e-graphs practical in compilers. In this paper, we present Caviar, an
e-graph-based TRS for proving expressions within compilers. The main advantage
of Caviar is its speed. It can prove expressions much faster than base e-graph
TRSs. It relies on three techniques: 1) a technique that stops e-graphs from
growing when the goal is reached, called Iteration Level Check; 2) a mechanism
that balances exploration and exploitation in the equality saturation
algorithm, called Pulsing Caviar; 3) a technique to stop e-graph construction
before reaching saturation when a non-provable pattern is detected, called
Non-Provable Patterns Detection (NPPD). We evaluate Caviar on Halide, an
optimizing compiler that relies on a greedy-algorithm-based TRS to simplify and
prove its expressions. The proposed techniques allow Caviar to accelerate
e-graph expansion for the task of proving expressions. They also allow Caviar
to prove expressions that Halide's TRS cannot prove while being only 0.68x
slower.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 19:16:33 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Feb 2022 22:50:12 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Kourta",
"Smail",
""
],
[
"Namani",
"Adel",
""
],
[
"Tayeb",
"Fatima Benbouzid-Si",
""
],
[
"Hazelwood",
"Kim",
""
],
[
"Cummins",
"Chris",
""
],
[
"Leather",
"Hugh",
""
],
[
"Baghdadi",
"Riyadh",
""
]
] |
new_dataset
| 0.957091 |
2202.08433
|
Jiangyan Yi
|
Jiangyan Yi, Ruibo Fu, Jianhua Tao, Shuai Nie, Haoxin Ma, Chenglong
Wang, Tao Wang, Zhengkun Tian, Ye Bai, Cunhang Fan, Shan Liang, Shiming Wang,
Shuai Zhang, Xinrui Yan, Le Xu, Zhengqi Wen, Haizhou Li, Zheng Lian, Bin Liu
|
ADD 2022: the First Audio Deep Synthesis Detection Challenge
|
Accepted by ICASSP 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio deepfake detection is an emerging topic, which was included in the
ASVspoof 2021. However, the recent shared tasks have not covered many real-life
and challenging scenarios. The first Audio Deep synthesis Detection challenge
(ADD) was motivated to fill in the gap. The ADD 2022 includes three tracks:
low-quality fake audio detection (LF), partially fake audio detection (PF) and
audio fake game (FG). The LF track focuses on dealing with bona fide and fully
fake utterances with various real-world noises etc. The PF track aims to
distinguish the partially fake audio from the real. The FG track is a rivalry
game, which includes two tasks: an audio generation task and an audio fake
detection task. In this paper, we describe the datasets, evaluation metrics,
and protocols. We also report major findings that reflect the recent advances
in audio deepfake detection tasks.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 03:29:20 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Feb 2022 07:06:58 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Yi",
"Jiangyan",
""
],
[
"Fu",
"Ruibo",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Nie",
"Shuai",
""
],
[
"Ma",
"Haoxin",
""
],
[
"Wang",
"Chenglong",
""
],
[
"Wang",
"Tao",
""
],
[
"Tian",
"Zhengkun",
""
],
[
"Bai",
"Ye",
""
],
[
"Fan",
"Cunhang",
""
],
[
"Liang",
"Shan",
""
],
[
"Wang",
"Shiming",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Yan",
"Xinrui",
""
],
[
"Xu",
"Le",
""
],
[
"Wen",
"Zhengqi",
""
],
[
"Li",
"Haizhou",
""
],
[
"Lian",
"Zheng",
""
],
[
"Liu",
"Bin",
""
]
] |
new_dataset
| 0.98261 |
2202.11620
|
Gerg\H{o} Pint\'er
|
Gerg\H{o} Pint\'er, Imre Felde
|
Awakening City: Traces of the Circadian Rhythm within the Mobile Phone
Network Data
| null |
Information 2022, 13(3), 114
|
10.3390/info13030114
| null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
In this study, Call Detail Records (CDR), covering Budapest, Hungary has been
processed to analyze the circadian rhythm of the subscribers. An indicator,
called wake-up time, is introduced to describe the behavior of a group of
subscribers. It is defined as the time when the mobile phone activity of a
group rises in the morning. Its counterpart is the time when the activity
The former considers the people who live in an area; the latter uses the
transit activity in an area to describe the behavior of a part of the city. The
opening hours of the malls and the nightlife of the party district was used to
demonstrate this application, as real-life examples. The proposed approach was
also used to estimate the working hours of the workplaces. The findings are in
good agreement with practice in Hungary, and also support the workplace
detection method. Negative correlation was found between wake-up time and
mobility indicators (Entropy, Radius of Gyration): On workdays, people wake up
earlier and travel more, on holidays it is quite the contrary. The wake-up time
was evaluated in different socioeconomic classes, using housing prices and
mobile phones prices, as well. It was found that lower socioeconomic groups
tend to wake up earlier.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 16:55:25 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Pintér",
"Gergő",
""
],
[
"Felde",
"Imre",
""
]
] |
new_dataset
| 0.997533 |
2202.12607
|
Makoto Morishita
|
Makoto Morishita, Katsuki Chousa, Jun Suzuki, Masaaki Nagata
|
JParaCrawl v3.0: A Large-scale English-Japanese Parallel Corpus
|
7 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most current machine translation models are mainly trained with parallel
corpora, and their translation accuracy largely depends on the quality and
quantity of the corpora. Although there are billions of parallel sentences for
a few language pairs, effectively dealing with most language pairs is difficult
due to a lack of publicly available parallel corpora. This paper creates a
large parallel corpus for English-Japanese, a language pair for which only
limited resources are available, compared to such resource-rich languages as
English-German. It introduces a new web-based English-Japanese parallel corpus
named JParaCrawl v3.0. Our new corpus contains more than 21 million unique
parallel sentence pairs, which is more than twice as many as the previous
JParaCrawl v2.0 corpus. Through experiments, we empirically show how our new
corpus boosts the accuracy of machine translation models on various domains.
The JParaCrawl v3.0 corpus will eventually be publicly available online for
research purposes.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 10:52:00 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Feb 2022 06:21:03 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Morishita",
"Makoto",
""
],
[
"Chousa",
"Katsuki",
""
],
[
"Suzuki",
"Jun",
""
],
[
"Nagata",
"Masaaki",
""
]
] |
new_dataset
| 0.999744 |
2202.12912
|
Ruinian Xu
|
Ruinian Xu and Hongyi Chen and Yunzhi Lin and Patricio A. Vela
|
SGL: Symbolic Goal Learning in a Hybrid, Modular Framework for Human
Instruction Following
|
8 pages, 3 figures, 3 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates robot manipulation based on human instruction with
ambiguous requests. The intent is to compensate for imperfect natural language
via visual observations. Early symbolic methods, based on manually defined
symbols, built modular frameworks consisting of semantic parsing and task planning
for producing sequences of actions from natural language requests. Modern
connectionist methods employ deep neural networks to automatically learn visual
and linguistic features and map to a sequence of low-level actions, in an
end-to-end fashion. These two approaches are blended to create a hybrid, modular
framework: it formulates instruction following as symbolic goal learning via
deep neural networks followed by task planning via symbolic planners.
Connectionist and symbolic modules are bridged with Planning Domain Definition
Language. The vision-and-language learning network predicts its goal
representation, which is sent to a planner for producing a task-completing
action sequence. For improving the flexibility of natural language, we further
incorporate implicit human intents with explicit human instructions. To learn
generic features for vision and language, we propose to separately pretrain
vision and language encoders on scene graph parsing and semantic textual
similarity tasks. Benchmarking evaluates the impacts of different components
of, or options for, the vision-and-language learning model and shows the
effectiveness of pretraining strategies. Manipulation experiments conducted in
the simulator AI2THOR show the robustness of the framework to novel scenarios.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 19:04:31 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Xu",
"Ruinian",
""
],
[
"Chen",
"Hongyi",
""
],
[
"Lin",
"Yunzhi",
""
],
[
"Vela",
"Patricio A.",
""
]
] |
new_dataset
| 0.995976 |
2202.12984
|
Frances Cleary Ms
|
Frances Cleary, Witawas Srisa-an, Beatriz Gil, Jaideep Kesavan, Tobias
Engel, David C. Henshall, Sasitharan Balasubramaniam
|
Wearable uBrain: Fabric Based-Spiking Neural Network
|
24 pages , 13 figures
| null | null | null |
cs.HC cs.ET
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
On-garment intelligence influenced by artificial neural networks and
neuromorphic computing is emerging as a research direction in the e-textile
sector. In particular, bio inspired Spiking Neural Networks mimicking the
workings of the brain show promise in recent ICT research applications. Taking
such technological advancements and new research directions driving forward the
next generation of e-textiles and smart materials, we present a wearable micro
Brain capable of event driven artificial spiking neural network computation in
a fabric based environment. We demonstrate a wearable Brain SNN prototype with
multi-layer computation, enabling scalability and flexibility in terms of
modifications for hidden layers to be augmented to the network. The wearable
micro Brain provides a low size, weight and power artificial on-garment
intelligent wearable solution with embedded functionality enabling offline
adaptive learning through the provision of interchangeable resistor synaptic
weightings. The prototype has been evaluated for fault tolerance, where we have
determined the robustness of the circuit when certain parts are damaged.
Validations were also conducted for movements to determine if the circuit can
still perform accurate computation.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 21:30:45 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Cleary",
"Frances",
""
],
[
"Srisa-an",
"Witawas",
""
],
[
"Gil",
"Beatriz",
""
],
[
"Kesavan",
"Jaideep",
""
],
[
"Engel",
"Tobias",
""
],
[
"Henshall",
"David C.",
""
],
[
"Balasubramaniam",
"Sasitharan",
""
]
] |
new_dataset
| 0.999506 |
2202.12991
|
Andrea Ceccarelli
|
Niccol\`o Piazzesi, Massimo Hong, Andrea Ceccarelli
|
Attacks and Faults Injection in Self-Driving Agents on the Carla
Simulator -- Experience Report
|
submitted version; appeared at: International Conference on Computer
Safety, Reliability, and Security. Springer, Cham, 2021
| null | null | null |
cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine Learning applications are acknowledged at the foundation of
autonomous driving, because they are the enabling technology for most driving
tasks. However, the inclusion of trained agents in automotive systems exposes
the vehicle to novel attacks and faults that can result in safety threats to
the driving tasks. In this paper we report our experimental campaign on the
injection of adversarial attacks and software faults in a self-driving agent
running in a driving simulator. We show that adversarial attacks and faults
injected in the trained agent can lead to erroneous decisions and severely
jeopardize safety. The paper shows a feasible and easily reproducible approach
based on an open-source simulator and tools, and the results clearly motivate
the need for both protective measures and extensive testing campaigns.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 21:46:12 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Piazzesi",
"Niccolò",
""
],
[
"Hong",
"Massimo",
""
],
[
"Ceccarelli",
"Andrea",
""
]
] |
new_dataset
| 0.997674 |
2202.13079
|
Siqu Long
|
Soyeon Caren Han, Siqu Long, Huichun Li, Henry Weld, Josiah Poon
|
Bi-directional Joint Neural Networks for Intent Classification and Slot
Filling
| null |
Proc. Interspeech 2021, pp.4743-4747
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intent classification and slot filling are two critical tasks for natural
language understanding. Traditionally the two tasks proceeded independently.
However, more recently joint models for intent classification and slot filling
have achieved state-of-the-art performance, and have proved that there exists a
strong relationship between the two tasks. In this paper, we propose a
bi-directional joint model for intent classification and slot filling, which
includes a multi-stage hierarchical process via BERT and bi-directional joint
natural language understanding mechanisms, including intent2slot and
slot2intent, to obtain mutual performance enhancement between intent
classification and slot filling. The evaluations show that our model achieves
state-of-the-art results on intent classification accuracy, slot filling F1,
and significantly improves sentence-level semantic frame accuracy when applied
to publicly available benchmark datasets, ATIS (88.6%) and SNIPS (92.8%).
|
[
{
"version": "v1",
"created": "Sat, 26 Feb 2022 06:35:21 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Han",
"Soyeon Caren",
""
],
[
"Long",
"Siqu",
""
],
[
"Li",
"Huichun",
""
],
[
"Weld",
"Henry",
""
],
[
"Poon",
"Josiah",
""
]
] |
new_dataset
| 0.984787 |
2202.13101
|
Bhushan Jagyasi
|
Jinu Jayan, Saurabh Pashine, Pallavi Gawade, Bhushan Jagyasi, Sreedhar
Seetharam, Gopali Contractor, Rajesh kumar Palani, Harshit Sampgaon, Sandeep
Vaity, Tamal Bhattacharyya, Rengaraj Ramasubbu
|
Sustainability using Renewable Electricity (SuRE) towards NetZero
Emissions
|
8 pages, 10 Figures, 3 tables, 20 References, IEEE Conference
template
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Demand for energy has increased significantly across the globe due to
increase in population and economic growth. Growth in energy demand poses
serious threat to the environment since majority of the energy sources are
non-renewable and based on fossil fuels, which leads to emission of harmful
greenhouse gases. Organizations across the world are facing challenges in
transitioning from fossil fuel-based sources to greener sources to reduce
their carbon footprint. As a step towards achieving Net-Zero emission target,
we present a scalable AI based solution that can be used by organizations to
increase their overall renewable electricity share in total energy consumption.
Our solution provides facilities with accurate energy demand forecast,
recommendation for procurement of renewable electricity to optimize cost and
carbon offset recommendations to compensate for Greenhouse Gas (GHG) emissions.
This solution has been used in production for more than a year for four
facilities and has increased their renewable electricity share significantly.
|
[
{
"version": "v1",
"created": "Sat, 26 Feb 2022 10:04:26 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Jayan",
"Jinu",
""
],
[
"Pashine",
"Saurabh",
""
],
[
"Gawade",
"Pallavi",
""
],
[
"Jagyasi",
"Bhushan",
""
],
[
"Seetharam",
"Sreedhar",
""
],
[
"Contractor",
"Gopali",
""
],
[
"Palani",
"Rajesh kumar",
""
],
[
"Sampgaon",
"Harshit",
""
],
[
"Vaity",
"Sandeep",
""
],
[
"Bhattacharyya",
"Tamal",
""
],
[
"Ramasubbu",
"Rengaraj",
""
]
] |
new_dataset
| 0.986478 |
2202.13137
|
Zhe Ming Chng
|
Zhe Ming Chng, Joseph Mun Hung Lew, Jimmy Addison Lee
|
RONELDv2: A faster, improved lane tracking method
|
9 pages, 8 figures, 6 tables
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Lane detection is an integral part of control systems in autonomous vehicles
and lane departure warning systems as lanes are a key component of the
operating environment for road vehicles. In a previous paper, a robust neural
network output enhancement for active lane detection (RONELD) method augmenting
deep learning lane detection models to improve active, or ego, lane accuracy
performance was presented. This paper extends the work by further investigating
the lane tracking methods used to increase robustness of the method to lane
changes and different lane dimensions (e.g. lane marking thickness) and
proposes an improved, lighter weight lane detection method, RONELDv2. It
improves on the previous RONELD method by detecting the lane point variance,
merging lanes to find a more accurate set of lane parameters, and using an
exponential moving average method to calculate more robust lane weights.
Experiments using the proposed improvements show a consistent increase in lane
detection accuracy results across different datasets and deep learning models,
as well as a decrease in computational complexity observed via an up to
two-fold decrease in runtime, which enhances its suitability for real-time use
on autonomous vehicles and lane departure warning systems.
|
[
{
"version": "v1",
"created": "Sat, 26 Feb 2022 13:12:09 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Chng",
"Zhe Ming",
""
],
[
"Lew",
"Joseph Mun Hung",
""
],
[
"Lee",
"Jimmy Addison",
""
]
] |
new_dataset
| 0.998889 |
2202.13185
|
Weidong Cao
|
Weidong Cao, Mouhacine Benosman, Xuan Zhang, Rui Ma
|
Domain Knowledge-Based Automated Analog Circuit Design with Deep
Reinforcement Learning
|
8 pages, 5 figures, 2 tables, Thirty-Sixth AAAI Conference on
Artificial Intelligence, The 1st Annual AAAI Workshop on AI to Accelerate
Science and Engineering (AI2ASE)
| null | null | null |
cs.LG cs.AI cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
The design automation of analog circuits is a longstanding challenge in the
integrated circuit field. This paper presents a deep reinforcement learning
method to expedite the design of analog circuits at the pre-layout stage, where
the goal is to find device parameters to fulfill desired circuit
specifications. Our approach is inspired by experienced human designers who
rely on domain knowledge of analog circuit design (e.g., circuit topology and
couplings between circuit specifications) to tackle the problem. Unlike all
prior methods, our method originally incorporates such key domain knowledge
into policy learning with a graph-based policy network, thereby best modeling
the relations between circuit parameters and design targets. Experimental
results on exemplary circuits show it achieves human-level design accuracy
(~99%) with 1.5x efficiency of existing best-performing methods. Our method
also shows better generalization ability to unseen specifications and
optimality in circuit performance optimization. Moreover, it applies to
designing diverse analog circuits across different semiconductor technologies,
breaking the limitations of prior ad-hoc methods in designing one particular
type of analog circuits with conventional semiconductor technology.
|
[
{
"version": "v1",
"created": "Sat, 26 Feb 2022 16:56:45 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Cao",
"Weidong",
""
],
[
"Benosman",
"Mouhacine",
""
],
[
"Zhang",
"Xuan",
""
],
[
"Ma",
"Rui",
""
]
] |
new_dataset
| 0.969616 |
2202.13202
|
Wensheng Gan
|
Gengsen Huang, Wensheng Gan, and Philip S. Yu
|
TaSPM: Targeted Sequential Pattern Mining
|
Preprint. 5 figures, 3 tables
| null | null | null |
cs.DB cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequential pattern mining (SPM) is an important technique of pattern mining,
which has many applications in reality. Although many efficient sequential
pattern mining algorithms have been proposed, few studies focus on target
sequences. Targeted querying of sequential patterns can not only reduce
the number of sequences generated by SPM, but also improve the efficiency of
users in performing pattern analysis. The current algorithms available on
targeted sequence querying are based on specific scenarios and cannot be
generalized to other applications. In this paper, we formulate the problem of
targeted sequential pattern mining and propose a generic framework namely
TaSPM, based on the fast CM-SPAM algorithm. What's more, to improve the
efficiency of TaSPM on large-scale datasets and multiple-items-based sequence
datasets, we propose several pruning strategies to reduce meaningless
operations in mining processes. Totally four pruning strategies are designed in
TaSPM, and hence it can terminate unnecessary pattern extensions quickly and
achieve better performance. Finally, we conduct extensive experiments on
different datasets to compare the existing SPM algorithms with TaSPM.
Experiments show that the novel targeted mining algorithm TaSPM can achieve
faster running time and less memory consumption.
|
[
{
"version": "v1",
"created": "Sat, 26 Feb 2022 17:49:47 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Huang",
"Gengsen",
""
],
[
"Gan",
"Wensheng",
""
],
[
"Yu",
"Philip S.",
""
]
] |
new_dataset
| 0.98637 |
2202.13275
|
YuLi Sun
|
Junzheng Wu, Ruigang Fu, Qiang Liu, Weiping Ni, Kenan Cheng, Biao Li,
Yuli Sun
|
A Dual Neighborhood Hypergraph Neural Network for Change Detection in
VHR Remote Sensing Images
|
arXiv admin note: text overlap with arXiv:2102.08041
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The very high spatial resolution (VHR) remote sensing images have been an
extremely valuable source for monitoring changes occurred on the earth surface.
However, precisely detecting relevant changes in VHR images still remains a
challenge, due to the complexity of the relationships among ground objects. To
address this limitation, a dual neighborhood hypergraph neural network is
proposed in this article, which combines the multiscale superpixel segmentation
and hypergraph convolution to model and exploit the complex relationships.
First, the bi-temporal image pairs are segmented under two scales and fed to a
pre-trained U-net to obtain node features by treating each object under the
fine scale as a node. The dual neighborhood is then defined using the
father-child and adjacent relationships of the segmented objects to construct
the hypergraph, which permits models to represent the higher-order structured
information far more complex than just pairwise relationships. The hypergraph
convolutions are conducted on the constructed hypergraph to propagate the label
information from a small amount of labeled nodes to the other unlabeled ones by
the node-edge-node transform. Moreover, to alleviate the problem of imbalanced
sample, the focal loss function is adopted to train the hypergraph neural
network. The experimental results on optical, SAR and heterogeneous optical/SAR
data sets demonstrate that the proposed method achieves better effectiveness
and robustness than many state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 02:39:08 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Wu",
"Junzheng",
""
],
[
"Fu",
"Ruigang",
""
],
[
"Liu",
"Qiang",
""
],
[
"Ni",
"Weiping",
""
],
[
"Cheng",
"Kenan",
""
],
[
"Li",
"Biao",
""
],
[
"Sun",
"Yuli",
""
]
] |
new_dataset
| 0.983908 |
2202.13285
|
Philippe Heitzmann
|
Philippe Heitzmann
|
A Computer Vision-assisted Approach to Automated Real-Time Road
Infrastructure Management
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Accurate automated detection of road pavement distresses is critical for the
timely identification and repair of potentially accident-inducing road hazards
such as potholes and other surface-level asphalt cracks. Deployment of such a
system would be further advantageous in low-resource environments where lack of
government funding for infrastructure maintenance typically entails heightened
risks of potentially fatal vehicular road accidents as a result of inadequate
and infrequent manual inspection of road systems for road hazards. To remedy
this, a recent research initiative organized by the Institute of Electrical and
Electronics Engineers ("IEEE") as part of their 2020 Global Road Damage
Detection ("GRDC") Challenge published in May 2020 a novel 21,041 annotated
image dataset of various road distresses calling upon academic and other
researchers to submit innovative deep learning-based solutions to these road
hazard detection problems. Making use of this dataset, we propose a supervised
object detection approach leveraging You Only Look Once ("YOLO") and the Faster
R-CNN frameworks to detect and classify road distresses in real-time via a
vehicle dashboard-mounted smartphone camera, producing 0.68 F1-score
experimental results ranking in the top 5 of 121 teams that entered this
challenge as of December 2021.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 04:08:00 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Heitzmann",
"Philippe",
""
]
] |
new_dataset
| 0.975203 |
2202.13450
|
Mario Felipe Munoz
|
Mario Felipe Munoz, Kaiwen Zhang and Fatima Amara
|
ZipZap: A Blockchain Solution for Local Energy Trading
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In the last few years, electric utility companies have increasingly invested
into transactive energy systems. This trend was primarily caused by the
integration of distributed energy resources (DERs) and internet-of-things (IoT)
devices into their existing distribution networks. Influenced by the general
interest in blockchain technologies, many industry specialists are considering
new, more efficient peer-to-peer market structures for DERs. Since
blockchain-based energy exchanges can automate transactions between their
members and provide increased levels of security thanks to smart contracts,
these new initiatives may eventually revolutionize how customers interact with
utility companies. In this paper, we explore the trade-off between cost and
traceability in the form of on-chain and off-chain solutions. We also propose
ZipZap, a first step towards a blockchain-based local smart grid system. ZipZap
is an ERC-1155 compliant solution with four different prototypes: Heavyweight,
Featherweight, Lightweight and Weightless. The first three prototypes were
developed in Solidity and deployed using Ethereum. Heavyweight is fully
on-chain, whereas Featherweight and Lightweight showcase various levels of
hybridization. Weightless, in turn, was deployed using Quorum, a gas-free
alternative to Ethereum. Our evaluation uses realistic parameters and measures
the impact of different types of metadata storage scopes, with some Ethereum
prototypes showcasing gas cost reductions of more than 97% in comparison to our
fully on-chain baseline.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 20:40:59 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Munoz",
"Mario Felipe",
""
],
[
"Zhang",
"Kaiwen",
""
],
[
"Amara",
"Fatima",
""
]
] |
new_dataset
| 0.998555 |
2202.13452
|
Shang-En Huang
|
Shang-En Huang, Seth Pettie, Leqi Zhu
|
Byzantine Agreement in Polynomial Time with Near-Optimal Resilience
|
submitted to STOC 2022
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It has been known since the early 1980s that Byzantine Agreement in the full
information, asynchronous model is impossible to solve deterministically
against even one crash fault [FLP85], but that it can be solved with
probability 1 [Ben83], even against an adversary that controls the scheduling
of all messages and corrupts up to $f<n/3$ players [Bra87]. The main downside
of [Ben83, Bra87] is that they terminate in $2^{\Theta(n)}$ rounds in
expectation whenever $f=\Theta(n)$.
King and Saia [KS16, KS18(arXiv:1812.10169)] developed a polynomial protocol
(polynomial rounds, polynomial computation) that is resilient to $f <
(1.14\times 10^{-9})n$ Byzantine faults. The new idea in their protocol is to
detect -- and blacklist -- coalitions of likely-bad players by analyzing the
deviations of random variables generated by those players over many rounds.
In this work we design a simple collective coin-flipping protocol such that
if any coalition of faulty players repeatedly does not follow protocol, then
they will eventually be detected by one of two simple statistical tests. Using
this coin-flipping protocol, we solve Byzantine Agreement in a polynomial
number of rounds, even in the presence of up to $f<n/4$ Byzantine faults. This
comes close to the $f<n/3$ upper bound on the maximum number of faults
[BT85,FLM86,LSP82].
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 20:53:57 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Huang",
"Shang-En",
""
],
[
"Pettie",
"Seth",
""
],
[
"Zhu",
"Leqi",
""
]
] |
new_dataset
| 0.995313 |
2202.13456
|
Claudiney Tinoco M.Sc.
|
Claudiney R. Tinoco, Gina M. B. Oliveira (Federal University of
Uberl\^andia, Uberl\^andia/MG, Brazil)
|
PheroCom: Decentralised and asynchronous swarm robotics coordination
based on virtual pheromone and vibroacoustic communication
|
26 pages, 15 figures
| null | null | null |
cs.RO cs.AI cs.MA cs.NE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Representation and control of the dynamics of stigmergic substances used by
bio-inspired approaches is a challenge when applied to robotics. In order to
overcome this challenge, this work proposes a model to coordinate swarms of
robots based on the virtualisation and control of these substances in a local
scope. The model presents a new pheromone modelling, which enables the
decentralisation and asynchronicity of navigation decisions. Each robot
maintains an independent virtual pheromone map, which is continuously updated
with the robot's deposits and pheromone evaporation. Moreover, the individual
pheromone map is also updated by aggregating information from other robots that
are exploring nearby areas. Thus, individual and independent maps replace the
need of a centralising agent that controls and distributes the pheromone
information, which is not always practicable. Pheromone information propagation
is inspired by ants' vibroacoustic communication, which, in turn, is
characterised as an indirect communication through a type of gossip protocol.
The proposed model was evaluated through an agent simulation software,
implemented by the authors, and in the Webots platform. Experiments were
carried out to validate the model in different environments, with different
shapes and sizes, as well as varying the number of robots. The analysis of the
results has shown that the model was able to perform the coordination of the
swarm, and the robots exhibited strong performance in executing the
surveillance task.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 21:22:14 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Tinoco",
"Claudiney R.",
"",
"Federal University of\n Uberlândia, Uberlândia/MG, Brazil"
],
[
"Oliveira",
"Gina M. B.",
"",
"Federal University of\n Uberlândia, Uberlândia/MG, Brazil"
]
] |
new_dataset
| 0.998567 |
2202.13469
|
Jiacheng Li
|
Jiacheng Li, Jingbo Shang, Julian McAuley
|
UCTopic: Unsupervised Contrastive Learning for Phrase Representations
and Topic Mining
|
Accepted as ACL 2022 main conference paper
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-quality phrase representations are essential to finding topics and
related terms in documents (a.k.a. topic mining). Existing phrase
representation learning methods either simply combine unigram representations
in a context-free manner or rely on extensive annotations to learn
context-aware knowledge. In this paper, we propose UCTopic, a novel
unsupervised contrastive learning framework for context-aware phrase
representations and topic mining. UCTopic is pretrained in a large scale to
distinguish if the contexts of two phrase mentions have the same semantics. The
key to pretraining is positive pair construction from our phrase-oriented
assumptions. However, we find traditional in-batch negatives cause performance
decay when finetuning on a dataset with small topic numbers. Hence, we propose
cluster-assisted contrastive learning(CCL) which largely reduces noisy
negatives by selecting negatives from clusters and further improves phrase
representations for topics accordingly. UCTopic outperforms the
state-of-the-art phrase representation model by 38.2% NMI on average across
four entity clustering tasks. Comprehensive evaluation on topic mining shows that
UCTopic can extract coherent and diverse topical phrases.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 22:43:06 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Li",
"Jiacheng",
""
],
[
"Shang",
"Jingbo",
""
],
[
"McAuley",
"Julian",
""
]
] |
new_dataset
| 0.996883 |
2202.13500
|
Nga Than
|
Abhishek Gupta, Iga Kozlowska, Nga Than
|
The Golden Circle: Creating Socio-technical Alignment in Content
Moderation
|
6 pages, 1 figure, 1 table
| null | null | null |
cs.SI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper outlines a conceptual framework titled The Golden Circle that
describes the roles of actors at individual, organizational, and societal
levels, and their dynamics in the content moderation ecosystem. Centering harm
reduction and context moderation, it argues that the ML community must attend
to multimodal content moderation solutions, align their work with their
organizations' goals and values, and pay attention to the ever changing social
contexts in which their sociotechnical systems are embedded. This is done by
accounting for the why, how, and what of content moderation from a sociological
and technical lens.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 01:39:54 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Gupta",
"Abhishek",
""
],
[
"Kozlowska",
"Iga",
""
],
[
"Than",
"Nga",
""
]
] |
new_dataset
| 0.998876 |
2202.13513
|
Shuaibing Lin
|
Shuaibing Lin, JiaLiang Qu, Zishuo Li, Xiaoqiang Ren, Yilin Mo
|
Aggressive Racecar Drifting Control Using Onboard Cameras and Inertial
Measurement Unit
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Complex autonomous driving, such as drifting, requires high-precision and
high-frequency pose information to ensure accuracy and safety, which is notably
difficult when using only onboard sensors. In this paper, we propose a drift
controller with two feedback control loops: sideslip controller that stabilizes
the sideslip angle by tuning the front wheel steering angle, and circle
controller that maintains a stable trajectory radius and circle center by
controlling the wheel rotational speed. We use an extended Kalman filter to
estimate the state. A robustified KASA algorithm is further proposed to
accurately estimate the parameters of the circle (i.e., the center and radius)
that best fits into the current trajectory. On the premise of the uniform
circular motion of the vehicle in the process of stable drift, we use angle
information instead of acceleration to describe the dynamics of the vehicle. We
implement our method on a 1/10 scale race car. The car drifts stably with a
given center and radius, which illustrates the effectiveness of our method.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 02:35:26 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Lin",
"Shuaibing",
""
],
[
"Qu",
"JiaLiang",
""
],
[
"Li",
"Zishuo",
""
],
[
"Ren",
"Xiaoqiang",
""
],
[
"Mo",
"Yilin",
""
]
] |
new_dataset
| 0.999358 |
2202.13520
|
Sujoy Sikdar
|
Sujoy Sikdar, Sikai Ruan, Qishen Han, Paween Pitimanaaree, Jeremy
Blackthorne, Bulent Yener, Lirong Xia
|
Anti-Malware Sandbox Games
| null | null | null | null |
cs.GT cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a game theoretic model of malware protection using the
state-of-the-art sandbox method, to characterize and compute optimal defense
strategies for anti-malware. We model the strategic interaction between
developers of malware (M) and anti-malware (AM) as a two player game, where AM
commits to a strategy of generating sandbox environments, and M responds by
choosing to either attack or hide malicious activity based on the environment
it senses. We characterize the condition for AM to protect all its machines,
and identify conditions under which an optimal AM strategy can be computed
efficiently. For other cases, we provide a quadratically constrained quadratic
program (QCQP)-based optimization framework to compute the optimal AM strategy.
In addition, we identify a natural and easy to compute strategy for AM, which
as we show empirically, achieves AM utility that is close to the optimal AM
utility, in equilibrium.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 03:12:40 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Sikdar",
"Sujoy",
""
],
[
"Ruan",
"Sikai",
""
],
[
"Han",
"Qishen",
""
],
[
"Pitimanaaree",
"Paween",
""
],
[
"Blackthorne",
"Jeremy",
""
],
[
"Yener",
"Bulent",
""
],
[
"Xia",
"Lirong",
""
]
] |
new_dataset
| 0.998544 |
2202.13529
|
Difei Gao
|
Daniel Gao, Yantao Jia, Lei Li, Chengzhen Fu, Zhicheng Dou, Hao Jiang,
Xinyu Zhang, Lei Chen, Zhao Cao
|
KMIR: A Benchmark for Evaluating Knowledge Memorization, Identification
and Reasoning Abilities of Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Previous works show the great potential of pre-trained language models (PLMs)
for storing a large amount of factual knowledge. However, to figure out whether
PLMs can be reliable knowledge sources and used as alternative knowledge bases
(KBs), we need to further explore some critical features of PLMs. Firstly,
knowledge memorization and identification abilities: traditional KBs can store
various types of entities and relationships; do PLMs have a high knowledge
capacity to store different types of knowledge? Secondly, reasoning ability: a
qualified knowledge source should not only provide a collection of facts, but
support a symbolic reasoner. Can PLMs derive new knowledge based on the
correlations between facts? To evaluate these features of PLMs, we propose a
benchmark, named Knowledge Memorization, Identification, and Reasoning test
(KMIR). KMIR covers 3 types of knowledge, including general knowledge,
domain-specific knowledge, and commonsense, and provides 184,348 well-designed
questions. Preliminary experiments with various representative pre-training
language models on KMIR reveal many interesting phenomena: 1) The
memorization ability of PLMs depends more on the number of parameters than
training schemes. 2) Current PLMs are struggling to robustly remember the
facts. 3) Model compression technology retains the amount of knowledge well,
but hurts the identification and reasoning abilities. We hope KMIR can
facilitate the design of PLMs as better knowledge sources.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 03:52:57 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Gao",
"Daniel",
""
],
[
"Jia",
"Yantao",
""
],
[
"Li",
"Lei",
""
],
[
"Fu",
"Chengzhen",
""
],
[
"Dou",
"Zhicheng",
""
],
[
"Jiang",
"Hao",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Chen",
"Lei",
""
],
[
"Cao",
"Zhao",
""
]
] |
new_dataset
| 0.999111 |
2202.13645
|
Yunlong Liang
|
Yunlong Liang, Fandong Meng, Jinan Xu, Yufeng Chen and Jie Zhou
|
MSCTD: A Multimodal Sentiment Chat Translation Dataset
|
Accepted at ACL 2022 as a long paper of main conference. Code and
data: https://github.com/XL2248/MSCTD
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal machine translation and textual chat translation have received
considerable attention in recent years. Although conversation in its
natural form is usually multimodal, there is still a lack of work on multimodal
machine translation in conversations. In this work, we introduce a new task
named Multimodal Chat Translation (MCT), aiming to generate more accurate
translations with the help of the associated dialogue history and visual
context. To this end, we first construct a Multimodal Sentiment Chat
Translation Dataset (MSCTD) containing 142,871 English-Chinese utterance pairs
in 14,762 bilingual dialogues and 30,370 English-German utterance pairs in
3,079 bilingual dialogues. Each utterance pair, corresponding to the visual
context that reflects the current conversational scene, is annotated with a
sentiment label. Then, we benchmark the task by establishing multiple baseline
systems that incorporate multimodal and sentiment features for MCT. Preliminary
experiments on four language directions (English-Chinese and English-German)
verify the potential of contextual and multimodal information fusion and the
positive impact of sentiment on the MCT task. Additionally, as a by-product of
the MSCTD, it also provides two new benchmarks on multimodal dialogue sentiment
analysis. Our work can facilitate research on both multimodal chat translation
and multimodal dialogue sentiment analysis.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 09:40:46 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Liang",
"Yunlong",
""
],
[
"Meng",
"Fandong",
""
],
[
"Xu",
"Jinan",
""
],
[
"Chen",
"Yufeng",
""
],
[
"Zhou",
"Jie",
""
]
] |
new_dataset
| 0.999715 |
2202.13661
|
Cornelius Brand
|
Cornelius Brand, Esra Ceylan, Christian Hatschka, Robert Ganian,
Viktoriia Korchemna
|
Edge-Cut Width: An Algorithmically Driven Analogue of Treewidth Based on
Edge Cuts
|
27 pages, 4 figures
| null | null | null |
cs.DS cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
Decompositional parameters such as treewidth are commonly used to obtain
fixed-parameter algorithms for NP-hard graph problems. For problems that are
W[1]-hard parameterized by treewidth, a natural alternative would be to use a
suitable analogue of treewidth that is based on edge cuts instead of vertex
separators. While tree-cut width has been coined as such an analogue of
treewidth for edge cuts, its algorithmic applications have often led to
disappointing results: out of twelve problems where one would hope for
fixed-parameter tractability parameterized by an edge-cut based analogue to
treewidth, eight were shown to be W[1]-hard parameterized by tree-cut width.
As our main contribution, we develop an edge-cut based analogue to treewidth
called edge-cut width. Edge-cut width is, intuitively, based on measuring the
density of cycles passing through a spanning tree of the graph. Its benefits
include not only a comparatively simple definition, but mainly that it has
interesting algorithmic properties: it can be computed by a fixed-parameter
algorithm, and it yields fixed-parameter algorithms for all the aforementioned
problems where tree-cut width failed to do so.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 10:04:38 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Brand",
"Cornelius",
""
],
[
"Ceylan",
"Esra",
""
],
[
"Hatschka",
"Christian",
""
],
[
"Ganian",
"Robert",
""
],
[
"Korchemna",
"Viktoriia",
""
]
] |
new_dataset
| 0.998783 |
2202.13669
|
Jiapeng Wang
|
Jiapeng Wang, Lianwen Jin, Kai Ding
|
LiLT: A Simple yet Effective Language-Independent Layout Transformer for
Structured Document Understanding
|
ACL 2022 Main conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Structured document understanding has attracted considerable attention and
made significant progress recently, owing to its crucial role in intelligent
document processing. However, most existing related models can only deal with
the document data of specific language(s) (typically English) included in the
pre-training collection, which is extremely limited. To address this issue, we
propose a simple yet effective Language-independent Layout Transformer (LiLT)
for structured document understanding. LiLT can be pre-trained on the
structured documents of a single language and then directly fine-tuned on other
languages with the corresponding off-the-shelf monolingual/multilingual
pre-trained textual models. Experimental results on eight languages have shown
that LiLT can achieve competitive or even superior performance on diverse
widely-used downstream benchmarks, which enables language-independent benefit
from the pre-training of document layout structure. Code and model are publicly
available at https://github.com/jpWang/LiLT.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 10:33:01 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Wang",
"Jiapeng",
""
],
[
"Jin",
"Lianwen",
""
],
[
"Ding",
"Kai",
""
]
] |
new_dataset
| 0.997801 |
2202.13716
|
Claudio Canella
|
Claudio Canella, Sebastian Dorn, Daniel Gruss, Michael Schwarz
|
SFIP: Coarse-Grained Syscall-Flow-Integrity Protection in Modern Systems
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Growing code bases of modern applications have led to a steady increase in
the number of vulnerabilities. Control-Flow Integrity (CFI) is one promising
mitigation that is more and more widely deployed and prevents numerous
exploits. CFI focuses purely on one security domain. That is, transitions
between user space and kernel space are not protected by CFI. Furthermore, if
user space CFI is bypassed, the system and kernel interfaces remain
unprotected, and an attacker can run arbitrary transitions.
In this paper, we introduce the concept of syscall-flow-integrity protection
(SFIP) that complements the concept of CFI with integrity for user-kernel
transitions. Our proof-of-concept implementation relies on static analysis
during compilation to automatically extract possible syscall transitions. An
application can opt-in to SFIP by providing the extracted information to the
kernel for runtime enforcement. The concept is built on three fully-automated
pillars: First, a syscall state machine, representing possible transitions
according to a syscall digraph model. Second, a syscall-origin mapping, which
maps syscalls to the locations at which they can occur. Third, an efficient
enforcement of syscall-flow integrity in a modified Linux kernel. In our
evaluation, we show that SFIP can be applied to large scale applications with
minimal slowdowns. In a micro- and a macrobenchmark, it only introduces an
overhead of 13.1% and 1.8%, respectively. In terms of security, we discuss and
demonstrate its effectiveness in preventing control-flow-hijacking attacks in
real-world applications. Finally, to highlight the reduction in attack surface,
we perform an analysis of the state machines and syscall-origin mappings of
several real-world applications. On average, SFIP decreases the number of
possible transitions by 38.6% compared to seccomp and 90.9% when no protection
is applied.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 12:17:32 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Canella",
"Claudio",
""
],
[
"Dorn",
"Sebastian",
""
],
[
"Gruss",
"Daniel",
""
],
[
"Schwarz",
"Michael",
""
]
] |
new_dataset
| 0.998883 |
2202.13750
|
Umberto Straccia
|
Umberto Straccia and Giovanni Casini
|
A Minimal Deductive System for RDFS with Negative Statements
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The triple language RDFS is designed to represent and reason with
\emph{positive} statements only (e.g., "antipyretics are drugs").
In this paper we show how to extend RDFS to express and reason with various
forms of negative statements under the Open World Assumption (OWA). To do so,
we start from $\rho df$, a minimal, but significant RDFS fragment that covers
all essential features of RDFS, and then extend it to $\rho df_\bot^\neg$,
allowing us also to express statements such as "radio therapies are non-drug
treatments", "Ebola has no treatment", or "opioids and antipyretics are
disjoint classes". The main and, to the best of our knowledge, unique features
of our proposal are: (i) $\rho df_\bot^\neg$ remains syntactically a triple
language by extending $\rho df$ with new symbols with specific semantics and
there is no need to revert to the reification method to represent negative
triples; (ii) the logic is defined in such a way that any RDFS reasoner/store
may handle the new predicates as ordinary terms if it does not want to take
account of the extra capabilities; (iii) despite negated statements, every
$\rho df_\bot^\neg$ knowledge base is satisfiable; (iv) the $\rho df_\bot^\neg$
entailment decision procedure is obtained from $\rho df$ via additional
inference rules favouring a potential implementation; and (v) deciding
entailment in $\rho df_\bot^\neg$ ranges from P to NP.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 13:56:21 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Straccia",
"Umberto",
""
],
[
"Casini",
"Giovanni",
""
]
] |
new_dataset
| 0.993872 |
2202.13812
|
Qinghua Zhao
|
Qinghua Zhao, Shuai Ma
|
TraceNet: Tracing and Locating the Key Elements in Sentiment Analysis
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the sentiment analysis task, where the outcomes are mainly
contributed by a few key elements of the inputs. Motivated by the two-streams
hypothesis, we propose a neural architecture, named TraceNet, to address this
type of task. It not only learns discriminative representations for the target
task via its encoders, but also traces key elements at the same time via its
locators. In TraceNet, both encoders and locators are organized in a layer-wise
manner, and a smoothness regularization is employed between adjacent
encoder-locator combinations. Moreover, sparsity constraints are enforced on
locators for tracing purposes, and items are proactively masked according to the
item weights output by the locators. A major advantage of TraceNet is that the
outcomes are easier to understand, since the most responsible parts of inputs
are identified. Also, under the guidance of locators, it is more robust to
attacks due to its focus on key elements and the proactive masking training
strategy. Experimental results show its effectiveness for sentiment
classification. Moreover, we provide several case studies to demonstrate its
robustness and interpretability.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 14:20:34 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Zhao",
"Qinghua",
""
],
[
"Ma",
"Shuai",
""
]
] |
new_dataset
| 0.984677 |
2202.13847
|
Haohao Hu
|
Haohao Hu, Fengze Han, Frank Bieder, Jan-Hendrik Pauls and Christoph
Stiller
|
TEScalib: Targetless Extrinsic Self-Calibration of LiDAR and Stereo
Camera for Automated Driving Vehicles with Uncertainty Analysis
|
8 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present TEScalib, a novel extrinsic self-calibration
approach of LiDAR and stereo camera using the geometric and photometric
information of surrounding environments without any calibration targets for
automated driving vehicles. Since LiDAR and stereo camera are widely used for
sensor data fusion on automated driving vehicles, their extrinsic calibration
is highly important. However, most of the LiDAR and stereo camera calibration
approaches are mainly target-based and therefore time-consuming. Even the newly
developed targetless approaches of recent years are either inaccurate or
unsuitable for driving platforms.
To address those problems, we introduce TEScalib. By applying a 3D mesh
reconstruction-based point cloud registration, the geometric information is
used to estimate the LiDAR to stereo camera extrinsic parameters accurately and
robustly. To calibrate the stereo camera, a photometric error function is
built, and the LiDAR depth is used to transform key points from one camera
to another. During driving, these two parts are processed iteratively. Besides
that, we also propose an uncertainty analysis for reflecting the reliability of
the estimated extrinsic parameters. Our TEScalib approach evaluated on the
KITTI dataset achieves very promising results.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 15:04:00 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Hu",
"Haohao",
""
],
[
"Han",
"Fengze",
""
],
[
"Bieder",
"Frank",
""
],
[
"Pauls",
"Jan-Hendrik",
""
],
[
"Stiller",
"Christoph",
""
]
] |
new_dataset
| 0.994709 |
2202.13855
|
Haohao Hu
|
Haohao Hu, Hexing Yang, Jian Wu, Xiao Lei, Frank Bieder, Jan-Hendrik
Pauls and Christoph Stiller
|
Large-Scale 3D Semantic Reconstruction for Automated Driving Vehicles
with Adaptive Truncated Signed Distance Function
|
8 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale 3D reconstruction, texturing, and semantic mapping are
nowadays widely used for automated driving vehicles, virtual reality and
automatic data generation. However, most approaches are developed for RGB-D
cameras with colored dense point clouds and not suitable for large-scale
outdoor environments using sparse LiDAR point clouds. Since a 3D surface can be
usually observed from multiple camera images with different view poses, an
optimal image patch selection for the texturing and an optimal semantic class
estimation for the semantic mapping are still challenging.
To address these problems, we propose a novel 3D reconstruction, texturing
and semantic mapping system using LiDAR and camera sensors. An Adaptive
Truncated Signed Distance Function is introduced to describe surfaces
implicitly, which can deal with different LiDAR point sparsities and improve
model quality. The triangle mesh map extracted from this implicit function is
then textured from a series of registered camera images by applying an optimal
image patch selection strategy. Besides that, a Markov Random Field-based data
fusion approach is proposed to estimate the optimal semantic class for each
triangle mesh. Our approach is evaluated on a synthetic dataset, the KITTI
dataset and a dataset recorded with our experimental vehicle. The results show
that the 3D models generated using our approach are more accurate than those
generated by other state-of-the-art approaches. The texturing and semantic
mapping also achieve very promising results.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 15:11:25 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Hu",
"Haohao",
""
],
[
"Yang",
"Hexing",
""
],
[
"Wu",
"Jian",
""
],
[
"Lei",
"Xiao",
""
],
[
"Bieder",
"Frank",
""
],
[
"Pauls",
"Jan-Hendrik",
""
],
[
"Stiller",
"Christoph",
""
]
] |
new_dataset
| 0.974524 |
2202.13922
|
Harel Berger
|
Harel Berger, Chen Hajaj, Enrico Mariconti, Amit Dvir
|
MaMaDroid2.0 -- The Holes of Control Flow Graphs
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Android malware is a continuously expanding threat to billions of mobile
users around the globe. Detection systems are updated constantly to address
these threats. However, a backlash takes the form of evasion attacks, in which
an adversary changes malicious samples such that those samples will be
misclassified as benign. This paper fully inspects a well-known Android malware
detection system, MaMaDroid, which analyzes the control flow graph of the
application. Changes to the portion of benign samples in the train set and
models are considered to see their effect on the classifier. The changes in the
ratio between benign and malicious samples have a clear effect on each one of
the models, resulting in a decrease of more than 40% in their detection rate.
Moreover, commonly adopted ML models are implemented as well, including 5-NN,
Decision Tree, and AdaBoost. Exploration of the six models reveals typical
behaviors of tree-based and distance-based models in different cases. Moreover,
three novel attacks that manipulate the CFG and their detection rates are
described for each one of the targeted models. The attacks decrease the
detection rate of most of the models to 0% across different ratios of
benign to malicious apps. As a result, a new version of MaMaDroid is
engineered. This model fuses the app's CFG with static analysis of its
features. This improved model is shown to be robust against evasion attacks
targeting both CFG-based models and static analysis models, achieving a
detection rate of more than 90% against each one of the attacks.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 16:18:15 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Berger",
"Harel",
""
],
[
"Hajaj",
"Chen",
""
],
[
"Mariconti",
"Enrico",
""
],
[
"Dvir",
"Amit",
""
]
] |
new_dataset
| 0.998748 |
2202.13953
|
Max Schaefer
|
Adriana Sejfia and Max Sch\"afer
|
Practical Automated Detection of Malicious npm Packages
|
12 pages, accepted for publication at ICSE 2022
| null |
10.1145/3510003.3510104
| null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The npm registry is one of the pillars of the JavaScript and TypeScript
ecosystems, hosting over 1.7 million packages ranging from simple utility
libraries to complex frameworks and entire applications. Due to the
overwhelming popularity of npm, it has become a prime target for malicious
actors, who publish new packages or compromise existing packages to introduce
malware that tampers with or exfiltrates sensitive data from users who install
either these packages or any package that (transitively) depends on them.
Defending against such attacks is essential to maintaining the integrity of the
software supply chain, but the sheer volume of package updates makes
comprehensive manual review infeasible.
We present Amalfi, a machine-learning based approach for automatically
detecting potentially malicious packages comprised of three complementary
techniques. We start with classifiers trained on known examples of malicious
and benign packages. If a package is flagged as malicious by a classifier, we
then check whether it includes metadata about its source repository, and if so
whether the package can be reproduced from its source code. Packages that are
reproducible from source are not usually malicious, so this step allows us to
weed out false positives. Finally, we also employ a simple textual
clone-detection technique to identify copies of malicious packages that may
have been missed by the classifiers, reducing the number of false negatives.
Amalfi improves on the state of the art in that it is lightweight, requiring
only a few seconds per package to extract features and run the classifiers, and
gives good results in practice: running it on 96287 package versions published
over the course of one week, we were able to identify 95 previously unknown
malware samples, with a manageable number of false positives.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 17:08:09 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Sejfia",
"Adriana",
""
],
[
"Schäfer",
"Max",
""
]
] |
new_dataset
| 0.999093 |
2202.13974
|
Francois Grondin
|
Simon Michaud, Benjamin Moffett, Ana Tapia Rousiouk, Victoria Duda,
Fran\c{c}ois Grondin
|
SmartBelt: A Wearable Microphone Array for Sound Source Localization
with Haptic Feedback
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces SmartBelt, a wearable microphone array on a belt that
performs sound source localization and returns the direction of arrival with
respect to the user waist. One of the haptic motors on the belt then vibrates
in the corresponding direction to provide useful feedback to the user. We also
introduce a simple calibration step to adapt the belt to different waist sizes.
Experiments are performed to confirm the accuracy of this wearable sound source
localization system, and results show a Mean Average Error (MAE) of 2.90
degrees, and a correct haptic motor selection rate of 92.3%. Results
suggest the device can provide useful haptic feedback, and will be evaluated in
a study with people having hearing impairments.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 17:26:07 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Michaud",
"Simon",
""
],
[
"Moffett",
"Benjamin",
""
],
[
"Rousiouk",
"Ana Tapia",
""
],
[
"Duda",
"Victoria",
""
],
[
"Grondin",
"François",
""
]
] |
new_dataset
| 0.99457 |
2202.13982
|
Alexander Khitun
|
Alexander Khitun and Michael Balinskiy
|
Combinatorial logic devices based on a multi-path active ring circuit
|
45 pages, 14 figures
| null | null | null |
cs.ET physics.app-ph
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we describe a logic device in which an act of computation is
associated with finding a path connecting input and output ports. The device is
based on an active ring circuit comprising electric and magnetic parts. The
electric part includes an amplifier, a phase shifter, and an attenuator. The
magnetic part is a multi-port magnetic matrix comprising delay lines and
frequency filters. Signals propagating on different paths may accumulate
different phase shifts. Auto-oscillations occur in the circuit when the
magnetic and electric parts match each other to meet the resonance amplitude
and phase conditions. The system naturally searches for a resonance path that
depends on the position of the electric phase shifter and amplification level.
The path is detected by the set of power sensors. The proposed logic device can
be used for solving a variety of computational problems. We present the results
of numerical modeling illustrating prime factorization and finding the shortest
path connecting selected points on the mesh.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 17:38:53 GMT"
}
] | 2022-03-01T00:00:00 |
[
[
"Khitun",
"Alexander",
""
],
[
"Balinskiy",
"Michael",
""
]
] |
new_dataset
| 0.999123 |
2006.01029
|
Ehud Shapiro
|
Ouri Poupko, Ehud Shapiro and Nimrod Talmon
|
Fault-Tolerant Distributed-Ledger Implementation of Digital Social
Contracts
|
Paper is subsumed by arxiv paper arXiv:2112.13650 and is no longer
relevant
| null | null | null |
cs.DC cs.MA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A companion paper defined the notion of digital social contracts, presented a
design for a social-contracts programming language, and demonstrated its
potential utility via example social contracts. The envisioned setup consists
of people with genuine identifiers, which are unique and singular cryptographic
key pairs, who operate thus-identified software agents on their mobile devices.
The abstract model of digital social contracts consists of a transition system
specifying concurrent, non-deterministic asynchronous agents that operate on a
shared ledger by performing digital speech acts, which are
cryptographically-signed sequentially-indexed digital actions. Here, we address
the distributed-ledger implementation of digital social contracts in the
presence of faulty agents: we present a design of a fault-tolerant
distributed-ledger transition system and show that it implements the abstract
shared-ledger model of digital social contracts, and discuss its resilience to
faulty agents. The result is a novel ledger architecture that is distributed
with a blockchain-per-person (as opposed to centralized with one blockchain for
all), partially-ordered (as opposed to totally-ordered), locally-replicated (as
opposed to globally-replicated), asynchronous (as opposed to
globally-synchronized), peer-to-peer with each agent being both an actor and a
validator (as opposed to having dedicated miners, validators, and clients),
environmentally-friendly (as opposed to the environmentally-harmful
Proof-of-Work), self-sufficient (as opposed to the energy-hogging Proof-of-Work
or capital-hogging Proof-of-Stake) and egalitarian (as opposed to the
plutocratic Proof-of-Work and Proof-of-Stake).
|
[
{
"version": "v1",
"created": "Mon, 1 Jun 2020 15:53:25 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jul 2020 16:02:26 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Jul 2020 07:56:36 GMT"
},
{
"version": "v4",
"created": "Tue, 21 Jul 2020 13:57:57 GMT"
},
{
"version": "v5",
"created": "Sat, 19 Sep 2020 07:24:08 GMT"
},
{
"version": "v6",
"created": "Thu, 19 Nov 2020 22:40:42 GMT"
},
{
"version": "v7",
"created": "Thu, 24 Feb 2022 20:43:34 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Poupko",
"Ouri",
""
],
[
"Shapiro",
"Ehud",
""
],
[
"Talmon",
"Nimrod",
""
]
] |
new_dataset
| 0.994076 |
2006.04337
|
Jayson Lynch
|
Joshua Ani, Sualeh Asif, Erik D. Demaine, Yevhenii Diomidov, Dylan
Hendrickson, Jayson Lynch, Sarah Scheffler, Adam Suhl
|
PSPACE-completeness of Pulling Blocks to Reach a Goal
|
Full version of JCDCGGG2019 paper and now published in Journal of
Information Processing 28 (2020), 22 pages, 25 figures; corrections made to
Figures 10 and 15
|
Journal of Information Processing 28 (2020): 929-941
|
10.2197/ipsjjip.28.929
| null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove PSPACE-completeness of all but one problem in a large space of
pulling-block problems where the goal is for the agent to reach a target
destination. The problems are parameterized by whether pulling is optional, the
number of blocks which can be pulled simultaneously, whether there are fixed
blocks or thin walls, and whether there is gravity. We show NP-hardness for the
remaining problem, Pull?-1FG (optional pulling, strength 1, fixed blocks, with
gravity).
|
[
{
"version": "v1",
"created": "Mon, 8 Jun 2020 03:21:45 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 18:47:27 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Ani",
"Joshua",
""
],
[
"Asif",
"Sualeh",
""
],
[
"Demaine",
"Erik D.",
""
],
[
"Diomidov",
"Yevhenii",
""
],
[
"Hendrickson",
"Dylan",
""
],
[
"Lynch",
"Jayson",
""
],
[
"Scheffler",
"Sarah",
""
],
[
"Suhl",
"Adam",
""
]
] |
new_dataset
| 0.981232 |
2006.16737
|
George Chacko
|
Wenxi Zhao, Dmitriy Korobskiy, and George Chacko
|
Delayed Recognition; the Co-citation Perspective
| null |
Frontiers in Research Metrics and Analytics (2021)
|
10.3389/frma.2020.577131
| null |
cs.DL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A Sleeping Beauty is a publication that is apparently unrecognized for some
period of time before experiencing sudden recognition by citation. Various
reasons, including resistance to new ideas, have been attributed to such
delayed recognition. We examine this phenomenon in the special case of
co-citations, which represent new ideas generated through the combination of
existing ones. Using relatively stringent selection criteria derived from the
work of others, we analyze a very large dataset of over 940 million unique
co-cited article pairs, and identified 1,196 cases of delayed co-citations. We
further classify these 1,196 cases with respect to amplitude, rate of citation,
and disciplinary origin and discuss alternative approaches towards identifying
such instances.
|
[
{
"version": "v1",
"created": "Tue, 30 Jun 2020 12:51:49 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Zhao",
"Wenxi",
""
],
[
"Korobskiy",
"Dmitriy",
""
],
[
"Chacko",
"George",
""
]
] |
new_dataset
| 0.999327 |
2011.13544
|
Zhenqiang Ying
|
Zhenqiang Ying (1), Maniratnam Mandal (1), Deepti Ghadiyaram (2), Alan
Bovik (1) ((1) University of Texas at Austin, (2) Facebook AI)
|
Patch-VQ: 'Patching Up' the Video Quality Problem
| null | null |
10.1109/CVPR46437.2021.01380
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
No-reference (NR) perceptual video quality assessment (VQA) is a complex,
unsolved, and important problem for social and streaming media applications.
Efficient and accurate video quality predictors are needed to monitor and guide
the processing of billions of shared, often imperfect, user-generated content
(UGC). Unfortunately, current NR models are limited in their prediction
capabilities on real-world, "in-the-wild" UGC video data. To advance progress
on this problem, we created the largest (by far) subjective video quality
dataset, containing 39,000 real-world distorted videos and 117,000 space-time
localized video patches ('v-patches'), and 5.5M human perceptual quality
annotations. Using this, we created two unique NR-VQA models: (a) a
local-to-global region-based NR VQA architecture (called PVQ) that learns to
predict global video quality and achieves state-of-the-art performance on 3 UGC
datasets, and (b) a first-of-a-kind space-time video quality mapping engine
(called PVQ Mapper) that helps localize and visualize perceptual distortions in
space and time. We will make the new database and prediction models available
immediately following the review process.
|
[
{
"version": "v1",
"created": "Fri, 27 Nov 2020 03:46:44 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 05:57:21 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Ying",
"Zhenqiang",
"",
"University of Texas at Austin"
],
[
"Mandal",
"Maniratnam",
"",
"University of Texas at Austin"
],
[
"Ghadiyaram",
"Deepti",
"",
"Facebook AI"
],
[
"Bovik",
"Alan",
"",
"University of Texas at Austin"
]
] |
new_dataset
| 0.998295 |
2101.05508
|
Pengyuan Zhou
|
Pengyuan Zhou, Pranvera Kortoci, Yui-Pan Yau, Tristan Braud, Xiujun
Wang, Benjamin Finley, Lik-Hang Lee, Sasu Tarkoma, Jussi Kangasharju, Pan Hui
|
AICP: Augmented Informative Cooperative Perception
|
Accepted in IEEE Transactions on Intelligent Transportation Systems
| null | null | null |
cs.MM cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Connected vehicles, whether equipped with advanced driver-assistance systems
or fully autonomous, require human driver supervision and are currently
constrained to visual information in their line-of-sight. A cooperative
perception system among vehicles increases their situational awareness by
extending their perception range. Existing solutions focus on improving
perspective transformation and fast information collection. However, such
solutions fail to filter out large amounts of less relevant data and thus
impose significant network and computation load. Moreover, presenting all this
less relevant data can overwhelm the driver and thus actually hinder them. To
address such issues, we present Augmented Informative Cooperative Perception
(AICP), the first fast-filtering system which optimizes the informativeness of
shared data at vehicles to improve the fused presentation.
To this end, an informativeness maximization problem is presented for
vehicles to select a subset of data to display to their drivers. Specifically,
we propose (i) a dedicated system design with custom data structure and
lightweight routing protocol for convenient data encapsulation, fast
interpretation and transmission, and (ii) a comprehensive problem formulation
and efficient fitness-based sorting algorithm to select the most valuable data
to display at the application layer.
We implement a proof-of-concept prototype of AICP with a bandwidth-hungry,
latency-constrained real-life augmented reality application. The prototype adds
only 12.6 milliseconds of latency to a current informativeness-unaware system.
Next, we test the networking performance of AICP at scale and show that AICP
effectively filters out less relevant packets and decreases the channel busy
time.
|
[
{
"version": "v1",
"created": "Thu, 14 Jan 2021 09:04:16 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Feb 2022 09:54:53 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Feb 2022 17:29:57 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Zhou",
"Pengyuan",
""
],
[
"Kortoci",
"Pranvera",
""
],
[
"Yau",
"Yui-Pan",
""
],
[
"Braud",
"Tristan",
""
],
[
"Wang",
"Xiujun",
""
],
[
"Finley",
"Benjamin",
""
],
[
"Lee",
"Lik-Hang",
""
],
[
"Tarkoma",
"Sasu",
""
],
[
"Kangasharju",
"Jussi",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.990994 |
2106.09543
|
Naroa Coretti Sanchez
|
Naroa Coretti S\'anchez, Juan M\'ugica Gonz\'alez, Luis Alonso Pastor,
Kent Larson
|
Future urban mobility as a bio-inspired collaborative system of
multi-functional autonomous vehicles
| null | null | null | null |
cs.MA cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The fast urbanization and climate change challenges require solutions that
enable the efficient movement of people and goods in cities. We envision future
cities to be composed of high-performing walkable districts where
transportation needs could be served by fleets of ultra-lightweight shared and
autonomous vehicles. A future in which most vehicles would be autonomous
creates a new paradigm for the possible interactions between vehicles. Natural
swarms are a great example of how rich interactions can be; they can divide
tasks, cluster, build together, or transport cooperatively. The field of swarm
robotics has translated some of the behaviors from natural swarms to artificial
systems, proving to make systems more flexible, scalable, and robust. Inspired
by nature and supported by swarm robotics, this paper proposes a future
mobility in which shared, electric, and autonomous vehicles would be
multi-functional and behave as a collaborative system. In this future, fleets
of multi-functional vehicles would complete different tasks collaboratively,
giving a response to the different urban mobility needs. This paper contributes
with the proposal of a framework for future urban mobility that integrates
current research and mobility trends in a novel and unique way.
|
[
{
"version": "v1",
"created": "Tue, 15 Jun 2021 15:13:18 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Feb 2022 23:15:28 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Sánchez",
"Naroa Coretti",
""
],
[
"González",
"Juan Múgica",
""
],
[
"Pastor",
"Luis Alonso",
""
],
[
"Larson",
"Kent",
""
]
] |
new_dataset
| 0.998132 |
2109.10400
|
Kishan Chandan
|
Kishan Chandan, Vidisha Kudalkar, Xiang Li, Shiqi Zhang
|
ARROCH: Augmented Reality for Robots Collaborating with a Human
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human-robot collaboration frequently requires extensive communication, e.g.,
using natural language and gestures. Augmented reality (AR) has provided an
alternative way of bridging the communication gap between robots and people.
However, most current AR-based human-robot communication methods are
unidirectional, focusing on how the human adapts to robot behaviors, and are
limited to single-robot domains. In this paper, we develop AR for Robots
Collaborating with a Human (ARROCH), a novel algorithm and system that supports
bidirectional, multi-turn, human-multi-robot communication in indoor multi-room
environments. The human can see through obstacles to observe the robots'
current states and intentions, and provide feedback, while the robots'
behaviors are then adjusted toward human-multi-robot teamwork. Experiments have
been conducted with real robots and human participants using collaborative
delivery tasks. Results show that ARROCH outperformed a standard non-AR
approach in both user experience and teamwork efficiency. In addition, we have
developed a novel simulation environment using Unity (for AR and human
simulation) and Gazebo (for robot simulation). Results in simulation
demonstrate ARROCH's superiority over AR-based baselines in human-robot
collaboration.
|
[
{
"version": "v1",
"created": "Tue, 21 Sep 2021 18:46:19 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 16:50:37 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Chandan",
"Kishan",
""
],
[
"Kudalkar",
"Vidisha",
""
],
[
"Li",
"Xiang",
""
],
[
"Zhang",
"Shiqi",
""
]
] |
new_dataset
| 0.999025 |
2110.07393
|
Fan Yu
|
Fan Yu, Shiliang Zhang, Yihui Fu, Lei Xie, Siqi Zheng, Zhihao Du,
Weilong Huang, Pengcheng Guo, Zhijie Yan, Bin Ma, Xin Xu, Hui Bu
|
M2MeT: The ICASSP 2022 Multi-Channel Multi-Party Meeting Transcription
Challenge
|
Accepted by ICASSP 2022
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recent development of speech processing, such as speech recognition, speaker
diarization, etc., has inspired numerous applications of speech technologies.
The meeting scenario is one of the most valuable and, at the same time, most
challenging scenarios for the deployment of speech technologies. Specifically,
two typical tasks, speaker diarization and multi-speaker automatic speech
recognition have attracted much attention recently. However, the lack of large
public meeting data has been a major obstacle for the advancement of the field.
Therefore, we make available the AliMeeting corpus, which consists of 120 hours
of recorded Mandarin meeting data, including far-field data collected by
8-channel microphone array as well as near-field data collected by headset
microphone. Each meeting session is composed of 2-4 speakers with different
speaker overlap ratios, recorded in rooms of different sizes. Along with the
dataset, we launch the ICASSP 2022 Multi-channel Multi-party Meeting
Transcription Challenge (M2MeT) with two tracks, namely speaker diarization and
multi-speaker ASR, aiming to provide a common testbed for meeting rich
transcription and promote reproducible research in this field. In this paper we
provide a detailed introduction of the AliMeeting dataset, challenge rules,
evaluation methods and baseline systems.
|
[
{
"version": "v1",
"created": "Thu, 14 Oct 2021 14:27:41 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Dec 2021 09:42:24 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Feb 2022 06:48:01 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Yu",
"Fan",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Fu",
"Yihui",
""
],
[
"Xie",
"Lei",
""
],
[
"Zheng",
"Siqi",
""
],
[
"Du",
"Zhihao",
""
],
[
"Huang",
"Weilong",
""
],
[
"Guo",
"Pengcheng",
""
],
[
"Yan",
"Zhijie",
""
],
[
"Ma",
"Bin",
""
],
[
"Xu",
"Xin",
""
],
[
"Bu",
"Hui",
""
]
] |
new_dataset
| 0.996406 |
2202.09450
|
Shervin Minaee
|
Shervin Minaee, Xiaodan Liang, Shuicheng Yan
|
Modern Augmented Reality: Applications, Trends, and Future Directions
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Augmented reality (AR) is one of the relatively old, yet trending areas in
the intersection of computer vision and computer graphics with numerous
applications in several areas, from gaming and entertainment, to education and
healthcare. Although it has been around for nearly fifty years, it has seen a
lot of interest by the research community in the recent years, mainly because
of the huge success of deep learning models for various computer vision and AR
applications, which made creating new generations of AR technologies possible.
This work tries to provide an overview of modern augmented reality, from both
application-level and technical perspective. We first give an overview of main
AR applications, grouped into more than ten categories. We then give an
overview of around 100 recent promising machine learning based works developed
for AR systems, such as deep learning works for AR shopping (clothing, makeup),
AR based image filters (such as Snapchat's lenses), AR animations, and more. In
the end we discuss about some of the current challenges in AR domain, and the
future directions in this area.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 22:12:37 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Feb 2022 23:59:00 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Minaee",
"Shervin",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Yan",
"Shuicheng",
""
]
] |
new_dataset
| 0.988639 |
2202.11409
|
Chavhan Sujeet Yashavant
|
Chavhan Sujeet Yashavant, Saurabh Kumar, Amey Karkare
|
ScrawlD: A Dataset of Real World Ethereum Smart Contracts Labelled with
Vulnerabilities
|
5 pages, 2 figures, submitted to Data and Tool Showcase Track MSR
2022 (https://conf.researchr.org/track/msr-2022/msr-2022-data-showcase)
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Smart contracts on Ethereum handle millions of U.S. Dollars and other
financial assets. In the past, attackers have exploited smart contracts to
steal these assets. The Ethereum community has developed plenty of tools to
detect vulnerable smart contracts. However, there is no standardized data set
to evaluate these existing tools, or any new tools developed. There is a need
for an unbiased standard benchmark of real-world Ethereum smart contracts. We
have created ScrawlD: an annotated data set of real-world smart contracts taken
from the Ethereum network. The data set is labelled using 5 tools that detect
various vulnerabilities in smart contracts, using majority voting.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 10:42:24 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Feb 2022 10:10:57 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Feb 2022 18:26:33 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Yashavant",
"Chavhan Sujeet",
""
],
[
"Kumar",
"Saurabh",
""
],
[
"Karkare",
"Amey",
""
]
] |
new_dataset
| 0.999732 |
2202.12076
|
Wei Zhai
|
Liangsheng Lu, Wei Zhai, Hongchen Luo, Yu Kang and Yang Cao
|
Phrase-Based Affordance Detection via Cyclic Bilateral Interaction
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Affordance detection, which refers to perceiving objects with potential
action possibilities in images, is a challenging task since the possible
affordance depends on the person's purpose in real-world application scenarios.
The existing works mainly extract the inherent human-object dependencies from
image/video to accommodate affordance properties that change dynamically. In
this paper, we explore to perceive affordance from a vision-language
perspective and consider the challenging phrase-based affordance detection
problem, i.e., given a set of phrases describing the action purposes, all the
object regions in a scene with the same affordance should be detected. To this
end, we propose a cyclic bilateral consistency enhancement network (CBCE-Net)
to align language and vision features progressively. Specifically, the
presented CBCE-Net consists of a mutual guided vision-language module that
updates the common features of vision and language in a progressive manner, and
a cyclic interaction module (CIM) that facilitates the perception of possible
interaction with objects in a cyclic manner. In addition, we extend the public
Purpose-driven Affordance Dataset (PAD) by annotating affordance categories
with short phrases. The contrastive experimental results demonstrate the
superiority of our method over nine typical methods from four relevant fields
in terms of both objective metrics and visual quality. The related code and
dataset will be released at \url{https://github.com/lulsheng/CBCE-Net}.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 13:02:27 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Feb 2022 03:25:33 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Lu",
"Liangsheng",
""
],
[
"Zhai",
"Wei",
""
],
[
"Luo",
"Hongchen",
""
],
[
"Kang",
"Yu",
""
],
[
"Cao",
"Yang",
""
]
] |
new_dataset
| 0.993631 |
2202.12361
|
Tashnim Chowdhury
|
Tashnim Chowdhury, Robin Murphy, Maryam Rahnemoonfar
|
RescueNet: A High Resolution UAV Semantic Segmentation Benchmark Dataset
for Natural Disaster Damage Assessment
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to climate change, we can observe a recent surge of natural disasters all
around the world. These disasters have a disastrous impact on both nature
and human lives. Economic losses are growing due to hurricanes.
A quick and prompt response by rescue teams is crucial in saving human lives
and reducing economic cost. Deep learning based computer vision techniques can
help in scene understanding, and help rescue teams with precise damage
assessment. Semantic segmentation, an active research area in computer vision,
can put labels to each pixel of an image, and therefore can be a valuable
arsenal in the effort of reducing the impacts of hurricanes. Unfortunately,
available datasets for natural disaster damage assessment lack detailed
annotation of the affected areas, and therefore do not support the deep
learning models in total damage assessment. To this end, we introduce the
RescueNet, a high resolution post disaster dataset, for semantic segmentation
to assess damages after natural disasters. The RescueNet consists of post
disaster images collected after Hurricane Michael. The data is collected using
Unmanned Aerial Vehicles (UAVs) from several areas impacted by the hurricane.
The uniqueness of the RescueNet comes from the fact that this dataset provides
high resolution post-disaster images and comprehensive annotation of each
image. While most of the existing datasets offer annotation of only part of the
scene, like buildings, roads, or rivers, RescueNet provides pixel-level annotation
of all the classes including building, road, pool, tree, debris, and so on. We
further analyze the usefulness of the dataset by implementing state-of-the-art
segmentation models on the RescueNet. The experiments demonstrate that our
dataset can be valuable in further improvement of the existing methodologies
for natural disaster damage assessment.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 20:56:29 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Chowdhury",
"Tashnim",
""
],
[
"Murphy",
"Robin",
""
],
[
"Rahnemoonfar",
"Maryam",
""
]
] |
new_dataset
| 0.999782 |
2202.12362
|
Peter Schaldenbrand
|
Peter Schaldenbrand, Zhixuan Liu, Jean Oh
|
StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generating images that fit a given text description using machine learning
has improved greatly with the release of technologies such as the CLIP
image-text encoder model; however, current methods lack artistic control of the
style of image to be generated. We present an approach for generating styled
drawings for a given text description where a user can specify a desired
drawing style using a sample image. Inspired by a theory in art that style and
content are generally inseparable during the creative process, we propose a
coupled approach, known here as StyleCLIPDraw, whereby the drawing is generated
by optimizing for style and content simultaneously throughout the process as
opposed to applying style transfer after creating content in a sequence. Based
on human evaluation, the styles of images generated by StyleCLIPDraw are
strongly preferred to those by the sequential approach. Although the quality of
content generation degrades for certain styles, overall considering both
content \textit{and} style, StyleCLIPDraw is found far more preferred,
indicating the importance of style, look, and feel of machine generated images
to people as well as indicating that style is coupled in the drawing process
itself. Our code (https://github.com/pschaldenbrand/StyleCLIPDraw), a
demonstration (https://replicate.com/pschaldenbrand/style-clip-draw), and style
evaluation data
(https://www.kaggle.com/pittsburghskeet/drawings-with-style-evaluation-styleclipdraw)
are publicly available.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 21:03:51 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Schaldenbrand",
"Peter",
""
],
[
"Liu",
"Zhixuan",
""
],
[
"Oh",
"Jean",
""
]
] |
new_dataset
| 0.984921 |
2202.12385
|
Jia-Ruei Chiu
|
Jia-Ruei Chiu, Jean-Pierre Sleiman, Mayank Mittal, Farbod Farshidian,
Marco Hutter
|
A Collision-Free MPC for Whole-Body Dynamic Locomotion and Manipulation
|
Accepted in IEEE International Conference on Robotics and Automation
(ICRA) 2022 in Philadelphia (PA), USA
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a real-time whole-body planner for collision-free
legged mobile manipulation. We enforce both self-collision and
environment-collision avoidance as soft constraints within a Model Predictive
Control (MPC) scheme that solves a multi-contact optimal control problem. By
penalizing the signed distances among a set of representative primitive
collision bodies, the robot is able to safely execute a variety of dynamic
maneuvers while preventing any self-collisions. Moreover, collision-free
navigation and manipulation in both static and dynamic environments are made
viable through efficient queries of distances and their gradients via a
Euclidean signed distance field. We demonstrate through a comparative study
that our approach only slightly increases the computational complexity of the
MPC planning. Finally, we validate the effectiveness of our framework through a
set of hardware experiments involving dynamic mobile manipulation tasks with
potential collisions, such as locomotion balancing with the swinging arm,
weight throwing, and autonomous door opening.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 22:11:08 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Chiu",
"Jia-Ruei",
""
],
[
"Sleiman",
"Jean-Pierre",
""
],
[
"Mittal",
"Mayank",
""
],
[
"Farshidian",
"Farbod",
""
],
[
"Hutter",
"Marco",
""
]
] |
new_dataset
| 0.997966 |
2202.12395
|
Talia Moore
|
Karthik Urs and Challen Enninful Adu and Elliott J. Rouse and Talia Y.
Moore
|
Design and Characterization of 3D Printed, Open-Source Actuators for
Legged Locomotion
|
15 pages, 8 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Impressive animal locomotion capabilities are mediated by the co-evolution of
the skeletal morphology and muscular properties. Legged robot performance would
also likely benefit from the co-optimization of actuators and leg morphology.
However, development of custom actuators for legged robots is often expensive
and time consuming, which discourages roboticists from pursuing performance
gains afforded by application-specific actuator optimization. This paper
presents open-source designs for two quasi-direct-drive actuators with
performance regimes appropriate for an 8--15 kg robot, built completely with
off-the-shelf and 3D-printed components for less than $200 USD each. The
mechanical, electrical, and thermal properties of each actuator are
characterized and compared to benchmark data. Actuators subjected to 420k
strides of gait data experienced only a 2% reduction in efficiency and 26 mrad
in backlash growth, demonstrating viability for rigorous and sustained research
applications. We present a thermal solution that nearly doubles the
thermally-driven torque limits of our plastic actuator design. The performance
results are comparable to traditional metallic actuators for use in high-speed
legged robots of the same scale. These 3D printed designs demonstrate an
approach for designing and characterizing low-cost, highly customizable, and
highly reproducible actuators, democratizing the field of actuator design and
enabling co-design and optimization of actuators and robot legs.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 22:31:25 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Urs",
"Karthik",
""
],
[
"Adu",
"Challen Enninful",
""
],
[
"Rouse",
"Elliott J.",
""
],
[
"Moore",
"Talia Y.",
""
]
] |
new_dataset
| 0.994978 |
2202.12477
|
Noel Chalmers
|
Noel Chalmers, Abhishek Mishra, Damon McDougall, and Tim Warburton
|
HipBone: A performance-portable GPU-accelerated C++ version of the
NekBone benchmark
| null | null | null | null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
We present hipBone, an open source performance-portable proxy application for
the Nek5000 (and NekRS) CFD applications. HipBone is a fully GPU-accelerated
C++ implementation of the original NekBone CPU proxy application with several
novel algorithmic and implementation improvements which optimize its
performance on modern fine-grain parallel GPU accelerators. Our optimizations
include a conversion to store the degrees of freedom of the problem in
assembled form in order to reduce the amount of data moved during the main
iteration and a portable implementation of the main Poisson operator kernel. We
demonstrate near-roofline performance of the operator kernel on three different
modern GPU accelerators from two different vendors. We present a novel
algorithm for splitting the application of the Poisson operator on GPUs which
aggressively hides MPI communication required for both halo exchange and
assembly. Our implementation of nearest-neighbor MPI communication then
leverages several different routing algorithms and GPU-Direct RDMA
capabilities, when available, which improves scalability of the benchmark. We
demonstrate the performance of hipBone on three different clusters housed at
Oak Ridge National Laboratory, namely the Summit supercomputer and the Frontier
early-access clusters, Spock and Crusher. Our tests demonstrate both
portability across different clusters and very good scaling efficiency,
especially on large problems.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 03:18:32 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Chalmers",
"Noel",
""
],
[
"Mishra",
"Abhishek",
""
],
[
"McDougall",
"Damon",
""
],
[
"Warburton",
"Tim",
""
]
] |
new_dataset
| 0.999725 |
2202.12479
|
Jing Wang
|
Jing Wang, Jinyang Guo, Chao Li
|
On The Design of a Light-weight FPGA Programming Framework for Graph
Applications
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
FPGA accelerators designed for graph processing are gaining popularity.
Domain Specific Language (DSL) frameworks for graph processing can reduce the
programming complexity and development cost of algorithm design. However,
accelerator-specific development requires certain technical expertise and
significant effort to devise, implement, and validate the system. For most
algorithm designers, the high cost of acquiring hardware programming experience
makes FPGA accelerators either unavailable or uneconomic. Although
general-purpose High-Level Synthesis (HLS) tools help to map high-level
language to Hardware Description Languages (HDLs), the generated code is often
inefficient and lengthy compared with the highly-optimized graph accelerators.
One cannot make full use of an FPGA accelerator's capacity with low development
cost.
To easily program graph algorithms while keeping performance degradation
acceptable, we propose a graph programming system named JGraph, which contains
two main parts: 1) a DSL for graph atomic operations with a graph library for
high-level abstractions including user-defined functions with parameters, 2) a
light-weight HLS translator to generate high-performance HDL code, cooperating
with a communication manager and a runtime scheduler. To the best of our
knowledge, our work is the first graph programming system with DSL and
translator on the FPGA platform. Our system can generate up to 300 MTEPS BFS
traversal within tens of seconds.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 03:30:32 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Wang",
"Jing",
""
],
[
"Guo",
"Jinyang",
""
],
[
"Li",
"Chao",
""
]
] |
new_dataset
| 0.977718 |
2202.12519
|
Tapas Kumar Mishra
|
Abir Sen, Tapas Kumar Mishra, Ratnakar Dash
|
A Novel Hand Gesture Detection and Recognition system based on
ensemble-based Convolutional Neural Network
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, hand gesture recognition has become an alternative for
human-machine interaction. It has covered a large area of applications like 3D
game technology, sign language interpreting, VR (virtual reality) environment,
and robotics. But detection of the hand portion has become a challenging task
in computer vision and pattern recognition communities. Deep learning algorithm
like convolutional neural network (CNN) architecture has become a very popular
choice for classification tasks, but CNN architectures suffer from some
problems like high variance during prediction, overfitting problem and also
prediction errors. To overcome these problems, an ensemble of CNN-based
approaches is presented in this paper. Firstly, the gesture portion is detected
by using the background separation method based on binary thresholding. After
that, the contour portion is extracted, and the hand region is segmented. Then,
the images have been resized and fed into three individual CNN models to train
them in parallel. In the last part, the output scores of CNN models are
averaged to construct an optimal ensemble model for the final prediction. Two
publicly available datasets (labeled as Dataset-1 and Dataset-2) containing
infrared images and one self-constructed dataset have been used to validate the
proposed system. Experimental results are compared with the existing
state-of-the-art approaches, and it is observed that our proposed ensemble
model outperforms other existing proposed methods.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 06:46:58 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Sen",
"Abir",
""
],
[
"Mishra",
"Tapas Kumar",
""
],
[
"Dash",
"Ratnakar",
""
]
] |
new_dataset
| 0.990457 |
2202.12525
|
Ryman Hashem
|
Ryman Hashem and Fumiya Iida
|
Embedded Soft Sensing in Soft Ring Actuator for Aiding with
the Self-Organisation of the In-Hand Rotational Manipulation
|
The paper is accepted but not published
|
RoboSoft conference 2022
| null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a soft sensor embedded in a soft ring actuator with five
fingers as a soft hand to identify the bifurcation of manipulated objects
during the in-hand manipulation process. The manipulation is performed by
breaking the symmetry method with an underactuated control system by
bifurcating the object to clockwise or counter-clockwise rotations. Two soft
sensors are embedded in parallel over a single soft finger, and the difference
in the resistance measurements is compared when the finger is displaced or bent
in a particular direction, which can identify the bifurcation direction and aid
in the break of symmetry approach without the need of external tracking
devices. The sensors' performance is also characterised by extending and bending
the finger without an object interaction. During an experiment that performs a
break of symmetry, manipulated objects turn clockwise and counter-clockwise
depending on the perturbation and actuation frequency, sensors can track the
direction of rotation. The embedded sensors provide a self-sensing capability
for implementing a closed-loop control in future work. The soft ring actuator
performance presents a self-organisation behaviour with soft fingers rotating
an object without a required control for rotating the object. Therefore, the
soft fingers are an underactuated system with complex behaviour when
interacting with objects, which serves the in-hand manipulation field.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 07:04:08 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Hashem",
"Ryman",
""
],
[
"Iida",
"Fumiya",
""
]
] |
new_dataset
| 0.966242 |
2202.12571
|
Xn Chen
|
Wen Zhang, Xiangnan Chen, Zhen Yao, Mingyang Chen, Yushan Zhu, Hongtao
Yu, Yufeng Huang, Zezhong Xu, Yajing Xu, Ningyu Zhang, Zonggang Yuan, Feiyu
Xiong, Huajun Chen
|
NeuralKG: An Open Source Library for Diverse Representation Learning of
Knowledge Graphs
|
work in progress
| null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
NeuralKG is an open-source Python-based library for diverse representation
learning of knowledge graphs. It implements three different series of Knowledge
Graph Embedding (KGE) methods, including conventional KGEs, GNN-based KGEs, and
Rule-based KGEs. With a unified framework, NeuralKG successfully reproduces
link prediction results of these methods on benchmarks, freeing users from the
laborious task of reimplementing them, especially for some methods originally
written in non-python programming languages. Besides, NeuralKG is highly
configurable and extensible. It provides various decoupled modules that can be
mixed and adapted to each other. Thus with NeuralKG, developers and researchers
can quickly implement their own designed models and obtain the optimal training
methods to achieve the best performance efficiently. We built a website at
http://neuralkg.zjukg.cn to organize an open and shared KG representation
learning community. The source code is all publicly released at
https://github.com/zjukg/NeuralKG.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 09:13:13 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Zhang",
"Wen",
""
],
[
"Chen",
"Xiangnan",
""
],
[
"Yao",
"Zhen",
""
],
[
"Chen",
"Mingyang",
""
],
[
"Zhu",
"Yushan",
""
],
[
"Yu",
"Hongtao",
""
],
[
"Huang",
"Yufeng",
""
],
[
"Xu",
"Zezhong",
""
],
[
"Xu",
"Yajing",
""
],
[
"Zhang",
"Ningyu",
""
],
[
"Yuan",
"Zonggang",
""
],
[
"Xiong",
"Feiyu",
""
],
[
"Chen",
"Huajun",
""
]
] |
new_dataset
| 0.985734 |
2202.12693
|
Marcos Faundez-Zanuy
|
Marcos Faundez-Zanuy, Jiri Mekyska, Donato Impedovo
|
Online handwriting, signature and touch dynamics: tasks and potential
applications in the field of security and health
|
27 pages
|
Cognitive Computation 2021
|
10.1007/s12559-021-09938-2
| null |
cs.CR cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Background: An advantageous property of behavioural signals, e.g.
handwriting, in contrast to morphological ones, such as iris, fingerprint, hand
geometry, etc., is the possibility to ask a user for a very rich amount of
different tasks. Methods: This article summarises recent findings and
applications of different handwriting and drawing tasks in the field of
security and health. More specifically, it is focused on on-line handwriting
and hand-based interaction, i.e. signals that utilise a digitizing device
(a dedicated or general-purpose tablet/smartphone) during the realization
of the tasks. Such devices permit the acquisition of on-surface dynamics as
well as in-air movements in time, thus providing complex and richer information
when compared to the conventional pen and paper method. Conclusions: Although
the scientific literature reports a wide range of tasks and applications, in
this paper, we summarize only those providing competitive results (e.g. in
terms of discrimination power) and having a significant impact in the field.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 10:10:32 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Mekyska",
"Jiri",
""
],
[
"Impedovo",
"Donato",
""
]
] |
new_dataset
| 0.988277 |