id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2202.04058
|
Sikha Pentyala
|
Sikha Pentyala, David Melanson, Martine De Cock, Golnoosh Farnadi
|
PrivFair: a Library for Privacy-Preserving Fairness Auditing
| null | null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning (ML) has become prominent in applications that directly
affect people's quality of life, including in healthcare, justice, and finance.
ML models have been found to exhibit discrimination based on sensitive
attributes such as gender, race, or disability. Assessing whether an ML model is
free of bias remains challenging, and by definition it has to be done with
sensitive user characteristics that are subject to anti-discrimination and data
protection law. Existing libraries for fairness auditing of ML models offer no
mechanism to protect the privacy of the audit data. We present PrivFair, a
library for privacy-preserving fairness audits of ML models. Through the use of
Secure Multiparty Computation (MPC), PrivFair protects the confidentiality of
the model under audit and the sensitive data used for the audit, hence it
supports scenarios in which a proprietary classifier owned by a company is
audited using sensitive audit data from an external investigator. We
demonstrate the use of PrivFair for group fairness auditing with tabular data
or image data, without requiring the investigator to disclose their data to
anyone in an unencrypted manner, or the model owner to reveal their model
parameters to anyone in plaintext.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 18:42:50 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Feb 2022 09:42:57 GMT"
},
{
"version": "v3",
"created": "Mon, 23 May 2022 19:43:55 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Pentyala",
"Sikha",
""
],
[
"Melanson",
"David",
""
],
[
"De Cock",
"Martine",
""
],
[
"Farnadi",
"Golnoosh",
""
]
] |
new_dataset
| 0.997606 |
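The group-fairness audit that the PrivFair abstract describes can be illustrated, minus the MPC layer that PrivFair actually uses for privacy, by a plain computation of the demographic-parity gap. The data below is hypothetical and the function name is ours, not the library's:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical audit data: binary model predictions and a sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_gap(y_pred, group)  # 0.75 vs 0.25 -> 0.5
```

In PrivFair this same statistic would be computed under Secure Multiparty Computation, so neither the predictions nor the sensitive attribute appear in plaintext.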
2202.07769
|
Robert Corless
|
Robert M. Corless, George Labahn, Dan Piponi, and Leili Rafiee Sevyeri
|
Bohemian Matrix Geometry
|
22 pages; 12 figures
| null |
10.1145/3476446.3536177
| null |
cs.SC math.CO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A Bohemian matrix family is a set of matrices all of whose entries are drawn
from a fixed, usually discrete and hence bounded, subset of a field of
characteristic zero. Originally these were integers -- hence the name, from the
acronym BOunded HEight Matrix of Integers (BOHEMI) -- but other kinds of
entries are also interesting. Some kinds of questions about Bohemian matrices
can be answered by numerical computation, but sometimes exact computation is
better. In this paper we explore some Bohemian families (symmetric, upper
Hessenberg, or Toeplitz) computationally, and answer some open questions posed
about the distributions of eigenvalue densities.
|
[
{
"version": "v1",
"created": "Tue, 15 Feb 2022 22:43:30 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 18:50:27 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Corless",
"Robert M.",
""
],
[
"Labahn",
"George",
""
],
[
"Piponi",
"Dan",
""
],
[
"Sevyeri",
"Leili Rafiee",
""
]
] |
new_dataset
| 0.99633 |
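As a rough numerical illustration of the kind of object the Bohemian-matrix abstract studies (not the authors' exact symmetric, upper Hessenberg, or Toeplitz families), one can sample matrices whose entries come from a fixed discrete set and look at their eigenvalues; the entry set {-1, 0, 1} and sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
population = np.array([-1, 0, 1])  # a fixed, bounded, discrete entry set

def random_bohemian(n):
    """Sample an n x n matrix with entries drawn from the population set."""
    return rng.choice(population, size=(n, n))

# Eigenvalues of a small sample from this Bohemian family.
eigs = np.concatenate([np.linalg.eigvals(random_bohemian(4)) for _ in range(100)])
# Bounded entries bound the spectrum: |lambda| <= ||A||_inf <= n * max|entry| = 4.
assert np.all(np.abs(eigs) <= 4.0)
```

Plotting the density of such eigenvalue clouds in the complex plane is the sort of question the paper addresses both numerically and exactly.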
2204.02330
|
Yaron Shany
|
Yaron Shany and Amit Berman
|
Fast syndrome-based Chase decoding of binary BCH codes through Wu list
decoding
|
Some improvements in Sec. 5.3.3
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new fast Chase decoding algorithm for binary BCH codes. The new
algorithm reduces the complexity in comparison to a recent fast Chase decoding
algorithm for Reed--Solomon (RS) codes by the authors (IEEE Trans. IT, 2022),
by requiring only a single Koetter iteration per edge of the decoding tree. In
comparison to the fast Chase algorithms presented by Kamiya (IEEE Trans. IT,
2001) and Wu (IEEE Trans. IT, 2012) for binary BCH codes, the polynomials
updated throughout the algorithm of the current paper typically have a much
lower degree.
To achieve the complexity reduction, we build on a new isomorphism between
two solution modules in the binary case, and on a degenerate case of the
soft-decision (SD) version of the Wu list decoding algorithm. Roughly speaking,
we prove that when the maximum list size is $1$ in Wu list decoding of binary
BCH codes, assigning a multiplicity of $1$ to a coordinate has the same effect
as flipping this coordinate in a Chase-decoding trial.
The solution-module isomorphism also provides a systematic way to benefit
from the binary alphabet for reducing the complexity in bounded-distance
hard-decision (HD) decoding. Along the way, we briefly develop the
Groebner-bases formulation of the Wu list decoding algorithm for binary BCH
codes, which is missing in the literature.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 16:35:27 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 10:09:30 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Shany",
"Yaron",
""
],
[
"Berman",
"Amit",
""
]
] |
new_dataset
| 0.991656 |
2204.10762
|
Ziyi Zhang
|
Qun Li, Ziyi Zhang, Fu Xiao, Feng Zhang and Bir Bhanu
|
Dite-HRNet: Dynamic Lightweight High-Resolution Network for Human Pose
Estimation
|
Accepted by IJCAI-ECAI 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A high-resolution network exhibits remarkable capability in extracting
multi-scale features for human pose estimation, but fails to capture long-range
interactions between joints and has high computational complexity. To address
these problems, we present a Dynamic lightweight High-Resolution Network
(Dite-HRNet), which can efficiently extract multi-scale contextual information
and model long-range spatial dependency for human pose estimation.
Specifically, we propose two methods, dynamic split convolution and adaptive
context modeling, and embed them into two novel lightweight blocks, which are
named dynamic multi-scale context block and dynamic global context block. These
two blocks, as the basic component units of our Dite-HRNet, are specially
designed for the high-resolution networks to make full use of the parallel
multi-resolution architecture. Experimental results show that the proposed
network achieves superior performance on both COCO and MPII human pose
estimation datasets, surpassing the state-of-the-art lightweight networks. Code
is available at: https://github.com/ZiyiZhang27/Dite-HRNet.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 15:27:52 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2022 04:58:26 GMT"
},
{
"version": "v3",
"created": "Tue, 24 May 2022 11:55:06 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Li",
"Qun",
""
],
[
"Zhang",
"Ziyi",
""
],
[
"Xiao",
"Fu",
""
],
[
"Zhang",
"Feng",
""
],
[
"Bhanu",
"Bir",
""
]
] |
new_dataset
| 0.999325 |
2205.10405
|
Sang-Hyun Park
|
Sang-Hyun Park, Soo-Min Kim, Seonghoon Kim, HongIl Yoo, Byoungnam Kim,
Chan-Byoung Chae
|
Demo: A Transparent Antenna System for In-Building Networks
|
2 pages, 3 figures
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the potential of transparent antennas, used as the windows
of a building, for in-building networks. In this scenario, a transparent window
antenna communicates with outdoor devices or base stations, and indoor
repeaters act as relay stations of the transparent window antenna for indoor
devices. Indoors, the back-lobe waves of the transparent window antenna are
treated as interference to the in-building network. Hence, we analyze the SIR
and SINR results for different locations of an indoor repeater through 3D
ray-tracing system-level simulation. Furthermore, a
link-level simulation through a full-duplex software-defined radio platform
with the fabricated transparent antenna is presented to examine the feasibility
of the transparent antenna.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 15:42:31 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Park",
"Sang-Hyun",
""
],
[
"Kim",
"Soo-Min",
""
],
[
"Kim",
"Seonghoon",
""
],
[
"Yoo",
"HongIl",
""
],
[
"Kim",
"Byoungnam",
""
],
[
"Chae",
"Chan-Byoung",
""
]
] |
new_dataset
| 0.998893 |
2205.11239
|
Zujun Fu
|
Zujun Fu
|
Vision Transformer: Vit and its Derivatives
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer, an attention-based encoder-decoder architecture, has not only
revolutionized the field of natural language processing (NLP) but has also
enabled pioneering work in the field of computer vision (CV). Compared to
convolutional neural networks (CNNs), the Vision Transformer (ViT) relies on
its strong modeling capability to achieve very good performance on several
benchmarks such as ImageNet, COCO, and ADE20k. ViT is inspired by the
self-attention mechanism in natural language processing, where word embeddings
are replaced with patch embeddings.
This paper reviews the derivatives of ViT and the cross-applications of ViT
with other fields.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 14:02:39 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 14:08:01 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Fu",
"Zujun",
""
]
] |
new_dataset
| 0.990853 |
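The patch-embedding idea mentioned in the ViT abstract above, where image patches play the role that word embeddings play in NLP, can be sketched in NumPy. The patch size, embedding dimension, and random projection below are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_embed(image, patch, dim, w=None):
    """Split an HxWxC image into non-overlapping patches and linearly project
    each flattened patch to a dim-dimensional token (the ViT analogue of a
    word embedding). A random projection stands in for the learned one."""
    h, wid, c = image.shape
    p = patch
    patches = image.reshape(h // p, p, wid // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * c)
    if w is None:
        w = rng.normal(size=(p * p * c, dim))
    return patches @ w

tokens = patch_embed(rng.normal(size=(32, 32, 3)), patch=8, dim=64)
assert tokens.shape == (16, 64)  # (32/8)^2 = 16 patch tokens
```

In a full ViT these tokens, plus a class token and position embeddings, feed a standard Transformer encoder.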
2205.11567
|
Michael Schleiss
|
Michael Schleiss, Fahmi Rouatbi, Daniel Cremers
|
VPAIR -- Aerial Visual Place Recognition and Localization in Large-scale
Outdoor Environments
|
ICRA 2022 AERIAL ROBOTICS WORKSHOP
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Visual Place Recognition and Visual Localization are essential components in
navigation and mapping for autonomous vehicles especially in GNSS-denied
navigation scenarios. Recent work has focused on ground or close to ground
applications such as self-driving cars or indoor-scenarios and low-altitude
drone flights. However, applications such as Urban Air Mobility require
operations in large-scale outdoor environments at medium to high altitudes. We
present a new dataset named VPAIR. The dataset was recorded on board a light
aircraft flying at an altitude of more than 300 meters above ground, capturing
images with a downward-facing camera. Each image is paired with a
high-resolution reference render, including dense depth information and 6-DoF
reference poses. The dataset covers a trajectory more than one hundred
kilometers long over various types of challenging landscapes, e.g., urban, farmland,
and forests. Experiments on this dataset illustrate the challenges introduced
by the change in perspective to a bird's eye view such as in-plane rotations.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 18:50:08 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Schleiss",
"Michael",
""
],
[
"Rouatbi",
"Fahmi",
""
],
[
"Cremers",
"Daniel",
""
]
] |
new_dataset
| 0.999847 |
2205.11685
|
Itay Harel
|
Itay Harel, Hagai Taitelbaum, Idan Szpektor, Oren Kurland
|
A Dataset for Sentence Retrieval for Open-Ended Dialogues
| null | null |
10.1145/3477495.3531727
| null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We address the task of sentence retrieval for open-ended dialogues. The goal
is to retrieve sentences from a document corpus that contain information useful
for generating the next turn in a given dialogue. Prior work on dialogue-based
retrieval focused on specific types of dialogues: either conversational QA or
conversational search. To address a broader scope of this task where any type
of dialogue can be used, we constructed a dataset that includes open-ended
dialogues from Reddit, candidate sentences from Wikipedia for each dialogue and
human annotations for the sentences. We report the performance of several
retrieval baselines, including neural retrieval models, over the dataset. To
adapt neural models to the types of dialogues in the dataset, we explored an
approach to induce large-scale weakly supervised training data from Reddit.
Using this training set significantly improved performance over training on
the MS MARCO dataset.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 00:51:39 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Harel",
"Itay",
""
],
[
"Taitelbaum",
"Hagai",
""
],
[
"Szpektor",
"Idan",
""
],
[
"Kurland",
"Oren",
""
]
] |
new_dataset
| 0.999779 |
2205.11692
|
Qianli Xu
|
Qianli Xu, Nicolas Gauthier, Wenyu Liang, Fen Fang, Hui Li Tan, Ying
Sun, Yan Wu, Liyuan Li, Joo-Hwee Lim
|
TAILOR: Teaching with Active and Incremental Learning for Object
Registration
|
5 pages, 4 figures, AAAI conference
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
When deploying a robot to a new task, one often has to train it to detect
novel objects, which is time-consuming and labor-intensive. We present TAILOR
-- a method and system for object registration with active and incremental
learning. When instructed by a human teacher to register an object, TAILOR is
able to automatically select viewpoints to capture informative images by
actively exploring viewpoints, and employs a fast incremental learning
algorithm to learn new objects without forgetting previously learned ones. We
demonstrate the effectiveness of our method with a KUKA
robot to learn novel objects used in a real-world gearbox assembly task through
natural interactions.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:14:00 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Xu",
"Qianli",
""
],
[
"Gauthier",
"Nicolas",
""
],
[
"Liang",
"Wenyu",
""
],
[
"Fang",
"Fen",
""
],
[
"Tan",
"Hui Li",
""
],
[
"Sun",
"Ying",
""
],
[
"Wu",
"Yan",
""
],
[
"Li",
"Liyuan",
""
],
[
"Lim",
"Joo-Hwee",
""
]
] |
new_dataset
| 0.981025 |
2205.11694
|
EPTCS
|
Ruben Gamboa (University of Wyoming), Woodrow Gamboa (Stanford
University)
|
All Prime Numbers Have Primitive Roots
|
In Proceedings ACL2 2022, arXiv:2205.11103
|
EPTCS 359, 2022, pp. 9-18
|
10.4204/EPTCS.359.3
| null |
cs.LO cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
If p is a prime, then the numbers 1, 2, ..., p-1 form a group under
multiplication modulo p. A number g that generates this group is called a
primitive root of p; i.e., g is such that every number between 1 and p-1 can be
written as a power of g modulo p. Building on prior work in the ACL2 community,
this paper describes a constructive proof that every prime number has a
primitive root.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:15:02 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Gamboa",
"Ruben",
"",
"University of Wyoming"
],
[
"Gamboa",
"Woodrow",
"",
"Stanford\n University"
]
] |
new_dataset
| 0.999289 |
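The primitive-root claim in the abstract above is easy to check computationally for small odd primes; a brute-force search (in Python rather than in ACL2, and with function names of our choosing) might look like:

```python
def is_primitive_root(g, p):
    """g is a primitive root of prime p if its powers hit all of 1..p-1."""
    seen, x = set(), 1
    for _ in range(p - 1):
        x = (x * g) % p
        seen.add(x)
    return len(seen) == p - 1

def primitive_root(p):
    """Return the smallest primitive root of an odd prime p; the paper's
    theorem guarantees one exists for every prime."""
    return next(g for g in range(2, p) if is_primitive_root(g, p))

# 3 generates the multiplicative group mod 7: 3, 2, 6, 4, 5, 1.
assert primitive_root(7) == 3
```

The paper's contribution is a constructive, mechanically checked proof of existence, not a search procedure like this one.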
2205.11695
|
EPTCS
|
Ruben Gamboa (University of Wyoming), Alicia Thoney (University of
Wyoming)
|
Using ACL2 To Teach Students About Software Testing
|
In Proceedings ACL2 2022, arXiv:2205.11103
|
EPTCS 359, 2022, pp. 19-32
|
10.4204/EPTCS.359.4
| null |
cs.LO cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
We report on our experience using ACL2 in the classroom to teach students
about software testing. The course COSC2300 at the University of Wyoming is a
mostly traditional Discrete Mathematics course, but with a clear focus on
computer science applications. For instance, the section on logic and proofs is
motivated by the desire to write proofs about computer software. We emphasize
that the importance of software correctness falls along a spectrum with casual
programs on one end and mission-critical ones on the other. Corresponding to
this spectrum is a variety of tools, ranging from unit tests through randomized
testing of properties to formal proofs. In this paper, we describe one
of the major activities, in which students use the ACL2 Sedan's counter-example
generation facility to investigate properties of various existing checksum
algorithms used in error detection. Students are challenged to state the
relevant properties correctly, so that the counter-example generation tool is
used effectively in all cases, and ACL2 can find formal proofs automatically in
some of those.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:15:21 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Gamboa",
"Ruben",
"",
"University of Wyoming"
],
[
"Thoney",
"Alicia",
"",
"University of\n Wyoming"
]
] |
new_dataset
| 0.99898 |
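The checksum exercise described above can be mimicked outside ACL2 by stating a property and searching randomly for counterexamples. Here is a sketch with a toy XOR-parity checksum (our choice, not one from the course), whose claimed detection of all two-bit errors is refuted quickly:

```python
import random

def parity_checksum(data):
    """Toy checksum: XOR of all bytes (detects any single-bit error)."""
    c = 0
    for b in data:
        c ^= b
    return c

# Property under test: the checksum detects every two-bit corruption.
# Random testing finds a counterexample: flip the same bit in two bytes,
# and the XORs cancel, leaving the checksum unchanged.
random.seed(0)
found = None
for _ in range(1000):
    data = [random.randrange(256) for _ in range(8)]
    i, j = random.sample(range(8), 2)
    bit = 1 << random.randrange(8)
    corrupted = list(data)
    corrupted[i] ^= bit
    corrupted[j] ^= bit
    if parity_checksum(corrupted) == parity_checksum(data):
        found = (data, corrupted)
        break
assert found is not None  # property refuted, as a student would discover
```

This mirrors the pedagogical point: stating the property precisely is the hard part, and a counterexample generator (ACL2 Sedan's, in the course) does the rest.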
2205.11698
|
EPTCS
|
Warren A. Hunt Jr. (The University of Texas, ForrestHunt, Inc.), Vivek
Ramanathan (The University of Texas, ForrestHunt, Inc.), J Strother Moore
(The University of Texas, ForrestHunt, Inc.)
|
VWSIM: A Circuit Simulator
|
In Proceedings ACL2 2022, arXiv:2205.11103
|
EPTCS 359, 2022, pp. 61-75
|
10.4204/EPTCS.359.7
| null |
cs.LO cs.MS cs.SC
|
http://creativecommons.org/licenses/by/4.0/
|
VWSIM is a circuit simulator for rapid single-flux quantum (RSFQ) circuits.
The simulator is designed to model and simulate primitive circuit devices such
as capacitors, inductors, and Josephson junctions, and can be extended to simulate
other circuit families, such as CMOS. Circuit models can be provided in the
native VWSIM netlist format or as SPICE-compatible netlists, which are
flattened and transformed into symbolic equations that can be manipulated and
simulated. Written in the ACL2 logic, VWSIM provides logical guarantees about
each of the circuit models it simulates. Note, our matrix solving and
evaluation routines use Common Lisp floating-point numbers, and work is ongoing
to admit these models into ACL2. We currently use VWSIM to help us design
self-timed, RSFQ-based circuits. Our eventual goal is to prove properties of
RSFQ circuit models. The ACL2-based definition of the VWSIM simulator offers a
path for specifying and verifying RSFQ circuit models.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:16:21 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Hunt",
"Warren A.",
"Jr.",
"The University of Texas, ForrestHunt, Inc."
],
[
"Ramanathan",
"Vivek",
"",
"The University of Texas, ForrestHunt, Inc."
],
[
"Moore",
"J Strother",
"",
"The University of Texas, ForrestHunt, Inc."
]
] |
new_dataset
| 0.999912 |
2205.11699
|
EPTCS
|
Jagadish Bapanapally (University of Wyoming), Ruben Gamboa (University
of Wyoming)
|
A Free Group of Rotations of Rank 2
|
In Proceedings ACL2 2022, arXiv:2205.11103
|
EPTCS 359, 2022, pp. 76-82
|
10.4204/EPTCS.359.8
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
One of the key steps in the proof of the Banach-Tarski Theorem is the
introduction of a free group of rotations. First, a free group of reduced words
is generated where each element of the set is represented as an ACL2 list. Then
we demonstrate that there is a one-to-one relation between the set of reduced
words and a set of 3D rotations. In this paper we present a way to generate
this set of reduced words and we prove group properties for this set. Then, we
show a way to generate a set of 3D matrices using the set of reduced words.
Finally we show a formalization of 3D rotations and prove that every element of
the 3D matrices set is a rotation.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:16:49 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Bapanapally",
"Jagadish",
"",
"University of Wyoming"
],
[
"Gamboa",
"Ruben",
"",
"University\n of Wyoming"
]
] |
new_dataset
| 0.981807 |
2205.11704
|
EPTCS
|
Andrew T. Walter, Panagiotis Manolios
|
ACL2s Systems Programming
|
In Proceedings ACL2 2022, arXiv:2205.11103
|
EPTCS 359, 2022, pp. 134-150
|
10.4204/EPTCS.359.12
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
ACL2 provides a systems programming capability that allows one to write code
that uses and extends ACL2 inside of ACL2. However, for soundness reasons, ACL2
bars the unrestricted use of certain kinds of programming constructs, like
destructive updates, higher-order functions, eval, and arbitrary macros. We
devised a methodology for writing code in Common Lisp that allows one to access
ACL2, ACL2s, and Common Lisp functionality in a unified way. We arrived at this
methodology in the process of developing the ACL2 Sedan (ACL2s) and using it as
a key component in formal-methods-enabled projects relating to gamified
verification, education, proof checking, interfacing with external theorem
provers and security. The methodology includes a library for performing ACL2
queries from Common Lisp, as well as guidelines and utilities that help address
common needs. We call this methodology "ACL2s systems programming," to
distinguish it from ACL2 systems programming. We show how our methodology makes
it possible to easily develop tools that interface with ACL2 and ACL2s, and
describe our experience using it in our research.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:18:06 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Walter",
"Andrew T.",
""
],
[
"Manolios",
"Panagiotis",
""
]
] |
new_dataset
| 0.999518 |
2205.11705
|
Peike Li
|
Zhikang Li, Huiling Zhou, Shuai Bai, Peike Li, Chang Zhou, Hongxia
Yang
|
M6-Fashion: High-Fidelity Multi-modal Image Generation and Editing
|
arXiv admin note: text overlap with arXiv:2105.14211
| null | null | null |
cs.CV cs.AI cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
The fashion industry has diverse applications in multi-modal image generation
and editing. It aims to create a desired high-fidelity image with the
multi-modal conditional signal as guidance. Most existing methods learn
different condition guidance controls by introducing extra models or by ignoring
style prior knowledge, which makes it difficult to handle multiple signal
combinations and leads to low fidelity. In this paper, we adapt both
style prior knowledge and flexibility of multi-modal control into one unified
two-stage framework, M6-Fashion, focusing on the practical AI-aided Fashion
design. It decouples style codes in both spatial and semantic dimensions to
guarantee high-fidelity image generation in the first stage. M6-Fashion
utilizes self-correction for the non-autoregressive generation to improve
inference speed, enhance holistic consistency, and support various signal
controls. Extensive experiments on a large-scale clothing dataset M2C-Fashion
demonstrate superior performances on various image generation and editing
tasks. M6-Fashion model serves as a highly potential AI designer for the
fashion industry.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:18:14 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Li",
"Zhikang",
""
],
[
"Zhou",
"Huiling",
""
],
[
"Bai",
"Shuai",
""
],
[
"Li",
"Peike",
""
],
[
"Zhou",
"Chang",
""
],
[
"Yang",
"Hongxia",
""
]
] |
new_dataset
| 0.990182 |
2205.11706
|
EPTCS
|
Alessandro Coglio (Kestrel Institute), Eric McCarthy (Kestrel
Institute), Stephen Westfold (Kestrel Institute), Daniel Balasubramanian
(Institute for Software-Integrated Systems, Vanderbilt University), Abhishek
Dubey (Institute for Software-Integrated Systems, Vanderbilt University),
Gabor Karsai (Institute for Software-Integrated Systems, Vanderbilt
University)
|
Syntheto: A Surface Language for APT and ACL2
|
In Proceedings ACL2 2022, arXiv:2205.11103
|
EPTCS 359, 2022, pp. 151-167
|
10.4204/EPTCS.359.13
| null |
cs.SE cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Syntheto is a surface language for carrying out formally verified program
synthesis by transformational refinement in ACL2 using the APT toolkit.
Syntheto aims at providing more familiarity and automation, in order to make
this technology more widely usable. Syntheto is a strongly statically typed
functional language that includes both executable and non-executable
constructs, including facilities to state and prove theorems and facilities to
apply proof-generating transformations. Syntheto is integrated into an IDE with
a notebook-style, interactive interface that translates Syntheto to ACL2
definitions and APT transformation invocations, and back-translates the
prover's results to Syntheto; the bidirectional translation happens behind the
scenes, with the user interacting solely with Syntheto.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:18:26 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Coglio",
"Alessandro",
"",
"Kestrel Institute"
],
[
"McCarthy",
"Eric",
"",
"Kestrel\n Institute"
],
[
"Westfold",
"Stephen",
"",
"Kestrel Institute"
],
[
"Balasubramanian",
"Daniel",
"",
"Institute for Software-Integrated Systems, Vanderbilt University"
],
[
"Dubey",
"Abhishek",
"",
"Institute for Software-Integrated Systems, Vanderbilt University"
],
[
"Karsai",
"Gabor",
"",
"Institute for Software-Integrated Systems, Vanderbilt\n University"
]
] |
new_dataset
| 0.999822 |
2205.11709
|
EPTCS
|
David Hardin (Collins Aerospace)
|
Hardware/Software Co-Assurance using the Rust Programming Language and
ACL2
|
In Proceedings ACL2 2022, arXiv:2205.11103
|
EPTCS 359, 2022, pp. 202-216
|
10.4204/EPTCS.359.16
| null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
The Rust programming language has garnered significant interest and use as a
modern, type-safe, memory-safe, and potentially formally analyzable programming
language. Our interest in Rust stems from its potential as a hardware/software
co-assurance language, with application to critical systems such as autonomous
vehicles. We report on the first known use of Rust as a High-Level Synthesis
(HLS) language. Most incumbent HLS languages are a subset of C. A Rust-based
HLS brings a single modern, type-safe, and memory-safe expression language for
both hardware and software realizations with high assurance. As a study of
the suitability of Rust as an HLS, we have crafted a Rust subset, inspired by
Russinoff's Restricted Algorithmic C (RAC), which we have imaginatively named
Restricted Algorithmic Rust, or RAR. In our first implementation of a RAR
toolchain, we simply transpile the RAR source into RAC. By so doing, we
leverage a number of existing hardware/software co-assurance tools with a
minimum investment of time and effort. In this paper, we describe the RAR Rust
subset, detail our prototype RAR toolchain, and describe the implementation and
verification of several representative algorithms and data structures written
in RAR, with proofs of correctness conducted using the ACL2 theorem prover.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 01:19:24 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Hardin",
"David",
"",
"Collins Aerospace"
]
] |
new_dataset
| 0.999788 |
2205.11721
|
Ryo Shibata
|
Ryo Shibata and Hiroyuki Yashima
|
Delayed Coding Scheme for Channels with Insertion, Deletion, and
Substitution Errors
|
Submitted to IEEE conference
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new coding scheme, called the delayed coding (DC) scheme, for
channels with insertion, deletion, and substitution (IDS) errors. The proposed
scheme employs delayed encoding and non-iterative detection and decoding
strategies to manage the transmission of multiple codewords in a linear code.
In the DC scheme, a channel input sequence consists of subblocks of multiple
codewords from the previous to current time instances. At the receiver side,
the maximum a posteriori detection applies to the received sequences that
contain information about the codeword at the current time instance, where
previously decoded codewords aid the detection. The channel-code decoding is then
performed, and extrinsic messages are exploited for the codeword estimations of
the following time instances. We show that the rate achievable with the DC
scheme over the IDS channel approaches the symmetric information rate of the
channel. Moreover, we show the excellent asymptotic and finite-length
performances of the DC scheme in conjunction with low-density parity-check
codes.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 02:03:32 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Shibata",
"Ryo",
""
],
[
"Yashima",
"Hiroyuki",
""
]
] |
new_dataset
| 0.998081 |
2205.11737
|
Jinghui Xiao
|
Jinghui Xiao, Qun Liu, Xin Jiang, Yuanfeng Xiong, Haiteng Wu, Zhe
Zhang
|
PERT: A New Solution to Pinyin to Character Conversion Task
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Pinyin to Character conversion (P2C) task is the key task of the Input
Method Engine (IME) in commercial input software for Asian languages such as
Chinese, Japanese, and Thai. It is usually treated as a sequence labelling task
and resolved by a language model, i.e., an n-gram model or an RNN. However, the
low capacity of the n-gram model or RNN limits its performance. This paper introduces a
new solution named PERT which stands for bidirectional Pinyin Encoder
Representations from Transformers. It achieves significant improvement of
performance over baselines. Furthermore, we combine PERT with n-gram under a
Markov framework, and improve performance further. Lastly, the external lexicon
is incorporated into PERT so as to resolve the OOD issue of IME.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 03:08:27 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Xiao",
"Jinghui",
""
],
[
"Liu",
"Qun",
""
],
[
"Jiang",
"Xin",
""
],
[
"Xiong",
"Yuanfeng",
""
],
[
"Wu",
"Haiteng",
""
],
[
"Zhang",
"Zhe",
""
]
] |
new_dataset
| 0.999591 |
2205.11755
|
Sourav Chatterjee
|
Sourav Chatterjee, Rohan Bopardikar, Marius Guerard, Uttam Thakore,
Xiaodong Jiang
|
MOSPAT: AutoML based Model Selection and Parameter Tuning for Time
Series Anomaly Detection
|
10 pages, submitted originally to KDD'22
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Organizations leverage anomaly and changepoint detection algorithms to detect
changes in user behavior or service availability and performance. Many
off-the-shelf detection algorithms, though effective, cannot readily be used in
large organizations where thousands of users monitor millions of use cases and
metrics with varied time series characteristics and anomaly patterns. The
selection of algorithm and parameters needs to be precise for each use case:
manual tuning does not scale, and automated tuning requires ground truth, which
is rarely available.
In this paper, we explore MOSPAT, an end-to-end automated machine learning
based approach for model and parameter selection, combined with a generative
model to produce labeled data. Our scalable end-to-end system allows individual
users in large organizations to tailor time-series monitoring to their specific
use case and data characteristics, without expert knowledge of anomaly
detection algorithms or laborious manual labeling. Our extensive experiments on
real and synthetic data demonstrate that this method consistently outperforms
using any single algorithm.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 03:28:52 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Chatterjee",
"Sourav",
""
],
[
"Bopardikar",
"Rohan",
""
],
[
"Guerard",
"Marius",
""
],
[
"Thakore",
"Uttam",
""
],
[
"Jiang",
"Xiaodong",
""
]
] |
new_dataset
| 0.990323 |
2205.11804
|
Hung-Min Hsu
|
Hung-Min Hsu, Xinyu Yuan, Baohua Zhu, Zhongwei Cheng and Lin Chen
|
Package Theft Detection from Smart Home Security Cameras
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Package theft detection has been a challenging task, mainly due to a lack of
training data and the wide variety of package theft cases in reality. In this
paper, we propose a new Global and Local Fusion Package Theft Detection
Embedding (GLF-PTDE) framework to generate package theft scores for each
segment within a video to fulfill the real-world requirements on package theft
detection. Moreover, we construct a novel Package Theft Detection dataset to
facilitate the research on this task. Our method achieves 80% AUC performance
on the newly proposed dataset, showing the effectiveness of the proposed
GLF-PTDE framework and its robustness in different real scenes for package
theft detection.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 05:54:19 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Hsu",
"Hung-Min",
""
],
[
"Yuan",
"Xinyu",
""
],
[
"Zhu",
"Baohua",
""
],
[
"Cheng",
"Zhongwei",
""
],
[
"Chen",
"Lin",
""
]
] |
new_dataset
| 0.999721 |
2205.11819
|
Zhenyu Zhang
|
Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu,
Zhangyang Wang
|
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
| null | null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trojan attacks threaten deep neural networks (DNNs) by poisoning them to
behave normally on most samples, yet to produce manipulated results for inputs
attached with a particular trigger. Several works attempt to detect whether a
given DNN has been injected with a specific trigger during the training. In a
parallel line of research, the lottery ticket hypothesis reveals the existence
of sparse subnetworks which are capable of reaching competitive performance as
the dense network after independent training. Connecting these two dots, we
investigate the problem of Trojan DNN detection from the brand new lens of
sparsity, even when no clean training data is available. Our crucial
observation is that the Trojan features are significantly more stable to
network pruning than benign features. Leveraging that, we propose a novel
Trojan network detection regime: first locating a "winning Trojan lottery
ticket" which preserves nearly full Trojan information yet only chance-level
performance on clean inputs; then recovering the trigger embedded in this
already isolated subnetwork. Extensive experiments on various datasets, i.e.,
CIFAR-10, CIFAR-100, and ImageNet, with different network architectures, i.e.,
VGG-16, ResNet-18, ResNet-20s, and DenseNet-100 demonstrate the effectiveness
of our proposal. Codes are available at
https://github.com/VITA-Group/Backdoor-LTH.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 06:33:31 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Chen",
"Tianlong",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Zhang",
"Yihua",
""
],
[
"Chang",
"Shiyu",
""
],
[
"Liu",
"Sijia",
""
],
[
"Wang",
"Zhangyang",
""
]
] |
new_dataset
| 0.975536 |
2205.11824
|
Xulong Zhang
|
Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao
|
TDASS: Target Domain Adaptation Speech Synthesis Framework for
Multi-speaker Low-Resource TTS
|
Accepted by IJCNN2022 (The 2022 International Joint Conference on
Neural Networks)
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, synthesizing personalized speech with text-to-speech (TTS)
applications has been in high demand. However, previous TTS models require a
large amount of recorded speech from the target speaker for training, and
collecting many utterances from the target speaker is costly. Data augmentation
of the speech is a possible solution, but it leads to low-quality synthesized
speech. Multi-speaker TTS models have been proposed to address this issue, but
the imbalance in the number of utterances per speaker causes a voice similarity
problem. We propose the Target Domain Adaptation Speech Synthesis Network
(TDASS) to address these issues. Built on the backbone of Tacotron2, a
high-quality TTS model, TDASS introduces a self-interested classifier to reduce
the influence of non-target speakers. In addition, a special gradient reversal
layer with different operations for target and non-target speakers is added to
the classifier. We evaluate the model on a Chinese speech corpus; the
experiments show the proposed method outperforms the baseline method in terms
of voice quality and voice similarity.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 06:41:05 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Zhang",
"Xulong",
""
],
[
"Wang",
"Jianzong",
""
],
[
"Cheng",
"Ning",
""
],
[
"Xiao",
"Jing",
""
]
] |
new_dataset
| 0.994058 |
2205.11825
|
Fangyu Shen
|
Fangyu Shen and Wei Gao
|
A Rate Control Algorithm for Video-based Point Cloud Compression
|
5 pages, 3 figures, 4 tables
|
2021 International Conference on Visual Communications and Image
Processing (VCIP)
|
10.1109/VCIP53242.2021.9675449.
| null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-based point cloud compression (V-PCC) has been an emerging compression
technology that projects the 3D point cloud into a 2D plane and uses high
efficiency video coding (HEVC) to encode the projected 2D videos (geometry
video and color video). In this work, we propose a rate control algorithm for
the all-intra (AI) configuration of V-PCC. Specifically, based on the
quality-dependency existing in the projected videos, we develop an optimization
formulation to allocate target bits between the geometry video and the color
video. Furthermore, we design a two-pass method for HEVC to adapt to the new
characteristics of projected videos, which significantly improves the accuracy
of rate control. Experimental results demonstrate that our algorithm
outperforms V-PCC without rate control in R-D performance with just 0.43%
bitrate error.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 06:42:49 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Shen",
"Fangyu",
""
],
[
"Gao",
"Wei",
""
]
] |
new_dataset
| 0.98371 |
2205.11830
|
Iason Katsamenis
|
Iason Katsamenis, Eleni Eirini Karolou, Agapi Davradou, Eftychios
Protopapadakis, Anastasios Doulamis, Nikolaos Doulamis, Dimitris Kalogeras
|
TraCon: A novel dataset for real-time traffic cones detection using deep
learning
|
10 pages, 5 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Substantial progress has been made in the field of object detection in road
scenes. However, it is mainly focused on vehicles and pedestrians. To this end,
we investigate traffic cone detection, an object category crucial for road
effects and maintenance. In this work, the YOLOv5 algorithm is employed in
order to find a solution for the efficient and fast detection of traffic cones.
YOLOv5 achieves high detection accuracy, with an IoU score of up to
91.31%. The proposed method has been applied to an RGB roadwork image dataset
collected from various sources.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 06:51:58 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Katsamenis",
"Iason",
""
],
[
"Karolou",
"Eleni Eirini",
""
],
[
"Davradou",
"Agapi",
""
],
[
"Protopapadakis",
"Eftychios",
""
],
[
"Doulamis",
"Anastasios",
""
],
[
"Doulamis",
"Nikolaos",
""
],
[
"Kalogeras",
"Dimitris",
""
]
] |
new_dataset
| 0.999768 |
2205.11836
|
Marcelo Viridiano
|
Frederico Belcavello, Marcelo Viridiano, Ely Edison Matos, Tiago
Timponi Torrent
|
Charon: a FrameNet Annotation Tool for Multimodal Corpora
|
Accepted submission for the The Sixteenth Linguistic Annotation
Workshop (LAW-XVI 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents Charon, a web tool for annotating multimodal corpora with
FrameNet categories. Annotation can be made for corpora containing both static
images and video sequences paired - or not - with text sequences. The pipeline
features, besides the annotation interface, corpus import and pre-processing
tools.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 06:58:07 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Belcavello",
"Frederico",
""
],
[
"Viridiano",
"Marcelo",
""
],
[
"Matos",
"Ely Edison",
""
],
[
"Torrent",
"Tiago Timponi",
""
]
] |
new_dataset
| 0.997368 |
2205.11840
|
Marcelo Viridiano
|
Tiago Timponi Torrent, Arthur Lorenzi, Ely Edison da Silva Matos,
Frederico Belcavello, Marcelo Viridiano, Maucha Andrade Gamonal
|
Lutma: a Frame-Making Tool for Collaborative FrameNet Development
|
Accepted submission for the 1st Workshop on Perspectivist Approaches
to NLP (NLPerspectives)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents Lutma, a collaborative, semi-constrained, tutorial-based
tool for contributing frames and lexical units to the Global FrameNet
initiative. The tool parameterizes the process of frame creation, avoiding
consistency violations and promoting the integration of frames contributed by
the community with existing frames. Lutma is structured in a wizard-like
fashion so as to provide users with text and video tutorials relevant for each
step in the frame creation process. We argue that this tool will allow for a
sensible expansion of FrameNet coverage in terms of both languages and cultural
perspectives encoded by them, positioning frames as a viable alternative for
representing perspective in language models.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 07:04:43 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Torrent",
"Tiago Timponi",
""
],
[
"Lorenzi",
"Arthur",
""
],
[
"Matos",
"Ely Edison da Silva",
""
],
[
"Belcavello",
"Frederico",
""
],
[
"Viridiano",
"Marcelo",
""
],
[
"Gamonal",
"Maucha Andrade",
""
]
] |
new_dataset
| 0.981215 |
2205.11867
|
Tatsuya Ide
|
Tatsuya Ide and Daisuke Kawahara
|
Building a Dialogue Corpus Annotated with Expressed and Experienced
Emotions
|
ACL Student Research Workshop (SRW) 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In communication, a human would recognize the emotion of an interlocutor and
respond with an appropriate emotion, such as empathy and comfort. Toward
developing a dialogue system with such a human-like ability, we propose a
method to build a dialogue corpus annotated with two kinds of emotions. We
collect dialogues from Twitter and annotate each utterance with the emotion
that a speaker put into the utterance (expressed emotion) and the emotion that
a listener felt after listening to the utterance (experienced emotion). We
built a dialogue corpus in Japanese using this method, and its statistical
analysis revealed the differences between expressed and experienced emotions.
We conducted experiments on recognition of the two kinds of emotions. The
experimental results indicated the difficulty in recognizing experienced
emotions and the effectiveness of multi-task learning of the two kinds of
emotions. We hope that the constructed corpus will facilitate the study on
emotion recognition in a dialogue and emotion-aware dialogue response
generation.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 07:40:11 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Ide",
"Tatsuya",
""
],
[
"Kawahara",
"Daisuke",
""
]
] |
new_dataset
| 0.952332 |
2205.11939
|
Bugra Caskurlu
|
Bugra Caskurlu and Fatih Erdem Kizilkaya
|
On Hedonic Games with Common Ranking Property
| null | null | null | null |
cs.GT econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hedonic games are a prominent model of coalition formation, in which each
agent's utility depends only on the coalition in which she resides. The subclass of
hedonic games that models the formation of general partnerships, where output
is shared equally among affiliates, is referred to as hedonic games with common
ranking property (HGCRP). Aside from their economic motivation, HGCRP games came into
prominence since they are guaranteed to have core stable solutions that can be
found efficiently. We improve upon existing results by proving that every
instance of HGCRP has a solution that is Pareto optimal, core stable and
individually stable. The economic significance of this result is that
efficiency is not to be totally sacrificed for the sake of stability in HGCRP.
We establish that finding such a solution is {\bf NP-hard} even if the sizes of
the coalitions are bounded above by $3$; however, it is polynomial time
solvable if the sizes of the coalitions are bounded above by $2$. We show that
the gap between the total utility of a core stable solution and that of the
socially-optimal solution (OPT) is bounded above by $n$, where $n$ is the
number of agents, and that this bound is tight. Our investigations reveal that
computing OPT is inapproximable within better than $O(n^{1-\epsilon})$ for any
fixed $\epsilon > 0$, and that this inapproximability lower bound is
polynomially tight. However, OPT can be computed in polynomial time if the
sizes of the coalitions are bounded above by $2$.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 10:10:40 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Caskurlu",
"Bugra",
""
],
[
"Kizilkaya",
"Fatih Erdem",
""
]
] |
new_dataset
| 0.986911 |
2205.11962
|
Yanling Hao
|
Yanling Hao, Zhiyuan Shi, Yuanwei Liu
|
A Wireless-Vision Dataset for Privacy Preserving Human Activity
Recognition
| null | null | null | null |
cs.CV eess.IV eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human Activity Recognition (HAR) has recently received remarkable attention
in numerous applications such as assisted living and remote monitoring.
Existing solutions based on sensor and vision technologies have made
progress but still suffer from considerable limitations in their
environmental requirements. Wireless sensing based on signals such as WiFi has
emerged as a new paradigm since it is convenient and not restricted by the
environment. In this paper, a new WiFi-based and video-based neural network
(WiNN) is proposed to improve the robustness of activity recognition where the
synchronized video serves as the supplement for the wireless data. Moreover, a
wireless-vision benchmark (WiVi) is collected for 9 class actions recognition
in three different visual conditions, including the scenes without occlusion,
with partial occlusion, and with full occlusion. Both a machine learning method
(support vector machine, SVM) and deep learning methods are used to
verify the accuracy of the dataset. Our results show that the WiVi dataset
satisfies the primary demand, and all three branches in the proposed pipeline
maintain more than $80\%$ activity recognition accuracy over multiple action
segmentations from 1s to 3s. In particular, WiNN is the most robust method
across all actions and all three action segmentations compared to the others.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 10:49:11 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Hao",
"Yanling",
""
],
[
"Shi",
"Zhiyuan",
""
],
[
"Liu",
"Yuanwei",
""
]
] |
new_dataset
| 0.999808 |
2205.11976
|
Shantipriya Parida
|
Shantipriya Parida, Kalyanamalini Sahoo, Atul Kr. Ojha, Saraswati
Sahoo, Satya Ranjan Dash, Bijayalaxmi Dash
|
Universal Dependency Treebank for Odia Language
|
To be appear in 6th Workshop on Indian Language Data: Resources and
Evaluation (WILDRE-6) @ LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper presents the first publicly available treebank of Odia, a
morphologically rich low resource Indian language. The treebank contains
approx. 1082 tokens (100 sentences) in Odia selected from "Samantar", the
largest available parallel corpora collection for Indic languages. All the
selected sentences are manually annotated following the ``Universal Dependency
(UD)" guidelines. The morphological analysis of the Odia treebank was performed
using machine learning techniques. The Odia annotated treebank will enrich the
Odia language resource and will help in building language technology tools for
cross-lingual learning and typological research. We also build a preliminary
Odia parser using a machine learning approach. The accuracy of the parser is
86.6% for tokenization, 64.1% for UPOS, 63.78% for XPOS, 42.04% for UAS, and 21.34% for LAS.
Finally, the paper briefly discusses the linguistic analysis of the Odia UD
treebank.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 11:19:26 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Parida",
"Shantipriya",
""
],
[
"Sahoo",
"Kalyanamalini",
""
],
[
"Ojha",
"Atul Kr.",
""
],
[
"Sahoo",
"Saraswati",
""
],
[
"Dash",
"Satya Ranjan",
""
],
[
"Dash",
"Bijayalaxmi",
""
]
] |
new_dataset
| 0.996612 |
2205.11981
|
Yaoyao Zhong
|
Yaoyao Zhong and Weihong Deng
|
OPOM: Customized Invisible Cloak towards Face Privacy Protection
|
This article has been accepted by IEEE Transactions on Pattern
Analysis & Machine Intelligence. Datasets and code are available at
https://github.com/zhongyy/OPOM
| null |
10.1109/TPAMI.2022.3175602
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While convenient in daily life, face recognition technologies also raise
privacy concerns for regular users on social media since they could be used
to analyze face images and videos, efficiently and surreptitiously without any
security restrictions. In this paper, we investigate the face privacy
protection from a technology standpoint based on a new type of customized
cloak, which can be applied to all the images of a regular user, to prevent
malicious face recognition systems from uncovering their identity.
Specifically, we propose a new method, named one person one mask (OPOM), to
generate person-specific (class-wise) universal masks by optimizing each
training sample in the direction away from the feature subspace of the source
identity. To make full use of the limited training images, we investigate
several modeling methods, including affine hulls, class centers, and convex
hulls, to obtain a better description of the feature subspace of source
identities. The effectiveness of the proposed method is evaluated on both
common and celebrity datasets against black-box face recognition models with
different loss functions and network architectures. In addition, we discuss the
advantages and potential problems of the proposed method. In particular, we
conduct an application study on the privacy protection of a video dataset,
Sherlock, to demonstrate the potential practical usage of the proposed method.
Datasets and code are available at https://github.com/zhongyy/OPOM.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 11:29:37 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Zhong",
"Yaoyao",
""
],
[
"Deng",
"Weihong",
""
]
] |
new_dataset
| 0.990632 |
2205.12002
|
Nurettin Turan
|
Nurettin Turan, Michael Koller, Benedikt Fesl, Samer Bazzi, Wen Xu,
Wolfgang Utschick
|
GMM-based Codebook Construction and Feedback Encoding in FDD Systems
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a precoder codebook construction and feedback encoding scheme
which is based on Gaussian mixture models (GMMs). In an offline phase, the base
station (BS) first fits a GMM to uplink (UL) training samples. Thereafter, it
designs a codebook in an unsupervised manner by exploiting the GMM's clustering
capability. We design one codebook entry per GMM component. After offloading
the GMM (but not the codebook) to the mobile terminal (MT) in the online phase,
the MT utilizes the GMM to determine the best fitting codebook entry. To this
end, no channel estimation is necessary at the MT. Instead, the MT's observed
signal is used to evaluate how responsible each component of the GMM is for the
signal. The feedback consists of the index of the GMM component with the
highest responsibility and the BS then employs the corresponding codebook
entry. Simulation results show that the proposed codebook design and feedback
encoding scheme outperforms conventional Lloyd clustering based codebook design
algorithms, especially in configurations with reduced pilot overhead.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 11:48:12 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Turan",
"Nurettin",
""
],
[
"Koller",
"Michael",
""
],
[
"Fesl",
"Benedikt",
""
],
[
"Bazzi",
"Samer",
""
],
[
"Xu",
"Wen",
""
],
[
"Utschick",
"Wolfgang",
""
]
] |
new_dataset
| 0.997686 |
2205.12133
|
Lin Li
|
Ming Li, Lin Li, Qing Xie, Jingling Yuan, Xiaohui Tao
|
MealRec: A Meal Recommendation Dataset
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bundle recommendation systems aim to recommend a bundle of items for a user
to consider as a whole. They have become a norm in modern life and have been
applied to many real-world settings, such as product bundle recommendation,
music playlist recommendation and travel package recommendation. However,
compared to studies of bundle recommendation approaches in areas such as online
shopping and digital music services, research on meal recommendations for
restaurants in the hospitality industry has made limited progress, due largely
to the lack of high-quality benchmark datasets. A publicly available dataset
specialising in meal recommendation research for the research community is in
urgent demand. In this paper, we introduce a meal recommendation dataset
(MealRec) that aims to facilitate future research. MealRec is constructed from
the user review records of Allrecipe.com, covering 1,500+ users, 7,200+ recipes
and 3,800+ meals. Each recipe is described with rich information, such as
ingredients, instructions, pictures, category, and tags; and each meal
has three courses, consisting of an appetizer, a main dish, and a dessert.
Furthermore, we propose a category-constrained meal recommendation model that
is evaluated through comparative experiments with several state-of-the-art
bundle recommendation methods on MealRec. Experimental results confirm the
superiority of our model and demonstrate that MealRec is a promising testbed
for meal recommendation related research.
The MealRec dataset and the source code of our proposed model are available
at https://github.com/WUT-IDEA/MealRec for access and reproducibility.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 15:09:43 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Li",
"Ming",
""
],
[
"Li",
"Lin",
""
],
[
"Xie",
"Qing",
""
],
[
"Yuan",
"Jingling",
""
],
[
"Tao",
"Xiaohui",
""
]
] |
new_dataset
| 0.999861 |
2205.12138
|
Oliver Gasser
|
Tanya Shreedhar, Danesh Zeynali, Oliver Gasser, Nitinder Mohan, J\"org
Ott
|
A Longitudinal View at the Adoption of Multipath TCP
|
arXiv admin note: substantial text overlap with arXiv:2106.07351
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multipath TCP (MPTCP) extends traditional TCP to enable simultaneous use of
multiple connection endpoints at the source and destination. MPTCP has been
under active development since its standardization in 2013, and more recently
in February 2020, MPTCP was upstreamed to the Linux kernel. In this paper, we
provide an in-depth analysis of MPTCPv0 in the Internet and the first analysis
of MPTCPv1 to date. We probe the entire IPv4 address space and an IPv6 hitlist
to detect MPTCP-enabled systems operational on ports 80 and 443. Our scans
reveal a steady increase in MPTCPv0-capable IPs, reaching 13k+ on IPv4
(2$\times$ increase in one year) and 1k on IPv6 (40$\times$ increase). MPTCPv1
deployment is comparatively low with $\approx$100 supporting hosts in IPv4 and
IPv6, most of which belong to Apple. We also discover a substantial share of
seemingly MPTCP-capable hosts, an artifact of middleboxes mirroring TCP
options. We conduct targeted HTTP(S) measurements towards select hosts and find
that middleboxes can aggressively impact the perceived quality of applications
utilizing MPTCP. Finally, we analyze two complementary traffic traces from
CAIDA and MAWI to shed light on the real-world usage of MPTCP. We find that
while MPTCP usage has increased by a factor of 20 over the past few years, its
traffic share is still quite low.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 15:14:47 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Shreedhar",
"Tanya",
""
],
[
"Zeynali",
"Danesh",
""
],
[
"Gasser",
"Oliver",
""
],
[
"Mohan",
"Nitinder",
""
],
[
"Ott",
"Jörg",
""
]
] |
new_dataset
| 0.954522 |
2205.12194
|
Debjoy Saha
|
Debjoy Saha, Shravan Nayak, Timo Baumann
|
Merkel Podcast Corpus: A Multimodal Dataset Compiled from 16 Years of
Angela Merkel's Weekly Video Podcasts
|
Accepted at LREC 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the Merkel Podcast Corpus, an audio-visual-text corpus in German
collected from 16 years of (almost) weekly Internet podcasts of former German
chancellor Angela Merkel. To the best of our knowledge, this is the first
single speaker corpus in the German language consisting of audio, visual and
text modalities of comparable size and temporal extent. We describe the methods
used with which we have collected and edited the data which involves
downloading the videos, transcripts and other metadata, forced alignment,
performing active speaker recognition and face detection to finally curate the
single speaker dataset consisting of utterances spoken by Angela Merkel. The
proposed pipeline is general and can be used to curate other datasets of
similar nature, such as talk show content. Through various statistical
analyses and applications of the dataset in talking face generation and TTS, we
show the utility of the dataset. We argue that it is a valuable contribution to
the research community, in particular, due to its realistic and challenging
material at the boundary between prepared and spontaneous speech.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 16:48:07 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Saha",
"Debjoy",
""
],
[
"Nayak",
"Shravan",
""
],
[
"Baumann",
"Timo",
""
]
] |
new_dataset
| 0.999786 |
2205.12240
|
Roni Friedman
|
Roni Friedman, Jo\~ao Sedoc, Shai Gretz, Assaf Toledo, Rose Weeks,
Naor Bar-Zeev, Yoav Katz, Noam Slonim
|
VIRATrustData: A Trust-Annotated Corpus of Human-Chatbot Conversations
About COVID-19 Vaccines
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public trust in medical information is crucial for successful application of
public health policies such as vaccine uptake. This is especially true when the
information is offered remotely, by chatbots, which have become increasingly
popular in recent years. Here, we explore the challenging task of human-bot
turn-level trust classification. We rely on a recently released data of
observationally-collected (rather than crowdsourced) dialogs with VIRA chatbot,
a COVID-19 Vaccine Information Resource Assistant. These dialogs are centered
around questions and concerns about COVID-19 vaccines, where trust is
particularly acute. We annotated $3k$ VIRA system-user conversational turns for
Low Institutional Trust or Low Agent Trust vs. Neutral or High Trust. We
release the labeled dataset, VIRATrustData, the first of its kind to the best
of our knowledge. We demonstrate how this task is non-trivial and compare
several models that predict the different levels of trust.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 17:48:04 GMT"
}
] | 2022-05-25T00:00:00 |
[
[
"Friedman",
"Roni",
""
],
[
"Sedoc",
"João",
""
],
[
"Gretz",
"Shai",
""
],
[
"Toledo",
"Assaf",
""
],
[
"Weeks",
"Rose",
""
],
[
"Bar-Zeev",
"Naor",
""
],
[
"Katz",
"Yoav",
""
],
[
"Slonim",
"Noam",
""
]
] |
new_dataset
| 0.99866 |
1907.08433
|
Joseph O'Rourke
|
Erik D. Demaine, Martin L. Demaine, David Eppstein, Joseph O'Rourke
|
Some Polycubes Have No Edge Zipper Unfolding
|
11 pages, 10 figures, 9 references. Updated to match the version that
will appear in the Canad. Conf. Comput. Geom., Aug. 2020
|
Geombinatorics, Vol. XXXI, Issue 3 (Jan 2022), pp.101-109
| null | null |
cs.CG cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is unknown whether every polycube (polyhedron constructed by gluing cubes
face-to-face) has an edge unfolding, that is, cuts along edges of the cubes
that unfolds the polycube to a single nonoverlapping polygon in the plane. Here
we construct polycubes that have no *edge zipper unfolding* where the cut edges
are further restricted to form a path.
|
[
{
"version": "v1",
"created": "Fri, 19 Jul 2019 09:46:30 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Jul 2019 12:33:04 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Jul 2020 19:23:16 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Demaine",
"Erik D.",
""
],
[
"Demaine",
"Martin L.",
""
],
[
"Eppstein",
"David",
""
],
[
"O'Rourke",
"Joseph",
""
]
] |
new_dataset
| 0.998653 |
2007.03262
|
Chenglong Li
|
Zhengzheng Tu, Yan Ma, Zhun Li, Chenglong Li, Jieming Xu, Yongtao Liu
|
RGBT Salient Object Detection: A Large-scale Dataset and Benchmark
|
12 pages, 10 figures
https://github.com/lz118/RGBT-Salient-Object-Detection
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Salient object detection in complex scenes and environments is a challenging
research topic. Most works focus on RGB-based salient object detection, which
limits its performance of real-life applications when confronted with adverse
conditions such as dark environments and complex backgrounds. Taking advantage
of RGB and thermal infrared images becomes a new research direction for
detecting salient object in complex scenes recently, as thermal infrared
spectrum imaging provides the complementary information and has been applied to
many computer vision tasks. However, current research for RGBT salient object
detection is limited by the lack of a large-scale dataset and comprehensive
benchmark. This work contributes such a RGBT image dataset named VT5000,
including 5000 spatially aligned RGBT image pairs with ground truth
annotations. VT5000 has 11 challenges collected in different scenes and
environments for exploring the robustness of algorithms. With this dataset, we
propose a powerful baseline approach, which extracts multi-level features
within each modality and aggregates these features of all modalities with the
attention mechanism, for accurate RGBT salient object detection. Extensive
experiments show that the proposed baseline approach outperforms the
state-of-the-art methods on VT5000 dataset and other two public datasets. In
addition, we carry out a comprehensive analysis of different algorithms of RGBT
salient object detection on VT5000 dataset, and then make several valuable
conclusions and provide some potential research directions for RGBT salient
object detection.
|
[
{
"version": "v1",
"created": "Tue, 7 Jul 2020 07:58:14 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jul 2020 02:17:41 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Nov 2020 07:18:44 GMT"
},
{
"version": "v4",
"created": "Tue, 10 Nov 2020 02:07:42 GMT"
},
{
"version": "v5",
"created": "Wed, 18 Nov 2020 12:27:14 GMT"
},
{
"version": "v6",
"created": "Mon, 23 May 2022 03:38:28 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Tu",
"Zhengzheng",
""
],
[
"Ma",
"Yan",
""
],
[
"Li",
"Zhun",
""
],
[
"Li",
"Chenglong",
""
],
[
"Xu",
"Jieming",
""
],
[
"Liu",
"Yongtao",
""
]
] |
new_dataset
| 0.999875 |
2105.14151
|
Farah Ferdaus
|
Farah Ferdaus, B. M. S. Bahar Talukder, and Md Tauhidur Rahman
|
Approximate MRAM: High-performance and Power-efficient Computing with
MRAM Chips for Error-tolerant Applications
| null | null |
10.1109/TC.2022.3174584
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approximate computing (AC) leverages the inherent error resilience and is
used in many big-data applications from various domains such as multimedia,
computer vision, signal processing, and machine learning to improve systems
performance and power consumption. Like many other approximate circuits and
algorithms, the memory subsystem can also be used to enhance performance and
save power significantly. This paper proposes an efficient and effective
systematic methodology to construct an approximate non-volatile
magneto-resistive RAM (MRAM) framework using consumer-off-the-shelf (COTS) MRAM
chips. In the proposed scheme, an extensive experimental characterization of
memory errors is performed by manipulating the write latency of MRAM chips
which exploits the inherent (intrinsic/extrinsic process variation) stochastic
switching behavior of magnetic tunnel junctions (MTJs). The experimental
results and error-resilient image application reveal that the proposed AC
framework provides a significant performance improvement and demonstrates a
maximum reduction in MRAM write current of ~66% on average with negligible or
no loss in output quality.
|
[
{
"version": "v1",
"created": "Sat, 29 May 2021 00:11:00 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 16:59:38 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Ferdaus",
"Farah",
""
],
[
"Talukder",
"B. M. S. Bahar",
""
],
[
"Rahman",
"Md Tauhidur",
""
]
] |
new_dataset
| 0.997309 |
2106.15611
|
James Bagrow
|
Milo Z. Trujillo, Laurent H\'ebert-Dufresne and James Bagrow
|
The penumbra of open source: projects outside of centralized platforms
are longer maintained, more academic and more collaborative
|
20 pages, 7 figures, 3 tables
|
EPJ Data Science 11:31 (2022)
|
10.1140/epjds/s13688-022-00345-7
| null |
cs.CY cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
GitHub has become the central online platform for much of open source,
hosting most open source code repositories. With this popularity, the public
digital traces of GitHub are now a valuable means to study teamwork and
collaboration. In many ways, however, GitHub is a convenience sample, and may
not be representative of open source development off the platform. Here we
develop a novel, extensive sample of public open source project repositories
outside of centralized platforms. We characterize these projects along a
number of dimensions and compare them to a time-matched sample of corresponding
GitHub projects. Our sample projects tend to have more collaborators, are
maintained for longer periods, and tend to be more focused on academic and
scientific problems.
|
[
{
"version": "v1",
"created": "Tue, 29 Jun 2021 17:54:26 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jul 2021 16:03:03 GMT"
},
{
"version": "v3",
"created": "Sun, 22 May 2022 17:48:55 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Trujillo",
"Milo Z.",
""
],
[
"Hébert-Dufresne",
"Laurent",
""
],
[
"Bagrow",
"James",
""
]
] |
new_dataset
| 0.999442 |
2108.12603
|
Giannis Karamanolakis
|
Guoqing Zheng, Giannis Karamanolakis, Kai Shu, Ahmed Hassan Awadallah
|
WALNUT: A Benchmark on Semi-weakly Supervised Learning for Natural
Language Understanding
|
Accepted to NAACL 2022 (Long Paper)
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building machine learning models for natural language understanding (NLU)
tasks relies heavily on labeled data. Weak supervision has been proven valuable
when a large amount of labeled data is unavailable or expensive to obtain.
Existing work on weak supervision for NLU mostly either focuses on a specific
task or simulates weak supervision signals from ground-truth labels. It
is thus hard to compare different approaches and evaluate the benefit of weak
supervision without access to a unified and systematic benchmark with diverse
tasks and real-world weak labeling rules. In this paper, we propose such a
benchmark, named WALNUT (semi-WeAkly supervised Learning for Natural language
Understanding Testbed), to advocate and facilitate research on weak supervision
for NLU. WALNUT consists of NLU tasks with different types, including
document-level and token-level prediction tasks. WALNUT is the first
semi-weakly supervised learning benchmark for NLU, where each task contains
weak labels generated by multiple real-world weak sources, together with a
small set of clean labels. We conduct baseline evaluations on WALNUT to
systematically evaluate the effectiveness of various weak supervision methods
and model architectures. Our results demonstrate the benefit of weak
supervision for low-resource NLU tasks and highlight interesting patterns
across tasks. We expect WALNUT to stimulate further research on methodologies
to leverage weak supervision more effectively. The benchmark and code for
baselines are available at \url{aka.ms/walnut_benchmark}.
|
[
{
"version": "v1",
"created": "Sat, 28 Aug 2021 08:33:23 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2022 16:30:49 GMT"
},
{
"version": "v3",
"created": "Mon, 23 May 2022 00:48:39 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Zheng",
"Guoqing",
""
],
[
"Karamanolakis",
"Giannis",
""
],
[
"Shu",
"Kai",
""
],
[
"Awadallah",
"Ahmed Hassan",
""
]
] |
new_dataset
| 0.997171 |
2109.13046
|
Stefano Cresci
|
Kristina Hristakieva, Stefano Cresci, Giovanni Da San Martino, Mauro
Conti, Preslav Nakov
|
The Spread of Propaganda by Coordinated Communities on Social Media
|
The 14th ACM Web Science Conference 2022 (WebSci '22)
| null |
10.1145/3501247.3531543
| null |
cs.SI cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale manipulations on social media have two important characteristics:
(i) use of propaganda to influence others, and (ii) adoption of coordinated
behavior to spread it and to amplify its impact. Despite the connection between
them, these two characteristics have so far been considered in isolation. Here
we aim to bridge this gap. In particular, we analyze the spread of propaganda
and its interplay with coordinated behavior on a large Twitter dataset about
the 2019 UK general election. We first propose and evaluate several metrics for
measuring the use of propaganda on Twitter. Then, we investigate the use of
propaganda by different coordinated communities that participated in the online
debate. The combination of the use of propaganda and coordinated behavior
allows us to uncover the authenticity and harmfulness of the different
communities. Finally, we compare our measures of propaganda and coordination
with automation (i.e., bot) scores and Twitter suspensions, revealing
interesting trends. From a theoretical viewpoint, we introduce a methodology
for analyzing several important dimensions of online behavior that are seldom
conjointly considered. From a practical viewpoint, we provide new insights into
authentic and inauthentic online activities during the 2019 UK general
election.
|
[
{
"version": "v1",
"created": "Mon, 27 Sep 2021 13:39:10 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 12:17:54 GMT"
},
{
"version": "v3",
"created": "Sat, 21 May 2022 09:04:22 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Hristakieva",
"Kristina",
""
],
[
"Cresci",
"Stefano",
""
],
[
"Martino",
"Giovanni Da San",
""
],
[
"Conti",
"Mauro",
""
],
[
"Nakov",
"Preslav",
""
]
] |
new_dataset
| 0.996377 |
2110.06324
|
Xiangtian Zheng
|
Xiangtian Zheng, Nan Xu, Loc Trinh, Dongqi Wu, Tong Huang, S.
Sivaranjani, Yan Liu, Le Xie
|
A Multi-scale Time-series Dataset with Benchmark for Machine Learning in
Decarbonized Energy Grids
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The electric grid is a key enabling infrastructure for the ambitious
transition towards carbon neutrality as we grapple with climate change. With
deepening penetration of renewable energy resources and electrified
transportation, the reliable and secure operation of the electric grid becomes
increasingly challenging. In this paper, we present PSML, a first-of-its-kind
open-access multi-scale time-series dataset, to aid in the development of
data-driven machine learning (ML) based approaches towards reliable operation
of future electric grids. The dataset is generated through a novel transmission
+ distribution (T+D) co-simulation designed to capture the increasingly
important interactions and uncertainties of the grid dynamics, containing
electric load, renewable generation, weather, voltage and current measurements
over multiple spatio-temporal scales. Using PSML, we provide state-of-the-art
ML baselines on three challenging use cases of critical importance:
(i) early detection, accurate classification and localization of dynamic
disturbance events; (ii) robust hierarchical forecasting of load and renewable
energy with the presence of uncertainties and extreme events; and (iii)
realistic synthetic generation of physical-law-constrained measurement time
series. We envision that this dataset will enable advances for ML in dynamic
systems, while simultaneously allowing ML researchers to contribute towards
carbon-neutral electricity and mobility.
|
[
{
"version": "v1",
"created": "Tue, 12 Oct 2021 20:18:49 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 00:08:06 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Zheng",
"Xiangtian",
""
],
[
"Xu",
"Nan",
""
],
[
"Trinh",
"Loc",
""
],
[
"Wu",
"Dongqi",
""
],
[
"Huang",
"Tong",
""
],
[
"Sivaranjani",
"S.",
""
],
[
"Liu",
"Yan",
""
],
[
"Xie",
"Le",
""
]
] |
new_dataset
| 0.999803 |
2112.08754
|
Lukas Lange
|
Lukas Lange, Heike Adel, Jannik Str\"otgen, Dietrich Klakow
|
CLIN-X: pre-trained language models and a study on cross-task transfer
for concept extraction in the clinical domain
|
This article has been accepted for publication in Bioinformatics
\c{opyright}: 2022 The Author(s). Published by Oxford University Press. All
rights reserved. The published manuscript can be found here:
https://doi.org/10.1093/bioinformatics/btac297
| null |
10.1093/bioinformatics/btac297
| null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of natural language processing (NLP) has recently seen a large
change towards using pre-trained language models for solving almost any task.
Despite showing great improvements in benchmark datasets for various tasks,
these models often perform sub-optimally in non-standard domains like the
clinical domain where a large gap between pre-training documents and target
documents is observed. In this paper, we aim at closing this gap with
domain-specific training of the language model and we investigate its effect on
a diverse set of downstream tasks and settings. We introduce the pre-trained
CLIN-X (Clinical XLM-R) language models and show how CLIN-X outperforms other
pre-trained transformer models by a large margin for ten clinical concept
extraction tasks from two languages. In addition, we demonstrate how the
transformer model can be further improved with our proposed task- and
language-agnostic model architecture based on ensembles over random splits and
cross-sentence context. Our studies in low-resource and transfer settings
reveal stable model performance despite a lack of annotated data with
improvements of up to 47 F1 points when only 250 labeled sentences are
available. Our results highlight the importance of specialized language models
such as CLIN-X for concept extraction in non-standard domains, but also show that
our task-agnostic model architecture is robust across the tested tasks and
languages so that domain- or task-specific adaptations are not required.
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 10:07:39 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Dec 2021 11:45:41 GMT"
},
{
"version": "v3",
"created": "Fri, 20 May 2022 18:19:23 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Lange",
"Lukas",
""
],
[
"Adel",
"Heike",
""
],
[
"Strötgen",
"Jannik",
""
],
[
"Klakow",
"Dietrich",
""
]
] |
new_dataset
| 0.991381 |
2201.03168
|
\`Eric Pairet
|
\`Eric Pairet, Simone Span\`o, Nikita Mankovskii, Paolo Pellegrino,
Igor Zhilin, Jeremy Nicola, Francesco La Gala, Giulia De Masi
|
Nukhada USV: a Robot for Autonomous Surveying and Support to Underwater
Operations
|
OCEANS 2022 - Chennai
|
OCEANS 2022 - Chennai, 2022
|
10.1109/OCEANSChennai45887.2022.9775538
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Technology Innovation Institute in Abu Dhabi, United Arab Emirates, has
recently finished the production and testing of a new unmanned surface vehicle,
called Nukhada, specifically designed for autonomous survey, inspection, and
support to underwater operations. This manuscript describes the main
characteristics of the Nukhada USV, as well as some of the trials conducted
during the development.
|
[
{
"version": "v1",
"created": "Mon, 10 Jan 2022 05:24:37 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Pairet",
"Èric",
""
],
[
"Spanò",
"Simone",
""
],
[
"Mankovskii",
"Nikita",
""
],
[
"Pellegrino",
"Paolo",
""
],
[
"Zhilin",
"Igor",
""
],
[
"Nicola",
"Jeremy",
""
],
[
"La Gala",
"Francesco",
""
],
[
"De Masi",
"Giulia",
""
]
] |
new_dataset
| 0.999882 |
2202.01268
|
Alexander Tyurin
|
Alexander Tyurin, Peter Richt\'arik
|
DASHA: Distributed Nonconvex Optimization with Communication
Compression, Optimal Oracle Complexity, and No Client Synchronization
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop and analyze DASHA: a new family of methods for nonconvex
distributed optimization problems. When the local functions at the nodes have a
finite-sum or an expectation form, our new methods, DASHA-PAGE and
DASHA-SYNC-MVR, improve the theoretical oracle and communication complexity of
the previous state-of-the-art method MARINA by Gorbunov et al. (2020). In
particular, to achieve an $\varepsilon$-stationary point, and considering the random
sparsifier RandK as an example, our methods compute the optimal number of
gradients $\mathcal{O}\left(\frac{\sqrt{m}}{\varepsilon\sqrt{n}}\right)$ and
$\mathcal{O}\left(\frac{\sigma}{\varepsilon^{3/2}n}\right)$ in finite-sum and
expectation form cases, respectively, while maintaining the SOTA communication
complexity $\mathcal{O}\left(\frac{d}{\varepsilon \sqrt{n}}\right)$.
Furthermore, unlike MARINA, the new methods DASHA, DASHA-PAGE and DASHA-MVR
send compressed vectors only and never synchronize the nodes, which makes them
more practical for federated learning. We extend our results to the case when
the functions satisfy the Polyak-Lojasiewicz condition. Finally, our theory is
corroborated in practice: we see a significant improvement in experiments with
nonconvex classification and training of deep learning models.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 20:10:40 GMT"
},
{
"version": "v2",
"created": "Sun, 22 May 2022 10:31:19 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Tyurin",
"Alexander",
""
],
[
"Richtárik",
"Peter",
""
]
] |
new_dataset
| 0.998752 |
2203.13778
|
Raviraj Joshi
|
Abhishek Velankar, Hrushikesh Patil, Amol Gore, Shubham Salunke,
Raviraj Joshi
|
L3Cube-MahaHate: A Tweet-based Marathi Hate Speech Detection Dataset and
BERT models
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media platforms are used by a large number of people prominently to
express their thoughts and opinions. However, these platforms have contributed
to a substantial amount of hateful and abusive content as well. Therefore, it
is important to curb the spread of hate speech on these platforms. In India,
Marathi is one of the most popular languages used by a wide audience. In this
work, we present L3Cube-MahaHate, the first major Hate Speech Dataset in
Marathi. The dataset is curated from Twitter and annotated manually. Our dataset
consists of over 25,000 distinct tweets labeled into four major classes, i.e.,
hate, offensive, profane, and not. We present the approaches used for
collecting and annotating the data and the challenges faced during the process.
Finally, we present baseline classification results using deep learning models
based on CNN, LSTM, and Transformers. We explore mono-lingual and multi-lingual
variants of BERT like MahaBERT, IndicBERT, mBERT, and XLM-RoBERTa and show that
mono-lingual models perform better than their multi-lingual counterparts. The
MahaBERT model provides the best results on L3Cube-MahaHate Corpus. The data
and models are available at https://github.com/l3cube-pune/MarathiNLP .
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 17:00:33 GMT"
},
{
"version": "v2",
"created": "Sun, 22 May 2022 07:00:37 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Velankar",
"Abhishek",
""
],
[
"Patil",
"Hrushikesh",
""
],
[
"Gore",
"Amol",
""
],
[
"Salunke",
"Shubham",
""
],
[
"Joshi",
"Raviraj",
""
]
] |
new_dataset
| 0.99989 |
2205.00159
|
Yongkun Du
|
Yongkun Du and Zhineng Chen and Caiyan Jia and Xiaoting Yin and
Tianlun Zheng and Chenxia Li and Yuning Du and Yu-Gang Jiang
|
SVTR: Scene Text Recognition with a Single Visual Model
|
Accepted by IJCAI 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Dominant scene text recognition models commonly contain two building blocks,
a visual model for feature extraction and a sequence model for text
transcription. This hybrid architecture, although accurate, is complex and less
efficient. In this study, we propose a Single Visual model for Scene Text
recognition within the patch-wise image tokenization framework, which dispenses
with the sequential modeling entirely. The method, termed SVTR, firstly
decomposes an image text into small patches named character components.
Afterward, hierarchical stages are recurrently carried out by component-level
mixing, merging and/or combining. Global and local mixing blocks are devised to
perceive the inter-character and intra-character patterns, leading to a
multi-grained character component perception. Thus, characters are recognized
by a simple linear prediction. Experimental results on both English and Chinese
scene text recognition tasks demonstrate the effectiveness of SVTR. SVTR-L
(Large) achieves highly competitive accuracy in English and outperforms
existing methods by a large margin in Chinese, while running faster. In
addition, SVTR-T (Tiny) is an effective and much smaller model, which shows
appealing speed at inference. The code is publicly available at
https://github.com/PaddlePaddle/PaddleOCR.
|
[
{
"version": "v1",
"created": "Sat, 30 Apr 2022 04:37:01 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 05:52:33 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Du",
"Yongkun",
""
],
[
"Chen",
"Zhineng",
""
],
[
"Jia",
"Caiyan",
""
],
[
"Yin",
"Xiaoting",
""
],
[
"Zheng",
"Tianlun",
""
],
[
"Li",
"Chenxia",
""
],
[
"Du",
"Yuning",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] |
new_dataset
| 0.992411 |
2205.05166
|
Charlie C.L. Wang Prof. Dr.
|
Yingjun Tian, Guoxin Fang, Justas Petrulis, Andrew Weightman, Charlie
C.L. Wang
|
Soft Robotic Mannequin: Design and Algorithm for Deformation Control
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel soft robotic system for a deformable mannequin
that can be employed to physically realize the 3D geometry of different human
bodies. The soft membrane on a mannequin is deformed by inflating several
curved chambers using pneumatic actuation. Controlling the freeform surface of
a soft membrane by adjusting the pneumatic actuation in different chambers is
challenging as the membrane's shape is commonly determined by the interaction
between all chambers. Using vision feedback provided by a structured-light
based 3D scanner, we developed an efficient algorithm to compute the optimized
actuation of all chambers which could drive the soft membrane to deform into
the best approximation of different target shapes. Our algorithm converges
quickly by including pose estimation in the loop of optimization. The
time-consuming step of evaluating derivatives on the deformable membrane is
avoided by using the Broyden update when possible. The effectiveness of our
soft robotic mannequin with controlled deformation has been verified in
experiments.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 21:00:49 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 10:32:39 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Tian",
"Yingjun",
""
],
[
"Fang",
"Guoxin",
""
],
[
"Petrulis",
"Justas",
""
],
[
"Weightman",
"Andrew",
""
],
[
"Wang",
"Charlie C. L.",
""
]
] |
new_dataset
| 0.998951 |
2205.09233
|
Andrei Popescu
|
Andrei Popescu
|
Rensets and Renaming-Based Recursion for Syntax with Bindings
|
This is an extended technical report associated to an identically
titled conference paper that will appear in IJCAR 2022
| null | null | null |
cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
I introduce renaming-enriched sets (rensets for short), which are algebraic
structures axiomatizing fundamental properties of renaming (also known as
variable-for-variable substitution) on syntax with bindings. Rensets compare
favorably in some respects with the well-known foundation based on nominal
sets. In particular, renaming is a more fundamental operator than the nominal
swapping operator and enjoys a simpler, equationally expressed relationship
with the variable freshness predicate. Together with some natural axioms
matching properties of the syntactic constructors, rensets yield a truly
minimalistic characterization of lambda-calculus terms as an abstract datatype
-- one involving a recursively enumerable set of unconditional equations,
referring only to the most fundamental term operators: the constructors and
renaming. This characterization yields a recursion principle, which (similarly
to the case of nominal sets) can be improved by incorporating Barendregt's
variable convention. When interpreting syntax in semantic domains, my
renaming-based recursor is easier to deploy than the nominal recursor. My
results have been validated with the proof assistant Isabelle/HOL.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 22:22:01 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2022 18:45:47 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Popescu",
"Andrei",
""
]
] |
new_dataset
| 0.978613 |
2205.09651
|
Mustafa Jarrar
|
Mustafa Jarrar, Mohammed Khalilia, Sana Ghanem
|
Wojood: Nested Arabic Named Entity Corpus and Recognition using BERT
| null |
In Proceedings of the International Conference on Language
Resources and Evaluation (LREC 2022), Marseille, France. 2022
| null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents Wojood, a corpus for Arabic nested Named Entity
Recognition (NER). Nested entities occur when one entity mention is embedded
inside another entity mention. Wojood consists of about 550K Modern Standard
Arabic (MSA) and dialect tokens that are manually annotated with 21 entity
types including person, organization, location, event and date. More
importantly, the corpus is annotated with nested entities instead of the more
common flat annotations. The data contains about 75K entities, 22.5% of
which are nested. The inter-annotator evaluation of the corpus demonstrated a
strong agreement with Cohen's Kappa of 0.979 and an F1-score of 0.976. To
validate our data, we used the corpus to train a nested NER model based on
multi-task learning and AraBERT (Arabic BERT). The model achieved an overall
micro F1-score of 0.884. Our corpus, the annotation guidelines, the source code
and the pre-trained model are publicly available.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 16:06:49 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 07:33:05 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Jarrar",
"Mustafa",
""
],
[
"Khalilia",
"Mohammed",
""
],
[
"Ghanem",
"Sana",
""
]
] |
new_dataset
| 0.999549 |
2205.09978
|
Songlin Xu
|
Songlin Xu, Guanjie Wang, Ziyuan Fang, Guangwei Zhang, Guangzhu Shang,
Rongde Lu, Liqun He
|
HeadText: Exploring Hands-free Text Entry using Head Gestures by Motion
Sensing on a Smart Earpiece
|
23 pages
| null | null | null |
cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present HeadText, a hands-free technique on a smart earpiece for text
entry by motion sensing. Users input text utilizing only 7 head gestures for
key selection, word selection, word commitment and word cancelling tasks. Head
gesture recognition is supported by motion sensing on a smart earpiece to
capture head moving signals and machine learning algorithms (K-Nearest-Neighbor
(KNN) with a Dynamic Time Warping (DTW) distance measurement). A 10-participant
user study showed that HeadText could recognize 7 head gestures at an accuracy
of 94.29%. A second user study showed that HeadText achieved a maximum text
entry speed of 10.65 WPM and an average speed of 9.84 WPM. Finally, we
demonstrate potential applications of HeadText in hands-free scenarios for (a)
text entry for people with motor impairments, (b) private text entry, and (c)
socially acceptable text entry.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 06:13:36 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 01:14:06 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Xu",
"Songlin",
""
],
[
"Wang",
"Guanjie",
""
],
[
"Fang",
"Ziyuan",
""
],
[
"Zhang",
"Guangwei",
""
],
[
"Shang",
"Guangzhu",
""
],
[
"Lu",
"Rongde",
""
],
[
"He",
"Liqun",
""
]
] |
new_dataset
| 0.959096 |
2205.10101
|
Wang Jing
|
Jing Wang, Haotian Fan, Xiaoxia Hou, Yitian Xu, Tao Li, Xuechao Lu and
Lean Fu
|
MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer
with Multi-Stage Fusion
|
8 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Measuring the perceptual quality of images automatically is an essential task
in the area of computer vision, as degradations on image quality can exist in
many processes from image acquisition, transmission to enhancing. Many Image
Quality Assessment (IQA) algorithms have been designed to tackle this problem.
However, it still remains unsettled due to the various types of image
distortions and the lack of large-scale human-rated datasets. In this paper, we
propose a novel algorithm based on the Swin Transformer [31] with fused
features from multiple stages, which aggregates information from both local and
global features to better predict the quality. To address the issues of
small-scale datasets, relative rankings of images have been taken into account
together with regression loss to simultaneously optimize the model.
Furthermore, effective data augmentation strategies are also used to improve
the performance. In comparisons with previous works, experiments are carried
out on two standard IQA datasets and a challenge dataset. The results
demonstrate the effectiveness of our work. The proposed method outperforms
other methods on standard datasets and ranks 2nd in the no-reference track of
the NTIRE 2022 Perceptual Image Quality Assessment Challenge [53]. This verifies
that our method is promising for solving diverse IQA problems and can thus be
applied to real-world applications.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 11:34:35 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 06:39:01 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Wang",
"Jing",
""
],
[
"Fan",
"Haotian",
""
],
[
"Hou",
"Xiaoxia",
""
],
[
"Xu",
"Yitian",
""
],
[
"Li",
"Tao",
""
],
[
"Lu",
"Xuechao",
""
],
[
"Fu",
"Lean",
""
]
] |
new_dataset
| 0.997338 |
2205.10400
|
Chia-Chien Hung
|
Chia-Chien Hung, Anne Lauscher, Ivan Vuli\'c, Simone Paolo Ponzetto,
Goran Glava\v{s}
|
Multi2WOZ: A Robust Multilingual Dataset and Conversational Pretraining
for Task-Oriented Dialog
|
NAACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Research on (multi-domain) task-oriented dialog (TOD) has predominantly
focused on the English language, primarily due to the shortage of robust TOD
datasets in other languages, preventing the systematic investigation of
cross-lingual transfer for this crucial NLP application area. In this work, we
introduce Multi2WOZ, a new multilingual multi-domain TOD dataset, derived from
the well-established English dataset MultiWOZ, that spans four typologically
diverse languages: Chinese, German, Arabic, and Russian. In contrast to
concurrent efforts, Multi2WOZ contains gold-standard dialogs in target
languages that are directly comparable with development and test portions of
the English dataset, enabling reliable and comparative estimates of
cross-lingual transfer performance for TOD. We then introduce a new framework
for multilingual conversational specialization of pretrained language models
(PrLMs) that aims to facilitate cross-lingual transfer for arbitrary downstream
TOD tasks. Using such conversational PrLMs specialized for concrete target
languages, we systematically benchmark a number of zero-shot and few-shot
cross-lingual transfer approaches on two standard TOD tasks: Dialog State
Tracking and Response Retrieval. Our experiments show that, in most setups, the
best performance entails the combination of (i) conversational specialization
in the target language and (ii) few-shot transfer for the concrete TOD task.
Most importantly, we show that our conversational specialization in the target
language allows for an exceptionally sample-efficient few-shot transfer for
downstream TOD tasks.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 18:35:38 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Hung",
"Chia-Chien",
""
],
[
"Lauscher",
"Anne",
""
],
[
"Vulić",
"Ivan",
""
],
[
"Ponzetto",
"Simone Paolo",
""
],
[
"Glavaš",
"Goran",
""
]
] |
new_dataset
| 0.999659 |
2205.10411
|
Antonios Anastasopoulos
|
Cristian Ahumada, Claudio Gutierrez, Antonios Anastasopoulos
|
Educational Tools for Mapuzugun
|
To be presented at the 17th Workshop on Innovative Use of NLP for
Building Educational Applications
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mapuzugun is the language of the Mapuche people. Due to political and
historical reasons, its number of speakers has decreased and the language has
been excluded from the educational system in Chile and Argentina. For this
reason, it is very important to support the revitalization of the Mapuzugun in
all spaces and media of society. In this work we present a tool towards
supporting educational activities of Mapuzugun, tailored to the characteristics
of the language. The tool consists of three parts: design and development of an
orthography detector and converter; a morphological analyzer; and an informal
translator. We also present a case study with Mapuzugun students showing
promising results.
Short Abstract in Mapuzugun: T\"ufachi k\"uzaw pegelfi ki\~ne zugun
k\"uzawpey\"um kelluaetew pu mapuzugun chillkatufe kimal kizu ta\~ni zugun.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 03:19:32 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Ahumada",
"Cristian",
""
],
[
"Gutierrez",
"Claudio",
""
],
[
"Anastasopoulos",
"Antonios",
""
]
] |
new_dataset
| 0.999061 |
2205.10441
|
Alessandro Provetti
|
Paschalis Lagias, George D. Magoulas, Ylli Prifti and Alessandro
Provetti
|
Predicting Seriousness of Injury in a Traffic Accident: A New Imbalanced
Dataset and Benchmark
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper introduces a new dataset to assess the performance of machine
learning algorithms in the prediction of the seriousness of injury in a traffic
accident. The dataset is created by aggregating publicly available datasets
from the UK Department for Transport, which are drastically imbalanced with
missing attributes sometimes approaching 50\% of the overall data
dimensionality. The paper presents the data analysis pipeline starting from the
publicly available data of road traffic accidents and ending with predictors of
possible injuries and their degree of severity. It addresses the huge
incompleteness of public data with a MissForest model. The paper also
introduces two baseline approaches to create injury predictors: a supervised
artificial neural network and a reinforcement learning model. The dataset can
potentially stimulate diverse aspects of machine learning research on
imbalanced datasets and the two approaches can be used as baseline references
when researchers test more advanced learning algorithms in this area.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 21:15:26 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Lagias",
"Paschalis",
""
],
[
"Magoulas",
"George D.",
""
],
[
"Prifti",
"Ylli",
""
],
[
"Provetti",
"Alessandro",
""
]
] |
new_dataset
| 0.999788 |
2205.10442
|
Saurabh Kulshreshtha
|
Saurabh Kulshreshtha, Olga Kovaleva, Namrata Shivagunde, Anna
Rumshisky
|
Down and Across: Introducing Crossword-Solving as a New NLP Benchmark
|
Accepted as long paper at ACL 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Solving crossword puzzles requires diverse reasoning capabilities, access to
a vast amount of knowledge about language and the world, and the ability to
satisfy the constraints imposed by the structure of the puzzle. In this work,
we introduce solving crossword puzzles as a new natural language understanding
task. We release the specification of a corpus of crossword puzzles collected
from the New York Times daily crossword spanning 25 years and comprised of a
total of around nine thousand puzzles. These puzzles include a diverse set of
clues: historic, factual, word meaning, synonyms/antonyms, fill-in-the-blank,
abbreviations, prefixes/suffixes, wordplay, and cross-lingual, as well as clues
that depend on the answers to other clues. We separately release the
clue-answer pairs from these puzzles as an open-domain question answering
dataset containing over half a million unique clue-answer pairs. For the
question answering task, our baselines include several sequence-to-sequence and
retrieval-based generative models. We also introduce a non-parametric
constraint satisfaction baseline for solving the entire crossword puzzle.
Finally, we propose an evaluation framework which consists of several
complementary performance metrics.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 21:16:44 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Kulshreshtha",
"Saurabh",
""
],
[
"Kovaleva",
"Olga",
""
],
[
"Shivagunde",
"Namrata",
""
],
[
"Rumshisky",
"Anna",
""
]
] |
new_dataset
| 0.99876 |
2205.10464
|
Suguman Bansal
|
Suguman Bansal, Lydia Kavraki, Moshe Y. Vardi, Andrew Wells
|
Synthesis from Satisficing and Temporal Goals
| null | null | null | null |
cs.AI cs.LO cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Reactive synthesis from high-level specifications that combine hard
constraints expressed in Linear Temporal Logic LTL with soft constraints
expressed by discounted-sum (DS) rewards has applications in planning and
reinforcement learning. An existing approach combines techniques from LTL
synthesis with optimization for the DS rewards but has failed to yield a sound
algorithm. An alternative approach combining LTL synthesis with satisficing DS
rewards (rewards that achieve a threshold) is sound and complete for integer
discount factors, but, in practice, a fractional discount factor is desired.
This work extends the existing satisficing approach, presenting the first sound
algorithm for synthesis from LTL and DS rewards with fractional discount
factors. The utility of our algorithm is demonstrated on robotic planning
domains.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 23:46:31 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Bansal",
"Suguman",
""
],
[
"Kavraki",
"Lydia",
""
],
[
"Vardi",
"Moshe Y.",
""
],
[
"Wells",
"Andrew",
""
]
] |
new_dataset
| 0.997247 |
2205.10473
|
Andrew McNaughton Jr.
|
Andrew D. McNaughton, Mridula S. Bontha, Carter R. Knutson, Jenna A.
Pope, Neeraj Kumar
|
De novo design of protein target specific scaffold-based Inhibitors via
Reinforcement Learning
|
Published at the MLDD workshop, ICLR 2022
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Efficient design and discovery of target-driven molecules is a critical step
in facilitating lead optimization in drug discovery. Current approaches to
develop molecules for a target protein are intuition-driven, hampered by slow
iterative design-test cycles due to computational challenges in utilizing 3D
structural data, and ultimately limited by the expertise of the chemist -
leading to bottlenecks in molecular design. In this contribution, we propose a
novel framework, called 3D-MolGNN$_{RL}$, coupling reinforcement learning (RL)
to a deep generative model based on 3D-Scaffold to generate target candidates
specific to a protein building up atom by atom from the starting core scaffold.
3D-MolGNN$_{RL}$ provides an efficient way to optimize key features by
multi-objective reward function within a protein pocket using parallel graph
neural network models. The agent learns to build molecules in 3D space while
optimizing the activity, binding affinity, potency, and synthetic accessibility
of the candidates generated for infectious disease protein targets. Our
approach can serve as an interpretable artificial intelligence (AI) tool for
lead optimization with optimized activity, potency, and biophysical properties.
|
[
{
"version": "v1",
"created": "Sat, 21 May 2022 00:47:35 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"McNaughton",
"Andrew D.",
""
],
[
"Bontha",
"Mridula S.",
""
],
[
"Knutson",
"Carter R.",
""
],
[
"Pope",
"Jenna A.",
""
],
[
"Kumar",
"Neeraj",
""
]
] |
new_dataset
| 0.998774 |
2205.10553
|
Adarsh Ghimire
|
Adarsh Ghimire, Xiaoxiong Zhang, Sajid Javed, Jorge Dias, Naoufel
Werghi
|
Robot Person Following in Uniform Crowd Environment
| null |
ICRA Workshop 2022: ROBOTIC PERCEPTION AND MAPPING: EMERGING
TECHNIQUES
| null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Person-tracking robots have many applications, such as in security, elderly
care, and socializing robots. Such a task is particularly challenging when the
person is moving in a uniform crowd. Also, despite the significant progress of
trackers reported in the literature, state-of-the-art trackers have hardly
addressed person following in such scenarios. In this work, we focus on
improving the perceptivity of a robot for a person following task by developing
a robust and real-time applicable object tracker. We present a new robot person
tracking system with a new RGB-D tracker, Deep Tracking with RGB-D (DTRD) that
is resilient to tricky challenges introduced by the uniform crowd environment.
Our tracker utilizes transformer encoder-decoder architecture with RGB and
depth information to discriminate the target person from similar distractors.
Comprehensive experiments demonstrate that our tracker achieves higher
performance on two quantitative evaluation metrics, confirming its superiority
over other SOTA trackers.
|
[
{
"version": "v1",
"created": "Sat, 21 May 2022 10:20:14 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Ghimire",
"Adarsh",
""
],
[
"Zhang",
"Xiaoxiong",
""
],
[
"Javed",
"Sajid",
""
],
[
"Dias",
"Jorge",
""
],
[
"Werghi",
"Naoufel",
""
]
] |
new_dataset
| 0.997692 |
2205.10627
|
Alex Morehead
|
Xiao Chen, Alex Morehead, Jian Liu, Jianlin Cheng
|
DProQ: A Gated-Graph Transformer for Protein Complex Structure
Assessment
|
18 pages, 3 figures, 13 tables. Under review
| null | null | null |
cs.LG cs.AI q-bio.BM q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Proteins interact to form complexes to carry out essential biological
functions. Computational methods have been developed to predict the structures
of protein complexes. However, an important challenge in protein complex
structure prediction is to estimate the quality of predicted protein complex
structures without any knowledge of the corresponding native structures. Such
estimations can then be used to select high-quality predicted complex
structures to facilitate biomedical research such as protein function analysis
and drug discovery. We address this significant task with DProQ, which
introduces a gated neighborhood-modulating Graph Transformer (GGT) designed to
predict the quality of 3D protein complex structures. Notably, we incorporate
node and edge gates within a novel Graph Transformer framework to control
information flow during graph message passing. We train and evaluate DProQ on
four newly-developed datasets that we make publicly available in this work. Our
rigorous experiments demonstrate that DProQ achieves state-of-the-art
performance in ranking protein complex structures.
|
[
{
"version": "v1",
"created": "Sat, 21 May 2022 15:41:46 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Chen",
"Xiao",
""
],
[
"Morehead",
"Alex",
""
],
[
"Liu",
"Jian",
""
],
[
"Cheng",
"Jianlin",
""
]
] |
new_dataset
| 0.998997 |
2205.10712
|
Yash Kant
|
Yash Kant, Arun Ramachandran, Sriram Yenamandra, Igor Gilitschenski,
Dhruv Batra, Andrew Szot, Harsh Agrawal
|
Housekeep: Tidying Virtual Households using Commonsense Reasoning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Housekeep, a benchmark to evaluate commonsense reasoning in the
home for embodied AI. In Housekeep, an embodied agent must tidy a house by
rearranging misplaced objects without explicit instructions specifying which
objects need to be rearranged. Instead, the agent must learn from and is
evaluated against human preferences of which objects belong where in a tidy
house. Specifically, we collect a dataset of where humans typically place
objects in tidy and untidy houses constituting 1799 objects, 268 object
categories, 585 placements, and 105 rooms. Next, we propose a modular baseline
approach for Housekeep that integrates planning, exploration, and navigation.
It leverages a fine-tuned large language model (LLM) trained on an internet
text corpus for effective planning. We show that our baseline agent generalizes
to rearranging unseen objects in unknown environments. See our webpage for more
details: https://yashkant.github.io/housekeep/
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 02:37:09 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Kant",
"Yash",
""
],
[
"Ramachandran",
"Arun",
""
],
[
"Yenamandra",
"Sriram",
""
],
[
"Gilitschenski",
"Igor",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Szot",
"Andrew",
""
],
[
"Agrawal",
"Harsh",
""
]
] |
new_dataset
| 0.999857 |
2205.10782
|
Or Honovich
|
Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy
|
Instruction Induction: From Few Examples to Natural Language Task
Descriptions
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models are able to perform a task by conditioning on a few
input-output demonstrations - a paradigm known as in-context learning. We show
that language models can explicitly infer an underlying task from a few
demonstrations by prompting them to generate a natural language instruction
that fits the examples. To explore this ability, we introduce the instruction
induction challenge, compile a dataset consisting of 24 tasks, and define a
novel evaluation metric based on executing the generated instruction. We
discover that, to a large extent, the ability to generate instructions does
indeed emerge when using a model that is both large enough and aligned to
follow instructions; InstructGPT achieves 65.7% of human performance in our
execution-based metric, while the original GPT-3 model reaches only 9.8% of
human performance. This surprising result suggests that instruction induction
might be a viable learning paradigm in and of itself, where instead of fitting
a set of latent continuous parameters to the data, one searches for the best
description in the natural language hypothesis space.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 09:22:37 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Honovich",
"Or",
""
],
[
"Shaham",
"Uri",
""
],
[
"Bowman",
"Samuel R.",
""
],
[
"Levy",
"Omer",
""
]
] |
new_dataset
| 0.999538 |
2205.10850
|
Yubo Xie
|
Yubo Xie, Junze Li, Pearl Pu
|
AFEC: A Knowledge Graph Capturing Social Intelligence in Casual
Conversations
|
11 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces AFEC, an automatically curated knowledge graph based on
people's day-to-day casual conversations. The knowledge captured in this graph
bears potential for conversational systems to understand how people offer
acknowledgement, consoling, and a wide range of empathetic responses in social
conversations. For this body of knowledge to be comprehensive and meaningful,
we curated a large-scale corpus from the r/CasualConversation SubReddit. After
taking the first two turns of all conversations, we obtained 134K speaker nodes
and 666K listener nodes. To demonstrate how a chatbot can converse in social
settings, we built a retrieval-based chatbot and compared it with existing
empathetic dialog models. Experiments show that our model is capable of
generating much more diverse responses (at least 15% higher diversity scores in
human evaluation), while still outperforming two out of the four baselines in
terms of response quality.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 15:19:12 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Xie",
"Yubo",
""
],
[
"Li",
"Junze",
""
],
[
"Pu",
"Pearl",
""
]
] |
new_dataset
| 0.973893 |
2205.10851
|
Dong Wang
|
Jie Zhao, Jingshu Zhang, Dongdong Li, Dong Wang
|
Vision-based Anti-UAV Detection and Tracking
|
Accepted by IEEE Transactions on Intelligent Transportation Systems
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Unmanned aerial vehicles (UAV) have been widely used in various fields, and
their invasion of security and privacy has aroused social concern. Several
detection and tracking systems for UAVs have been introduced in recent years,
but most of them are based on radio frequency, radar, and other media. We
assume that the field of computer vision is mature enough to detect and track
invading UAVs. Thus we propose a visible light mode dataset called Dalian
University of Technology Anti-UAV dataset, DUT Anti-UAV for short. It contains
a detection dataset with a total of 10,000 images and a tracking dataset with
20 videos that include short-term and long-term sequences. All frames and
images are manually annotated precisely. We use this dataset to train several
existing detection algorithms and evaluate the algorithms' performance. Several
tracking methods are also tested on our tracking dataset. Furthermore, we
propose a clear and simple tracking algorithm combined with detection that
inherits the detector's high precision. Extensive experiments show that the
tracking performance is improved considerably after fusing detection, thus
providing a new attempt at UAV tracking using our dataset. The datasets and
results are publicly available at: https://github.com/wangdongdut/DUT-Anti-UAV
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 15:21:45 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Zhao",
"Jie",
""
],
[
"Zhang",
"Jingshu",
""
],
[
"Li",
"Dongdong",
""
],
[
"Wang",
"Dong",
""
]
] |
new_dataset
| 0.999678 |
2205.10857
|
Han Wang
|
Han Wang, Ruiliu Fu, Xuejun Zhang, Jun Zhou
|
RVAE-LAMOL: Residual Variational Autoencoder to Enhance Lifelong
Language Learning
|
This paper has been accepted for publication at IJCNN 2022 on IEEE
WCCI 2022; Oral presentation
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lifelong Language Learning (LLL) aims to train a neural network to learn a
stream of NLP tasks while retaining knowledge from previous tasks. However,
previous works which followed data-free constraint still suffer from
catastrophic forgetting issue, where the model forgets what it just learned
from previous tasks. In order to alleviate catastrophic forgetting, we propose
the residual variational autoencoder (RVAE) to enhance LAMOL, a recent LLL
model, by mapping different tasks into a limited unified semantic space. In
this space, previous tasks can easily be corrected to their own distributions
by pseudo samples. Furthermore, we propose an identity task to make the model
discriminative in recognizing which task a sample belongs to. For training
RVAE-LAMOL better, we propose a novel training scheme Alternate Lag Training.
In the experiments, we test RVAE-LAMOL on permutations of three datasets from
DecaNLP. The experimental results demonstrate that RVAE-LAMOL outperforms
na\"ive LAMOL on all permutations and generates more meaningful pseudo-samples.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 15:52:35 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Wang",
"Han",
""
],
[
"Fu",
"Ruiliu",
""
],
[
"Zhang",
"Xuejun",
""
],
[
"Zhou",
"Jun",
""
]
] |
new_dataset
| 0.951188 |
2205.10866
|
Paola Merlo
|
Paola Merlo, Aixiu An and Maria A. Rodriguez
|
Blackbird's language matrices (BLMs): a new benchmark to investigate
disentangled generalisation in neural networks
|
15 pages, 9 figures, 1 table
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Current successes of machine learning architectures are based on
computationally expensive algorithms and prohibitively large amounts of data.
We need to develop tasks and data to train networks to reach more complex and
more compositional skills. In this paper, we illustrate Blackbird's language
matrices (BLMs), a novel grammatical dataset developed to test a linguistic
variant of Raven's progressive matrices, an intelligence test usually based on
visual stimuli. The dataset consists of 44800 sentences, generatively
constructed to support investigations of current models' linguistic mastery of
grammatical agreement rules and their ability to generalise them. We present
the logic of the dataset, the method to automatically construct data on a large
scale and the architecture to learn them. Through error analysis and several
experiments on variations of the dataset, we demonstrate that this language
task and the data that instantiate it provide a new challenging testbed to
understand generalisation and abstraction.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 16:51:24 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Merlo",
"Paola",
""
],
[
"An",
"Aixiu",
""
],
[
"Rodriguez",
"Maria A.",
""
]
] |
new_dataset
| 0.999228 |
2205.10953
|
Nader Zare
|
Nader Zare, Arad Firouzkouhi, Omid Amini, Mahtab Sarvmaili, Aref
Sayareh, Saba Ramezani Rad, Stan Matwin, Amilcar Soares
|
CYRUS Soccer Simulation 2D Team Description Paper 2022
| null | null | null | null |
cs.AI cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Soccer Simulation 2D League is one of the major leagues of RoboCup
competitions. In a Soccer Simulation 2D (SS2D) game, two teams of 11 players
and one coach compete against each other. The players are only allowed to
communicate with the server that is called Soccer Simulation Server. This paper
introduces the previous and current research of the CYRUS soccer simulation
team, the champion of RoboCup 2021. We present our idea for improving Unmarking
Decisioning and Positioning by using a Pass Prediction Deep Neural Network.
Based on our experimental results, this idea has proven effective in increasing
the winning rate of CYRUS against opponents.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 23:16:37 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Zare",
"Nader",
""
],
[
"Firouzkouhi",
"Arad",
""
],
[
"Amini",
"Omid",
""
],
[
"Sarvmaili",
"Mahtab",
""
],
[
"Sayareh",
"Aref",
""
],
[
"Rad",
"Saba Ramezani",
""
],
[
"Matwin",
"Stan",
""
],
[
"Soares",
"Amilcar",
""
]
] |
new_dataset
| 0.99529 |
2205.11004
|
Brian Montambault
|
Brian Montambault, Camelia D. Brumar, Michael Behrisch, Remco Chang
|
PIXAL: Anomaly Reasoning with Visual Analytics
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Anomaly detection remains an open challenge in many application areas. While
there are a number of available machine learning algorithms for detecting
anomalies, analysts are frequently asked to take additional steps in reasoning
about the root cause of the anomalies and form actionable hypotheses that can
be communicated to business stakeholders. Without the appropriate tools, this
reasoning process is time-consuming, tedious, and potentially error-prone. In
this paper we present PIXAL, a visual analytics system developed following an
iterative design process with professional analysts responsible for anomaly
detection. PIXAL is designed to fill gaps in existing tools commonly used by
analysts to reason with and make sense of anomalies. PIXAL consists of three
components: (1) an algorithm that finds patterns by aggregating multiple
anomalous data points using first-order predicates, (2) a visualization tool
that allows the analyst to build trust in the algorithmically-generated
predicates by performing comparative and counterfactual analyses, and (3) a
visualization tool that helps the analyst generate and validate hypotheses by
exploring which features in the data most explain the anomalies. Finally, we
present the results of a qualitative observational study with professional
analysts. The results indicate that PIXAL facilitates the
anomaly reasoning process, allowing analysts to make sense of anomalies and
generate hypotheses that are meaningful and actionable to business
stakeholders.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 02:36:55 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Montambault",
"Brian",
""
],
[
"Brumar",
"Camelia D.",
""
],
[
"Behrisch",
"Michael",
""
],
[
"Chang",
"Remco",
""
]
] |
new_dataset
| 0.966613 |
2205.11008
|
Peilin Zhou
|
Peilin Zhou, Dading Chong, Helin Wang, Qingcheng Zeng
|
Calibrate and Refine! A Novel and Agile Framework for ASR-error Robust
Intent Detection
|
Submit to INTERSPEECH 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The past ten years have witnessed the rapid development of text-based intent
detection, whose benchmark performances have already been taken to a remarkable
level by deep learning techniques. However, automatic speech recognition (ASR)
errors are inevitable in real-world applications due to environmental noise,
unique speech patterns, etc., leading to a sharp performance drop in
state-of-the-art text-based intent detection models. Essentially, this
phenomenon is caused by the semantic drift brought by ASR errors and most
existing works tend to focus on designing new model structures to reduce its
impact, which comes at the expense of versatility and flexibility. Different
from previous one-piece models, in this paper we propose a novel and agile
framework
called CR-ID for ASR error robust intent detection with two plug-and-play
modules, namely semantic drift calibration module (SDCM) and phonemic
refinement module (PRM), which are both model-agnostic and thus could be easily
integrated to any existing intent detection models without modifying their
structures. Experimental results on the SNIPS dataset show that our proposed
CR-ID framework achieves competitive performance and outperforms all the
baseline methods on ASR outputs, which verifies that CR-ID can effectively
alleviate the
semantic drift caused by ASR errors.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 02:54:11 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Zhou",
"Peilin",
""
],
[
"Chong",
"Dading",
""
],
[
"Wang",
"Helin",
""
],
[
"Zeng",
"Qingcheng",
""
]
] |
new_dataset
| 0.967324 |
2205.11047
|
Stan Birchfield
|
Yunzhi Lin, Jonathan Tremblay, Stephen Tyree, Patricio A. Vela, Stan
Birchfield
|
Keypoint-Based Category-Level Object Pose Tracking from an RGB Sequence
with Uncertainty Estimation
|
ICRA 2022. Project site is at
https://sites.google.com/view/centerposetrack
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a single-stage, category-level 6-DoF pose estimation algorithm
that simultaneously detects and tracks instances of objects within a known
category. Our method takes as input the previous and current frame from a
monocular RGB video, as well as predictions from the previous frame, to predict
the bounding cuboid and 6-DoF pose (up to scale). Internally, a deep network
predicts distributions over object keypoints (vertices of the bounding cuboid)
in image coordinates, after which a novel probabilistic filtering process
integrates across estimates before computing the final pose using PnP. Our
framework allows the system to take previous uncertainties into consideration
when predicting the current frame, resulting in predictions that are more
accurate and stable than single frame methods. Extensive experiments show that
our method outperforms existing approaches on the challenging Objectron
benchmark of annotated object videos. We also demonstrate the usability of our
work in an augmented reality setting.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 05:20:22 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Lin",
"Yunzhi",
""
],
[
"Tremblay",
"Jonathan",
""
],
[
"Tyree",
"Stephen",
""
],
[
"Vela",
"Patricio A.",
""
],
[
"Birchfield",
"Stan",
""
]
] |
new_dataset
| 0.997279 |
2205.11090
|
Kai Wang
|
Kai Wang, Bo Zhao, Xiangyu Peng, Zheng Zhu, Jiankang Deng, Xinchao
Wang, Hakan Bilen, Yang You
|
FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders
|
A new paradigm for privacy-preserving face recognition via MAE
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face recognition, as one of the most successful applications in artificial
intelligence, has been widely used in security, administration, advertising,
and healthcare. However, the privacy issues of public face datasets have
attracted increasing attention in recent years. Previous works simply mask most
areas of faces or synthesize samples using generative models to construct
privacy-preserving face datasets, which overlooks the trade-off between privacy
protection and data utility. In this paper, we propose a novel framework
FaceMAE, where the face privacy and recognition performance are considered
simultaneously. Firstly, randomly masked face images are used to train the
reconstruction module in FaceMAE. We tailor the instance relation matching
(IRM) module to minimize the distribution gap between real faces and FaceMAE
reconstructed ones. During the deployment phase, we use trained FaceMAE to
reconstruct images from masked faces of unseen identities without extra
training. The risk of privacy leakage is measured based on face retrieval
between reconstructed and original datasets. Experiments prove that the
identities of reconstructed images are difficult to retrieve. We also
perform sufficient privacy-preserving face recognition on several public face
datasets (i.e., CASIA-WebFace and WebFace260M). Compared to previous
state-of-the-art methods, FaceMAE consistently \textbf{reduces the error rate
by at least 50\%} on
LFW, CFP-FP and AgeDB.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 07:19:42 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Wang",
"Kai",
""
],
[
"Zhao",
"Bo",
""
],
[
"Peng",
"Xiangyu",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Wang",
"Xinchao",
""
],
[
"Bilen",
"Hakan",
""
],
[
"You",
"Yang",
""
]
] |
new_dataset
| 0.999565 |
2205.11111
|
Cyrile Delestre
|
Cyrile Delestre, Abibatou Amar
|
DistilCamemBERT: a distillation of the French model CamemBERT
|
in French language. CAp (Conf{\'e}rence sur l'Apprentissage
automatique), Jul 2022, Vannes, France
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern Natural Language Processing (NLP) models based on Transformer
structures represent the state of the art in terms of performance on very
diverse tasks. However, these models are complex and represent several hundred
million parameters for the smallest of them. This may hinder their adoption at
the industrial level, making it difficult to scale up to a reasonable
infrastructure and/or to comply with societal and environmental
responsibilities. To this end, we present in this paper a model that
drastically reduces the computational cost of a well-known French model
(CamemBERT), while preserving good performance.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 08:04:58 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Delestre",
"Cyrile",
""
],
[
"Amar",
"Abibatou",
""
]
] |
new_dataset
| 0.981319 |
2205.11212
|
Mojtaba Eshghie
|
Mojtaba Eshghie, Li Quan, Gustav Andersson Kasche, Filip Jacobson,
Cosimo Bassi, Cyrille Artho
|
CircleChain: Tokenizing Products with a Role-based Scheme for a Circular
Economy
| null | null | null | null |
cs.DC cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
In a circular economy, tracking the flow of second-life components for
quality control is critical. Tokenization can enhance the transparency of the
flow of second-life components. However, simple tokenization does not
correspond to real economic models and lacks the ability to finely manage
complex business processes. In particular, existing systems have to take into
account the different roles of the parties in the supply chain. Based on the
Algorand blockchain, we propose a role-based token management scheme, which can
achieve authentication, synthesis, circulation, and reuse of these second-life
components in a trustless environment. The proposed scheme not only achieves
fine-grained and scalable second-life component management, but also enables
on-chain trading, subsidies, and green-bond issuance. Furthermore, we
implemented and performed scalability tests for the proposed architecture on
Algorand blockchain using its smart contracts and Algorand Standard Assets
(ASA). The open-source implementation and tests, along with results, are
available on our GitHub page.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 11:43:31 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Eshghie",
"Mojtaba",
""
],
[
"Quan",
"Li",
""
],
[
"Kasche",
"Gustav Andersson",
""
],
[
"Jacobson",
"Filip",
""
],
[
"Bassi",
"Cosimo",
""
],
[
"Artho",
"Cyrille",
""
]
] |
new_dataset
| 0.990134 |
2205.11242
|
Anselmo Ferreira
|
Anselmo Ferreira, Changcheng Chen and Mauro Barni
|
Fusing Multiscale Texture and Residual Descriptors for Multilevel 2D
Barcode Rebroadcasting Detection
| null | null |
10.1109/WIFS53200.2021.9648391
| null |
cs.CV cs.AI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, 2D barcodes have been widely used for advertisement, mobile
payment, and product authentication. However, in applications related to
product authentication, an authentic 2D barcode can be illegally copied and
attached to a counterfeited product in such a way to bypass the authentication
scheme. In this paper, we employ a proprietary 2D barcode pattern and use
multimedia forensics methods to analyse the scanning and printing artefacts
resulting from the copy (rebroadcasting) attack. A diverse and complementary
feature set is proposed to quantify the barcode texture distortions introduced
during the illegal copying process. The proposed features are composed of
global and local descriptors, which characterize the multi-scale texture
appearance and the points of interest distribution, respectively. The proposed
descriptors are compared against some existing texture descriptors and deep
learning-based approaches under various scenarios, such as cross-datasets and
cross-size. Experimental results highlight the practicality of the proposed
method in real-world settings.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 06:26:20 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Ferreira",
"Anselmo",
""
],
[
"Chen",
"Changcheng",
""
],
[
"Barni",
"Mauro",
""
]
] |
new_dataset
| 0.996876 |
2205.11388
|
Tom\'a\v{s} Ko\v{c}isk\'y
|
Adam Li\v{s}ka, Tom\'a\v{s} Ko\v{c}isk\'y, Elena Gribovskaya, Tayfun
Terzi, Eren Sezener, Devang Agrawal, Cyprien de Masson d'Autume, Tim
Scholtes, Manzil Zaheer, Susannah Young, Ellen Gilsenan-McMahon, Sophia
Austin, Phil Blunsom, Angeliki Lazaridou
|
StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in
Question Answering Models
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge and language understanding of models evaluated through question
answering (QA) has been usually studied on static snapshots of knowledge, like
Wikipedia. However, our world is dynamic, evolves over time, and our models'
knowledge becomes outdated. To study how semi-parametric QA models and their
underlying parametric language models (LMs) adapt to evolving knowledge, we
construct a new large-scale dataset, StreamingQA, with human written and
generated questions asked on a given date, to be answered from 14 years of
time-stamped news articles. We evaluate our models quarterly as they read new
articles not seen in pre-training. We show that parametric models can be
updated without full retraining, while avoiding catastrophic forgetting. For
semi-parametric models, adding new articles into the search space allows for
rapid adaptation, however, models with an outdated underlying LM under-perform
those with a retrained LM. For questions about higher-frequency named entities,
parametric updates are particularly beneficial. In our dynamic world, the
StreamingQA dataset enables a more realistic evaluation of QA models, and our
experiments highlight several promising directions for future research.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 15:33:41 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Liška",
"Adam",
""
],
[
"Kočiský",
"Tomáš",
""
],
[
"Gribovskaya",
"Elena",
""
],
[
"Terzi",
"Tayfun",
""
],
[
"Sezener",
"Eren",
""
],
[
"Agrawal",
"Devang",
""
],
[
"d'Autume",
"Cyprien de Masson",
""
],
[
"Scholtes",
"Tim",
""
],
[
"Zaheer",
"Manzil",
""
],
[
"Young",
"Susannah",
""
],
[
"Gilsenan-McMahon",
"Ellen",
""
],
[
"Austin",
"Sophia",
""
],
[
"Blunsom",
"Phil",
""
],
[
"Lazaridou",
"Angeliki",
""
]
] |
new_dataset
| 0.996291 |
2205.11389
|
Muhammed Omer Sayin
|
Muhammed O. Sayin and Kaiqing Zhang and Asuman Ozdaglar
|
Fictitious Play in Markov Games with Single Controller
|
Accepted to ACM Conference on Economics and Computation (EC) 2022
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Certain but important classes of strategic-form games, including zero-sum and
identical-interest games, have the fictitious-play-property (FPP), i.e.,
beliefs formed in fictitious play dynamics always converge to a Nash
equilibrium (NE) in the repeated play of these games. Such convergence results
are seen as a (behavioral) justification for the game-theoretical equilibrium
analysis. Markov games (MGs), also known as stochastic games, generalize the
repeated play of strategic-form games to dynamic multi-state settings with
Markovian state transitions. In particular, MGs are standard models for
multi-agent reinforcement learning -- a reviving research area in learning and
games, and their game-theoretical equilibrium analyses have also been conducted
extensively. However, whether certain classes of MGs have the FPP or not (i.e.,
whether there is a behavioral justification for equilibrium analysis or not)
remains largely elusive. In this paper, we study a new variant of fictitious
play dynamics for MGs and show its convergence to an NE in n-player
identical-interest MGs in which a single player controls the state transitions.
Such games are of interest in communications, control, and economics
applications. Our result together with the recent results in [Sayin et al.
2020] establishes the FPP of two-player zero-sum MGs and n-player
identical-interest MGs with a single controller (standing at two different ends
of the MG spectrum from fully competitive to fully cooperative).
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 15:34:41 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Sayin",
"Muhammed O.",
""
],
[
"Zhang",
"Kaiqing",
""
],
[
"Ozdaglar",
"Asuman",
""
]
] |
new_dataset
| 0.997992 |
2205.11465
|
Alex Wang
|
Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, Samuel R.
Bowman
|
SQuALITY: Building a Long-Document Summarization Dataset the Hard Way
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Summarization datasets are often assembled either by scraping naturally
occurring public-domain summaries -- which are nearly always in
difficult-to-work-with technical domains -- or by using approximate heuristics
to extract them from everyday text -- which frequently yields unfaithful
summaries. In this work, we turn to a slower but more straightforward approach
to developing summarization benchmark data: We hire highly-qualified
contractors to read stories and write original summaries from scratch. To
amortize reading time, we collect five summaries per document, with the first
giving an overview and the subsequent four addressing specific questions. We
use this protocol to collect SQuALITY, a dataset of question-focused summaries
built on the same public-domain short stories as the multiple-choice dataset
QuALITY (Pang et al., 2021). Experiments with state-of-the-art summarization
systems show that our dataset is challenging and that existing automatic
evaluation metrics are weak indicators of quality.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 17:02:07 GMT"
}
] | 2022-05-24T00:00:00 |
[
[
"Wang",
"Alex",
""
],
[
"Pang",
"Richard Yuanzhe",
""
],
[
"Chen",
"Angelica",
""
],
[
"Phang",
"Jason",
""
],
[
"Bowman",
"Samuel R.",
""
]
] |
new_dataset
| 0.999259 |
1910.03090
|
Fatih Cagatay Akyon
|
Fatih Cagatay Akyon, Esat Kalfaoglu
|
Instagram Fake and Automated Account Detection
| null |
2019 Innovations in Intelligent Systems and Applications
Conference (ASYU)
|
10.1109/ASYU48272.2019.8946437
| null |
cs.IR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fake engagement is one of the significant problems in Online Social Networks
(OSNs) which is used to increase the popularity of an account in an inorganic
manner. The detection of fake engagement is crucial because it leads to loss of
money for businesses, wrong audience targeting in advertising, wrong product
prediction systems, and an unhealthy social network environment. This study
concerns the detection of fake and automated accounts, which lead to fake
engagement on Instagram. Prior to this work, there was no publicly available
dataset for fake and automated accounts. For this purpose, two datasets have
been published for the detection of fake and automated accounts. For the
detection of these accounts, machine learning algorithms like Naive Bayes,
Logistic Regression, Support Vector Machines and Neural Networks are applied.
Additionally, for the detection of automated accounts, a cost-sensitive genetic
algorithm is proposed to handle the unnatural bias in the dataset. To deal with
the class imbalance problem in the fake dataset, the SMOTE-NC algorithm is implemented.
For the automated and fake account detection datasets, 86% and 96%
classification accuracies are obtained, respectively.
|
[
{
"version": "v1",
"created": "Fri, 13 Sep 2019 12:51:01 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Oct 2019 10:08:35 GMT"
},
{
"version": "v3",
"created": "Thu, 19 May 2022 20:04:52 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Akyon",
"Fatih Cagatay",
""
],
[
"Kalfaoglu",
"Esat",
""
]
] |
new_dataset
| 0.996101 |
2011.12807
|
Thibault Maho
|
Thibault Maho, Teddy Furon, Erwan Le Merrer
|
SurFree: a fast surrogate-free black-box attack
|
8 pages
|
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, CVPR 2021
| null | null |
cs.CR cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning classifiers are critically prone to evasion attacks.
Adversarial examples are slightly modified inputs that are then misclassified,
while remaining perceptively close to their originals. The last couple of years
have witnessed a striking decrease in the number of queries a black-box attack
submits to the target classifier in order to forge adversarials. This
particularly concerns the black-box score-based setup, where the attacker has
access to the top predicted probabilities: the number of queries went from
millions to less than a thousand. This paper presents SurFree, a geometrical
approach that achieves a similar drastic reduction in the amount of queries in
the hardest setup: black box decision-based attacks (only the top-1 label is
available). We first highlight that the most recent attacks in that setup,
HSJA, QEBA and GeoDA all perform costly gradient surrogate estimations. SurFree
proposes to bypass these, by instead focusing on careful trials along diverse
directions, guided by precise indications of geometrical properties of the
classifier decision boundaries. We motivate this geometric approach before
performing a head-to-head comparison with previous attacks, with the number of
queries as a first-class citizen. We exhibit a faster distortion decay under
low query budgets (a few hundred to a thousand), while remaining competitive at
higher query budgets.
|
[
{
"version": "v1",
"created": "Wed, 25 Nov 2020 15:08:19 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Maho",
"Thibault",
""
],
[
"Furon",
"Teddy",
""
],
[
"Merrer",
"Erwan Le",
""
]
] |
new_dataset
| 0.962544 |
2104.08223
|
Alexander Richard
|
Alexander Richard, Michael Zollhoefer, Yandong Wen, Fernando de la
Torre, Yaser Sheikh
|
MeshTalk: 3D Face Animation from Speech using Cross-Modality
Disentanglement
|
updated link to github repository and supplemental video
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a generic method for generating full facial 3D animation
from speech. Existing approaches to audio-driven facial animation exhibit
uncanny or static upper face animation, fail to produce accurate and plausible
co-articulation or rely on person-specific models that limit their scalability.
To improve upon existing models, we propose a generic audio-driven facial
animation approach that achieves highly realistic motion synthesis results for
the entire face. At the core of our approach is a categorical latent space for
facial animation that disentangles audio-correlated and audio-uncorrelated
information based on a novel cross-modality loss. Our approach ensures highly
accurate lip motion, while also synthesizing plausible animation of the parts
of the face that are uncorrelated to the audio signal, such as eye blinks and
eye brow motion. We demonstrate that our approach outperforms several baselines
and obtains state-of-the-art quality both qualitatively and quantitatively. A
perceptual user study demonstrates that our approach is deemed more realistic
than the current state-of-the-art in over 75% of cases. We recommend watching
the supplemental video before reading the paper:
https://github.com/facebookresearch/meshtalk
|
[
{
"version": "v1",
"created": "Fri, 16 Apr 2021 17:05:40 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2022 17:57:36 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Richard",
"Alexander",
""
],
[
"Zollhoefer",
"Michael",
""
],
[
"Wen",
"Yandong",
""
],
[
"de la Torre",
"Fernando",
""
],
[
"Sheikh",
"Yaser",
""
]
] |
new_dataset
| 0.982303 |
2105.09978
|
Ayrat Khalimov
|
L\'eo Exibard, Emmanuel Filiot, Ayrat Khalimov
|
A Generic Solution to Register-bounded Synthesis with an Application to
Discrete Orders
|
Previously this version appeared as arXiv:2205.01952 which was
submitted as a new work by accident. This is a full version of same-name
paper accepted to ICALP'22
| null |
10.4230/LIPIcs.ICALP.2022.116
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study synthesis of reactive systems interacting with environments using an
infinite data domain. A popular formalism for specifying and modelling such
systems is register automata and transducers. They extend finite-state automata
by adding registers to store data values and to compare the incoming data
values against stored ones. Synthesis from nondeterministic or universal
register automata is undecidable in general. However, its register-bounded
variant, where additionally a bound on the number of registers in a sought
transducer is given, is known to be decidable for universal register automata
which can compare data for equality, i.e., for data domain (N,=). This paper
extends the decidability border to the domain (N,<) of natural numbers with
linear order. Our solution is generic: we define a sufficient condition on data
domains (regular approximability) for decidability of register-bounded
synthesis. The condition is satisfied by natural data domains like (N,<). It
allows one to use simple language-theoretic arguments and avoid technical
game-theoretic reasoning. Further, by defining a generic notion of reducibility
between data domains, we show the decidability of synthesis in the domain
(N^d,<^d) of tuples of numbers equipped with the component-wise partial order
and in the domain (\Sigma^*,\prec) of finite strings with the prefix relation.
|
[
{
"version": "v1",
"created": "Thu, 20 May 2021 18:21:21 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Sep 2021 06:54:48 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Oct 2021 12:41:25 GMT"
},
{
"version": "v4",
"created": "Fri, 20 May 2022 12:15:50 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Exibard",
"Léo",
""
],
[
"Filiot",
"Emmanuel",
""
],
[
"Khalimov",
"Ayrat",
""
]
] |
new_dataset
| 0.99528 |
2109.11797
|
Yuan Yao
|
Yuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua,
Maosong Sun
|
CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models
|
Work in progress
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-Trained Vision-Language Models (VL-PTMs) have shown promising
capabilities in grounding natural language in image data, facilitating a broad
variety of cross-modal tasks. However, we note that there exists a significant
gap between the objective forms of model pre-training and fine-tuning,
resulting in a need for large amounts of labeled data to stimulate the visual
grounding capability of VL-PTMs for downstream tasks. To address the challenge,
we present Cross-modal Prompt Tuning (CPT, alternatively, Colorful Prompt
Tuning), a novel paradigm for tuning VL-PTMs, which reformulates visual
grounding into a fill-in-the-blank problem with color-based co-referential
markers in image and text, maximally mitigating the gap. In this way, CPT
enables strong few-shot and even zero-shot visual grounding capabilities of
VL-PTMs. Comprehensive experimental results show that the prompt-tuned VL-PTMs
outperform their fine-tuned counterparts by a large margin (e.g., 17.3%
absolute accuracy improvement, and 73.8% relative standard deviation reduction
on average with one shot in RefCOCO evaluation). We make the data and code for
this paper publicly available at https://github.com/thunlp/CPT.
|
[
{
"version": "v1",
"created": "Fri, 24 Sep 2021 08:07:29 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Oct 2021 09:18:15 GMT"
},
{
"version": "v3",
"created": "Fri, 20 May 2022 07:05:41 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Yao",
"Yuan",
""
],
[
"Zhang",
"Ao",
""
],
[
"Zhang",
"Zhengyan",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Chua",
"Tat-Seng",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.998958 |
2111.06336
|
Tomer Wullach
|
Tomer Wullach, Amir Adler, Einat Minkov
|
Character-level HyperNetworks for Hate Speech Detection
| null | null |
10.1016/j.eswa.2022.117571
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The massive spread of hate speech, hateful content targeted at specific
subpopulations, is a problem of critical social importance. Automated methods
of hate speech detection typically employ state-of-the-art deep learning
(DL)-based text classifiers: large pretrained neural language models of over 100
million parameters, adapting these models to the task of hate speech detection
using relevant labeled datasets. Unfortunately, there are only a few public
labeled datasets of limited size that are available for this purpose. We make
several contributions with high potential for advancing this state of affairs.
We present HyperNetworks for hate speech detection, a special class of DL
networks whose weights are regulated by a small-scale auxiliary network. These
architectures operate at character-level, as opposed to word or subword-level,
and are several orders of magnitude smaller compared to the popular DL
classifiers. We further show that training hate detection classifiers using
additional large amounts of automatically generated examples is beneficial in
general, yet this practice especially boosts the performance of the proposed
HyperNetworks. We report the results of extensive experiments, assessing the
performance of multiple neural architectures on hate detection using five
public datasets. The assessed methods include the pretrained language models of
BERT, RoBERTa, ALBERT, MobileBERT and CharBERT, a variant of BERT that
incorporates character alongside subword embeddings. In addition to the
traditional setup of within-dataset evaluation, we perform cross-dataset
evaluation experiments, testing the generalization of the various models in
conditions of data shift. Our results show that the proposed HyperNetworks
achieve performance that is competitive, and better in some cases, than these
pretrained language models, while being smaller by orders of magnitude.
|
[
{
"version": "v1",
"created": "Thu, 11 Nov 2021 17:48:31 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 19:35:49 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Wullach",
"Tomer",
""
],
[
"Adler",
"Amir",
""
],
[
"Minkov",
"Einat",
""
]
] |
new_dataset
| 0.99558 |
2202.11891
|
Mitchell Doughty
|
Mitchell Doughty and Nilesh R. Ghugre
|
HMD-EgoPose: Head-Mounted Display-Based Egocentric Marker-Less Tool and
Hand Pose Estimation for Augmented Surgical Guidance
|
Accepted for publication in IJCARS; 17 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The success or failure of modern computer-assisted surgery procedures hinges
on the precise six-degree-of-freedom (6DoF) position and orientation (pose)
estimation of tracked instruments and tissue. In this paper, we present
HMD-EgoPose, a single-shot learning-based approach to hand and object pose
estimation and demonstrate state-of-the-art performance on a benchmark dataset
for monocular red-green-blue (RGB) 6DoF marker-less hand and surgical
instrument pose tracking. Further, we reveal the capacity of our HMD-EgoPose
framework for performant 6DoF pose estimation on a commercially available
optical see-through head-mounted display (OST-HMD) through a low-latency
streaming approach. Our framework utilized an efficient convolutional neural
network (CNN) backbone for multi-scale feature extraction and a set of
subnetworks to jointly learn the 6DoF pose representation of the rigid surgical
drill instrument and the grasping orientation of the hand of a user. To make
our approach accessible to a commercially available OST-HMD, the Microsoft
HoloLens 2, we created a pipeline for low-latency video and data communication
with a high-performance computing workstation capable of optimized network
inference. HMD-EgoPose outperformed current state-of-the-art approaches on a
benchmark dataset for surgical tool pose estimation, achieving an average tool
3D vertex error of 11.0 mm on real data and furthering the progress towards a
clinically viable marker-free tracking strategy. Through our low-latency
streaming approach, we achieved a round trip latency of 199.1 ms for pose
estimation and augmented visualization of the tracked model when integrated
with the OST-HMD. Our single-shot learned approach was robust to occlusion and
complex surfaces and improved on current state-of-the-art approaches to
marker-less tool and hand pose estimation.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 04:07:34 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2022 14:12:26 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Doughty",
"Mitchell",
""
],
[
"Ghugre",
"Nilesh R.",
""
]
] |
new_dataset
| 0.999756 |
2203.14465
|
Eric Zelikman
|
Eric Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman
|
STaR: Bootstrapping Reasoning With Reasoning
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating step-by-step "chain-of-thought" rationales improves language model
performance on complex reasoning tasks like mathematics or commonsense
question-answering. However, inducing language model rationale generation
currently requires either constructing massive rationale datasets or
sacrificing accuracy by using only few-shot inference. We propose a technique
to iteratively leverage a small number of rationale examples and a large
dataset without rationales, to bootstrap the ability to perform successively
more complex reasoning. This technique, the "Self-Taught Reasoner" (STaR),
relies on a simple loop: generate rationales to answer many questions, prompted
with a few rationale examples; if the generated answers are wrong, try again to
generate a rationale given the correct answer; fine-tune on all the rationales
that ultimately yielded correct answers; repeat. We show that STaR
significantly improves performance on multiple datasets compared to a model
fine-tuned to directly predict final answers, and performs comparably to
fine-tuning a 30$\times$ larger state-of-the-art language model on
CommonsenseQA. Thus, STaR lets a model improve itself by learning from its own
generated reasoning.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 03:12:15 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2022 13:52:54 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Zelikman",
"Eric",
""
],
[
"Wu",
"Yuhuai",
""
],
[
"Mu",
"Jesse",
""
],
[
"Goodman",
"Noah D.",
""
]
] |
new_dataset
| 0.997126 |
2205.02070
|
Xian Wu
|
Xian Wu, Chen Wang, Hongbo Fu, Ariel Shamir, Song-Hai Zhang, Shi-Min
Hu
|
DeepPortraitDrawing: Generating Human Body Images from Freehand Sketches
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Researchers have explored various ways to generate realistic images from
freehand sketches, e.g., for objects and human faces. However, how to generate
realistic human body images from sketches is still a challenging problem. This
is first because of the sensitivity to human shapes, second because of the
complexity of human images caused by body shape and pose changes, and third
because of the domain gap between realistic images and freehand sketches. In
this work, we present DeepPortraitDrawing, a deep generative framework for
converting roughly drawn sketches to realistic human body images. To encode
complicated body shapes under various poses, we take a local-to-global
approach. Locally, we employ semantic part auto-encoders to construct
part-level shape spaces, which are useful for refining the geometry of an input
pre-segmented hand-drawn sketch. Globally, we employ a cascaded spatial
transformer network to refine the structure of body parts by adjusting their
spatial locations and relative proportions. Finally, we use a global synthesis
network for the sketch-to-image translation task, and a face refinement network
to enhance facial details. Extensive experiments have shown that given roughly
sketched human portraits, our method produces more realistic images than the
state-of-the-art sketch-to-image synthesis techniques.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 14:02:45 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2022 17:00:19 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Wu",
"Xian",
""
],
[
"Wang",
"Chen",
""
],
[
"Fu",
"Hongbo",
""
],
[
"Shamir",
"Ariel",
""
],
[
"Zhang",
"Song-Hai",
""
],
[
"Hu",
"Shi-Min",
""
]
] |
new_dataset
| 0.999041 |
2205.09869
|
Rui Liu
|
Rui Liu and Barzan Mozafari
|
Transformer with Memory Replay
|
Accepted to AAAI 2022
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Transformers achieve state-of-the-art performance for natural language
processing tasks by pre-training on large-scale text corpora. They are
extremely compute-intensive and have very high sample complexity. Memory replay
is a mechanism that remembers and reuses past examples by saving to and
replaying from a memory buffer. It has been successfully used in reinforcement
learning and GANs due to better sample efficiency. In this paper, we propose
\emph{Transformer with Memory Replay} (TMR), which integrates memory replay
with transformer, making transformer more sample-efficient. Experiments on GLUE
and SQuAD benchmark datasets show that Transformer with Memory Replay achieves
at least a $1\%$ point increase compared to the baseline transformer model when
pretrained with the same number of examples. Further, by adopting a careful
design that reduces the wall-clock time overhead of memory replay, we also
empirically achieve a better runtime efficiency.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 21:27:36 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Liu",
"Rui",
""
],
[
"Mozafari",
"Barzan",
""
]
] |
new_dataset
| 0.973785 |
2205.09878
|
Mrinal Mathur
|
Mrinal Mathur, Archana Benkkallpalli Chandrashekhar, Venkata Krishna
Chaithanya Nuthalapati
|
Real Time Multi-Object Detection for Helmet Safety
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The National Football League and Amazon Web Services teamed up to develop the
best sports injury surveillance and mitigation program via the Kaggle
competition. Through which the NFL wants to assign specific players to each
helmet, which would help accurately identify each player's "exposures"
throughout a football play. We implement computer-vision-based ML
algorithms capable of assigning detected helmet impacts to the correct players
via tracking information. Our paper explains the approach to automatically
track player helmets and their collisions. This will also allow them to review
previous plays and explore the trends in exposure over time.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 21:56:03 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Mathur",
"Mrinal",
""
],
[
"Chandrashekhar",
"Archana Benkkallpalli",
""
],
[
"Nuthalapati",
"Venkata Krishna Chaithanya",
""
]
] |
new_dataset
| 0.986244 |
2205.09947
|
Yihan Hao
|
Yihan Hao (1 and 2), Mingliang Zhang (2 and 3), Fei Yin (2 and 3) and
Linlin Huang (1) ((1) Beijing Jiaotong University, (2) Institute of
Automation of Chinese Academy of Science, (3) University of Chinese Academy
of Sciences)
|
PGDP5K: A Diagram Parsing Dataset for Plane Geometry Problems
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diagram parsing is an important foundation for geometry problem solving,
attracting increasing attention in the field of intelligent education and
document image understanding. Due to the complex layout and between-primitive
relationship, plane geometry diagram parsing (PGDP) is still a challenging task
deserving further research and exploration. An appropriate dataset is critical
for the research of PGDP. Although some datasets with rough annotations have
been proposed to solve geometric problems, they are either small in scale or
not publicly available. The rough annotations further limit their usefulness.
Thus, we propose a new large-scale geometry diagram dataset named PGDP5K and a
novel annotation method. Our dataset consists of 5000 diagram samples composed
of 16 shapes, covering 5 positional relations, 22 symbol types and 6 text
types. Different from previous datasets, our PGDP5K dataset is labeled with
more fine-grained annotations at primitive level, including primitive classes,
locations and relationships. What is more, combined with above annotations and
geometric prior knowledge, it can generate intelligible geometric propositions
automatically and uniquely. Experiments performed on the PGDP5K and
IMP-Geometry3K datasets reveal that the state-of-the-art (SOTA) method achieves
only 66.07% F1 value. This shows that PGDP5K presents a challenge for future
research. Our dataset is available at
http://www.nlpr.ia.ac.cn/databases/CASIA-PGDP5K/.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 03:41:41 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Hao",
"Yihan",
"",
"1 and 2"
],
[
"Zhang",
"Mingliang",
"",
"2 and 3"
],
[
"Yin",
"Fei",
"",
"2 and 3"
],
[
"Huang",
"Linlin",
""
]
] |
new_dataset
| 0.999832 |
2205.09992
|
Francois Taiani
|
Timoth\'e Albouy (WIDE), Davide Frey (WIDE), Michel Raynal (WIDE),
Fran\c{c}ois Ta\"iani (WIDE)
|
Asynchronous Byzantine Reliable Broadcast With a Message Adversary
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the problem of reliable broadcast in asynchronous
authenticated systems, in which n processes communicate using signed messages
and up to t processes may behave arbitrarily (Byzantine processes). In
addition, for each message m broadcast by a correct (i.e., non-Byzantine)
process, a message adversary may prevent up to d correct processes from
receiving m. (This message adversary captures network failures such as
transient disconnections, silent churn, or message losses.) Considering such a
"double" adversarial context and assuming n > 3t + 2d, a reliable broadcast
algorithm is presented. Interestingly, when there is no message adversary
(i.e., d = 0), the algorithm terminates in two communication steps (so, in this
case, this algorithm is optimal in terms of both Byzantine tolerance and time
efficiency). It is then shown that the condition n > 3t + 2d is necessary for
implementing reliable broadcast in the presence of both Byzantine processes and
a message adversary (whether the underlying system is enriched with signatures
or not).
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 07:06:53 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Albouy",
"Timothé",
"",
"WIDE"
],
[
"Frey",
"Davide",
"",
"WIDE"
],
[
"Raynal",
"Michel",
"",
"WIDE"
],
[
"Taïani",
"François",
"",
"WIDE"
]
] |
new_dataset
| 0.976557 |
2205.10037
|
Marius Bozga
|
Marius Bozga and Joseph Sifakis
|
Correct by Design Coordination of Autonomous Driving Systems
| null | null | null | null |
cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The paper proposes a method for the correct by design coordination of
autonomous driving systems (ADS). It builds on previous results on collision
avoidance policies and the modeling of ADS by combining descriptions of their
static environment in the form of maps, and the dynamic behavior of their
vehicles. An ADS is modeled as a dynamic system involving a set of vehicles
coordinated by a Runtime that based on vehicle positions on a map and their
kinetic attributes, computes free spaces for each vehicle. Vehicles are bounded
to move within the corresponding allocated free spaces. We provide a correct by
design safe control policy for an ADS if its vehicles and the Runtime respect
corresponding assume-guarantee contracts. The result is established by showing
that the composition of assume-guarantee contracts is an inductive invariant
that entails ADS safety. We show that it is practically possible to define
speed control policies for vehicles that comply with their contracts.
Furthermore, we show that traffic rules can be specified in a linear-time
temporal logic, as a class of formulas that constrain vehicle speeds. The main
result is that, given a set of traffic rules, it is possible to derive free
space policies of the Runtime such that the resulting system behavior is safe
by design with respect to the rules.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 09:17:42 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Bozga",
"Marius",
""
],
[
"Sifakis",
"Joseph",
""
]
] |
new_dataset
| 0.992251 |
2205.10078
|
Ulugbek Salaev
|
Maksud Sharipov, Ulugbek Salaev
|
Uzbek affix finite state machine for stemming
|
Accepted for publication in the IX International Conference on
Computer Processing of Turkic Languages "TurkLang 2021", 15 pages, 12 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents a morphological analyzer for the Uzbek language using a
finite state machine. The proposed methodology performs morphological analysis
of Uzbek words by stripping affixes to find a root, without relying on any
lexicon. This method enables morphological analysis of words from a
large amount of text at high speed, and it requires no memory for
storing a vocabulary. Since Uzbek is an agglutinative language, its
morphology can be modeled with finite state machines (FSMs). In contrast to previous
works, this study models complete FSMs for all word classes by using the
Uzbek language's morphotactic rules in right-to-left order. This paper shows
the stages of this methodology, including the classification of the affixes, the
generation of an FSM for each affix class, and their combination into a head
machine to analyze a word.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 10:46:53 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Sharipov",
"Maksud",
""
],
[
"Salaev",
"Ulugbek",
""
]
] |
new_dataset
| 0.999133 |
2205.10222
|
Cristina Gena
|
Cristina Gena, Alberto Lillo, Claudio Mattutino, Enrico Mosca
|
An affective and adaptive educational robot
|
extended version of the paper Wolly: An affective and adaptive
educational robot, submitted to CAESAR 2022. arXiv admin note: text overlap
with arXiv:2203.06439
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present an educational robot called Wolly, designed to
engage children in affective and social interaction. We focus on its role as
an educational and affective robot that can be controlled by coding
instructions while interacting verbally and affectively with children: it
recognizes their emotions, remembers their interests, and adapts its behavior
accordingly.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 14:57:15 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Gena",
"Cristina",
""
],
[
"Lillo",
"Alberto",
""
],
[
"Mattutino",
"Claudio",
""
],
[
"Mosca",
"Enrico",
""
]
] |
new_dataset
| 0.951294 |
2205.10237
|
Jinming Zhao
|
Jinming Zhao, Tenggan Zhang, Jingwen Hu, Yuchen Liu, Qin Jin, Xinchao
Wang, Haizhou Li
|
M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database
| null |
published at ACL 2022
| null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emotional state of a speaker can be influenced by many different factors
in dialogues, such as dialogue scene, dialogue topic, and interlocutor
stimulus. The currently available data resources to support such multimodal
affective analysis in dialogues are however limited in scale and diversity. In
this work, we propose a Multi-modal Multi-scene Multi-label Emotional Dialogue
dataset, M3ED, which contains 990 dyadic emotional dialogues from 56 different
TV series, a total of 9,082 turns and 24,449 utterances. M3ED is annotated
with 7 emotion categories (happy, surprise, sad, disgust, anger, fear, and
neutral) at utterance level, and encompasses acoustic, visual, and textual
modalities. To the best of our knowledge, M3ED is the first multimodal
emotional dialogue dataset in Chinese. It is valuable for cross-culture emotion
analysis and recognition. We apply several state-of-the-art methods on the M3ED
dataset to verify the validity and quality of the dataset. We also propose a
general Multimodal Dialogue-aware Interaction framework, MDI, to model the
dialogue context for emotion recognition, which achieves comparable performance
to the state-of-the-art methods on the M3ED. The full dataset and codes are
available.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 06:52:51 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Zhao",
"Jinming",
""
],
[
"Zhang",
"Tenggan",
""
],
[
"Hu",
"Jingwen",
""
],
[
"Liu",
"Yuchen",
""
],
[
"Jin",
"Qin",
""
],
[
"Wang",
"Xinchao",
""
],
[
"Li",
"Haizhou",
""
]
] |
new_dataset
| 0.999859 |
2205.10247
|
Wei Zhang
|
Wei Zhang, Yu Bao
|
SADAM: Stochastic Adam, A Stochastic Operator for First-Order
Gradient-based Optimizer
|
9 pages, 4 figures, an advanced first-order optimizer
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, to help efficiently escape stationary and saddle points, we
propose, analyze, and generalize a stochastic strategy, performed as an
operator for a first-order gradient descent algorithm, that increases target
accuracy and reduces time consumption. Unlike existing algorithms, the
proposed stochastic strategy does not require any batching or sampling
techniques, enabling efficient implementation while maintaining the initial
first-order optimizer's convergence rate, and it provides a substantial
improvement in target accuracy when optimizing the target functions. In short,
the proposed strategy is generalized, applied to Adam, and validated on the
decomposition of biomedical signals using Deep Matrix Fitting and four peer
optimizers. The validation results show that the proposed random strategy can
be easily generalized to first-order optimizers and efficiently improves
target accuracy.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 15:20:19 GMT"
}
] | 2022-05-23T00:00:00 |
[
[
"Zhang",
"Wei",
""
],
[
"Bao",
"Yu",
""
]
] |
new_dataset
| 0.99504 |
2008.08401
|
C\'esar Soto-Valero
|
C\'esar Soto-Valero, Thomas Durieux, Nicolas Harrand, Benoit Baudry
|
Coverage-Based Debloating for Java Bytecode
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software bloat is code that is packaged in an application but is actually not
necessary to run the application. The presence of software bloat is an issue
for security, for performance, and for maintenance. In this paper, we introduce
a novel technique for debloating, which we call coverage-based debloating. We
implement the technique for one single language: Java bytecode. We leverage a
combination of state-of-the-art Java bytecode coverage tools to precisely
capture what parts of a project and its dependencies are used when running with
a specific workload. Then, we automatically remove the parts that are not
covered, in order to generate a debloated version of the project. We
successfully debloat 211 library versions from a dataset of 94 unique open-source Java
libraries. The debloated versions are syntactically correct and preserve their
original behavior according to the workload. Our results indicate that 68.3% of
the libraries' bytecode and 20.3% of their total dependencies can be removed
through coverage-based debloating. For the first time in the literature on
software debloating, we assess the utility of debloated libraries with respect
to client applications that reuse them. We select 988 client projects that
either have a direct reference to the debloated library in their source code
or whose test suite covers at least one class of the libraries that we debloat.
Our results show that 81.5% of the clients, with at least one test that uses
the library, successfully compile and pass their test suite when the original
library is replaced by its debloated version.
|
[
{
"version": "v1",
"created": "Wed, 19 Aug 2020 12:44:05 GMT"
},
{
"version": "v2",
"created": "Thu, 6 May 2021 07:29:06 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Dec 2021 12:58:16 GMT"
},
{
"version": "v4",
"created": "Thu, 19 May 2022 07:55:16 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Soto-Valero",
"César",
""
],
[
"Durieux",
"Thomas",
""
],
[
"Harrand",
"Nicolas",
""
],
[
"Baudry",
"Benoit",
""
]
] |
new_dataset
| 0.998922 |
2102.00610
|
Nathan White
|
Nathan M. White and Timothy Henry-Rodriguez
|
The Harrington Yowlumne Narrative Corpus
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Minority languages continue to lack adequate resources for their development,
especially in the technological domain. Likewise, the J.P. Harrington Papers
collection at the Smithsonian Institution is difficult to access in practical
terms for community members and researchers due to its handwritten and
disorganized format. Our current work seeks to make a portion of this
publicly-available yet problematic material practically accessible for natural
language processing use. Here, we present the Harrington Yowlumne Narrative
Corpus, a corpus of 20 narrative texts that derive from the Tejone\~no Yowlumne
community of the Tinliw rancheria in Kern County, California between 1910 and
1925. We digitally transcribe the texts and, through a Levenshtein
distance-based algorithm and manual checking, we provide gold-standard aligned
normalized and lemmatized text. We likewise provide POS tags for each
lemmatized token via a lexicon-based deterministic approach. Altogether, the
corpus contains 57,136 transcribed characters aligned with 10,719 gold standard
text-normalized words.
|
[
{
"version": "v1",
"created": "Mon, 1 Feb 2021 03:16:24 GMT"
},
{
"version": "v2",
"created": "Thu, 19 May 2022 10:52:10 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"White",
"Nathan M.",
""
],
[
"Henry-Rodriguez",
"Timothy",
""
]
] |
new_dataset
| 0.999598 |
2103.08560
|
Eric Wagner
|
Eric Wagner, Jan Bauer, Martin Henze
|
Take a Bite of the Reality Sandwich: Revisiting the Security of
Progressive Message Authentication Codes
|
ACM WiSec'22
| null |
10.1145/3507657.3528539
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Message authentication guarantees the integrity of messages exchanged over
untrusted channels. However, to achieve this goal, message authentication
considerably expands packet sizes, which is especially problematic in
constrained wireless environments. To address this issue, progressive message
authentication provides initially reduced integrity protection that is often
sufficient to process messages upon reception. This reduced security is then
successively improved with subsequent messages to uphold the strong guarantees
of traditional integrity protection. However, contrary to previous claims, we
show in this paper that existing progressive message authentication schemes are
highly susceptible to packet loss induced by poor channel conditions or jamming
attacks. Thus, we consider it imperative to rethink how authentication tags
depend on the successful reception of surrounding packets. To this end, we
propose R2-D2, which uses randomized dependencies with parameterized security
guarantees to increase the resilience of progressive authentication against
packet loss. To deploy our approach to resource-constrained devices, we
introduce SP-MAC, which implements R2-D2 using efficient XOR operations. Our
evaluation shows that SP-MAC is resilient to sophisticated network-level
attacks and operates as resource-consciously and as fast as existing, yet insecure,
progressive message authentication schemes.
|
[
{
"version": "v1",
"created": "Mon, 15 Mar 2021 17:24:37 GMT"
},
{
"version": "v2",
"created": "Fri, 7 May 2021 11:23:41 GMT"
},
{
"version": "v3",
"created": "Thu, 19 May 2022 16:13:41 GMT"
}
] | 2022-05-20T00:00:00 |
[
[
"Wagner",
"Eric",
""
],
[
"Bauer",
"Jan",
""
],
[
"Henze",
"Martin",
""
]
] |
new_dataset
| 0.999104 |