id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2203.08566
|
Mengyang Pu
|
Mengyang Pu and Yaping Huang and Yuming Liu and Qingji Guan and Haibin
Ling
|
EDTER: Edge Detection with Transformer
|
Accepted by CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional neural networks have made significant progress in edge
detection by progressively exploring the context and semantic features.
However, local details are gradually suppressed as receptive fields enlarge.
Recently, vision transformers have shown excellent capability in
capturing long-range dependencies. Inspired by this, we propose a novel
transformer-based edge detector, \emph{Edge Detection TransformER (EDTER)}, to
extract clear and crisp object boundaries and meaningful edges by exploiting
the full image context information and detailed local cues simultaneously.
EDTER works in two stages. In Stage I, a global transformer encoder is used to
capture long-range global context on coarse-grained image patches. Then in
Stage II, a local transformer encoder works on fine-grained patches to excavate
the short-range local cues. Each transformer encoder is followed by an
elaborately designed Bi-directional Multi-Level Aggregation decoder to achieve
high-resolution features. Finally, the global context and local cues are
combined by a Feature Fusion Module and fed into a decision head for edge
prediction. Extensive experiments on BSDS500, NYUDv2, and Multicue demonstrate
the superiority of EDTER in comparison with state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 11:55:55 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Pu",
"Mengyang",
""
],
[
"Huang",
"Yaping",
""
],
[
"Liu",
"Yuming",
""
],
[
"Guan",
"Qingji",
""
],
[
"Ling",
"Haibin",
""
]
] |
new_dataset
| 0.981204 |
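The two-stage global/local design described in the EDTER abstract above can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation; the patch sizes, channel widths, and the simplified fusion and decision heads are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Patchify an image and run a small transformer encoder (stand-in for Stage I/II)."""
    def __init__(self, patch=16, dim=64, depth=2):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patch embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens = self.proj(x)                       # (B, dim, H/p, W/p)
        b, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)     # (B, h*w, dim)
        seq = self.encoder(seq)
        return seq.transpose(1, 2).reshape(b, d, h, w)

class EdgeDetectorSketch(nn.Module):
    """Stage I: coarse patches for global context; Stage II: fine patches for local cues."""
    def __init__(self, dim=64):
        super().__init__()
        self.global_enc = TinyEncoder(patch=16, dim=dim)    # coarse-grained patches
        self.local_enc = TinyEncoder(patch=8, dim=dim)      # fine-grained patches
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)  # stand-in feature fusion module
        self.head = nn.Conv2d(dim, 1, kernel_size=1)        # decision head -> edge logits

    def forward(self, x):
        g = self.global_enc(x)
        l = self.local_enc(x)
        g = nn.functional.interpolate(g, size=l.shape[-2:], mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([g, l], dim=1))
        logits = self.head(fused)
        # upsample to the input resolution for a per-pixel edge map
        return nn.functional.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

edge_map = torch.sigmoid(EdgeDetectorSketch()(torch.randn(1, 3, 224, 224)))
```

The real EDTER also uses Bi-directional Multi-Level Aggregation decoders before fusion; the 1x1 convolutions above simply stand in for those components.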
2203.08578
|
Roger Moore
|
Roger K. Moore
|
Whither the Priors for (Vocal) Interactivity?
|
Accepted for the THEORIA Workshop "Towards a Common Understanding and
Vision for Theory-Grounded Human-Robot Interaction" at HRI-2022, 7 March 2022
| null | null | null |
cs.RO cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Voice-based communication is often cited as one of the most `natural' ways in
which humans and robots might interact, and the recent availability of accurate
automatic speech recognition and intelligible speech synthesis has enabled
researchers to integrate advanced off-the-shelf spoken language technology
components into their robot platforms. Despite this, the resulting interactions
are anything but `natural'. It transpires that simply giving a robot a voice
doesn't mean that a user will know how (or when) to talk to it, and the
resulting `conversations' tend to be stilted, one-sided and short. On the
surface, these difficulties might appear to be fairly trivial consequences of
users' unfamiliarity with robots (and \emph{vice versa}), and that any problems
would be mitigated by long-term use by the human, coupled with `deep learning'
by the robot. However, it is argued here that such communication failures are
indicative of a deeper malaise: a fundamental lack of basic principles --
\emph{priors} -- underpinning not only speech-based interaction in particular,
but (vocal) interactivity in general. This is evidenced not only by the fact
that contemporary spoken language systems already require training data sets
that are orders-of-magnitude greater than that experienced by a young child,
but also by the lack of design principles for creating effective communicative
human-robot interaction. This short position paper identifies some of the key
areas where theoretical insights might help overcome these shortfalls.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 12:06:46 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Moore",
"Roger K.",
""
]
] |
new_dataset
| 0.984126 |
2203.08600
|
Benjamin Horne
|
Benjamin D. Horne, Maur\'icio Gruppi, Kenneth Joseph, Jon Green, John
P. Wihbey, and Sibel Adal{\i}
|
NELA-Local: A Dataset of U.S. Local News Articles for the Study of
County-level News Ecosystems
|
Published at ICWSM 2022
| null | null | null |
cs.CY cs.MM cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a dataset of over 1.4M online news articles from
313 local U.S. news outlets published over 20 months (between April 4th, 2020
and December 31st, 2021). These outlets cover a geographically diverse set of
communities across the United States. In order to estimate characteristics of
the local audience, included with this news article data is a wide range of
county-level metadata, including demographics, 2020 Presidential Election vote
shares, and community resilience estimates from the U.S. Census Bureau. The
NELA-Local dataset can be found at:
https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/GFE66K.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 13:19:21 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Horne",
"Benjamin D.",
""
],
[
"Gruppi",
"Maurício",
""
],
[
"Joseph",
"Kenneth",
""
],
[
"Green",
"Jon",
""
],
[
"Wihbey",
"John P.",
""
],
[
"Adalı",
"Sibel",
""
]
] |
new_dataset
| 0.99988 |
2203.08630
|
Shiliang Zhang
|
Shiliang Zhang, Dyako Fatih, Fahmi Abdulqadir, Tobias Schwarz, Xuehui
Ma
|
Extended vehicle energy dataset (eVED): an enhanced large-scale dataset
for deep learning on vehicle trip energy consumption
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work presents an extended version of the Vehicle Energy Dataset (VED),
which is an openly released large-scale dataset for vehicle energy consumption
analysis. Compared with its original version, the extended VED (eVED) dataset
is enhanced with accurate vehicle trip GPS coordinates, serving as a basis to
associate the VED trip records with external information, e.g., road speed
limits and intersections, from accessible map services to accumulate attributes
that are essential in analyzing vehicle energy consumption. In particular, we
calibrate all the GPS trace records in the original VED data, and then
associate the VED data with attributes extracted from the Geographic
Information System (QGIS), the Overpass API, the Open Street Map API, and
Google Maps API. The associated attributes include 12,609,170 records of road
elevation, 12,203,044 of speed limit, 12,281,719 of speed limit with direction
(in case the road is bi-directional), 584,551 of intersections, 429,638 of bus
stops, 312,196 of crossings, 195,856 of traffic signals, 29,397 of stop signs,
5,848 of turning loops, 4,053 of railway crossings (level crossing), 3,554 of
turning circles, and 2,938 of motorway junctions. With the accurate GPS
coordinates and enriched features of the vehicle trip record, the obtained eVED
dataset can provide a precise and abundant medium to feed a learning engine,
especially a deep learning engine that is more demanding on data sufficiency
and richness. Moreover, our software work for data calibration and enrichment
can be reused to generate further vehicle trip datasets for specific use
cases, contributing to deep insights into vehicle behaviors and traffic
dynamics analyses. We anticipate that the eVED dataset and our data enrichment
software can serve the academic and industrial automotive sectors as apparatus
for developing future technologies.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 13:56:36 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Zhang",
"Shiliang",
""
],
[
"Fatih",
"Dyako",
""
],
[
"Abdulqadir",
"Fahmi",
""
],
[
"Schwarz",
"Tobias",
""
],
[
"Ma",
"Xuehui",
""
]
] |
new_dataset
| 0.999762 |
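As an illustration of the kind of map-service enrichment described in the eVED abstract above, the sketch below queries the public Overpass API for speed-limit tags on ways near a calibrated GPS point. The search radius, tag choice, and example coordinates are assumptions for illustration, not the authors' pipeline.

```python
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # public Overpass endpoint

def speed_limits_near(lat, lon, radius_m=25):
    """Return the maxspeed tags of OSM ways within radius_m metres of a GPS point."""
    query = f"""
    [out:json][timeout:25];
    way(around:{radius_m},{lat},{lon})["maxspeed"];
    out tags;
    """
    response = requests.post(OVERPASS_URL, data={"data": query}, timeout=30)
    response.raise_for_status()
    elements = response.json().get("elements", [])
    return [el["tags"]["maxspeed"] for el in elements if "maxspeed" in el.get("tags", {})]

# Example: enrich a single calibrated trip record (Ann Arbor area coordinates, illustrative only).
print(speed_limits_near(42.2808, -83.7430))
```

In a full enrichment run, the same lookup would be batched over all calibrated trace points and joined back onto the trip records.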
2203.08694
|
Anthony Rios
|
Richard Alvarez, Paras Bhatt, Xingmeng Zhao, Anthony Rios
|
Turning Stocks into Memes: A Dataset for Understanding How Social
Communities Can Drive Wall Street
|
Accepted to ICWSM 2022
| null | null | null |
cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Who actually expresses an intent to buy GameStop shares on Reddit? What
convinces people to buy stocks? Are people convinced to support a coordinated
plan to adversely impact Wall Street investors? Existing literature on
understanding intent has mainly relied on surveys and self-reporting; however,
there are limitations to these methodologies. Hence, in this paper, we develop
an annotated dataset of communications centered on the GameStop phenomenon to
analyze the subscribers' intentions and behaviors within the r/WallStreetBets
community to buy (or not buy) stocks. Likewise, we curate a dataset to better
understand how intent interacts with a user's general support towards the
coordinated actions of the community for GameStop. Overall, our dataset can
provide insight to social scientists on the persuasive power to buy into social
movements online by adopting common language and narrative. WARNING: This paper
contains offensive language that commonly appears on Reddit's r/WallStreetBets
subreddit.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 15:34:10 GMT"
}
] | 2022-03-17T00:00:00 |
[
[
"Alvarez",
"Richard",
""
],
[
"Bhatt",
"Paras",
""
],
[
"Zhao",
"Xingmeng",
""
],
[
"Rios",
"Anthony",
""
]
] |
new_dataset
| 0.999037 |
1811.07644
|
Henning Basold
|
Henning Basold and Ekaterina Komendantskaya and Yue Li
|
Coinduction in Uniform: Foundations for Corecursive Proof Search with
Horn Clauses
| null |
LNCS 11423 (2019) 783-813
|
10.1007/978-3-030-17184-1_28
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We establish proof-theoretic, constructive and coalgebraic foundations for
proof search in coinductive Horn clause theories. Operational semantics of
coinductive Horn clause resolution is cast in terms of coinductive uniform
proofs; its constructive content is exposed via soundness relative to an
intuitionistic first-order logic with recursion controlled by the later
modality; and soundness of both proof systems is proven relative to a novel
coalgebraic description of complete Herbrand models.
|
[
{
"version": "v1",
"created": "Mon, 19 Nov 2018 12:30:17 GMT"
},
{
"version": "v2",
"created": "Sat, 4 May 2019 10:55:13 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Mar 2022 12:08:37 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Basold",
"Henning",
""
],
[
"Komendantskaya",
"Ekaterina",
""
],
[
"Li",
"Yue",
""
]
] |
new_dataset
| 0.972027 |
2103.01528
|
Fanruiqi Zeng
|
Fanruiqi Zeng, Zaiwei Chen, John-Paul Clarke, David Goldsman
|
Nested Vehicle Routing Problem: Optimizing Drone-Truck Surveillance
Operations
|
40 pages, 20 figures
| null | null | null |
cs.DM cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Unmanned aerial vehicles or drones are becoming increasingly popular due to
their low cost and high mobility. In this paper we address the routing and
coordination of a drone-truck pairing where the drone travels to multiple
locations to perform specified observation tasks and rendezvouses periodically
with the truck to swap its batteries. We refer to this as the Nested-Vehicle
Routing Problem (Nested-VRP) and develop a Mixed Integer Quadratically
Constrained Programming (MIQCP) formulation with critical operational
constraints, including drone battery capacity and synchronization of both
vehicles during scheduled rendezvous. An enhancement of the MIQCP model for the
Nested-VRP is achieved by deriving the equivalent Mixed Integer Linear
Programming (MILP) formulation as well as leveraging lifting and
Reformulation-Linearization techniques to strengthen the subtour elimination
constraints of the drone. Given the NP-hard nature of the Nested-VRP, we
further propose an efficient neighborhood search (NS) heuristic where we
generate and improve on a good initial solution by iteratively solving the
Nested-VRP on a local scale. We provide comparisons of both the exact
approaches based on MIQCP or its enhanced formulations and NS heuristic methods
with a relaxation lower bound in the cases of small and large problem sizes,
and present the results of a computational study to show the effectiveness of
the MIQCP model and its variants as well as the efficiency of the NS heuristic,
including for a real-life instance with 631 locations. We envision that this
framework will facilitate the planning and operations of combined drone-truck
missions.
|
[
{
"version": "v1",
"created": "Tue, 2 Mar 2021 07:17:32 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 18:24:57 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Zeng",
"Fanruiqi",
""
],
[
"Chen",
"Zaiwei",
""
],
[
"Clarke",
"John-Paul",
""
],
[
"Goldsman",
"David",
""
]
] |
new_dataset
| 0.991361 |
2107.02153
|
Jungsoo Park
|
Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh
Hajishirzi
|
FaVIQ: FAct Verification from Information-seeking Questions
|
ACL 2022 (long). Data & Code available at https://faviq.github.io
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Despite significant interest in developing general purpose fact checking
models, it is challenging to construct a large-scale fact verification dataset
with realistic real-world claims. Existing claims are either authored by
crowdworkers, thereby introducing subtle biases that are difficult to control
for, or manually verified by professional fact checkers, causing them to be
expensive and limited in scale. In this paper, we construct a large-scale
challenging fact verification dataset called FAVIQ, consisting of 188k claims
derived from an existing corpus of ambiguous information-seeking questions. The
ambiguities in the questions enable automatically constructing true and false
claims that reflect user confusions (e.g., the year of the movie being filmed
vs. being released). Claims in FAVIQ are verified to be natural, contain little
lexical bias, and require a complete understanding of the evidence for
verification. Our experiments show that the state-of-the-art models are far
from solving our new task. Moreover, training on our data helps in professional
fact-checking, outperforming models trained on the widely used dataset FEVER or
in-domain data by up to 17% absolute. Altogether, our data will serve as a
challenging benchmark for natural language understanding and support future
progress in professional fact checking.
|
[
{
"version": "v1",
"created": "Mon, 5 Jul 2021 17:31:44 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 07:38:56 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Park",
"Jungsoo",
""
],
[
"Min",
"Sewon",
""
],
[
"Kang",
"Jaewoo",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
new_dataset
| 0.999832 |
2108.11792
|
Antoine Louis
|
Antoine Louis and Gerasimos Spanakis
|
A Statutory Article Retrieval Dataset in French
|
ACL 2022. Code and dataset are available at
https://github.com/maastrichtlawtech/bsard
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Statutory article retrieval is the task of automatically retrieving law
articles relevant to a legal question. While recent advances in natural
language processing have sparked considerable interest in many legal tasks,
statutory article retrieval remains primarily untouched due to the scarcity of
large-scale and high-quality annotated datasets. To address this bottleneck, we
introduce the Belgian Statutory Article Retrieval Dataset (BSARD), which
consists of 1,100+ French native legal questions labeled by experienced jurists
with relevant articles from a corpus of 22,600+ Belgian law articles. Using
BSARD, we benchmark several state-of-the-art retrieval approaches, including
lexical and dense architectures, both in zero-shot and supervised setups. We
find that fine-tuned dense retrieval models significantly outperform other
systems. Our best performing baseline achieves 74.8% R@100, which is promising
for the feasibility of the task and indicates there is still room for
improvement. Owing to the specificity of the domain and the task addressed, BSARD
presents a unique challenge problem for future research on legal information
retrieval. Our dataset and source code are publicly available.
|
[
{
"version": "v1",
"created": "Thu, 26 Aug 2021 13:50:20 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 11:56:24 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Louis",
"Antoine",
""
],
[
"Spanakis",
"Gerasimos",
""
]
] |
new_dataset
| 0.999809 |
2109.01036
|
Thach Le Nguyen
|
Thach Le Nguyen and Georgiana Ifrim
|
MrSQM: Fast Time Series Classification with Symbolic Representations
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Symbolic representations of time series have proven to be effective for time
series classification, with many recent approaches including SAX-VSM, BOSS,
WEASEL, and MrSEQL. The key idea is to transform numerical time series to
symbolic representations in the time or frequency domain, i.e., sequences of
symbols, and then extract features from these sequences. While achieving high
accuracy, existing symbolic classifiers are computationally expensive. In this
paper we present MrSQM, a new time series classifier which uses multiple
symbolic representations and efficient sequence mining to extract important
time series features. We study four feature selection approaches on symbolic
sequences, ranging from fully supervised, to unsupervised and hybrids. We
propose a new approach for optimal supervised symbolic feature selection in
all-subsequence space, by adapting a Chi-squared bound developed for
discriminative pattern mining, to time series. Our extensive experiments on 112
datasets of the UEA/UCR benchmark demonstrate that MrSQM can quickly extract
useful features and learn accurate classifiers with the classic logistic
regression algorithm. Interestingly, we find that a very simple and fast
feature selection strategy can be highly effective as compared with more
sophisticated and expensive methods. MrSQM advances the state-of-the-art for
symbolic time series classifiers and it is an effective method to achieve high
accuracy, with fast runtime.
|
[
{
"version": "v1",
"created": "Thu, 2 Sep 2021 15:54:46 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 21:08:17 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Nguyen",
"Thach Le",
""
],
[
"Ifrim",
"Georgiana",
""
]
] |
new_dataset
| 0.998493 |
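A minimal sketch of the general recipe described in the MrSQM abstract above: a symbolic transform, bag-of-words features, supervised feature selection, and a linear classifier. It is not the MrSQM implementation; the plain SAX transform, window size, alphabet, and the use of SelectKBest with a chi-squared score are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def sax_words(series, window=16, segments=4):
    """Slide a window over a series and emit one 4-letter SAX word per window."""
    breakpoints = np.array([-0.67, 0.0, 0.67])  # Gaussian breakpoints for a 4-symbol alphabet
    alphabet = "abcd"
    words = []
    for start in range(0, len(series) - window + 1, window // 2):
        seg = series[start:start + window]
        seg = (seg - seg.mean()) / (seg.std() + 1e-8)      # z-normalise the window
        paa = seg.reshape(segments, -1).mean(axis=1)       # piecewise aggregate approximation
        words.append("".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa))
    return " ".join(words)

# Toy data: two classes of noisy sine waves with different frequencies.
rng = np.random.default_rng(0)
labels = list(range(2)) * 50
X = np.array([np.sin(np.linspace(0, (6 if y else 3) * np.pi, 128)) + 0.1 * rng.normal(size=128)
              for y in labels])
y = np.array(labels)
docs = [sax_words(s) for s in X]

clf = make_pipeline(CountVectorizer(), SelectKBest(chi2, k=10), LogisticRegression(max_iter=1000))
clf.fit(docs, y)
print("train accuracy:", clf.score(docs, y))
```

MrSQM itself uses multiple symbolic representations (time and frequency domain) and more efficient sequence mining; the sketch only conveys the overall pipeline shape.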
2109.10115
|
Xingyu Liu
|
Xingyu Liu, Shun Iwase, Kris M. Kitani
|
StereOBJ-1M: Large-scale Stereo Image Dataset for 6D Object Pose
Estimation
|
ICCV 2021
| null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a large-scale stereo RGB image object pose estimation dataset
named the $\textbf{StereOBJ-1M}$ dataset. The dataset is designed to address
challenging cases such as object transparency, translucency, and specular
reflection, in addition to the common challenges of occlusion, symmetry, and
variations in illumination and environments. In order to collect data of
sufficient scale for modern deep learning models, we propose a novel method for
efficiently annotating pose data in a multi-view fashion that allows data
capturing in complex and flexible environments. Fully annotated with 6D object
poses, our dataset contains over 393K frames and over 1.5M annotations of 18
objects recorded in 182 scenes constructed in 11 different environments. The 18
objects include 8 symmetric objects, 7 transparent objects, and 8 reflective
objects. We benchmark two state-of-the-art pose estimation frameworks on
StereOBJ-1M as baselines for future work. We also propose a novel object-level
pose optimization method for computing 6D pose from keypoint predictions in
multiple images. Project website: https://sites.google.com/view/stereobj-1m.
|
[
{
"version": "v1",
"created": "Tue, 21 Sep 2021 11:56:38 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Sep 2021 17:38:33 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Mar 2022 18:35:24 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Liu",
"Xingyu",
""
],
[
"Iwase",
"Shun",
""
],
[
"Kitani",
"Kris M.",
""
]
] |
new_dataset
| 0.999764 |
2109.12068
|
El Moatez Billah Nagoudi
|
El Moatez Billah Nagoudi and AbdelRahim Elmadany and Muhammad
Abdul-Mageed
|
AraT5: Text-to-Text Transformers for Arabic Language Generation
|
Proceedings of the 60th Annual Meeting of the Association for
Computational Linguistics (ACL 2022). All authors contributed equally
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Transfer learning with a unified Transformer framework (T5) that converts all
language problems into a text-to-text format was recently proposed as a simple
and effective transfer learning approach. Although a multilingual version of
the T5 model (mT5) was also introduced, it is not clear how well it can fare on
non-English tasks involving diverse data. To investigate this question, we
apply mT5 on a language with a wide variety of dialects--Arabic. For
evaluation, we introduce a novel benchmark for ARabic language GENeration
(ARGEN), covering seven important tasks. For model comparison, we pre-train
three powerful Arabic T5-style models and evaluate them on ARGEN. Although
pre-trained with ~49 less data, our new models perform significantly better
than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new
SOTAs. Our models also establish new SOTA on the recently-proposed, large
Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al.,
2021). Our new models are publicly available. We also link to ARGEN datasets
through our repository: https://github.com/UBC-NLP/araT5.
|
[
{
"version": "v1",
"created": "Tue, 31 Aug 2021 02:02:10 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Oct 2021 20:06:53 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Oct 2021 19:41:22 GMT"
},
{
"version": "v4",
"created": "Tue, 15 Mar 2022 17:57:28 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Nagoudi",
"El Moatez Billah",
""
],
[
"Elmadany",
"AbdelRahim",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
] |
new_dataset
| 0.999591 |
2110.08296
|
Ojas Ahuja
|
Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, Greg Durrett
|
ASPECTNEWS: Aspect-Oriented Summarization of News Documents
|
ACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generic summaries try to cover an entire document and query-based summaries
try to answer document-specific questions. But real users' needs often fall in
between these extremes and correspond to aspects, high-level topics discussed
among similar types of documents. In this paper, we collect a dataset of
realistic aspect-oriented summaries, AspectNews, which covers different
subtopics about articles in news sub-domains. We annotate data across two
domains of articles, earthquakes and fraud investigations, where each article
is annotated with two distinct summaries focusing on different aspects for each
domain. A system producing a single generic summary cannot concisely satisfy
both aspects. Our focus in evaluation is how well existing techniques can
generalize to these domains without seeing in-domain training data, so we turn
to techniques to construct synthetic training data that have been used in
query-focused summarization work. We compare several training schemes that
differ in how strongly keywords are used and how oracle summaries are
extracted. Our evaluation shows that our final approach yields (a) focused
summaries, better than those from a generic summarization system or from
keyword matching; (b) a system sensitive to the choice of keywords.
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 18:06:21 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 06:42:38 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Ahuja",
"Ojas",
""
],
[
"Xu",
"Jiacheng",
""
],
[
"Gupta",
"Akshay",
""
],
[
"Horecka",
"Kevin",
""
],
[
"Durrett",
"Greg",
""
]
] |
new_dataset
| 0.974804 |
2110.08303
|
Liwei Guo
|
Liwei Guo, Felix Xiaozhu Lin
|
Minimum Viable Device Drivers for ARM TrustZone
|
Eurosys 2022
| null |
10.1145/3492321.3519565
| null |
cs.OS cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
While TrustZone can isolate IO hardware, it lacks drivers for modern IO
devices. Rather than porting drivers, we propose a novel approach to deriving
minimum viable drivers: developers exercise a full driver and record the
driver/device interactions; the processed recordings, dubbed driverlets, are
replayed in the TEE at run time to access IO devices.
Driverlets address two key challenges: correctness and expressiveness, for
which they build on a key construct called interaction template. The
interaction template ensures faithful reproduction of recorded IO jobs (albeit
on new IO data); it accepts dynamic input values; it tolerates nondeterministic
device behaviors.
We demonstrate driverlets on a series of sophisticated devices, making them
accessible to TrustZone for the first time to our knowledge. Our experiments
show that driverlets are secure, easy to build, and incur acceptable overhead
(1.4x-2.7x compared to native drivers). Driverlets fill a critical gap in the
TrustZone TEE, realizing its long-promised vision of secure IO.
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 18:22:10 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 15:50:04 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Guo",
"Liwei",
""
],
[
"Lin",
"Felix Xiaozhu",
""
]
] |
new_dataset
| 0.994315 |
2110.09437
|
Samuel Dooley
|
Angelica Goetzen, Samuel Dooley, Elissa M. Redmiles
|
Ctrl-Shift: How Privacy Sentiment Changed from 2019 to 2021
| null | null | null | null |
cs.CY cs.CR cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
People's privacy sentiments influence changes in legislation as well as
technology design and use. While single-point-in-time investigations of privacy
sentiment offer useful insight, study of people's privacy sentiments over time
is also necessary to better understand and anticipate evolving privacy
attitudes. In this work, we use repeated cross-sectional surveys (n=6,676) to
model the sentiments of people in the U.S. toward collection and use of data
for government- and health-related purposes from 2019-2021. After the onset of
COVID-19, we observe significant decreases in respondent acceptance of
government data use and significant increases in acceptance of health-related
data uses. While differences in privacy attitudes between sociodemographic
groups largely decreased over this time period, following the 2020 U.S.
national elections, we observe some of the first evidence that privacy
sentiments may change based on the alignment between a user's politics and the
political party in power. Our results offer insight into how privacy attitudes
may have been impacted by recent events and allow us to identify potential
predictors of changes in privacy attitudes during times of geopolitical or
national change.
|
[
{
"version": "v1",
"created": "Mon, 18 Oct 2021 16:13:02 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 15:21:36 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Goetzen",
"Angelica",
""
],
[
"Dooley",
"Samuel",
""
],
[
"Redmiles",
"Elissa M.",
""
]
] |
new_dataset
| 0.989811 |
2110.11405
|
Gautam Singh
|
Gautam Singh, Fei Deng and Sungjin Ahn
|
Illiterate DALL-E Learns to Compose
|
Published as a conference paper at ICLR 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although DALL-E has shown an impressive ability of composition-based
systematic generalization in image generation, it requires the dataset of
text-image pairs and the compositionality is provided by the text. In contrast,
object-centric representation models like the Slot Attention model learn
composable representations without the text prompt. However, unlike DALL-E, its
ability to systematically generalize for zero-shot generation is significantly
limited. In this paper, we propose a simple but novel slot-based autoencoding
architecture, called SLATE, for combining the best of both worlds: learning
object-centric representations that allow systematic generalization in
zero-shot image generation without text. As such, this model can also be seen
as an illiterate DALL-E model. Unlike the pixel-mixture decoders of existing
object-centric representation models, we propose to use the Image GPT decoder
conditioned on the slots for capturing complex interactions among the slots and
pixels. In experiments, we show that this simple and easy-to-implement
architecture, which does not require a text prompt, achieves significant
improvement in in-distribution and out-of-distribution (zero-shot) image
generation and qualitatively comparable or better slot-attention structures than models
based on mixture decoders.
|
[
{
"version": "v1",
"created": "Sun, 17 Oct 2021 16:40:47 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Oct 2021 18:46:24 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Mar 2022 21:10:39 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Singh",
"Gautam",
""
],
[
"Deng",
"Fei",
""
],
[
"Ahn",
"Sungjin",
""
]
] |
new_dataset
| 0.98887 |
2111.03017
|
Joshua Gardner
|
Josh Gardner, Ian Simon, Ethan Manilow, Curtis Hawthorne, Jesse Engel
|
MT3: Multi-Task Multitrack Music Transcription
|
ICLR 2022 camera-ready version
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic Music Transcription (AMT), inferring musical notes from raw audio,
is a challenging task at the core of music understanding. Unlike Automatic
Speech Recognition (ASR), which typically focuses on the words of a single
speaker, AMT often requires transcribing multiple instruments simultaneously,
all while preserving fine-scale pitch and timing information. Further, many AMT
datasets are "low-resource", as even expert musicians find music transcription
difficult and time-consuming. Thus, prior work has focused on task-specific
architectures, tailored to the individual instruments of each task. In this
work, motivated by the promising results of sequence-to-sequence transfer
learning for low-resource Natural Language Processing (NLP), we demonstrate
that a general-purpose Transformer model can perform multi-task AMT, jointly
transcribing arbitrary combinations of musical instruments across several
transcription datasets. We show this unified training framework achieves
high-quality transcription results across a range of datasets, dramatically
improving performance for low-resource instruments (such as guitar), while
preserving strong performance for abundant instruments (such as piano).
Finally, by expanding the scope of AMT, we expose the need for more consistent
evaluation metrics and better dataset alignment, and provide a strong baseline
for this new direction of multi-task AMT.
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 17:19:39 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Nov 2021 00:30:59 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Dec 2021 00:40:21 GMT"
},
{
"version": "v4",
"created": "Tue, 15 Mar 2022 17:13:12 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Gardner",
"Josh",
""
],
[
"Simon",
"Ian",
""
],
[
"Manilow",
"Ethan",
""
],
[
"Hawthorne",
"Curtis",
""
],
[
"Engel",
"Jesse",
""
]
] |
new_dataset
| 0.999644 |
2111.08349
|
Alistair Francis
|
Alistair Francis, John Mrziglod, Panagiotis Sidiropoulos, Jan-Peter
Muller
|
SEnSeI: A Deep Learning Module for Creating Sensor Independent Cloud
Masks
|
22 pages, 7 figures. This is an accepted version of work to be
published in the IEEE Transactions on Geoscience and Remote Sensing
| null |
10.1109/TGRS.2021.3128280
| null |
cs.CV eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a novel neural network architecture -- Spectral ENcoder for
SEnsor Independence (SEnSeI) -- by which several multispectral instruments,
each with different combinations of spectral bands, can be used to train a
generalised deep learning model. We focus on the problem of cloud masking,
using several pre-existing datasets, and a new, freely available dataset for
Sentinel-2. Our model is shown to achieve state-of-the-art performance on the
satellites it was trained on (Sentinel-2 and Landsat 8), and is able to
extrapolate to sensors it has not seen during training such as Landsat 7,
Per\'uSat-1, and Sentinel-3 SLSTR. Model performance is shown to improve when
multiple satellites are used in training, approaching or surpassing the
performance of specialised, single-sensor models. This work is motivated by the
fact that the remote sensing community has access to data taken with a huge
variety of sensors. This has inevitably led to labelling efforts being
undertaken separately for different sensors, which limits the performance of
deep learning models, given their need for huge training sets to perform
optimally. Sensor independence can enable deep learning models to utilise
multiple datasets for training simultaneously, boosting performance and making
them much more widely applicable. This may lead to deep learning approaches
being used more frequently for on-board applications and in ground segment data
processing, which generally require models to be ready at launch or soon
afterwards.
|
[
{
"version": "v1",
"created": "Tue, 16 Nov 2021 10:47:10 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Francis",
"Alistair",
""
],
[
"Mrziglod",
"John",
""
],
[
"Sidiropoulos",
"Panagiotis",
""
],
[
"Muller",
"Jan-Peter",
""
]
] |
new_dataset
| 0.999473 |
2111.10958
|
JongMok Kim
|
JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na,
Nojun Kwak
|
MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object
Detection
|
Accepted to CVPR2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Many recent semi-supervised learning (SSL) studies build teacher-student
architecture and train the student network by the generated supervisory signal
from the teacher. Data augmentation strategy plays a significant role in the
SSL framework since it is hard to create a weak-strong augmented input pair
without losing label information. Especially when extending SSL to
semi-supervised object detection (SSOD), many strong augmentation methodologies
related to image geometry and interpolation-regularization are hard to utilize
since they possibly hurt the location information of the bounding box in the
object detection task. To address this, we introduce a simple yet effective
data augmentation method, Mix/UnMix (MUM), which unmixes the feature tiles of
mixed image tiles in the SSOD framework. Our proposed method makes mixed input
image tiles and reconstructs them in the feature space. Thus, MUM can enjoy the
interpolation-regularization effect from non-interpolated pseudo-labels and
successfully generate a meaningful weak-strong pair. Furthermore, MUM can be
easily equipped on top of various SSOD methods. Extensive experiments on
MS-COCO and PASCAL VOC datasets demonstrate the superiority of MUM by
consistently improving the mAP performance over the baseline in all the tested
SSOD benchmark protocols.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 02:46:27 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 08:53:31 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Kim",
"JongMok",
""
],
[
"Jang",
"Jooyoung",
""
],
[
"Seo",
"Seunghyeon",
""
],
[
"Jeong",
"Jisoo",
""
],
[
"Na",
"Jongkeun",
""
],
[
"Kwak",
"Nojun",
""
]
] |
new_dataset
| 0.99081 |
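A minimal sketch of the mix-image-tiles / unmix-feature-tiles idea described in the MUM abstract above: spatial tiles are shuffled across the batch at the input, and the same per-tile permutation is inverted on the feature map. The grid size and tensor shapes are illustrative assumptions, not the paper's configuration.

```python
import torch

def mix_tiles(images, grid=2):
    """Shuffle each spatial tile across the batch dimension; return mixed images and the permutations."""
    b, _, h, w = images.shape
    th, tw = h // grid, w // grid
    mixed, perms = images.clone(), []
    for i in range(grid):
        for j in range(grid):
            perm = torch.randperm(b)
            perms.append(perm)
            mixed[:, :, i*th:(i+1)*th, j*tw:(j+1)*tw] = images[perm, :, i*th:(i+1)*th, j*tw:(j+1)*tw]
    return mixed, perms

def unmix_features(features, perms, grid=2):
    """Invert the per-tile batch permutation on a feature map of any spatial resolution."""
    b, _, h, w = features.shape
    th, tw = h // grid, w // grid
    unmixed, k = features.clone(), 0
    for i in range(grid):
        for j in range(grid):
            inverse = torch.argsort(perms[k]); k += 1
            unmixed[:, :, i*th:(i+1)*th, j*tw:(j+1)*tw] = features[inverse, :, i*th:(i+1)*th, j*tw:(j+1)*tw]
    return unmixed

images = torch.arange(4 * 3 * 8 * 8, dtype=torch.float32).reshape(4, 3, 8, 8)
mixed, perms = mix_tiles(images)
# If the backbone were the identity, unmixing would recover the original batch exactly.
assert torch.equal(unmix_features(mixed, perms), images)
```

In practice the unmixing is applied to backbone feature maps (not the images themselves), which is why it preserves pseudo-label geometry while still yielding an interpolation-like regularization effect.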
2111.15174
|
Qiang Li Capasso
|
Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming
Gong, Tongliang Liu
|
CRIS: CLIP-Driven Referring Image Segmentation
|
10 pages, 5 figures, Accepted by CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Referring image segmentation aims to segment a referent via a natural
linguistic expression. Due to the distinct data properties between text and
image, it is challenging for a network to well align text and pixel-level
features. Existing approaches use pretrained models to facilitate learning, yet
separately transfer the language/vision knowledge from pretrained models,
ignoring the multi-modal corresponding information. Inspired by the recent
advance in Contrastive Language-Image Pretraining (CLIP), in this paper, we
propose an end-to-end CLIP-Driven Referring Image Segmentation framework
(CRIS). To transfer the multi-modal knowledge effectively, CRIS resorts to
vision-language decoding and contrastive learning for achieving the
text-to-pixel alignment. More specifically, we design a vision-language decoder
to propagate fine-grained semantic information from textual representations to
each pixel-level activation, which promotes consistency between the two
modalities. In addition, we present text-to-pixel contrastive learning to
explicitly enforce the text feature to be similar to the related pixel-level
features and dissimilar to the irrelevant ones. The experimental results on three benchmark
datasets demonstrate that our proposed framework significantly outperforms the
state-of-the-art performance without any post-processing. The code will be
released.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 07:29:08 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 18:43:05 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Wang",
"Zhaoqing",
""
],
[
"Lu",
"Yu",
""
],
[
"Li",
"Qiang",
""
],
[
"Tao",
"Xunqiang",
""
],
[
"Guo",
"Yandong",
""
],
[
"Gong",
"Mingming",
""
],
[
"Liu",
"Tongliang",
""
]
] |
new_dataset
| 0.999615 |
2112.02932
|
Sajal Mukhopadhyay
|
Arghya Bandyopadhyay and Sajal Mukhopadhyay
|
Indian Kidney Exchange Program: A Game Theoretic Perspective
|
43 pages, 52 figures, 9 tables
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a way in which kidney exchange can be feasibly, economically and
efficiently implemented in the Indian medical space, named the Indian Kidney
Exchange Program (IKEP), along with India-specific influences on compatibility
and final outcomes. Kidney exchange is a boon for those suffering from renal
failure who do have a donor, but one with an incompatible kidney (a compatible
kidney is also encouraged, for better matches). In such situations, the
patient-donor pair is matched to another patient-donor pair that has the same
problem and is compatible with it. Hospitals put up their patient-donor data.
Using the biological data, compatibility scores (or weights) are generated and
preferences are formed accordingly. The India-specific influences on the
weights modify the generated compatibility scores and hence the preferences.
The pairs are then allocated using game-theoretic matching algorithms for
markets without money.
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 11:11:55 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 11:35:55 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Bandyopadhyay",
"Arghya",
""
],
[
"Mukhopadhyay",
"Sajal",
""
]
] |
new_dataset
| 0.998335 |
2201.01480
|
Gaotian Wang
|
Zhanchi Wang, Gaotian Wang, Xiaoping Chen, and Nikolaos M. Freris
|
Control of a Soft Robotic Arm Using a Piecewise Universal Joint Model
|
The paper will be merged into a new one
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The 'infinite' passive degrees of freedom of soft robotic arms render their
control especially challenging. In this paper, we leverage a previously
developed model, which draws an equivalence between the soft arm and a series of
universal joints, to design two closed-loop controllers: a configuration space
controller for trajectory tracking and a task space controller for position
control of the end effector. Extensive experiments and simulations on a
four-segment soft arm attest to substantial improvement in terms of: a)
superior tracking accuracy of the configuration space controller and b) reduced
settling time and steady-state error of the task space controller. The task
space controller is also verified to be effective in the presence of
interactions between the soft arm and the environment.
|
[
{
"version": "v1",
"created": "Wed, 5 Jan 2022 06:57:04 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 13:44:58 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Wang",
"Zhanchi",
""
],
[
"Wang",
"Gaotian",
""
],
[
"Chen",
"Xiaoping",
""
],
[
"Freris",
"Nikolaos M.",
""
]
] |
new_dataset
| 0.984312 |
2202.06205
|
Toby Jia-Jun Li
|
Zheng Zhang, Ying Xu, Yanhao Wang, Bingsheng Yao, Daniel Ritchie,
Tongshuang Wu, Mo Yu, Dakuo Wang, Toby Jia-Jun Li
|
StoryBuddy: A Human-AI Collaborative Chatbot for Parent-Child
Interactive Storytelling with Flexible Parental Involvement
|
Published at CHI 2022
| null |
10.1145/3491102.3517479
| null |
cs.HC cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Despite its benefits for children's skill development and parent-child
bonding, many parents do not often engage in interactive storytelling by having
story-related dialogues with their child due to limited availability or
challenges in coming up with appropriate questions. While recent advances made
AI generation of questions from stories possible, the fully-automated approach
excludes parent involvement, disregards educational goals, and underoptimizes
for child engagement. Informed by need-finding interviews and participatory
design (PD) results, we developed StoryBuddy, an AI-enabled system for parents
to create interactive storytelling experiences. StoryBuddy's design highlighted
the need to accommodate dynamic user needs, balancing the desire for parent
involvement and parent-child bonding against the goal of minimizing parent
intervention when parents are busy. The PD revealed varied assessment and educational goals
of parents, which StoryBuddy addressed by supporting configuring question types
and tracking child progress. A user study validated StoryBuddy's usability and
suggested design insights for future parent-AI collaboration systems.
|
[
{
"version": "v1",
"created": "Sun, 13 Feb 2022 04:53:28 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 18:36:00 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Zhang",
"Zheng",
""
],
[
"Xu",
"Ying",
""
],
[
"Wang",
"Yanhao",
""
],
[
"Yao",
"Bingsheng",
""
],
[
"Ritchie",
"Daniel",
""
],
[
"Wu",
"Tongshuang",
""
],
[
"Yu",
"Mo",
""
],
[
"Wang",
"Dakuo",
""
],
[
"Li",
"Toby Jia-Jun",
""
]
] |
new_dataset
| 0.99672 |
2203.06728
|
Mobashir Sadat
|
Mobashir Sadat and Cornelia Caragea
|
SciNLI: A Corpus for Natural Language Inference on Scientific Text
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing Natural Language Inference (NLI) datasets, while being instrumental
in the advancement of Natural Language Understanding (NLU) research, are not
related to scientific text. In this paper, we introduce SciNLI, a large dataset
for NLI that captures the formality in scientific text and contains 107,412
sentence pairs extracted from scholarly papers on NLP and computational
linguistics. Given that the text used in scientific literature differs vastly
from the text used in everyday language both in terms of vocabulary and
sentence structure, our dataset is well suited to serve as a benchmark for the
evaluation of scientific NLU models. Our experiments show that SciNLI is harder
to classify than the existing NLI datasets. Our best performing model with
XLNet achieves a Macro F1 score of only 78.18% and an accuracy of 78.23%,
showing that there is substantial room for improvement.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 18:23:37 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 02:27:08 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Sadat",
"Mobashir",
""
],
[
"Caragea",
"Cornelia",
""
]
] |
new_dataset
| 0.999817 |
2203.06947
|
Zhangxuan Gu
|
Zhangxuan Gu, Changhua Meng, Ke Wang, Jun Lan, Weiqiang Wang, Ming Gu,
Liqing Zhang
|
XYLayoutLM: Towards Layout-Aware Multimodal Networks For Visually-Rich
Document Understanding
|
Accepted by CVPR2022
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, various multimodal networks for Visually-Rich Document
Understanding (VRDU) have been proposed, showing that transformers are improved
by integrating visual and layout information with the text embeddings. However,
most existing approaches utilize position embeddings to incorporate the
sequence information, neglecting the noisy, improper reading orders obtained by
OCR tools. In this paper, we propose a robust layout-aware multimodal network
named XYLayoutLM to capture and leverage rich layout information from proper
reading orders produced by our Augmented XY Cut. Moreover, a Dilated
Conditional Position Encoding module is proposed to deal with the input
sequence of variable lengths, and it additionally extracts local layout
information from both textual and visual modalities while generating position
embeddings. Experiment results show that our XYLayoutLM achieves competitive
results on document understanding tasks.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 09:19:12 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 14:51:16 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Gu",
"Zhangxuan",
""
],
[
"Meng",
"Changhua",
""
],
[
"Wang",
"Ke",
""
],
[
"Lan",
"Jun",
""
],
[
"Wang",
"Weiqiang",
""
],
[
"Gu",
"Ming",
""
],
[
"Zhang",
"Liqing",
""
]
] |
new_dataset
| 0.984101 |
2203.07444
|
Sandor P. Fekete
|
S\'andor P. Fekete and Phillip Keldenich and Dominik Krupke and Stefan
Schirra
|
Minimum Partition into Plane Subgraphs: The CG:SHOP Challenge 2022
|
13 pages, 5 figures, 1 table
| null | null | null |
cs.CG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give an overview of the 2022 Computational Geometry Challenge targeting
the problem Minimum Partition into Plane Subsets, which consists of
partitioning a given set of line segments into a minimum number of non-crossing
subsets.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 19:02:24 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Fekete",
"Sándor P.",
""
],
[
"Keldenich",
"Phillip",
""
],
[
"Krupke",
"Dominik",
""
],
[
"Schirra",
"Stefan",
""
]
] |
new_dataset
| 0.998239 |
2203.07454
|
Erik Johnson
|
Erik C. Johnson, Eric Q. Nguyen, Blake Schreurs, Chigozie S. Ewulum,
Chace Ashcraft, Neil M. Fendley, Megan M. Baker, Alexander New, Gautam K.
Vallabha
|
L2Explorer: A Lifelong Reinforcement Learning Assessment Environment
|
10 Pages submitted to AAAI AI for Open Worlds Symposium 2022
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Despite groundbreaking progress in reinforcement learning for robotics,
gameplay, and other complex domains, major challenges remain in applying
reinforcement learning to the evolving, open-world problems often found in
critical application spaces. Reinforcement learning solutions tend to
generalize poorly when exposed to new tasks outside of the data distribution
they are trained on, prompting an interest in continual learning algorithms. In
tandem with research on continual learning algorithms, there is a need for
challenge environments, carefully designed experiments, and metrics to assess
research progress. We address the latter need by introducing a framework for
continual reinforcement-learning development and assessment using Lifelong
Learning Explorer (L2Explorer), a new, Unity-based, first-person 3D exploration
environment that can be continuously reconfigured to generate a range of tasks
and task variants structured into complex and evolving evaluation curricula. In
contrast to procedurally generated worlds with randomized components, we have
developed a systematic approach to defining curricula in response to controlled
changes with accompanying metrics to assess transfer, performance recovery, and
data efficiency. Taken together, the L2Explorer environment and evaluation
approach provides a framework for developing future evaluation methodologies in
open-world settings and rigorously evaluating approaches to lifelong learning.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 19:20:26 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Johnson",
"Erik C.",
""
],
[
"Nguyen",
"Eric Q.",
""
],
[
"Schreurs",
"Blake",
""
],
[
"Ewulum",
"Chigozie S.",
""
],
[
"Ashcraft",
"Chace",
""
],
[
"Fendley",
"Neil M.",
""
],
[
"Baker",
"Megan M.",
""
],
[
"New",
"Alexander",
""
],
[
"Vallabha",
"Gautam K.",
""
]
] |
new_dataset
| 0.998164 |
2203.07474
|
Saavan Patel
|
Jorge Gomez, Saavan Patel, Syed Shakib Sarwar, Ziyun Li, Raffaele
Capoccia, Zhao Wang, Reid Pinkham, Andrew Berkovich, Tsung-Hsun Tsai, Barbara
De Salvo and Chiao Liu
|
Distributed On-Sensor Compute System for AR/VR Devices: A
Semi-Analytical Simulation Framework for Power Estimation
|
6 pages, 5 figures, TinyML Research Symposium
| null | null | null |
cs.AR cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Augmented Reality/Virtual Reality (AR/VR) glasses are widely foreseen as the
next generation computing platform. AR/VR glasses are a complex "system of
systems" which must satisfy stringent form factor, computing-, power- and
thermal- requirements. In this paper, we will show that a novel distributed
on-sensor compute architecture, coupled with new semiconductor technologies
(such as dense 3D-IC interconnects and Spin-Transfer Torque Magneto Random
Access Memory, STT-MRAM) and, most importantly, a full hardware-software
co-optimization are the solutions to achieve attractive and socially acceptable
AR/VR glasses. To this end, we developed a semi-analytical simulation framework
to estimate the power consumption of novel AR/VR distributed on-sensor
computing architectures. The model allows the optimization of the main
technological features of the system modules, as well as the computer-vision
algorithm partition strategy across the distributed compute architecture. We
show that, in the case of the compute-intensive machine learning based Hand
Tracking algorithm, the distributed on-sensor compute architecture can reduce
the system power consumption compared to a centralized system, with the
additional benefits in terms of latency and privacy.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 20:18:24 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Gomez",
"Jorge",
""
],
[
"Patel",
"Saavan",
""
],
[
"Sarwar",
"Syed Shakib",
""
],
[
"Li",
"Ziyun",
""
],
[
"Capoccia",
"Raffaele",
""
],
[
"Wang",
"Zhao",
""
],
[
"Pinkham",
"Reid",
""
],
[
"Berkovich",
"Andrew",
""
],
[
"Tsai",
"Tsung-Hsun",
""
],
[
"De Salvo",
"Barbara",
""
],
[
"Liu",
"Chiao",
""
]
] |
new_dataset
| 0.996205 |
2203.07543
|
Cristina Gena
|
Cristina Gena, Claudio Mattutino, Stefania Brighenti, Andrea Meirone,
Francesco Petriglia, Loredana Mazzotta, Federica Liscio, Matteo Nazzario,
Valeria Ricci, Camilla Quarato, Cesare Pecone, Giuseppe Piccinni
|
Sugar, Salt & Pepper -- Humanoid robotics for autism
|
IUI Workshops 2021
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces an experimental trial that will take place from
February to June 2021, and which will see the use of the Pepper robot in a
therapeutic laboratory on autonomies that will promote functional acquisitions
in children diagnosed with high functioning autism/Asperger's syndrome.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 23:04:25 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Gena",
"Cristina",
""
],
[
"Mattutino",
"Claudio",
""
],
[
"Brighenti",
"Stefania",
""
],
[
"Meirone",
"Andrea",
""
],
[
"Petriglia",
"Francesco",
""
],
[
"Mazzotta",
"Loredana",
""
],
[
"Liscio",
"Federica",
""
],
[
"Nazzario",
"Matteo",
""
],
[
"Ricci",
"Valeria",
""
],
[
"Quarato",
"Camilla",
""
],
[
"Pecone",
"Cesare",
""
],
[
"Piccinni",
"Giuseppe",
""
]
] |
new_dataset
| 0.991609 |
2203.07567
|
Justin Chan
|
Justin Chan, Ananditha Raghunath, Kelly E. Michaelsen, and Shyamnath
Gollakota
|
Testing a Drop of Liquid Using Smartphone LiDAR
|
27 pages, 15 figures, accepted at IMWUT
| null | null | null |
cs.CY physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first system to determine fluid properties using the LiDAR
sensors present on modern smartphones. Traditional methods of measuring
properties like viscosity require expensive laboratory equipment or a
relatively large amount of fluid. In contrast, our smartphone-based method is
accessible, contactless and works with just a single drop of liquid. Our design
works by targeting a coherent LiDAR beam from the phone onto the liquid. Using
the phone's camera, we capture the characteristic laser speckle pattern that is
formed by the interference of light reflecting from light-scattering particles.
By correlating the fluctuations in speckle intensity over time, we can
characterize the Brownian motion within the liquid which is correlated with its
viscosity. The speckle pattern can be captured on a range of phone cameras and
does not require external magnifiers. Our results show that we can distinguish
between different fat contents as well as identify adulterated milk. Further,
algorithms can classify between ten different liquids using the smartphone
LiDAR speckle patterns. Finally, we conducted a clinical study with whole blood
samples across 30 patients showing that our approach can distinguish between
coagulated and uncoagulated blood using a single drop of blood.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 00:12:39 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Chan",
"Justin",
""
],
[
"Raghunath",
"Ananditha",
""
],
[
"Michaelsen",
"Kelly E.",
""
],
[
"Gollakota",
"Shyamnath",
""
]
] |
new_dataset
| 0.964793 |
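The core signal-processing step described in the abstract above (correlating speckle-intensity fluctuations over time, as in dynamic light scattering) can be sketched with NumPy as follows. The frame rate, ROI handling, and the simple 1/e decorrelation-time readout are assumptions for illustration, not the paper's calibration procedure.

```python
import numpy as np

def intensity_autocorrelation(frames, max_lag=30):
    """g2(tau) for a stack of speckle frames of shape (T, H, W), averaged over pixels."""
    frames = frames.astype(np.float64)
    mean_sq = frames.mean() ** 2
    g2 = []
    for lag in range(1, max_lag + 1):
        g2.append((frames[:-lag] * frames[lag:]).mean() / mean_sq)
    return np.array(g2)

def decorrelation_time(g2, frame_interval_s):
    """Lag (in seconds) at which the correlation contrast decays to 1/e of its initial value."""
    contrast = g2 - 1.0
    contrast = contrast / contrast[0]
    below = np.nonzero(contrast < 1.0 / np.e)[0]
    lag = below[0] + 1 if below.size else len(g2)
    return lag * frame_interval_s

# Synthetic example: faster intensity fluctuations (as in a lower-viscosity fluid) decorrelate sooner.
rng = np.random.default_rng(1)
t = np.arange(200)[:, None, None]
fast = 1 + 0.5 * np.sin(0.8 * t + rng.uniform(0, 2 * np.pi, (1, 16, 16)))
slow = 1 + 0.5 * np.sin(0.1 * t + rng.uniform(0, 2 * np.pi, (1, 16, 16)))
print(decorrelation_time(intensity_autocorrelation(fast), 1 / 30),
      decorrelation_time(intensity_autocorrelation(slow), 1 / 30))
```

The decorrelation time extracted this way is the quantity that, after calibration, can be related to Brownian motion and hence viscosity.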
2203.07580
|
Roelien Christien Timmer
|
Roelien C. Timmer and David Liebowitz and Surya Nepal and Salil
Kanhere
|
TSM: Measuring the Enticement of Honeyfiles with Natural Language
Processing
| null | null | null | null |
cs.CL cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Honeyfile deployment is a useful breach detection method in cyber deception
that can also inform defenders about the intent and interests of intruders and
malicious insiders. A key property of a honeyfile, enticement, is the extent to
which the file can attract an intruder to interact with it. We introduce a
novel metric, Topic Semantic Matching (TSM), which uses topic modelling to
represent files in the repository and semantic matching in an embedding vector
space to compare honeyfile text and topic words robustly. We also present a
honeyfile corpus created with different Natural Language Processing (NLP)
methods. Experiments show that TSM is effective in inter-corpus comparisons and
is a promising tool to measure the enticement of honeyfiles. TSM is the first
measure to use NLP techniques to quantify the enticement of honeyfile content;
it compares the essential topical content of local contexts to honeyfiles and
is robust to paraphrasing.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 01:07:51 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Timmer",
"Roelien C.",
""
],
[
"Liebowitz",
"David",
""
],
[
"Nepal",
"Surya",
""
],
[
"Kanhere",
"Salil",
""
]
] |
new_dataset
| 0.991585 |
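A rough sketch of the two ingredients named in the TSM abstract above: topic modelling over the repository and cosine matching in an embedding space. The LDA configuration, the toy documents, and the word-vector lookup (a hash-seeded placeholder here) are all assumptions; the paper's TSM metric is more involved.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

repository_docs = ["quarterly revenue forecast and budget review",
                   "network firewall configuration and access policy",
                   "employee onboarding checklist and benefits summary"]
honeyfile_text = "draft budget and revenue projections for next quarter"

# 1) Topic modelling over the real repository.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(repository_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
vocab = np.array(vectorizer.get_feature_names_out())
topic_words = [vocab[np.argsort(row)[::-1][:5]] for row in lda.components_]  # top words per topic

# 2) Semantic matching of honeyfile text to topic words in an embedding space.
# embed() is a placeholder: any real word-embedding lookup (word2vec, GloVe, ...) could be swapped in.
def embed(word, dim=16):
    rng = np.random.default_rng(abs(hash(word)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

honeyfile_tokens = honeyfile_text.split()
scores = []
for words in topic_words:
    sims = [max(cosine(embed(w), embed(t)) for t in honeyfile_tokens) for w in words]
    scores.append(np.mean(sims))  # how well the honeyfile covers this topic
print("per-topic match scores:", scores)
```

With a real embedding model in place of the placeholder, the per-topic scores indicate how well a candidate honeyfile matches the topical content of the surrounding repository.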
2203.07603
|
Chadni Islam
|
Chadni Islam, M. Ali Babar, Roland Croft and Helge Janicke
|
SmartValidator: A Framework for Automatic Identification and
Classification of Cyber Threat Data
| null | null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
A wide variety of Cyber Threat Information (CTI) is used by Security
Operation Centres (SOCs) to perform validation of security incidents and
alerts. Security experts manually define different types of rules and scripts
based on CTI to perform validation tasks. These rules and scripts need to be
updated continuously due to evolving threats, changing SOCs' requirements and
dynamic nature of CTI. The manual process of updating rules and scripts delays
the response to attacks. To reduce the burden of human experts and accelerate
response, we propose a novel Artificial Intelligence (AI) based framework,
SmartValidator. SmartValidator leverages Machine Learning (ML) techniques to
enable automated validation of alerts. It consists of three layers to perform
the tasks of data collection, model building and alert validation. It projects
the validation task as a classification problem. Instead of building and saving
models for all possible requirements, we propose to automatically construct the
validation models based on SOC's requirements and CTI. We built a Proof of
Concept (PoC) system with eight ML algorithms, two feature engineering
techniques and 18 requirements to investigate the effectiveness and efficiency
of SmartValidator. The evaluation results showed that when prediction models
were built automatically for classifying cyber threat data, the F1-scores of
75\% of the models were above 0.8, which indicates adequate performance of the
PoC for use in a real-world organization. The results further showed that
dynamic construction of prediction models required 99\% less models to be built
than pre-building models for all possible requirements. The framework can be
followed by various industries to accelerate and automate the validation of
alerts and incidents based on their CTI and SOC's preferences.
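
As a rough illustration of treating alert validation as a classification problem, the snippet below trains a generic text classifier with scikit-learn; the alerts, labels, and feature choice are invented for illustration and do not reflect the SmartValidator PoC.

```python
# Minimal sketch: alert validation framed as text classification (illustrative
# data and features, not the SmartValidator pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

alerts = [
    "outbound connection to known malicious domain",
    "repeated failed logins from a single internal host",
    "scheduled backup job completed successfully",
    "antivirus signature update finished",
]
labels = ["true_positive", "true_positive", "false_positive", "false_positive"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(alerts, labels)
print(model.predict(["connection attempt to a blacklisted domain"]))
```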
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 02:35:14 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Islam",
"Chadni",
""
],
[
"Babar",
"M. Ali",
""
],
[
"Croft",
"Roland",
""
],
[
"Janicke",
"Helge",
""
]
] |
new_dataset
| 0.98243 |
2203.07613
|
Carlos E. Jimenez
|
Carlos E. Jimenez, Olga Russakovsky, Karthik Narasimhan
|
CARETS: A Consistency And Robustness Evaluative Test Suite for VQA
|
ACL 2022
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce CARETS, a systematic test suite to measure consistency and
robustness of modern VQA models through a series of six fine-grained capability
tests. In contrast to existing VQA test sets, CARETS features balanced question
generation to create pairs of instances to test models, with each pair focusing
on a specific capability such as rephrasing, logical symmetry or image
obfuscation. We evaluate six modern VQA systems on CARETS and identify several
actionable weaknesses in model comprehension, especially with concepts such as
negation, disjunction, or hypernym invariance. Interestingly, even the most
sophisticated models are sensitive to aspects such as swapping the order of
terms in a conjunction or varying the number of answer choices mentioned in the
question. We release CARETS to be used as an extensible tool for evaluating
multi-modal model robustness.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 03:01:03 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Jimenez",
"Carlos E.",
""
],
[
"Russakovsky",
"Olga",
""
],
[
"Narasimhan",
"Karthik",
""
]
] |
new_dataset
| 0.998343 |
2203.07705
|
Yangming Shi
|
Yangming Shi, Haisong Ding, Kai Chen, Qiang Huo
|
APRNet: Attention-based Pixel-wise Rendering Network for Photo-Realistic
Text Image Generation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Style-guided text image generation tries to synthesize a text image by
imitating a reference image's appearance while keeping the text content unaltered.
The text image appearance includes many aspects. In this paper, we focus on
transferring style image's background and foreground color patterns to the
content image to generate photo-realistic text image. To achieve this goal, we
propose 1) a content-style cross attention based pixel sampling approach to
roughly mimic the style text image's background; 2) a pixel-wise style
modulation technique to transfer varying color patterns of the style image to
the content image spatial-adaptively; 3) a cross attention based multi-scale
style fusion approach to solving text foreground misalignment issue between
style and content images; 4) an image patch shuffling strategy to create style,
content and ground truth image tuples for training. Experimental results on
Chinese handwriting text image synthesis with SCUT-HCCDoc and CASIA-OLHWDB
datasets demonstrate that the proposed method can improve the quality of
synthetic text images and make them more photo-realistic.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 07:48:34 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Shi",
"Yangming",
""
],
[
"Ding",
"Haisong",
""
],
[
"Chen",
"Kai",
""
],
[
"Huo",
"Qiang",
""
]
] |
new_dataset
| 0.992996 |
2203.07722
|
Shuai Lu
|
Shuai Lu, Nan Duan, Hojae Han, Daya Guo, Seung-won Hwang, Alexey
Svyatkovskiy
|
ReACC: A Retrieval-Augmented Code Completion Framework
|
Published in ACL 2022
| null | null | null |
cs.SE cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code completion, which aims to predict the following code token(s) according
to the code context, can improve the productivity of software development.
Recent work has proved that statistical language modeling with transformers can
greatly improve the performance in the code completion task via learning from
large-scale source code datasets. However, current approaches focus only on
code context within the file or project, i.e. internal context. In contrast, we
utilize "external" context, inspired by the human behavior of copying from
related code snippets when writing code. Specifically, we propose a
retrieval-augmented code completion framework, leveraging both lexical copying
and referring to code with similar semantics by retrieval. We adopt a
stage-wise training approach that combines a source code retriever and an
auto-regressive language model for programming language. We evaluate our
approach in the code completion task in Python and Java programming languages,
achieving a state-of-the-art performance on CodeXGLUE benchmark.
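
The retrieval-then-generate idea can be sketched as below; the Jaccard retriever and the `generate` callable are stand-ins for the paper's retriever and auto-regressive language model, chosen purely for illustration.

```python
# Conceptual sketch of retrieval-augmented completion: retrieve similar code,
# prepend it to the context, then let a language model complete the prompt.
def jaccard(a, b):
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def retrieve(context, database, k=1):
    return sorted(database, key=lambda s: jaccard(context, s), reverse=True)[:k]

def complete(context, database, generate):
    external = "\n".join(retrieve(context, database))   # "external" context
    return generate(external + "\n" + context)          # internal context follows

# Toy usage with a dummy model that just reports the prompt length.
db = ["def add(a, b):\n    return a + b", "def mul(a, b):\n    return a * b"]
print(complete("def add(x, y):", db, generate=lambda p: f"<{len(p)} chars prompted>"))
```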
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 08:25:08 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Lu",
"Shuai",
""
],
[
"Duan",
"Nan",
""
],
[
"Han",
"Hojae",
""
],
[
"Guo",
"Daya",
""
],
[
"Hwang",
"Seung-won",
""
],
[
"Svyatkovskiy",
"Alexey",
""
]
] |
new_dataset
| 0.990264 |
2203.07742
|
Nguyen Phi
|
Phi Nguyen Van, Tung Cao Hoang, Dung Nguyen Manh, Quan Nguyen Minh,
Long Tran Quoc
|
ViWOZ: A Multi-Domain Task-Oriented Dialogue Systems Dataset For
Low-resource Language
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Most of the current task-oriented dialogue systems (ToD), despite having
interesting results, are designed for a handful of languages like Chinese and
English. Therefore, their performance in low-resource languages is still a
significant problem due to the absence of a standard dataset and evaluation
policy. To address this problem, we proposed ViWOZ, a fully-annotated
Vietnamese task-oriented dialogue dataset. ViWOZ is the first multi-turn,
multi-domain task-oriented dataset in Vietnamese, a low-resource language.
The dataset consists of a total of 5,000 dialogues, including 60,946 fully
annotated utterances. Furthermore, we provide a comprehensive benchmark of both
modular and end-to-end models in low-resource language scenarios. With those
characteristics, the ViWOZ dataset enables future studies on creating a
multilingual task-oriented dialogue system.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 09:22:04 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Van",
"Phi Nguyen",
""
],
[
"Hoang",
"Tung Cao",
""
],
[
"Manh",
"Dung Nguyen",
""
],
[
"Minh",
"Quan Nguyen",
""
],
[
"Quoc",
"Long Tran",
""
]
] |
new_dataset
| 0.999867 |
2203.07837
|
JongMok Kim
|
JongMok Kim, Hwijun Lee, Jaeseung Lim, Jongkeun Na, Nojun Kwak, Jin
Young Choi
|
Pose-MUM : Reinforcing Key Points Relationship for Semi-Supervised Human
Pose Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A well-designed strong-weak augmentation strategy and the stable teacher to
generate reliable pseudo labels are essential in the teacher-student framework
of semi-supervised learning (SSL). With these in mind, to suit the
semi-supervised human pose estimation (SSHPE) task, we propose a novel approach
referred to as Pose-MUM that modifies Mix/UnMix (MUM) augmentation. Like MUM in
the dense prediction task, the proposed Pose-MUM makes strong-weak augmentation
for pose estimation and leads the network to learn the relationships among
human key points much better than the conventional methods by adding the
mixing process in intermediate layers in a stochastic manner. In addition, we
employ the exponential-moving-average-normalization (EMAN) teacher, which is
stable and well-suited to the SSL framework and furthermore boosts the
performance. Extensive experiments on MS-COCO dataset show the superiority of
our proposed method by consistently improving the performance over the previous
methods following SSHPE benchmark.
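
For reference, an exponential-moving-average teacher update can be written in a few lines of PyTorch; the momentum value and the tiny models below are placeholders, and this is a simplified treatment rather than the exact EMAN formulation.

```python
# Minimal sketch of an EMA teacher update in PyTorch (simplified, illustrative).
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
    for b_t, b_s in zip(teacher.buffers(), student.buffers()):
        if b_t.dtype.is_floating_point:
            b_t.mul_(momentum).add_(b_s, alpha=1.0 - momentum)  # e.g. BN statistics
        else:
            b_t.copy_(b_s)

student = torch.nn.Linear(4, 2)
teacher = torch.nn.Linear(4, 2)
teacher.load_state_dict(student.state_dict())
ema_update(teacher, student)
```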
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 12:48:40 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Kim",
"JongMok",
""
],
[
"Lee",
"Hwijun",
""
],
[
"Lim",
"Jaeseung",
""
],
[
"Na",
"Jongkeun",
""
],
[
"Kwak",
"Nojun",
""
],
[
"Choi",
"Jin Young",
""
]
] |
new_dataset
| 0.991492 |
2203.07890
|
Kohei Uehara
|
Kohei Uehara, Tatsuya Harada
|
K-VQG: Knowledge-aware Visual Question Generation for Common-sense
Acquisition
| null | null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Visual Question Generation (VQG) is a task to generate questions from images.
When humans ask questions about an image, their goal is often to acquire some
new knowledge. However, existing studies on VQG have mainly addressed question
generation from answers or question categories, overlooking the objectives of
knowledge acquisition. To introduce a knowledge acquisition perspective into
VQG, we constructed a novel knowledge-aware VQG dataset called K-VQG. This is
the first large, human-annotated dataset in which questions regarding images
are tied to structured knowledge. We also developed a new VQG model that can
encode and use knowledge as the target for a question. The experiment results
show that our model outperforms existing models on the K-VQG dataset.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 13:38:10 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Uehara",
"Kohei",
""
],
[
"Harada",
"Tatsuya",
""
]
] |
new_dataset
| 0.997963 |
2203.07902
|
Antoine Brochard
|
Antoine Brochard, Sixin Zhang, St\'ephane Mallat
|
Generalized Rectifier Wavelet Covariance Models For Texture Synthesis
|
To be published as a conference paper at the International Conference
on Learning Representations (ICLR) 2022
| null | null | null |
cs.CV cs.LG eess.IV eess.SP stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
State-of-the-art maximum entropy models for texture synthesis are built from
statistics relying on image representations defined by convolutional neural
networks (CNN). Such representations capture rich structures in texture images,
outperforming wavelet-based representations in this regard. However, in contrast
to neural networks, wavelets offer meaningful representations, as they are
known to detect structures at multiple scales (e.g. edges) in images. In this
work, we propose a family of statistics built upon non-linear wavelet based
representations, that can be viewed as a particular instance of a one-layer
CNN, using a generalized rectifier non-linearity. These statistics
significantly improve the visual quality of previous classical wavelet-based
models, and allow one to produce syntheses of similar quality to
state-of-the-art models, on both gray-scale and color textures.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 17:07:40 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Brochard",
"Antoine",
""
],
[
"Zhang",
"Sixin",
""
],
[
"Mallat",
"Stéphane",
""
]
] |
new_dataset
| 0.979042 |
2203.07948
|
Xunzhao Yin
|
Xunzhao Yin, Franz M\"uller, Qingrong Huang, Chao Li, Mohsen Imani,
Zeyu Yang, Jiahao Cai, Maximilian Lederer, Ricardo Olivo, Nellie Laleni, Shan
Deng, Zijian Zhao, Cheng Zhuo, Thomas K\"ampfe, Kai Ni
|
An Ultra-Compact Single FeFET Binary and Multi-Bit Associative Search
Engine
|
20 pages, 14 figures
| null | null | null |
cs.ET eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content addressable memory (CAM) is widely used in associative search tasks
for its highly parallel pattern matching capability. To accommodate the
increasingly complex and data-intensive pattern matching tasks, it is critical
to keep improving the CAM density to enhance the performance and area
efficiency. In this work, we demonstrate: i) a novel ultra-compact 1FeFET CAM
design that enables parallel associative search and in-memory hamming distance
calculation; ii) a multi-bit CAM for exact search using the same CAM cell; iii)
compact device designs that integrate the series resistor current limiter into
the intrinsic FeFET structure to turn the 1FeFET1R into an effective 1FeFET
cell; iv) a successful 2-step search operation and a sufficient sensing margin
of the proposed binary and multi-bit 1FeFET1R CAM array with sizes of practical
interests in both experiments and simulations, given the existing unoptimized
FeFET device variation; v) 89.9x speedup and 66.5x energy efficiency
improvement over the state-of-the-art alignment tools on GPU in accelerating
genome pattern matching applications through the hyperdimensional computing
paradigm.
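
As a software analogue of what the CAM computes in hardware, the snippet below performs an associative search by Hamming distance over a handful of made-up bit patterns; it says nothing about the FeFET circuit itself.

```python
# Associative search by Hamming distance (software analogue of the CAM lookup).
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

stored = {"entry0": "101100", "entry1": "111111", "entry2": "000110"}
query = "101110"
best = min(stored, key=lambda k: hamming(stored[k], query))
print(best, hamming(stored[best], query))
```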
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 14:29:28 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Yin",
"Xunzhao",
""
],
[
"Müller",
"Franz",
""
],
[
"Huang",
"Qingrong",
""
],
[
"Li",
"Chao",
""
],
[
"Imani",
"Mohsen",
""
],
[
"Yang",
"Zeyu",
""
],
[
"Cai",
"Jiahao",
""
],
[
"Lederer",
"Maximilian",
""
],
[
"Olivo",
"Ricardo",
""
],
[
"Laleni",
"Nellie",
""
],
[
"Deng",
"Shan",
""
],
[
"Zhao",
"Zijian",
""
],
[
"Zhuo",
"Cheng",
""
],
[
"Kämpfe",
"Thomas",
""
],
[
"Ni",
"Kai",
""
]
] |
new_dataset
| 0.998574 |
2203.07969
|
Mohamed Nabeel
|
Udesh Kumarasinghe, Fatih Deniz, Mohamed Nabeel
|
PDNS-Net: A Large Heterogeneous Graph Benchmark Dataset of Network
Resolutions for Graph Learning
|
Workshop on Graph Learning Benchmark 2022
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In order to advance the state of the art in graph learning algorithms, it is
necessary to construct large real-world datasets. While there are many
benchmark datasets for homogeneous graphs, only a few are available for
heterogeneous graphs. Furthermore, the latter graphs are small in size
rendering them insufficient to understand how graph learning algorithms perform
in terms of classification metrics and computational resource utilization. We
introduce PDNS-Net, the largest public heterogeneous graph dataset containing
447K nodes and 897K edges for the malicious domain classification task.
Compared to the popular heterogeneous datasets IMDB and DBLP, PDNS-Net is 38
and 17 times bigger respectively. We provide a detailed analysis of PDNS-Net
including the data collection methodology, heterogeneous graph construction,
descriptive statistics and preliminary graph classification performance. The
dataset is publicly available at https://github.com/qcri/PDNS-Net. Our
preliminary evaluation of both popular homogeneous and heterogeneous graph
neural networks on PDNS-Net reveals that further research is required to
improve the performance of these models on large heterogeneous graphs.
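
For a feel of the data model, the following builds a tiny heterogeneous graph with networkx using typed nodes and edges; the domains, IP, and labels are fabricated and unrelated to the released dataset.

```python
# Toy heterogeneous graph: typed nodes (domain, ip) and typed edges.
import networkx as nx

g = nx.Graph()
g.add_node("malicious-example.test", node_type="domain", label="malicious")
g.add_node("benign-example.test", node_type="domain", label="benign")
g.add_node("192.0.2.1", node_type="ip")
g.add_edge("malicious-example.test", "192.0.2.1", edge_type="resolves_to")
g.add_edge("benign-example.test", "192.0.2.1", edge_type="resolves_to")
print(list(g.nodes(data=True)))
```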
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 14:57:20 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Kumarasinghe",
"Udesh",
""
],
[
"Deniz",
"Fatih",
""
],
[
"Nabeel",
"Mohamed",
""
]
] |
new_dataset
| 0.99917 |
2203.07973
|
Luca Ciampi
|
Donato Cafarelli and Luca Ciampi and Lucia Vadicamo and Claudio
Gennaro and Andrea Berton and Marco Paterni and Chiara Benvenuti and Mirko
Passera and Fabrizio Falchi
|
MOBDrone: a Drone Video Dataset for Man OverBoard Rescue
|
Accepted at ICIAP 2021
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Modern Unmanned Aerial Vehicles (UAV) equipped with cameras can play an
essential role in speeding up the identification and rescue of people who have
fallen overboard, i.e., man overboard (MOB). To this end, Artificial
Intelligence techniques can be leveraged for the automatic understanding of
visual data acquired from drones. However, detecting people at sea in aerial
imagery is challenging primarily due to the lack of specialized annotated
datasets for training and testing detectors for this task. To fill this gap, we
introduce and publicly release the MOBDrone benchmark, a collection of more
than 125K drone-view images in a marine environment under several conditions,
such as different altitudes, camera shooting angles, and illumination. We
manually annotated more than 180K objects, of which about 113K are people overboard,
precisely localizing them with bounding boxes. Moreover, we conduct a thorough
performance analysis of several state-of-the-art object detectors on the
MOBDrone data, serving as baselines for further research.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 15:02:23 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Cafarelli",
"Donato",
""
],
[
"Ciampi",
"Luca",
""
],
[
"Vadicamo",
"Lucia",
""
],
[
"Gennaro",
"Claudio",
""
],
[
"Berton",
"Andrea",
""
],
[
"Paterni",
"Marco",
""
],
[
"Benvenuti",
"Chiara",
""
],
[
"Passera",
"Mirko",
""
],
[
"Falchi",
"Fabrizio",
""
]
] |
new_dataset
| 0.99982 |
2203.07990
|
Abhishek Dhankar
|
Abhishek Dhankar, Osmar R. Za\"iane and Francois Bolduc
|
UofA-Truth at Factify 2022 : Transformer And Transfer Learning Based
Multi-Modal Fact-Checking
| null | null | null | null |
cs.MM cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Identifying fake news is a very difficult task, especially when considering
the multiple modes of conveying information through text, image, video and/or
audio. We attempted to tackle the problem of automated
misinformation/disinformation detection in multi-modal news sources (including
text and images) through our simple, yet effective, approach in the FACTIFY
shared task at De-Factify@AAAI2022. Our model produced an F1-weighted score of
74.807%, which was the fourth best out of all the submissions. In this paper, we
explain our approach to the shared task.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 18:13:03 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Dhankar",
"Abhishek",
""
],
[
"Zaïane",
"Osmar R.",
""
],
[
"Bolduc",
"Francois",
""
]
] |
new_dataset
| 0.970491 |
2203.08029
|
Yihao Wan
|
Yihao Wan, Daniel Gebbran, Tomislav Dragi\v{c}evi\'c
|
Optimal dispatch schedule for a fast EV charging station with account to
supplementary battery health degradation
|
To be published at ITEC+EATS, 2022
| null | null | null |
cs.CE math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the usage of battery storage systems in a fast
charging station (FCS) for participation in energy markets and charging
electrical vehicles (EVs) simultaneously. In particular, we focus on optimizing
the scheduling strategies to reduce the overall operational cost of the system
over its lifetime by combining the model of battery degradation and energy
arbitrage. We implement the battery degradation as a penalty term within an
energy arbitrage model and show that the battery degradation plays an important
role in the optimal energy dispatch scheduling of the FCS system. In this case
study, with different penalty coefficients for the battery degradation penalty
term, it is found that including the penalty of battery usage in the scheduling
model will reduce the number of small charging/discharging cycles, thereby
prolonging the battery lifetime, while maintaining near optimal revenue from
grid services.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 10:33:49 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Wan",
"Yihao",
""
],
[
"Gebbran",
"Daniel",
""
],
[
"Dragičević",
"Tomislav",
""
]
] |
new_dataset
| 0.996372 |
2203.08046
|
Luca Sanguinetti
|
Andrea De Jesus Torres and Luca Sanguinetti and Emil Bj\"ornson
|
Intelligent Reconfigurable Surfaces vs. Decode-and-Forward: What is the
Impact of Electromagnetic Interference?
|
5 pages, 9 figures, submitted to the 23rd IEEE International Workshop
on Signal Processing Advances in Wireless Communications (SPAWC2022)
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the use of an intelligent reconfigurable surface (IRS)
to aid wireless communication systems. The main goal is to compare this
emerging technology with conventional decode-and-forward (DF) relaying. Unlike
prior comparisons, we assume that electromagnetic interference (EMI),
consisting of incoming waves from external sources, is present at the location
where the IRS or DF relay are placed. The analysis, in terms of minimizing the
total transmit power, shows that EMI has a strong impact on DF relay-assisted
communications, even when the relaying protocol is optimized against EMI. It
turns out that IRS-aided communications is more resilient to EMI. To beat an
IRS, we show that the DF relay must use multiple antennas and actively suppress
the EMI by beamforming.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 16:39:55 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Torres",
"Andrea De Jesus",
""
],
[
"Sanguinetti",
"Luca",
""
],
[
"Björnson",
"Emil",
""
]
] |
new_dataset
| 0.989891 |
2203.08047
|
Henrik Ryden
|
Henrik Ryd\'en, Alex Palaios, L\'aszl\'o H\'evizi, David Sandberg, Tor
Kvernvik, Hamed Farhadi
|
Mobility, traffic and radio channel prediction: 5G and beyond
applications
|
6 pages, submitted to IEEE conference
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning (ML) is an important component for enabling automation in
Radio Access Networks (RANs). The work on applying ML for RAN has been under
development for many years and is now also drawing attention in 3GPP and
Open-RAN standardization fora. A key component of multiple features, also
highlighted in the recent 3GPP specification work, is the use of mobility,
traffic and radio channel prediction. These types of predictions form the
intelligence enablers to leverage the potentials for ML for RAN, both for
current and future wireless networks. This paper provides an overview, with
evaluation results, of current applications that utilize such intelligence
enablers. We then discuss how these enablers will likely be a cornerstone for
emerging 6G use cases such as wireless energy transmission.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 16:40:21 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Rydén",
"Henrik",
""
],
[
"Palaios",
"Alex",
""
],
[
"Hévizi",
"László",
""
],
[
"Sandberg",
"David",
""
],
[
"Kvernvik",
"Tor",
""
],
[
"Farhadi",
"Hamed",
""
]
] |
new_dataset
| 0.954493 |
2203.08063
|
Guy Tevet
|
Guy Tevet, Brian Gordon, Amir Hertz, Amit H. Bermano, Daniel Cohen-Or
|
MotionCLIP: Exposing Human Motion Generation to CLIP Space
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce MotionCLIP, a 3D human motion auto-encoder featuring a latent
embedding that is disentangled, well behaved, and supports highly semantic
textual descriptions. MotionCLIP gains its unique power by aligning its latent
space with that of the Contrastive Language-Image Pre-training (CLIP) model.
Aligning the human motion manifold to CLIP space implicitly infuses the
extremely rich semantic knowledge of CLIP into the manifold. In particular, it
helps continuity by placing semantically similar motions close to one another,
and disentanglement, which is inherited from the CLIP-space structure.
MotionCLIP comprises a transformer-based motion auto-encoder, trained to
reconstruct motion while being aligned to its text label's position in
CLIP-space. We further leverage CLIP's unique visual understanding and inject
an even stronger signal through aligning motion to rendered frames in a
self-supervised manner. We show that although CLIP has never seen the motion
domain, MotionCLIP offers unprecedented text-to-motion abilities, allowing
out-of-domain actions, disentangled editing, and abstract language
specification. For example, the text prompt "couch" is decoded into a sitting
down motion, due to lingual similarity, and the prompt "Spiderman" results in a
web-swinging-like solution that is far from seen during training. In addition,
we show how the introduced latent space can be leveraged for motion
interpolation, editing and recognition.
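
The alignment objective can be pictured as a cosine loss between motion embeddings and fixed CLIP text embeddings, as in the fragment below; the random tensors stand in for the real encoders, and this is not the paper's full training loss.

```python
# Sketch of aligning motion embeddings to (precomputed, frozen) CLIP text
# embeddings with a cosine loss; tensors are random placeholders.
import torch
import torch.nn.functional as F

motion_emb = torch.randn(8, 512, requires_grad=True)   # from the motion auto-encoder
clip_text_emb = torch.randn(8, 512)                     # from CLIP's text encoder

loss = 1.0 - F.cosine_similarity(motion_emb, clip_text_emb, dim=-1).mean()
loss.backward()
print(loss.item())
```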
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 16:56:22 GMT"
}
] | 2022-03-16T00:00:00 |
[
[
"Tevet",
"Guy",
""
],
[
"Gordon",
"Brian",
""
],
[
"Hertz",
"Amir",
""
],
[
"Bermano",
"Amit H.",
""
],
[
"Cohen-Or",
"Daniel",
""
]
] |
new_dataset
| 0.998662 |
1211.2687
|
Varun Gupta
|
Varun Gupta, Ana Radovanovic
|
Online Stochastic Bin Packing
| null |
Operations Research 68(5):1474-1492. (2020)
|
10.1287/opre.2019.1914
| null |
cs.DS math.PR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Bin packing is an algorithmic problem that arises in diverse applications
such as remnant inventory systems, shipping logistics, and appointment
scheduling. In its simplest variant, a sequence of $T$ items (e.g., orders for
raw material, packages for delivery) is revealed one at a time, and each item
must be packed on arrival in an available bin (e.g., remnant pieces of raw
material in inventory, shipping containers). The sizes of items are i.i.d.
samples from an unknown distribution, but the sizes are known when the items
arrive. The goal is to minimize the number of non-empty bins (equivalently
waste, defined to be the total unused space in non-empty bins). This problem
has been extensively studied in the Operations Research and Theoretical
Computer Science communities, yet all existing heuristics either rely on
learning the distribution or exhibit $o(T)$ additive suboptimality compared to
the optimal offline algorithm only for certain classes of distributions (those
with sublinear optimal expected waste). In this paper, we propose a family of
algorithms which are the first truly distribution-oblivious algorithms for
stochastic bin packing, and achieve $\mathcal{O}(\sqrt{T})$ additive
suboptimality for all item size distributions. Our algorithms are inspired by
approximate interior-point algorithms for convex optimization. In addition to
regret guarantees for discrete i.i.d. sequences, we extend our results to
continuous item size distribution with bounded density, and also prove a family
of novel regret bounds for non-i.i.d. input sequences. To the best of our
knowledge these are the first such results for non-i.i.d. and
non-random-permutation input sequences for online stochastic packing.
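
For orientation, the classical First Fit heuristic below is the kind of simple online baseline such algorithms are measured against; it is not the interior-point-inspired method proposed in the paper.

```python
# First Fit: place each arriving item into the first open bin it fits in.
def first_fit(items, bin_capacity=1.0):
    bins = []  # remaining capacity of each open bin
    for size in items:
        for i, remaining in enumerate(bins):
            if size <= remaining:
                bins[i] = remaining - size
                break
        else:
            bins.append(bin_capacity - size)
    return len(bins)  # number of non-empty bins used

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.1]))
```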
|
[
{
"version": "v1",
"created": "Mon, 12 Nov 2012 16:35:25 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Mar 2022 17:38:15 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Gupta",
"Varun",
""
],
[
"Radovanovic",
"Ana",
""
]
] |
new_dataset
| 0.998236 |
1804.05236
|
Bas Spitters
|
Lars Birkedal, Ranald Clouston, Bassel Mannaa, Rasmus Ejlers
M{\o}gelberg, Andrew M. Pitts, Bas Spitters
|
Modal Dependent Type Theory and Dependent Right Adjoints
| null |
Math. Struct. Comp. Sci. 30 (2020) 118-138
|
10.1017/S0960129519000197
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years we have seen several new models of dependent type theory
extended with some form of modal necessity operator, including nominal type
theory, guarded and clocked type theory, and spatial and cohesive type theory.
In this paper we study modal dependent type theory: dependent type theory with
an operator satisfying (a dependent version of) the K-axiom of modal logic. We
investigate both semantics and syntax. For the semantics, we introduce
categories with families with a dependent right adjoint (CwDRA) and show that
the examples above can be presented as such. Indeed, we show that any finite
limit category with an adjunction of endofunctors gives rise to a CwDRA via the
local universe construction. For the syntax, we introduce a dependently typed
extension of Fitch-style modal lambda-calculus, show that it can be interpreted
in any CwDRA, and build a term model. We extend the syntax and semantics with
universes.
|
[
{
"version": "v1",
"created": "Sat, 14 Apr 2018 15:13:53 GMT"
},
{
"version": "v2",
"created": "Sat, 27 Oct 2018 10:17:57 GMT"
},
{
"version": "v3",
"created": "Thu, 25 Jul 2019 12:11:43 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Birkedal",
"Lars",
""
],
[
"Clouston",
"Ranald",
""
],
[
"Mannaa",
"Bassel",
""
],
[
"Møgelberg",
"Rasmus Ejlers",
""
],
[
"Pitts",
"Andrew M.",
""
],
[
"Spitters",
"Bas",
""
]
] |
new_dataset
| 0.977036 |
1911.07440
|
Muhammet Bastan
|
Muhammet Bastan, Hao-Yu Wu, Tian Cao, Bhargava Kota, Mehmet Tek
|
Large Scale Open-Set Deep Logo Detection
|
Open Set Logo Detection (OSLD) dataset available at
https://github.com/mubastan/osld
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an open-set logo detection (OSLD) system, which can detect
(localize and recognize) any number of unseen logo classes without re-training;
it only requires a small set of canonical logo images for each logo class. We
achieve this using a two-stage approach: (1) Generic logo detection to detect
candidate logo regions in an image. (2) Logo matching for matching the detected
logo regions to a set of canonical logo images to recognize them.
We constructed an open-set logo detection dataset with 12.1k logo classes and
released it for research purposes. We demonstrate the effectiveness of OSLD on
our dataset and on the standard Flickr-32 logo dataset, outperforming the
state-of-the-art open-set and closed-set logo detection methods by a large
margin. OSLD is scalable to millions of logo classes.
|
[
{
"version": "v1",
"created": "Mon, 18 Nov 2019 05:44:17 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Aug 2021 23:04:01 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Jan 2022 07:05:05 GMT"
},
{
"version": "v4",
"created": "Sat, 12 Mar 2022 23:47:45 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Bastan",
"Muhammet",
""
],
[
"Wu",
"Hao-Yu",
""
],
[
"Cao",
"Tian",
""
],
[
"Kota",
"Bhargava",
""
],
[
"Tek",
"Mehmet",
""
]
] |
new_dataset
| 0.998183 |
2010.10006
|
Long Chen
|
Long Chen, Feixiang Zhou, Shengke Wang, Junyu Dong, Ning Li, Haiping
Ma, Xin Wang and Huiyu Zhou
|
SWIPENET: Object detection in noisy underwater images
|
arXiv admin note: text overlap with arXiv:2005.11552
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, deep learning based object detection methods have achieved
promising performance in controlled environments. However, these methods lack
sufficient capabilities to handle underwater object detection due to these
challenges: (1) images in the underwater datasets and real applications are
blurry whilst accompanying severe noise that confuses the detectors and (2)
objects in real applications are usually small. In this paper, we propose a
novel Sample-WeIghted hyPEr Network (SWIPENET), and a robust training paradigm
named Curriculum Multi-Class Adaboost (CMA), to address these two problems at
the same time. Firstly, the backbone of SWIPENET produces multiple high
resolution and semantic-rich Hyper Feature Maps, which significantly improve
small object detection. Secondly, a novel sample-weighted detection loss
function is designed for SWIPENET, which focuses on learning high-weight
samples and ignores low-weight samples. Moreover, inspired by the human
education process that drives the learning from easy to hard concepts, we here
propose the CMA training paradigm that first trains a clean detector which is
free from the influence of noisy data. Then, based on the clean detector,
multiple detectors focusing on learning diverse noisy data are trained and
incorporated into a unified deep ensemble of strong noise immunity. Experiments
on two underwater robot picking contest datasets (URPC2017 and URPC2018) show
that the proposed SWIPENET+CMA framework achieves better accuracy in object
detection against several state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Mon, 19 Oct 2020 16:41:20 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Dec 2020 17:30:41 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Mar 2022 04:45:54 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Chen",
"Long",
""
],
[
"Zhou",
"Feixiang",
""
],
[
"Wang",
"Shengke",
""
],
[
"Dong",
"Junyu",
""
],
[
"Li",
"Ning",
""
],
[
"Ma",
"Haiping",
""
],
[
"Wang",
"Xin",
""
],
[
"Zhou",
"Huiyu",
""
]
] |
new_dataset
| 0.96141 |
2104.02542
|
Rosanna Turrisi
|
Rosanna Turrisi, Arianna Braccia, Marco Emanuele, Simone Giulietti,
Maura Pugliatti, Mariachiara Sensi, Luciano Fadiga, Leonardo Badino
|
EasyCall corpus: a dysarthric speech dataset
| null |
Interspeech 2021
|
10.21437/Interspeech.2021-549
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a new dysarthric speech command dataset in Italian,
called EasyCall corpus. The dataset consists of 21386 audio recordings from 24
healthy and 31 dysarthric speakers, whose individual degree of speech
impairment was assessed by neurologists through the Therapy Outcome Measure.
The corpus aims at providing a resource for the development of ASR-based
assistive technologies for patients with dysarthria. In particular, it may be
exploited to develop a voice-controlled contact application for commercial
smartphones, aiming at improving dysarthric patients' ability to communicate
with their family and caregivers. Before recording the dataset, participants
were administered a survey to evaluate which commands are more likely to be
employed by dysarthric individuals in a voice-controlled contact application.
In addition, the dataset includes a list of non-commands (i.e., words
near/inside commands or phonetically close to commands) that can be leveraged
to build a more robust command recognition system. At present, commercial ASR
systems perform poorly on the EasyCall Corpus as we report in this paper. This
result corroborates the need for dysarthric speech corpora for developing
effective assistive technologies. To the best of our knowledge, this database
represents the richest corpus of dysarthric speech to date.
|
[
{
"version": "v1",
"created": "Tue, 6 Apr 2021 14:32:47 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Turrisi",
"Rosanna",
""
],
[
"Braccia",
"Arianna",
""
],
[
"Emanuele",
"Marco",
""
],
[
"Giulietti",
"Simone",
""
],
[
"Pugliatti",
"Maura",
""
],
[
"Sensi",
"Mariachiara",
""
],
[
"Fadiga",
"Luciano",
""
],
[
"Badino",
"Leonardo",
""
]
] |
new_dataset
| 0.999716 |
2106.15708
|
Luca Giuzzi DPhil
|
Luca Giuzzi, Guglielmo Lunardon
|
A remark on ${\mathbb F}_{q^n}$-Linear MRD codes
|
The results are not relevant enough in light of previous literature
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this note, we provide a description of the elements of minimum rank of a
generalized Gabidulin code in terms of Grassmann coordinates. As a consequence,
a characterization of linearized polynomials of rank at most $n-k$ is obtained,
as well as parametric equations for MRD-codes of distance $d=n-k+1$.
|
[
{
"version": "v1",
"created": "Tue, 29 Jun 2021 20:21:10 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 15:14:01 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Giuzzi",
"Luca",
""
],
[
"Lunardon",
"Guglielmo",
""
]
] |
new_dataset
| 0.971361 |
2107.05464
|
Yao Yao
|
Xiaoyan Cao, Yao Yao, Lanqing Li, Wanpeng Zhang, Zhicheng An, Zhong
Zhang, Li Xiao, Shihui Guo, Xiaoyu Cao, Meihong Wu and Dijun Luo
|
IGrow: A Smart Agriculture Solution to Autonomous Greenhouse Control
|
9 pages, 5 figures, 2 tables, accepted by AAAI 2022
| null | null | null |
cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Agriculture is the foundation of human civilization. However, the rapid
increase of the global population poses a challenge on this cornerstone by
demanding more food. Modern autonomous greenhouses, equipped with sensors and
actuators, provide a promising solution to the problem by empowering precise
control for high-efficient food production. However, the optimal control of
autonomous greenhouses is challenging, requiring decision-making based on
high-dimensional sensory data, and the scaling of production is limited by the
scarcity of labor capable of handling this task. With the advances of
artificial intelligence (AI), the internet of things (IoT), and cloud computing
technologies, we hope to provide a solution to automate and smarten
greenhouse control to address the above challenges. In this paper, we propose a
smart agriculture solution named iGrow, for autonomous greenhouse control
(AGC): (1) for the first time, we formulate the AGC problem as a Markov
decision process (MDP) optimization problem; (2) we design a neural
network-based simulator incorporated with the incremental mechanism to simulate
the complete planting process of an autonomous greenhouse, which provides a
testbed for the optimization of control strategies; (3) we propose a
closed-loop bi-level optimization algorithm, which can dynamically re-optimize
the greenhouse control strategy with newly observed data during real-world
production. We not only conduct simulation experiments but also deploy iGrow in
real scenarios, and experimental results demonstrate the effectiveness and
superiority of iGrow in autonomous greenhouse simulation and optimal control.
Particularly, compelling results from the tomato pilot project in real
autonomous greenhouses show that our solution significantly increases crop
yield (+10.15\%) and net profit (+92.70\%) with statistical significance
compared to planting experts.
|
[
{
"version": "v1",
"created": "Tue, 6 Jul 2021 11:35:50 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 11:53:30 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Cao",
"Xiaoyan",
""
],
[
"Yao",
"Yao",
""
],
[
"Li",
"Lanqing",
""
],
[
"Zhang",
"Wanpeng",
""
],
[
"An",
"Zhicheng",
""
],
[
"Zhang",
"Zhong",
""
],
[
"Xiao",
"Li",
""
],
[
"Guo",
"Shihui",
""
],
[
"Cao",
"Xiaoyu",
""
],
[
"Wu",
"Meihong",
""
],
[
"Luo",
"Dijun",
""
]
] |
new_dataset
| 0.998483 |
2108.02740
|
Lei Li
|
Lei Li, Hongbo Fu, Maks Ovsjanikov
|
WSDesc: Weakly Supervised 3D Local Descriptor Learning for Point Cloud
Registration
|
To appear in IEEE TVCG
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a novel method called WSDesc to learn 3D local
descriptors in a weakly supervised manner for robust point cloud registration.
Our work builds upon recent 3D CNN-based descriptor extractors, which leverage
a voxel-based representation to parameterize local geometry of 3D points.
Instead of using a predefined fixed-size local support in voxelization, we
propose to learn the optimal support in a data-driven manner. To this end, we
design a novel differentiable voxelization layer that can back-propagate the
gradient to the support size optimization. To train the extracted descriptors,
we propose a novel registration loss based on the deviation from rigidity of 3D
transformations, and the loss is weakly supervised by the prior knowledge that
the input point clouds have partial overlap, without requiring ground-truth
alignment information. Through extensive experiments, we show that our learned
descriptors yield superior performance on existing geometric registration
benchmarks.
|
[
{
"version": "v1",
"created": "Thu, 5 Aug 2021 17:11:08 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 13:28:07 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Li",
"Lei",
""
],
[
"Fu",
"Hongbo",
""
],
[
"Ovsjanikov",
"Maks",
""
]
] |
new_dataset
| 0.99831 |
2109.03970
|
Ting-Han Fan
|
Ting-Han Fan, Xian Yeow Lee, Yubo Wang
|
PowerGym: A Reinforcement Learning Environment for Volt-Var Control in
Power Distribution Systems
|
The 4th Annual Learning for Dynamics & Control Conference (L4DC) 2022
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce PowerGym, an open-source reinforcement learning environment for
Volt-Var control in power distribution systems. Following OpenAI Gym APIs,
PowerGym targets minimizing power loss and voltage violations under physical
networked constraints. PowerGym provides four distribution systems (13Bus,
34Bus, 123Bus, and 8500Node) based on IEEE benchmark systems and design
variants for various control difficulties. To foster generalization, PowerGym
offers a detailed customization guide for users working with their distribution
systems. As a demonstration, we examine state-of-the-art reinforcement learning
algorithms in PowerGym and validate the environment by studying controller
behaviors. The repository is available at
\url{https://github.com/siemens/powergym}.
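
Since PowerGym follows OpenAI Gym APIs, interaction reduces to the standard reset/step loop sketched below; the environment construction is left commented out because the exact import path and identifiers should be taken from the repository, and `policy` is an arbitrary callable.

```python
# Generic Gym-style episode loop; construction details are assumptions.
# from powergym.env_register import make_env   # hypothetical import path
# env = make_env("13Bus")

def run_episode(env, policy, horizon=24):
    obs = env.reset()
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(obs)                       # e.g. a trained Volt-Var controller
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```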
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 23:23:21 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Sep 2021 14:52:35 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Mar 2022 17:46:09 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Fan",
"Ting-Han",
""
],
[
"Lee",
"Xian Yeow",
""
],
[
"Wang",
"Yubo",
""
]
] |
new_dataset
| 0.999394 |
2109.07024
|
Zhefan Xu
|
Zhefan Xu, Di Deng, Yiping Dong, Kenji Shimada
|
DPMPC-Planner: A real-time UAV trajectory planning framework for complex
static environments with dynamic obstacles
|
7pages, 8 figures
|
2022 IEEE International Conference on Robotics and Automation
(ICRA)
| null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safe UAV navigation is challenging due to the complex environment structures,
dynamic obstacles, and uncertainties from measurement noises and unpredictable
moving obstacle behaviors. Although plenty of recent works achieve safe
navigation in complex static environments with sophisticated mapping
algorithms, such as occupancy map and ESDF map, these methods cannot reliably
handle dynamic environments due to the mapping limitation from moving
obstacles. To address the limitation, this paper proposes a trajectory planning
framework to achieve safe navigation considering complex static environments
with dynamic obstacles. To reliably handle dynamic obstacles, we divide the
environment representation into static mapping and dynamic object
representation, which can be obtained from computer vision methods. Our
framework first generates a static trajectory based on the proposed iterative
corridor shrinking algorithm. Then, reactive chance-constrained model
predictive control with temporal goal tracking is applied to avoid dynamic
obstacles with uncertainties. The simulation results in various environments
demonstrate the ability of our algorithm to navigate safely in complex static
environments with dynamic obstacles.
|
[
{
"version": "v1",
"created": "Tue, 14 Sep 2021 23:51:02 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Mar 2022 22:19:23 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Xu",
"Zhefan",
""
],
[
"Deng",
"Di",
""
],
[
"Dong",
"Yiping",
""
],
[
"Shimada",
"Kenji",
""
]
] |
new_dataset
| 0.980106 |
2110.07058
|
Kristen Grauman
|
Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis,
Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu,
Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh
Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu,
Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent
Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph
Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina
Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo,
Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao
Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro,
Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey
Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano,
Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi
Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella,
Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar,
Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo
Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio
Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik
|
Ego4D: Around the World in 3,000 Hours of Egocentric Video
|
To appear in the Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), 2022. This version updates the
baseline result numbers for the Hands and Objects benchmark (appendix)
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Ego4D, a massive-scale egocentric video dataset and benchmark
suite. It offers 3,670 hours of daily-life activity video spanning hundreds of
scenarios (household, outdoor, workplace, leisure, etc.) captured by 931 unique
camera wearers from 74 worldwide locations and 9 different countries. The
approach to collection is designed to uphold rigorous privacy and ethics
standards with consenting participants and robust de-identification procedures
where relevant. Ego4D dramatically expands the volume of diverse egocentric
video footage publicly available to the research community. Portions of the
video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo,
and/or synchronized videos from multiple egocentric cameras at the same event.
Furthermore, we present a host of new benchmark challenges centered around
understanding the first-person visual experience in the past (querying an
episodic memory), present (analyzing hand-object manipulation, audio-visual
conversation, and social interactions), and future (forecasting activities). By
publicly sharing this massive annotated dataset and benchmark suite, we aim to
push the frontier of first-person perception. Project page:
https://ego4d-data.org/
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 22:19:32 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 18:43:52 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Mar 2022 19:40:26 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Grauman",
"Kristen",
""
],
[
"Westbury",
"Andrew",
""
],
[
"Byrne",
"Eugene",
""
],
[
"Chavis",
"Zachary",
""
],
[
"Furnari",
"Antonino",
""
],
[
"Girdhar",
"Rohit",
""
],
[
"Hamburger",
"Jackson",
""
],
[
"Jiang",
"Hao",
""
],
[
"Liu",
"Miao",
""
],
[
"Liu",
"Xingyu",
""
],
[
"Martin",
"Miguel",
""
],
[
"Nagarajan",
"Tushar",
""
],
[
"Radosavovic",
"Ilija",
""
],
[
"Ramakrishnan",
"Santhosh Kumar",
""
],
[
"Ryan",
"Fiona",
""
],
[
"Sharma",
"Jayant",
""
],
[
"Wray",
"Michael",
""
],
[
"Xu",
"Mengmeng",
""
],
[
"Xu",
"Eric Zhongcong",
""
],
[
"Zhao",
"Chen",
""
],
[
"Bansal",
"Siddhant",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Cartillier",
"Vincent",
""
],
[
"Crane",
"Sean",
""
],
[
"Do",
"Tien",
""
],
[
"Doulaty",
"Morrie",
""
],
[
"Erapalli",
"Akshay",
""
],
[
"Feichtenhofer",
"Christoph",
""
],
[
"Fragomeni",
"Adriano",
""
],
[
"Fu",
"Qichen",
""
],
[
"Gebreselasie",
"Abrham",
""
],
[
"Gonzalez",
"Cristina",
""
],
[
"Hillis",
"James",
""
],
[
"Huang",
"Xuhua",
""
],
[
"Huang",
"Yifei",
""
],
[
"Jia",
"Wenqi",
""
],
[
"Khoo",
"Weslie",
""
],
[
"Kolar",
"Jachym",
""
],
[
"Kottur",
"Satwik",
""
],
[
"Kumar",
"Anurag",
""
],
[
"Landini",
"Federico",
""
],
[
"Li",
"Chao",
""
],
[
"Li",
"Yanghao",
""
],
[
"Li",
"Zhenqiang",
""
],
[
"Mangalam",
"Karttikeya",
""
],
[
"Modhugu",
"Raghava",
""
],
[
"Munro",
"Jonathan",
""
],
[
"Murrell",
"Tullie",
""
],
[
"Nishiyasu",
"Takumi",
""
],
[
"Price",
"Will",
""
],
[
"Puentes",
"Paola Ruiz",
""
],
[
"Ramazanova",
"Merey",
""
],
[
"Sari",
"Leda",
""
],
[
"Somasundaram",
"Kiran",
""
],
[
"Southerland",
"Audrey",
""
],
[
"Sugano",
"Yusuke",
""
],
[
"Tao",
"Ruijie",
""
],
[
"Vo",
"Minh",
""
],
[
"Wang",
"Yuchen",
""
],
[
"Wu",
"Xindi",
""
],
[
"Yagi",
"Takuma",
""
],
[
"Zhao",
"Ziwei",
""
],
[
"Zhu",
"Yunyi",
""
],
[
"Arbelaez",
"Pablo",
""
],
[
"Crandall",
"David",
""
],
[
"Damen",
"Dima",
""
],
[
"Farinella",
"Giovanni Maria",
""
],
[
"Fuegen",
"Christian",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Ithapu",
"Vamsi Krishna",
""
],
[
"Jawahar",
"C. V.",
""
],
[
"Joo",
"Hanbyul",
""
],
[
"Kitani",
"Kris",
""
],
[
"Li",
"Haizhou",
""
],
[
"Newcombe",
"Richard",
""
],
[
"Oliva",
"Aude",
""
],
[
"Park",
"Hyun Soo",
""
],
[
"Rehg",
"James M.",
""
],
[
"Sato",
"Yoichi",
""
],
[
"Shi",
"Jianbo",
""
],
[
"Shou",
"Mike Zheng",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Torresani",
"Lorenzo",
""
],
[
"Yan",
"Mingfei",
""
],
[
"Malik",
"Jitendra",
""
]
] |
new_dataset
| 0.999785 |
2110.10919
|
Rajath Shashidhara
|
Rajath Shashidhara, Timothy Stamler, Antoine Kaufmann, Simon Peter
|
FlexTOE: Flexible TCP Offload with Fine-Grained Parallelism
|
Published in 19th USENIX Symposium on Networked Systems Design and
Implementation (NSDI 22). See
https://www.usenix.org/conference/nsdi22/presentation/shashidhara
| null | null | null |
cs.NI cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
FlexTOE is a flexible, yet high-performance TCP offload engine (TOE) to
SmartNICs. FlexTOE eliminates almost all host data-path TCP processing and is
fully customizable. FlexTOE interoperates well with other TCP stacks, is robust
under adverse network conditions, and supports POSIX sockets.
FlexTOE focuses on data-path offload of established connections, avoiding
complex control logic and packet buffering in the NIC. FlexTOE leverages
fine-grained parallelization of the TCP data-path and segment reordering for
high performance on wimpy SmartNIC architectures, while remaining flexible via
a modular design. We compare FlexTOE on an Agilio-CX40 to host TCP stacks Linux
and TAS, and to the Chelsio Terminator TOE. We find that Memcached scales up to
38% better on FlexTOE versus TAS, while saving up to 81% host CPU cycles versus
Chelsio. FlexTOE provides competitive performance for RPCs, even with wimpy
SmartNICs. FlexTOE cuts 99.99th-percentile RPC RTT by 3.2$\times$ and 50%
versus Chelsio and TAS, respectively. FlexTOE's data-path parallelism
generalizes across hardware architectures, improving single connection RPC
throughput up to 2.4$\times$ on x86 and 4$\times$ on BlueField. FlexTOE
supports C and XDP programs written in eBPF. It allows us to implement popular
data center transport features, such as TCP tracing, packet filtering and
capture, VLAN stripping, flow classification, firewalling, and connection
splicing.
|
[
{
"version": "v1",
"created": "Thu, 21 Oct 2021 06:19:31 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Mar 2022 19:04:02 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Shashidhara",
"Rajath",
""
],
[
"Stamler",
"Timothy",
""
],
[
"Kaufmann",
"Antoine",
""
],
[
"Peter",
"Simon",
""
]
] |
new_dataset
| 0.999156 |
2112.07566
|
Letitia Parcalabescu
|
Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank,
Iacer Calixto, Albert Gatt
|
VALSE: A Task-Independent Benchmark for Vision and Language Models
Centered on Linguistic Phenomena
|
Paper accepted for publication at ACL 2022 Main; 28 pages, 4 figures,
11 tables
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose VALSE (Vision And Language Structured Evaluation), a novel
benchmark designed for testing general-purpose pretrained vision and language
(V&L) models for their visio-linguistic grounding capabilities on specific
linguistic phenomena. VALSE offers a suite of six tests covering various
linguistic constructs. Solving these requires models to ground linguistic
phenomena in the visual modality, allowing more fine-grained evaluations than
hitherto possible. We build VALSE using methods that support the construction
of valid foils, and report results from evaluating five widely-used V&L models.
Our experiments suggest that current models have considerable difficulty
addressing most phenomena. Hence, we expect VALSE to serve as an important
benchmark to measure future progress of pretrained V&L models from a linguistic
perspective, complementing the canonical task-centred V&L evaluations.
|
[
{
"version": "v1",
"created": "Tue, 14 Dec 2021 17:15:04 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 15:08:08 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Parcalabescu",
"Letitia",
""
],
[
"Cafagna",
"Michele",
""
],
[
"Muradjan",
"Lilitta",
""
],
[
"Frank",
"Anette",
""
],
[
"Calixto",
"Iacer",
""
],
[
"Gatt",
"Albert",
""
]
] |
new_dataset
| 0.998506 |
2201.03804
|
Wenliang Dai
|
Wenliang Dai, Samuel Cahyawijaya, Tiezheng Yu, Elham J. Barezi, Peng
Xu, Cheuk Tung Shadow Yiu, Rita Frieske, Holy Lovenia, Genta Indra Winata,
Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung
|
CI-AVSR: A Cantonese Audio-Visual Speech Dataset for In-car Command
Recognition
|
6 pages
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With the rise of deep learning and intelligent vehicle, the smart assistant
has become an essential in-car component to facilitate driving and provide
extra functionalities. In-car smart assistants should be able to process
general as well as car-related commands and perform corresponding actions,
which eases driving and improves safety. However, there is a data scarcity
issue for low resource languages, hindering the development of research and
applications. In this paper, we introduce a new dataset, Cantonese In-car
Audio-Visual Speech Recognition (CI-AVSR), for in-car command recognition in
the Cantonese language with both video and audio data. It consists of 4,984
samples (8.3 hours) of 200 in-car commands recorded by 30 native Cantonese
speakers. Furthermore, we augment our dataset using common in-car background
noises to simulate real environments, producing a dataset 10 times larger than
the collected one. We provide detailed statistics of both the clean and the
augmented versions of our dataset. Moreover, we implement two multimodal
baselines to demonstrate the validity of CI-AVSR. Experiment results show that
leveraging the visual signal improves the overall performance of the model.
Although our best model can achieve a considerable quality on the clean test
set, the speech recognition quality on the noisy data is still inferior and
remains as an extremely challenging task for real in-car speech recognition
systems. The dataset and code will be released at
https://github.com/HLTCHKUST/CI-AVSR.
|
[
{
"version": "v1",
"created": "Tue, 11 Jan 2022 06:32:12 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 05:29:02 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Dai",
"Wenliang",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Yu",
"Tiezheng",
""
],
[
"Barezi",
"Elham J.",
""
],
[
"Xu",
"Peng",
""
],
[
"Yiu",
"Cheuk Tung Shadow",
""
],
[
"Frieske",
"Rita",
""
],
[
"Lovenia",
"Holy",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Ma",
"Xiaojuan",
""
],
[
"Shi",
"Bertram E.",
""
],
[
"Fung",
"Pascale",
""
]
] |
new_dataset
| 0.999847 |
2202.01159
|
Raviraj Joshi
|
Raviraj Joshi
|
L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT
Language Models, and Resources
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present L3Cube-MahaCorpus, a Marathi monolingual dataset scraped from
different internet sources. We expand the existing Marathi monolingual corpus
with 24.8M sentences and 289M tokens. We further present MahaBERT, MahaAlBERT,
and MahaRoBerta, all BERT-based masked language models, and MahaFT, fastText
word embeddings, all trained on the full Marathi corpus with 752M tokens. We
show the effectiveness of these resources on downstream Marathi sentiment
analysis, text classification, and named entity recognition (NER) tasks. We
also release MahaGPT, a generative Marathi GPT model trained on Marathi corpus.
Marathi is a popular language in India but still lacks these resources. This
work is a step forward in building open resources for the Marathi language. The
data and models are available at https://github.com/l3cube-pune/MarathiNLP .
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 17:35:52 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Mar 2022 07:27:40 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Joshi",
"Raviraj",
""
]
] |
new_dataset
| 0.999505 |
2202.12419
|
Yanran Wang
|
Yanran Wang, James O'Keeffe, Qiuchen Qian and David Boyle
|
KinoJGM: A framework for efficient and accurate quadrotor trajectory
generation and tracking in dynamic environments
|
7pages, 8 figures, IEEE International Conference on Robotics and
Automation 2022, accepted
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmapped areas and aerodynamic disturbances render autonomous navigation with
quadrotors extremely challenging. To fly safely and efficiently, trajectory
planners and trackers must be able to navigate unknown environments with
unpredictable aerodynamic effects in real-time. When encountering aerodynamic
effects such as strong winds, most current approaches to quadrotor trajectory
planning and tracking will not attempt to deviate from a determined plan, even
if it is risky, in the hope that any aerodynamic disturbances can be resisted
by a robust controller. This paper presents a novel systematic trajectory
planning and tracking framework for autonomous quadrotors. We propose a
Kinodynamic Jump Space Search (Kino-JSS) to generate a safe and efficient route
in unknown environments with aerodynamic disturbances. A real-time Gaussian
Process is employed to model the effects of aerodynamic disturbances, which we
then integrate with a Model Predictive Controller to achieve efficient and
accurate trajectory optimization and tracking. We demonstrate our system to
improve the efficiency of trajectory generation in unknown environments by up
to 75\% in the cases tested, compared with recent state-of-the-art. We also
demonstrate that our system improves the accuracy of tracking in selected
environments with unpredictable aerodynamic effects.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 23:31:19 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 14:30:42 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Mar 2022 22:42:03 GMT"
},
{
"version": "v4",
"created": "Sat, 12 Mar 2022 00:19:48 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Wang",
"Yanran",
""
],
[
"O'Keeffe",
"James",
""
],
[
"Qian",
"Qiuchen",
""
],
[
"Boyle",
"David",
""
]
] |
new_dataset
| 0.991883 |
2202.13145
|
Fanchao Qi
|
Fanchao Qi, Yanhui Yang, Jing Yi, Zhili Cheng, Zhiyuan Liu, Maosong
Sun
|
QuoteR: A Benchmark of Quote Recommendation for Writing
|
Accepted by the main conference of ACL 2022 as a long paper. The
camera-ready version
| null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
It is very common to use quotations (quotes) to make our writings more
elegant or convincing. To help people find appropriate quotes efficiently, the
task of quote recommendation is presented, aiming to recommend quotes that fit
the current context of writing. There have been various quote recommendation
approaches, but they are evaluated on different unpublished datasets. To
facilitate the research on this task, we build a large and fully open quote
recommendation dataset called QuoteR, which comprises three parts including
English, standard Chinese and classical Chinese. Any part of it is larger than
previous unpublished counterparts. We conduct an extensive evaluation of
existing quote recommendation methods on QuoteR. Furthermore, we propose a new
quote recommendation model that significantly outperforms previous methods on
all three parts of QuoteR. All the code and data of this paper are available at
https://github.com/thunlp/QuoteR.
|
[
{
"version": "v1",
"created": "Sat, 26 Feb 2022 14:01:44 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 15:31:07 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Qi",
"Fanchao",
""
],
[
"Yang",
"Yanhui",
""
],
[
"Yi",
"Jing",
""
],
[
"Cheng",
"Zhili",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.999845 |
2203.01562
|
Zuheng Ming
|
Zuheng Ming, Zitong Yu, Musab Al-Ghadi, Muriel Visani, Muhammad
Muzzamil Luqman, Jean-Christophe Burie
|
ViTransPAD: Video Transformer using convolution and self-attention for
Face Presentation Attack Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face Presentation Attack Detection (PAD) is an important measure to prevent
spoof attacks for face biometric systems. Many works based on Convolution
Neural Networks (CNNs) for face PAD formulate the problem as an image-level
binary classification task without considering the context. Alternatively,
Vision Transformers (ViT), which use self-attention to attend to the context of an
image, have become the mainstream in face PAD. Inspired by ViT, we propose a
Video-based Transformer for face PAD (ViTransPAD) with short/long-range
spatio-temporal attention which can not only focus on local details with short
attention within a frame but also capture long-range dependencies over frames.
Instead of using coarse image patches with single-scale as in ViT, we propose
the Multi-scale Multi-Head Self-Attention (MsMHSA) architecture to accommodate
multi-scale patch partitions of Q, K, V feature maps to the heads of
transformer in a coarse-to-fine manner, which enables to learn a fine-grained
representation to perform pixel-level discrimination for face PAD. Due to the
lack of the inductive biases of convolutions in pure transformers, we also
introduce convolutions into the proposed ViTransPAD to integrate the desirable properties
of CNNs by using convolution patch embedding and convolution projection. The
extensive experiments show the effectiveness of our proposed ViTransPAD with a
preferable accuracy-computation balance, which can serve as a new backbone for
face PAD.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 08:23:20 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 10:44:06 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Ming",
"Zuheng",
""
],
[
"Yu",
"Zitong",
""
],
[
"Al-Ghadi",
"Musab",
""
],
[
"Visani",
"Muriel",
""
],
[
"MuzzamilLuqman",
"Muhammad",
""
],
[
"Burie",
"Jean-Christophe",
""
]
] |
new_dataset
| 0.998588 |
2203.02119
|
Tianhao Wu
|
Tianhao Wu, Fangwei Zhong, Yiran Geng, Hongchen Wang, Yongjian Zhu,
Yizhou Wang, Hao Dong
|
GraspARL: Dynamic Grasping via Adversarial Reinforcement Learning
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grasping moving objects, such as goods on a belt or living animals, is an
important but challenging task in robotics. Conventional approaches rely on a
set of manually defined object motion patterns for training, resulting in poor
generalization to unseen object trajectories. In this work, we introduce an
adversarial reinforcement learning framework for dynamic grasping, namely
GraspARL. To be specific, we formulate the dynamic grasping problem as a
'move-and-grasp' game, where the robot is to pick up the object on the mover
and the adversarial mover is to find a path to escape it. Hence, the two agents
play a min-max game and are trained by reinforcement learning. In this way, the
mover can auto-generate diverse moving trajectories while training. And the
robot trained with the adversarial trajectories can generalize to various
motion patterns. Empirical results in both the simulator and real-world scenarios
demonstrate the effectiveness and good generalization of our method.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 03:25:09 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 08:27:19 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Wu",
"Tianhao",
""
],
[
"Zhong",
"Fangwei",
""
],
[
"Geng",
"Yiran",
""
],
[
"Wang",
"Hongchen",
""
],
[
"Zhu",
"Yongjian",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Dong",
"Hao",
""
]
] |
new_dataset
| 0.999436 |
2203.04214
|
Ziming Zhao
|
Md Armanuzzaman and Ziming Zhao
|
BYOTee: Towards Building Your Own Trusted Execution Environments Using
FPGA
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, we have witnessed unprecedented growth in using
hardware-assisted Trusted Execution Environments (TEE) or enclaves to protect
sensitive code and data on commodity devices thanks to new hardware security
features, such as Intel SGX and Arm TrustZone. Even though the proprietary TEEs
bring many benefits, they have been criticized for lack of transparency,
vulnerabilities, and various restrictions. For example, existing TEEs only
provide a static and fixed hardware Trusted Computing Base (TCB), which cannot
be customized for different applications. Existing TEEs time-share a processor
core with the Rich Execution Environment (REE), making execution less efficient
and vulnerable to cache side-channel attacks. Moreover, TrustZone lacks
hardware support for multiple TEEs, remote attestation, and memory encryption.
In this paper, we present BYOTee (Build Your Own Trusted Execution
Environments), which is an easy-to-use infrastructure for building multiple
equally secure enclaves by utilizing commodity Field Programmable Gate Arrays
(FPGA) devices. BYOTee creates enclaves with customized hardware TCBs, which
include softcore CPUs, block RAMs, and peripheral connections, in FPGA on
demand. Additionally, BYOTee provides mechanisms to attest the integrity of the
customized enclaves' hardware and software stacks, including bitstream,
firmware, and the Security-Sensitive Applications (SSA) along with their inputs
and outputs to remote verifiers. We implement a BYOTee system for the Xilinx
System-on-Chip (SoC) FPGA. The evaluations on the low-end Zynq-7000 system for
four SSAs and 12 benchmark applications demonstrate the usage, security,
effectiveness, and performance of the BYOTee framework.
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 17:22:52 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 16:00:33 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Armanuzzaman",
"Md",
""
],
[
"Zhao",
"Ziming",
""
]
] |
new_dataset
| 0.999178 |
2203.04566
|
Brijen Thananjeyan
|
Brijen Thananjeyan, Justin Kerr, Huang Huang, Joseph E. Gonzalez, Ken
Goldberg
|
All You Need is LUV: Unsupervised Collection of Labeled Images using
Invisible UV Fluorescent Indicators
| null | null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale semantic image annotation is a significant challenge for
learning-based perception systems in robotics. Current approaches often rely on
human labelers, which can be expensive, or simulation data, which can visually
or physically differ from real data. This paper proposes Labels from
UltraViolet (LUV), a novel framework that enables rapid, labeled data
collection in real manipulation environments without human labeling. LUV uses
transparent, ultraviolet-fluorescent paint with programmable ultraviolet LEDs
to collect paired images of a scene in standard lighting and UV lighting to
autonomously extract segmentation masks and keypoints via color segmentation.
We apply LUV to a suite of diverse robot perception tasks to evaluate its
labeling quality, flexibility, and data collection rate. Results suggest that
LUV is 180-2500 times faster than a human labeler across the tasks. We show
that LUV provides labels consistent with human annotations on unpainted test
images. The networks trained on these labels are used to smooth and fold
crumpled towels with 83% success rate and achieve 1.7mm position error with
respect to human labels on a surgical needle pose estimation task. The low cost
of LUV makes it ideal as a lightweight replacement for human labeling systems,
with the one-time setup costs at $300 equivalent to the cost of collecting
around 200 semantic segmentation labels on Amazon Mechanical Turk. Code,
datasets, visualizations, and supplementary material can be found at
https://sites.google.com/berkeley.edu/luv
|
[
{
"version": "v1",
"created": "Wed, 9 Mar 2022 08:03:07 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Mar 2022 07:51:46 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Thananjeyan",
"Brijen",
""
],
[
"Kerr",
"Justin",
""
],
[
"Huang",
"Huang",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Goldberg",
"Ken",
""
]
] |
new_dataset
| 0.955232 |
2203.05797
|
Zhibin Gou
|
Xinchao Xu, Zhibin Gou, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng
Wang, Shihang Wang
|
Long Time No See! Open-Domain Conversation with Long-Term Persona Memory
|
Accepted by Findings of ACL 2022 (Camera-ready version)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most open-domain dialogue models tend to perform poorly in the setting
of long-term human-bot conversations. The possible reason is that they lack the
capability of understanding and memorizing long-term dialogue history
information. To address this issue, we present a novel task of Long-term Memory
Conversation (LeMon) and then build a new dialogue dataset DuLeMon and a
dialogue generation framework with Long-Term Memory (LTM) mechanism (called
PLATO-LTM). This LTM mechanism enables our system to accurately extract and
continuously update long-term persona memory without requiring multiple-session
dialogue datasets for model training. To our knowledge, this is the first
attempt to conduct real-time dynamic management of persona information of both
parties, including the user and the bot. Results on DuLeMon indicate that
PLATO-LTM can significantly outperform baselines in terms of long-term dialogue
consistency, leading to better dialogue engagingness.
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 08:41:14 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 12:01:20 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Xu",
"Xinchao",
""
],
[
"Gou",
"Zhibin",
""
],
[
"Wu",
"Wenquan",
""
],
[
"Niu",
"Zheng-Yu",
""
],
[
"Wu",
"Hua",
""
],
[
"Wang",
"Haifeng",
""
],
[
"Wang",
"Shihang",
""
]
] |
new_dataset
| 0.955129 |
2203.06183
|
Mansi Sharma
|
Sachidanand V S and Mansi Sharma
|
Tactile-ViewGCN: Learning Shape Descriptor from Tactile Data using Graph
Convolutional Network
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For humans, our "senses of touch" have always been necessary for our ability
to precisely and efficiently manipulate objects of all shapes in any
environment, but until recently, not many works have been done to fully
understand haptic feedback. This work proposed a novel method for getting a
better shape descriptor than existing methods for classifying an object from
multiple tactile data collected from a tactile glove. It focuses on improving
previous works on object classification using tactile data. The major problem
for object classification from multiple tactile data is to find a good way to
aggregate features extracted from multiple tactile images. We propose a novel
method, dubbed Tactile-ViewGCN, that hierarchically aggregates tactile
features, considering relations among different features, by using a Graph
Convolutional Network. Our model outperforms previous methods on the STAG
dataset with an accuracy of 81.82%.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 05:58:21 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"S",
"Sachidanand V",
""
],
[
"Sharma",
"Mansi",
""
]
] |
new_dataset
| 0.995723 |
2203.06228
|
L\"utfi Kerem \c{S}enel
|
L\"utfi Kerem Senel, Timo Schick and Hinrich Sch\"utze
|
CoDA21: Evaluating Language Understanding Capabilities of NLP Models
With Context-Definition Alignment
|
To appear in ACL 2022, 5 pages, 2 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Pretrained language models (PLMs) have achieved superhuman performance on
many benchmarks, creating a need for harder tasks. We introduce CoDA21 (Context
Definition Alignment), a challenging benchmark that measures natural language
understanding (NLU) capabilities of PLMs: Given a definition and a context each
for k words, but not the words themselves, the task is to align the k
definitions with the k contexts. CoDA21 requires a deep understanding of
contexts and definitions, including complex inference and world knowledge. We
find that there is a large gap between human and PLM performance, suggesting
that CoDA21 measures an aspect of NLU that is not sufficiently covered in
existing benchmarks.
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 20:12:49 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Senel",
"Lütfi Kerem",
""
],
[
"Schick",
"Timo",
""
],
[
"Schütze",
"Hinrich",
""
]
] |
new_dataset
| 0.956433 |
2203.06264
|
Tianyi Li
|
Tianyi Li, Sabine Weber, Mohammad Javad Hosseini, Liane Guillou, Mark
Steedman
|
Cross-lingual Inference with A Chinese Entailment Graph
|
Accepted to Findings of ACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Predicate entailment detection is a crucial task for question-answering from
text, where previous work has explored unsupervised learning of entailment
graphs from typed open relation triples. In this paper, we present the first
pipeline for building Chinese entailment graphs, which involves a novel
high-recall open relation extraction (ORE) method and the first Chinese
fine-grained entity typing dataset under the FIGER type ontology. Through
experiments on the Levy-Holt dataset, we verify the strength of our Chinese
entailment graph, and reveal the cross-lingual complementarity: on the parallel
Levy-Holt dataset, an ensemble of Chinese and English entailment graphs
outperforms both monolingual graphs, and raises unsupervised SOTA by 4.7 AUC
points.
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 21:45:33 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Li",
"Tianyi",
""
],
[
"Weber",
"Sabine",
""
],
[
"Hosseini",
"Mohammad Javad",
""
],
[
"Guillou",
"Liane",
""
],
[
"Steedman",
"Mark",
""
]
] |
new_dataset
| 0.998984 |
2203.06369
|
Nicholas Kuo
|
Nicholas I-Hsien Kuo, Mark N. Polizzotto, Simon Finfer, Federico
Garcia, Anders S\"onnerborg, Maurizio Zazzi, Michael B\"ohm, Louisa Jorm and
Sebastiano Barbieri
|
The Health Gym: Synthetic Health-Related Datasets for the Development of
Reinforcement Learning Algorithms
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, the machine learning research community has benefited
tremendously from the availability of openly accessible benchmark datasets.
Clinical data are usually not openly available due to their highly confidential
nature. This has hampered the development of reproducible and generalisable
machine learning applications in health care. Here we introduce the Health Gym
- a growing collection of highly realistic synthetic medical datasets that can
be freely accessed to prototype, evaluate, and compare machine learning
algorithms, with a specific focus on reinforcement learning. The three
synthetic datasets described in this paper present patient cohorts with acute
hypotension and sepsis in the intensive care unit, and people with human
immunodeficiency virus (HIV) receiving antiretroviral therapy in ambulatory
care. The datasets were created using a novel generative adversarial network
(GAN). The distributions of variables, and correlations between variables and
trends over time in the synthetic datasets mirror those in the real datasets.
Furthermore, the risk of sensitive information disclosure associated with the
public distribution of the synthetic datasets is estimated to be very low.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 07:28:02 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Kuo",
"Nicholas I-Hsien",
""
],
[
"Polizzotto",
"Mark N.",
""
],
[
"Finfer",
"Simon",
""
],
[
"Garcia",
"Federico",
""
],
[
"Sönnerborg",
"Anders",
""
],
[
"Zazzi",
"Maurizio",
""
],
[
"Böhm",
"Michael",
""
],
[
"Jorm",
"Louisa",
""
],
[
"Barbieri",
"Sebastiano",
""
]
] |
new_dataset
| 0.999765 |
2203.06413
|
Youngsun Kwon
|
Youngsun Kwon, Minhyuk Sung, Sung-Eui Yoon
|
Implicit LiDAR Network: LiDAR Super-Resolution via Interpolation Weight
Prediction
|
7 pages, to be published in ICRA 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Super-resolution of LiDAR range images is crucial to improving many
downstream tasks such as object detection, recognition, and tracking. While
deep learning has made remarkable advances in super-resolution techniques,
typical convolutional architectures limit upscaling factors to specific output
resolutions in training. Recent work has shown that a continuous representation
of an image and learning its implicit function enable almost limitless
upscaling. However, the detailed approach, predicting values (depths) for
neighbor pixels in the input and then linearly interpolating them, does not
best fit the LiDAR range images since it does not fill the unmeasured details
but creates a new image with regression in a high-dimensional space. In
addition, the linear interpolation blurs sharp edges that provide important
boundary information of objects in 3-D points. To handle these problems, we
propose a novel network, Implicit LiDAR Network (ILN), which learns not the
values per pixel but the weights in the interpolation, so that super-resolution
can be done by blending the input pixel depths with non-linear weights.
Also, the weights can be considered as attentions from the query to the
neighbor pixels, and thus an attention module in the recent Transformer
architecture can be leveraged. Our experiments with a novel large-scale
synthetic dataset demonstrate that the proposed network reconstructs more
accurately than the state-of-the-art methods, achieving much faster convergence
in training.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 11:30:03 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Kwon",
"Youngsun",
""
],
[
"Sung",
"Minhyuk",
""
],
[
"Yoon",
"Sung-Eui",
""
]
] |
new_dataset
| 0.984948 |
2203.06421
|
Minghan Li
|
Minghan Li and Lei Zhang
|
One-stage Video Instance Segmentation: From Frame-in Frame-out to
Clip-in Clip-out
|
20 pages
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many video instance segmentation (VIS) methods partition a video sequence
into individual frames to detect and segment objects frame by frame. However,
such a frame-in frame-out (FiFo) pipeline is ineffective at exploiting the
temporal information. Based on the fact that adjacent frames in a short clip
are highly coherent in content, we propose to extend the one-stage FiFo
framework to a clip-in clip-out (CiCo) one, which performs VIS clip by clip.
Specifically, we stack FPN features of all frames in a short video clip to
build a spatio-temporal feature cube, and replace the 2D conv layers in the
prediction heads and the mask branch with 3D conv layers, forming clip-level
prediction heads (CPH) and clip-level mask heads (CMH). Then the clip-level
masks of an instance can be generated by feeding its box-level predictions from
CPH and clip-level features from CMH into a small fully convolutional network.
A clip-level segmentation loss is proposed to ensure that the generated
instance masks are temporally coherent in the clip. The proposed CiCo strategy
is free of inter-frame alignment, and can be easily embedded into existing FiFo
based VIS approaches. To validate the generality and effectiveness of our CiCo
strategy, we apply it to two representative FiFo methods, Yolact
\cite{bolya2019yolact} and CondInst \cite{tian2020conditional}, resulting in
two new one-stage VIS models, namely CiCo-Yolact and CiCo-CondInst, which
achieve 37.1/37.3\%, 35.2/35.4\% and 17.2/18.0\% mask AP using the ResNet50
backbone, and 41.8/41.4\%, 38.0/38.9\% and 18.0/18.2\% mask AP using the Swin
Transformer tiny backbone on YouTube-VIS 2019, 2021 and OVIS valid sets,
respectively, recording new state-of-the-arts. Code and video demos of CiCo can
be found at \url{https://github.com/MinghanLi/CiCo}.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 12:23:21 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Li",
"Minghan",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.981613 |
2203.06439
|
Cristina Gena
|
Cristina Gena, Claudio Mattutino, Enrico Mosca and Alberto Lillo
|
An end-user coding-based environment for programming an educational
affective robot
| null | null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present an open source educational robot, designed both to
engage children in an affective and social interaction, and to be programmable
also in its social and affective behaviour. Indeed the robot, in addition to
classic programming tasks, can also be programmed as a social robot. In
addition to movements, the user can make the robot express emotions and make it
say things. The robot can also be left in autonomous mode, in which it is able
to recognize both the user's biometric features and emotions, and to greet the
user.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 14:10:42 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Gena",
"Cristina",
""
],
[
"Mattutino",
"Claudio",
""
],
[
"Mosca",
"Enrico",
""
],
[
"Lillo",
"Alberto",
""
]
] |
new_dataset
| 0.998804 |
2203.06457
|
MInsoo Lee
|
Minsoo Lee, Chaeyeon Chung, Hojun Cho, Minjung Kim, Sanghun Jung,
Jaegul Choo, and Minhyuk Sung
|
3D-GIF: 3D-Controllable Object Generation via Implicit Factorized
Representations
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
While NeRF-based 3D-aware image generation methods enable viewpoint control,
limitations still remain to be adopted to various 3D applications. Due to their
view-dependent and light-entangled volume representation, the 3D geometry
presents unrealistic quality and the color should be re-rendered for every
desired viewpoint. To broaden the 3D applicability from 3D-aware image
generation to 3D-controllable object generation, we propose the factorized
representations which are view-independent and light-disentangled, and training
schemes with randomly sampled light conditions. We demonstrate the superiority
of our method by visualizing factorized representations, re-lighted images, and
albedo-textured meshes. In addition, we show that our approach improves the
quality of the generated geometry via visualization and quantitative
comparison. To the best of our knowledge, this is the first work that extracts
albedo-textured meshes with unposed 2D images without any additional labels or
assumptions.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 15:23:17 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Lee",
"Minsoo",
""
],
[
"Chung",
"Chaeyeon",
""
],
[
"Cho",
"Hojun",
""
],
[
"Kim",
"Minjung",
""
],
[
"Jung",
"Sanghun",
""
],
[
"Choo",
"Jaegul",
""
],
[
"Sung",
"Minhyuk",
""
]
] |
new_dataset
| 0.955583 |
2203.06663
|
Fuhai Chen
|
Chengpeng Dai, Fuhai Chen, Xiaoshuai Sun, Rongrong Ji, Qixiang Ye,
Yongjian Wu
|
Global2Local: A Joint-Hierarchical Attention for Video Captioning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, automatic video captioning has attracted increasing attention,
where the core challenge lies in capturing the key semantic items, like objects
and actions as well as their spatial-temporal correlations from the redundant
frames and semantic content. To this end, existing works select either the key
video clips at a global level~(across multiple frames), or key regions within each
frame, which, however, neglect the hierarchical order, i.e., key frames first
and key regions later. In this paper, we propose a novel joint-hierarchical
attention model for video captioning, which embeds the key clips, the key
frames and the key regions jointly into the captioning model in a hierarchical
manner. Such a joint-hierarchical attention model first conducts a global
selection to identify key frames, followed by a Gumbel sampling operation to
identify further key regions based on the key frames, achieving an accurate
global-to-local feature representation to guide the captioning. Extensive
quantitative evaluations on two public benchmark datasets MSVD and MSR-VTT
demonstrates the superiority of the proposed method over the state-of-the-art
methods.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 14:31:54 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Dai",
"Chengpeng",
""
],
[
"Chen",
"Fuhai",
""
],
[
"Sun",
"Xiaoshuai",
""
],
[
"Ji",
"Rongrong",
""
],
[
"Ye",
"Qixiang",
""
],
[
"Wu",
"Yongjian",
""
]
] |
new_dataset
| 0.991031 |
2203.06677
|
Han Zhang
|
Han Zhang, Zihao Zhang, Wenhao Zheng, Wei Xu
|
PNM: Pixel Null Model for General Image Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A major challenge in image segmentation is classifying object boundaries.
Recent efforts propose to refine the segmentation result with boundary masks.
However, models are still prone to misclassifying boundary pixels even when
they correctly capture the object contours. In such cases, even a perfect
boundary map is unhelpful for segmentation refinement. In this paper, we argue
that assigning proper prior weights to error-prone pixels such as object
boundaries can significantly improve the segmentation quality. Specifically, we
present the \textit{pixel null model} (PNM), a prior model that weights each
pixel according to its probability of being correctly classified by a random
segmenter. Empirical analysis shows that PNM captures the misclassification
distribution of different state-of-the-art (SOTA) segmenters. Extensive
experiments on semantic, instance, and panoptic segmentation tasks over three
datasets (Cityscapes, ADE20K, MS COCO) confirm that PNM consistently improves
the segmentation quality of most SOTA methods (including the vision
transformers) and outperforms boundary-based methods by a large margin. We also
observe that the widely-used mean IoU (mIoU) metric is insensitive to
boundaries of different sharpness. As a byproduct, we propose a new metric,
\textit{PNM IoU}, which perceives the boundary sharpness and better reflects
the model segmentation performance in error-prone regions.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 15:17:41 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Zhang",
"Han",
""
],
[
"Zhang",
"Zihao",
""
],
[
"Zheng",
"Wenhao",
""
],
[
"Xu",
"Wei",
""
]
] |
new_dataset
| 0.984988 |
2203.06679
|
Shaun Sweeney
|
Shaun Sweeney, Robert Shorten, David Timoney, Giovanni Russo,
Francesco Pilla
|
A smart electric bike for smart cities
| null | null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
This is a Master's thesis completed at University College Dublin, Ireland, in
2017 which involved augmenting an off-the-shelf electric bike with sensors to
enable new services to be delivered to cyclists in cities. The application of
primary interest was to control the cyclist's ventilation rate based on the
concentration of local air pollutants. Detailed modelling and system design are
presented for our cyber-physical system, which consisted of a modified BTwin
e-bike, Cycle Analyst sensors, the cyclist themselves, a Bluetooth connected
smartphone and our algorithms. Control algorithms to regulate the proportion of
power the cyclist provided as a proxy for their ventilation rate were proposed
and validated in a basic way, which were later proven significantly further in
Further Work (see IEEE Transactions on Intelligent Transportation Systems
paper: https://ieeexplore.ieee.org/abstract/document/8357977). The basic idea
was to provide more electrical assistance to cyclists in areas of high air
pollution to reduce the cyclist ventilation rate and thereby the amount of air
pollutants inhaled. This presents an interesting control challenge due to the
human-in-the-loop characteristics and the potential for impactful real life
applications. A background literature review is provided on energy as it
relates to cycling and some other applications are also discussed. A link to a
video which demonstrates the system is provided, and also to a blog published
by IBM Research about the system.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 15:28:12 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Sweeney",
"Shaun",
""
],
[
"Shorten",
"Robert",
""
],
[
"Timoney",
"David",
""
],
[
"Russo",
"Giovanni",
""
],
[
"Pilla",
"Francesco",
""
]
] |
new_dataset
| 0.99947 |
2203.06766
|
Van Bang Le
|
Sun-Yuan Hsieh, Hoang-Oanh Le, Van Bang Le, Sheng-Lung Peng
|
On the $d$-Claw Vertex Deletion Problem
| null | null | null | null |
cs.DM cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Let $d$-claw (or $d$-star) stand for $K_{1,d}$, the complete bipartite graph
with 1 and $d\ge 1$ vertices on each part. The $d$-claw vertex deletion
problem, $d$-CLAW-VD, asks for a given graph $G$ and an integer $k$ if one can
delete at most $k$ vertices from $G$ such that the resulting graph has no
$d$-claw as an induced subgraph. Thus, 1-CLAW-VD and 2-CLAW-VD are just the
famous VERTEX COVER problem and the CLUSTER VERTEX DELETION problem,
respectively. In this paper, we strengthen a hardness result in [M. Yannakakis,
Node-Deletion Problems on Bipartite Graphs, SIAM J. Comput. (1981)], by showing
that CLUSTER VERTEX DELETION remains NP-complete when restricted to bipartite
graphs of maximum degree 3. Moreover, for every $d\ge 3$, we show that
$d$-CLAW-VD is NP-complete even when restricted to bipartite graphs of maximum
degree $d$. These hardness results are optimal with respect to the degree
constraint. By extending the hardness result in [F. Bonomo-Braberman et al.,
Linear-Time Algorithms for Eliminating Claws in Graphs, COCOON 2020], we show
that, for every $d\ge 3$, $d$-CLAW-VD is NP-complete even when restricted to
split graphs without $(d+1)$-claws, and split graphs of diameter 2. On the
positive side, we prove that $d$-CLAW-VD is polynomially solvable on what we
call $d$-block graphs, a class that properly contains all block graphs. This result
extends the polynomial-time algorithm in [Y. Cao et al., Vertex deletion
problems on chordal graphs, Theor. Comput. Sci. (2018)] for 2-CLAW-VD on block
graphs to $d$-CLAW-VD for all $d\ge 2$ and improves the polynomial-time
algorithm proposed by F. Bonomo-Braberman et al. for (unweighted) 3-CLAW-VD on
block graphs to 3-block graphs.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 21:36:48 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Hsieh",
"Sun-Yuan",
""
],
[
"Le",
"Hoang-Oanh",
""
],
[
"Le",
"Van Bang",
""
],
[
"Peng",
"Sheng-Lung",
""
]
] |
new_dataset
| 0.977669 |
2203.06787
|
Xinhua Zhang
|
Xinhua Zhang and Lance R. Williams
|
Euclidean Invariant Recognition of 2D Shapes Using Histograms of
Magnitudes of Local Fourier-Mellin Descriptors
|
9 pages, 5 figures
|
2019 IEEE Winter Conference on Applications of Computer Vision
(WACV), 2019, pp. 303-311
|
10.1109/WACV.2019.00038
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Because the magnitudes of inner products with its basis functions are
invariant to rotation and scale change, the Fourier-Mellin transform has long
been used as a component in Euclidean invariant 2D shape recognition systems.
Yet Fourier-Mellin transform magnitudes are only invariant to rotation and
scale changes about a known center point, and full Euclidean invariant shape
recognition is not possible except when this center point can be consistently
and accurately identified. In this paper, we describe a system where a
Fourier-Mellin transform is computed at every point in the image. The spatial
support of the Fourier-Mellin basis functions is made local by multiplying them
with a polynomial envelope. Significantly, the magnitudes of convolutions with
these complex filters at isolated points are not (by themselves) used as
features for Euclidean invariant shape recognition because reliable
discrimination would require filters with spatial support large enough to fully
encompass the shapes. Instead, we rely on the fact that normalized histograms
of magnitudes are fully Euclidean invariant. We demonstrate a system based on
the VLAD machine learning method that performs Euclidean invariant recognition
of 2D shapes and requires an order of magnitude less training data than
comparable methods based on convolutional neural networks.
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 23:54:56 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Zhang",
"Xinhua",
""
],
[
"Williams",
"Lance R.",
""
]
] |
new_dataset
| 0.985008 |
2203.06835
|
Xindi Wang
|
Xindi Wang, Robert E. Mercer, Frank Rudzicz
|
KenMeSH: Knowledge-enhanced End-to-end Biomedical Text Labelling
|
main conference at ACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, Medical Subject Headings (MeSH) are manually assigned to every
biomedical article published and subsequently recorded in the PubMed database
to facilitate retrieving relevant information. With the rapid growth of the
PubMed database, large-scale biomedical document indexing becomes increasingly
important. MeSH indexing is a challenging task for machine learning, as it
needs to assign multiple labels to each article from an extremely large
hierarchically organized collection. To address this challenge, we propose
KenMeSH, an end-to-end model that combines new text features and a dynamic
\textbf{K}nowledge-\textbf{en}hanced mask attention that integrates document
features with MeSH label hierarchy and journal correlation features to index
MeSH terms. Experimental results show the proposed method achieves
state-of-the-art performance on a number of measures.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 03:09:56 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Wang",
"Xindi",
""
],
[
"Mercer",
"Robert E.",
""
],
[
"Rudzicz",
"Frank",
""
]
] |
new_dataset
| 0.995809 |
2203.06873
|
Arushi Jain
|
Arushi Jain, Shubham Paliwal, Monika Sharma, Lovekesh Vig
|
TSR-DSAW: Table Structure Recognition via Deep Spatial Association of
Words
|
6 pages, 1 figure, 1 table, ESANN 2021 proceedings, European
Symposium on Artificial Neural Networks, Computational Intelligence and
Machine Learning. Online event, 6-8 October 2021, i6doc.com publ., ISBN
978287587082-7
|
In ESANN 2021 proceedings, pages 257-262
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing methods for Table Structure Recognition (TSR) from camera-captured
or scanned documents perform poorly on complex tables consisting of nested rows
/ columns, multi-line texts and missing cell data. This is because current
data-driven methods work by simply training deep models on large volumes of
data and fail to generalize when an unseen table structure is encountered. In
this paper, we propose to train a deep network to capture the spatial
associations between different word pairs present in the table image for
unravelling the table structure. We present an end-to-end pipeline, named
TSR-DSAW: TSR via Deep Spatial Association of Words, which outputs a digital
representation of a table image in a structured format such as HTML. Given a
table image as input, the proposed method begins with the detection of all the
words present in the image using a text-detection network like CRAFT which is
followed by the generation of word-pairs using dynamic programming. These
word-pairs are highlighted in individual images and subsequently, fed into a
DenseNet-121 classifier trained to capture spatial associations such as
same-row, same-column, same-cell or none. Finally, we perform post-processing
on the classifier output to generate the table structure in HTML format. We
evaluate our TSR-DSAW pipeline on two public table-image datasets -- PubTabNet
and ICDAR 2013, and demonstrate improvement over previous methods such as
TableNet and DeepDeSRT.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 06:02:28 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Jain",
"Arushi",
""
],
[
"Paliwal",
"Shubham",
""
],
[
"Sharma",
"Monika",
""
],
[
"Vig",
"Lovekesh",
""
]
] |
new_dataset
| 0.970401 |
2203.06955
|
Romain Fouquet
|
Romain Fouquet (SPIRALS), Pierre Laperdrix (CNRS, SPIRALS), Romain
Rouvoy (SPIRALS)
|
JSRehab: Weaning Common Web Interface Components from JavaScript
Addiction
|
WWW '22 Companion, May 2022, Lyon, France
| null |
10.1145/3487553.3524227
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Leveraging JavaScript (JS) for User Interface (UI) interactivity has been the
norm on the web for many years. Yet, using JS increases bandwidth and battery
consumption as scripts need to be downloaded and processed by the browser.
Plus, client-side JS may expose visitors to security vulnerabilities such as
Cross-Site Scripting (XSS).This paper introduces a new server-side plugin,
called JSRehab, that automatically rewrites common web interface components by
alternatives that do not require any JavaScript (JS). The main objective of
JSRehab is to drastically reduce-and ultimately remove-the inclusion of JS in a
web page to improve its responsiveness and consume fewer resources. We report on
our implementation of JSRehab for Bootstrap, the most popular UI framework by
far, and evaluate it on a corpus of 100 webpages. We show through manual
validation that it is indeed possible to lower the dependency of pages on JS
while keeping their interactivity and accessibility intact. We observe that
JSRehab brings energy savings of at least 5 % for the majority of web pages on
the tested devices, while introducing a median on-the-wire overhead of only 5 %
to the HTML payload.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 09:40:31 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Fouquet",
"Romain",
"",
"SPIRALS"
],
[
"Laperdrix",
"Pierre",
"",
"CNRS, SPIRALS"
],
[
"Rouvoy",
"Romain",
"",
"SPIRALS"
]
] |
new_dataset
| 0.986297 |
2203.06972
|
Stefano Dafarra
|
Stefano Dafarra, Kourosh Darvish, Riccardo Grieco, Gianluca Milani,
Ugo Pattacini, Lorenzo Rapetti, Giulio Romualdi, Mattia Salvi, Alessandro
Scalzo, Ines Sorrentino, Davide Tom\`e, Silvio Traversaro, Enrico Valli,
Paolo Maria Viceconte, Giorgio Metta, Marco Maggiali, Daniele Pucci
|
iCub3 Avatar System
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an avatar system that enables a human operator to visit a remote
location via iCub3, a new humanoid robot developed at the Italian Institute of
Technology (IIT) paving the way for the next generation of the iCub platforms.
On the one hand, we present the humanoid iCub3 that plays the role of the
robotic avatar. Particular attention is paid to the differences between iCub3
and the classical iCub humanoid robot. On the other hand, we present the set of
technologies of the avatar system at the operator side. They are mainly
composed of iFeel, namely, IIT lightweight non-invasive wearable devices for
motion tracking and haptic feedback, and of non-IIT technologies designed for
virtual reality ecosystems. Finally, we show the effectiveness of the avatar
system by describing a demonstration involving real-time teleoperation of the
iCub3. The robot is located in Venice, at the Biennale di Venezia, while the human
operator is located more than 290 km away at IIT in Genoa. Using a
standard fiber optic internet connection, the avatar system transports the
operator locomotion, manipulation, voice, and face expressions to the iCub3
with visual, auditory, haptic and touch feedback.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 10:13:06 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Dafarra",
"Stefano",
""
],
[
"Darvish",
"Kourosh",
""
],
[
"Grieco",
"Riccardo",
""
],
[
"Milani",
"Gianluca",
""
],
[
"Pattacini",
"Ugo",
""
],
[
"Rapetti",
"Lorenzo",
""
],
[
"Romualdi",
"Giulio",
""
],
[
"Salvi",
"Mattia",
""
],
[
"Scalzo",
"Alessandro",
""
],
[
"Sorrentino",
"Ines",
""
],
[
"Tomè",
"Davide",
""
],
[
"Traversaro",
"Silvio",
""
],
[
"Valli",
"Enrico",
""
],
[
"Viceconte",
"Paolo Maria",
""
],
[
"Metta",
"Giorgio",
""
],
[
"Maggiali",
"Marco",
""
],
[
"Pucci",
"Daniele",
""
]
] |
new_dataset
| 0.999263 |
2203.06974
|
Vahid Hashemi
|
Hassan Hage, Emmanouil Seferis, Vahid Hashemi, and Frank Mantwill
|
SMC4PEP: Stochastic Model Checking of Product Engineering Processes
|
Paper accepted at the 25th International Conference on Fundamental
Approaches to Software Engineering (FASE'22)
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Product Engineering Processes (PEPs) are used for describing complex product
developments in big enterprises such as automotive and avionics industries. The
Business Process Model Notation (BPMN) is a widely used language to encode
interactions among several participants in such PEPs. In this paper, we present
SMC4PEP as a tool to convert graphical representations of a business process
using the BPMN standard to an equivalent discrete-time stochastic control
process called Markov Decision Process (MDP). To this aim, we first follow the
approach described in an earlier investigation to generate a semantically
equivalent business process which is more capable of handling the PEP
complexity. In particular, the interaction between different levels of
abstraction is realized by events rather than direct message flows. Afterwards,
SMC4PEP converts the generated process to an MDP model described by the syntax
of the probabilistic model checking tool PRISM. As such, SMC4PEP provides a
framework for automatic verification and validation of business processes in
particular with respect to requirements from legal standards such as Automotive
SPICE. Moreover, our experimental results confirm a faster verification routine
due to smaller MDP models generated from the alternative event-based BPMN
models.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 12:29:48 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Hage",
"Hassan",
""
],
[
"Seferis",
"Emmanouil",
""
],
[
"Hashemi",
"Vahid",
""
],
[
"Mantwill",
"Frank",
""
]
] |
new_dataset
| 0.988459 |
2203.07003
|
Shibiao Xu
|
Changwei Wang, Rongtao Xu, Yuyang Zhang, Shibiao Xu, Weiliang Meng,
Bin Fan, Xiaopeng Zhang
|
MTLDesc: Looking Wider to Describe Better
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Limited by the locality of convolutional neural networks, most existing local
feature description methods only learn local descriptors with local
information and lack awareness of global and surrounding spatial context. In
this work, we focus on making local descriptors "look wider to describe better"
by learning local Descriptors with More Than just Local information (MTLDesc).
Specifically, we resort to context augmentation and spatial attention
mechanisms to make our MTLDesc obtain non-local awareness. First, Adaptive
Global Context Augmented Module and Diverse Local Context Augmented Module are
proposed to construct robust local descriptors with context information from
global to local. Second, Consistent Attention Weighted Triplet Loss is designed
to integrate spatial attention awareness into both optimization and matching
stages of local descriptors learning. Third, Local Features Detection with
Feature Pyramid is given to obtain more stable and accurate keypoints
localization. With the above innovations, the performance of our MTLDesc
significantly surpasses the prior state-of-the-art local descriptors on
HPatches, Aachen Day-Night localization and InLoc indoor localization
benchmarks.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 11:16:05 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Wang",
"Changwei",
""
],
[
"Xu",
"Rongtao",
""
],
[
"Zhang",
"Yuyang",
""
],
[
"Xu",
"Shibiao",
""
],
[
"Meng",
"Weiliang",
""
],
[
"Fan",
"Bin",
""
],
[
"Zhang",
"Xiaopeng",
""
]
] |
new_dataset
| 0.993475 |
2203.07086
|
Alexander Kunitsyn
|
Alexander Kunitsyn, Maksim Kalashnikov, Maksim Dzabraev, Andrei
Ivaniuta
|
MDMMT-2: Multidomain Multimodal Transformer for Video Retrieval, One
More Step Towards Generalization
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this work we present a new State-of-The-Art on the text-to-video retrieval
task on MSR-VTT, LSMDC, MSVD, YouCook2 and TGIF obtained by a single model.
Three different data sources are combined: weakly-supervised videos,
crowd-labeled text-image pairs and text-video pairs. A careful analysis of
available pre-trained networks helps to choose the best prior-knowledge ones.
We introduce a three-stage training procedure that provides high knowledge-transfer
efficiency and allows the use of noisy datasets during training without
prior knowledge degradation. Additionally, double positional encoding is used
for better fusion of different modalities, and a simple method for processing
non-square inputs is suggested.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 13:15:09 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Kunitsyn",
"Alexander",
""
],
[
"Kalashnikov",
"Maksim",
""
],
[
"Dzabraev",
"Maksim",
""
],
[
"Ivaniuta",
"Andrei",
""
]
] |
new_dataset
| 0.996945 |
2203.07102
|
Youqian Zhang
|
Youqian Zhang, Kasper Rasmussen
|
Detection of Electromagnetic Signal Injection Attacks on Actuator
Systems
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An actuator is a device that converts electricity into another form of
energy, typically physical movement. They are absolutely essential for any
system that needs to impact or modify the physical world, and are used in
millions of systems of all sizes, all over the world, from cars and spacecraft
to factory control systems and critical infrastructure. An actuator is a "dumb
device" that is entirely controlled by the surrounding electronics, e.g., a
microcontroller, and thus cannot authenticate its control signals or do any
other form of processing. The problem we look at in this paper is how the wires
that connect an actuator to its control electronics can act like antennas,
picking up electromagnetic signals from the environment. This makes it possible
for a remote attacker to wirelessly inject signals (energy) into these wires to
bypass the controller and directly control the actuator.
To detect such attacks, we propose a novel detection method that allows the
microcontroller to monitor the control signal and detect attacks as a deviation
from the intended value. We have managed to do this without requiring the
microcontroller to sample the signal at a high rate or run any signal
processing. That makes our defense mechanism practical and easy to integrate
into existing systems. Our method is general and applies to any type of
actuator (provided a few basic assumptions are met), and can deal with
adversaries with arbitrarily high transmission power. We implement our
detection method on two different practical systems to show its generality,
effectiveness, and robustness.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 13:47:03 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Zhang",
"Youqian",
""
],
[
"Rasmussen",
"Kasper",
""
]
] |
new_dataset
| 0.998699 |
2203.07130
|
Kevin Haninger
|
Richard Hartisch and Kevin Haninger
|
Flexure-based Environmental Compliance for High-speed Robotic Contact
Tasks
|
7 pages, 7 figures, on review. Experiment video:
https://youtu.be/96EdFZqY_2E
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The design of physical compliance -- its location, degree, and structure --
affects robot performance and robustness in contact-rich tasks. While
compliance is often used in the robot's joints, flange, or end-effector, this
paper proposes compliant structures in the environment, allowing safe and
robust contact while keeping the higher motion control bandwidth and precision
of high impedance robots. Compliance is here realized with flexures and
viscoelastic materials, which are integrated to several mechanisms to offer
structured compliance, such as a remote center of compliance. Additive
manufacturing with fused deposition modeling is used, allowing faster design
iteration and low-cost integration with standard industrial equipment.
Mechanical properties, including the total stiffness matrix, stiffness ratio,
and rotational precision, are analytically determined and compared to
experimental results. Three remote center of compliance (RCC) devices and a
1-DOF linear device are prototyped and tested in high-speed assembly tasks.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 14:25:44 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Hartisch",
"Richard",
""
],
[
"Haninger",
"Kevin",
""
]
] |
new_dataset
| 0.998581 |
2203.07228
|
Ilias Chalkidis
|
Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada,
Sebastian Felix Schwemer, Anders S{\o}gaard
|
FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text
Processing
|
9 pages, long paper at ACL 2022 proceedings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a benchmark suite of four datasets for evaluating the fairness of
pre-trained language models and the techniques used to fine-tune them for
downstream tasks. Our benchmarks cover four jurisdictions (European Council,
USA, Switzerland, and China), five languages (English, German, French, Italian
and Chinese) and fairness across five attributes (gender, age, region,
language, and legal area). In our experiments, we evaluate pre-trained language
models using several group-robust fine-tuning techniques and show that
performance group disparities are vibrant in many cases, while none of these
techniques guarantee fairness, nor consistently mitigate group disparities.
Furthermore, we provide a quantitative and qualitative analysis of our results,
highlighting open challenges in the development of robustness methods in legal
NLP.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 16:10:28 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Chalkidis",
"Ilias",
""
],
[
"Pasini",
"Tommaso",
""
],
[
"Zhang",
"Sheng",
""
],
[
"Tomada",
"Letizia",
""
],
[
"Schwemer",
"Sebastian Felix",
""
],
[
"Søgaard",
"Anders",
""
]
] |
new_dataset
| 0.999827 |
2203.07238
|
Umberto Martinez-Penas
|
Umberto Mart\'inez-Pe\~nas
|
Multilayer crisscross error and erasure correction
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this work, multilayer crisscross errors and erasures are considered, which
affect entire rows and columns of the matrices in a list of matrices. To
measure such errors and erasures, the multi-cover metric is introduced. Several
bounds are derived, including a Singleton bound, and maximum multi-cover
distance (MMCD) codes are defined as those attaining it. Duality, puncturing
and shortening of linear MMCD codes are studied. It is shown that the dual of a
linear MMCD code is not necessarily MMCD, and those satisfying this duality
condition are defined as dually MMCD codes. Finally, some constructions of
codes in the multi-cover metric are given, including dually MMCD codes,
together with efficient decoding algorithms for them.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 16:17:12 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Martínez-Peñas",
"Umberto",
""
]
] |
new_dataset
| 0.991275 |
2203.07307
|
Manuel Tran
|
Manuel Tran, Sophia J. Wagner, Melanie Boxberg, Tingying Peng
|
S5CL: Unifying Fully-Supervised, Self-Supervised, and Semi-Supervised
Learning Through Hierarchical Contrastive Learning
| null | null | null | null |
cs.CV stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
In computational pathology, we often face a scarcity of annotations and a
large amount of unlabeled data. One method for dealing with this is
semi-supervised learning, which is commonly split into a self-supervised pretext
task and subsequent model fine-tuning. Here, we compress this two-stage
training into one by introducing S5CL, a unified framework for
fully-supervised, self-supervised, and semi-supervised learning. With three
contrastive losses defined for labeled, unlabeled, and pseudo-labeled images,
S5CL can learn feature representations that reflect the hierarchy of distance
relationships: similar images and augmentations are embedded the closest,
followed by different looking images of the same class, while images from
separate classes have the largest distance. Moreover, S5CL allows us to
flexibly combine these losses to adapt to different scenarios. Evaluations of
our framework on two public histopathological datasets show strong improvements
in the case of sparse labels: for a H&E-stained colorectal cancer dataset, the
accuracy increases by up to 9% compared to supervised cross-entropy loss; for a
highly imbalanced dataset of single white blood cells from leukemia patient
blood smears, the F1-score increases by up to 6%.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 17:10:01 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Tran",
"Manuel",
""
],
[
"Wagner",
"Sophia J.",
""
],
[
"Boxberg",
"Melanie",
""
],
[
"Peng",
"Tingying",
""
]
] |
new_dataset
| 0.989422 |
2203.07355
|
Narges Kazempour
|
Seyed Reza Hoseini Najarkolaei, Narges Kazempour, Hasti Rostami,
Mohammad Reza Aref
|
Information-Theoretic Secure and Private Voting System
|
13 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a private voting system that consists of N
authorized voters who may vote for one of the K candidates or abstain. Each
voter wants to compute the final tally while staying private and robust against
malicious voters, who try to gain information about the vote of the other
voters beyond the final result, or send incorrect information to affect the
final tally. We design an information-theoretic private voting system based on
Shamir secret sharing, which is secure and robust as long as there are up to
(N-1)/3 malicious voters.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 17:53:35 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Najarkolaei",
"Seyed Reza Hoseini",
""
],
[
"Kazempour",
"Narges",
""
],
[
"Rostami",
"Hasti",
""
],
[
"Aref",
"Mohammad Reza",
""
]
] |
new_dataset
| 0.98231 |
2203.07362
|
Jens Lemmens
|
Jens Lemmens, Jens Van Nooten, Tim Kreutz, Walter Daelemans
|
CoNTACT: A Dutch COVID-19 Adapted BERT for Vaccine Hesitancy and
Argumentation Detection
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present CoNTACT: a Dutch language model adapted to the domain of COVID-19
tweets. The model was developed by continuing the pre-training phase of RobBERT
(Delobelle, 2020) by using 2.8M Dutch COVID-19 related tweets posted in 2021.
In order to test the performance of the model and compare it to RobBERT, the
two models were tested on two tasks: (1) binary vaccine hesitancy detection and
(2) detection of arguments for vaccine hesitancy. For both tasks, not only
Twitter but also Facebook data was used to show cross-genre performance. In our
experiments, CoNTACT showed statistically significant gains over RobBERT in all
experiments for task 1. For task 2, we observed substantial improvements in
virtually all classes in all experiments. An error analysis indicated that the
domain adaptation yielded better representations of domain-specific
terminology, causing CoNTACT to make more accurate classification decisions.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 17:55:32 GMT"
}
] | 2022-03-15T00:00:00 |
[
[
"Lemmens",
"Jens",
""
],
[
"Van Nooten",
"Jens",
""
],
[
"Kreutz",
"Tim",
""
],
[
"Daelemans",
"Walter",
""
]
] |
new_dataset
| 0.992666 |
2101.05202
|
Mihails Birjukovs
|
Peteris Zvejnieks, Mihails Birjukovs, Martins Klevs, Megumi Akashi,
Sven Eckert, Andris Jakovics
|
MHT-X: Offline Multiple Hypothesis Tracking with Algorithm X
|
18 pages, 15 figures
| null |
10.1007/s00348-022-03399-5
| null |
cs.CV physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An efficient and versatile implementation of offline multiple hypothesis
tracking with Algorithm X for optimal association search was developed using
Python. The code is intended for scientific applications that do not require
online processing. Directed graph framework is used and multiple scans with
progressively increasing time window width are used for edge construction for
maximum likelihood trajectories. The current version of the code was developed
for applications in multiphase hydrodynamics, e.g. bubble and particle
tracking, and is capable of resolving object motion, merges and splits.
Feasible object associations and trajectory graph edge likelihoods are
determined using weak mass and momentum conservation laws translated to
statistical functions for object properties. The code is compatible with
n-dimensional motion with arbitrarily many tracked object properties. This
framework is easily extendable beyond the present application by replacing the
currently used heuristics with ones more appropriate for the problem at hand.
The code is open-source and will be continuously developed further.
|
[
{
"version": "v1",
"created": "Thu, 17 Dec 2020 02:04:46 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Feb 2021 14:09:27 GMT"
}
] | 2022-03-14T00:00:00 |
[
[
"Zvejnieks",
"Peteris",
""
],
[
"Birjukovs",
"Mihails",
""
],
[
"Klevs",
"Martins",
""
],
[
"Akashi",
"Megumi",
""
],
[
"Eckert",
"Sven",
""
],
[
"Jakovics",
"Andris",
""
]
] |
new_dataset
| 0.990576 |
2102.11455
|
Abhijeet Sahu
|
Patrick Wlazlo, Abhijeet Sahu, Zeyu Mao, Hao Huang, Ana Goulart,
Katherine Davis, Saman Zonouz
|
Man-in-The-Middle Attacks and Defense in a Power System Cyber-Physical
Testbed
| null |
IET Cyber-Physical Systems: Theory & Applications 2021
|
10.1049/cps2.12014
| null |
cs.CR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Man-in-The-Middle (MiTM) attacks present numerous threats to a smart grid. In
a MiTM attack, an intruder embeds itself within a conversation between two
devices to either eavesdrop or impersonate one of the devices, making it appear
to be a normal exchange of information. Thus, the intruder can perform false
data injection (FDI) and false command injection (FCI) attacks that can
compromise power system operations, such as state estimation, economic
dispatch, and automatic generation control (AGC). Very few researchers have
focused on MiTM methods that are difficult to detect within a smart grid. To
address this, we are designing and implementing multi-stage MiTM intrusions in
an emulation-based cyber-physical power system testbed against a large-scale
synthetic grid model to demonstrate how such attacks can cause physical
contingencies such as misguided operation and false measurements. MiTM
intrusions create FCI, FDI, and replay attacks in this synthetic power grid.
This work enables stakeholders to defend against these stealthy attacks, and we
present detection mechanisms that are developed using multiple alerts from
intrusion detection systems and network monitoring tools. Our contribution will
enable other smart grid security researchers and industry to develop further
detection mechanisms for inconspicuous MiTM attacks.
|
[
{
"version": "v1",
"created": "Tue, 23 Feb 2021 01:59:56 GMT"
}
] | 2022-03-14T00:00:00 |
[
[
"Wlazlo",
"Patrick",
""
],
[
"Sahu",
"Abhijeet",
""
],
[
"Mao",
"Zeyu",
""
],
[
"Huang",
"Hao",
""
],
[
"Goulart",
"Ana",
""
],
[
"Davis",
"Katherine",
""
],
[
"Zonouz",
"Saman",
""
]
] |
new_dataset
| 0.999183 |
2104.06570
|
Heide Gluesing-Luerssen
|
Heide Gluesing-Luerssen and Benjamin Jany
|
q-Polymatroids and Their Relation to Rank-Metric Codes
| null | null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
It is well known that linear rank-metric codes give rise to q-polymatroids.
Analogously to matroid theory one may ask whether a given q-polymatroid is
representable by a rank-metric code. We provide an answer by presenting an
example of a q-matroid that is not representable by any linear rank-metric code
and, via a relation to paving matroids, provide examples of various q-matroids
that are not representable by F_{q^m}-linear rank-metric codes. We then go on
and introduce deletion and contraction for q-polymatroids and show that they
are mutually dual and correspond to puncturing and shortening of rank-metric
codes. Finally, we introduce a closure operator along with the notion of flats
and show that the generalized rank weights of a rank-metric code are fully
determined by the flats of the associated q-polymatroid.
|
[
{
"version": "v1",
"created": "Wed, 14 Apr 2021 01:05:11 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Jun 2021 18:02:32 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Mar 2022 20:08:25 GMT"
}
] | 2022-03-14T00:00:00 |
[
[
"Gluesing-Luerssen",
"Heide",
""
],
[
"Jany",
"Benjamin",
""
]
] |
new_dataset
| 0.999039 |
2105.03638
|
Ryota Eguchi
|
Ryota Eguchi, Naoki Kitamura, and Taisuke Izumi
|
Fast Neighborhood Rendezvous
| null | null |
10.1587/transinf.2021EDP7104
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In the rendezvous problem, two computing entities (called \emph{agents})
located at different vertices in a graph have to meet at the same vertex. In
this paper, we consider the synchronous \emph{neighborhood rendezvous problem},
where the agents are initially located at two adjacent vertices. While this
problem can be trivially solved in $O(\Delta)$ rounds ($\Delta$ is the maximum
degree of the graph), it is highly challenging to reveal whether that problem
can be solved in $o(\Delta)$ rounds, even assuming the rich computational
capability of agents. The only known result is that the time complexity of
$O(\sqrt{n})$ rounds is achievable if the graph is complete and agents are
probabilistic, asymmetric, and can use whiteboards placed at vertices. Our main
contribution is to clarify the situation (with respect to computational models
and graph classes) admitting such a sublinear-time rendezvous algorithm. More
precisely, we present two algorithms achieving fast rendezvous additionally
assuming bounded minimum degree, unique vertex identifier, accessibility to
neighborhood IDs, and randomization. The first algorithm runs within
$\tilde{O}(\sqrt{n\Delta/\delta} + n/\delta)$ rounds for graphs of the minimum
degree larger than $\sqrt{n}$, where $n$ is the number of vertices in the
graph, and $\delta$ is the minimum degree of the graph. The second algorithm
assumes that the largest vertex ID is $O(n)$, and achieves $\tilde{O}\left(
\frac{n}{\sqrt{\delta}} \right)$-round time complexity without using
whiteboards. These algorithms attain $o(\Delta)$-round complexity in the case
of $\delta = {\omega}(\sqrt{n} \log n)$ and $\delta = \omega(n^{2/3} \log^{4/3}
n)$ respectively.
|
[
{
"version": "v1",
"created": "Sat, 8 May 2021 08:38:05 GMT"
}
] | 2022-03-14T00:00:00 |
[
[
"Eguchi",
"Ryota",
""
],
[
"Kitamura",
"Naoki",
""
],
[
"Izumi",
"Taisuke",
""
]
] |
new_dataset
| 0.998458 |