id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2206.14276
|
Melih Elibol
|
Melih Elibol, Vinamra Benara, Samyu Yagati, Lianmin Zheng, Alvin
Cheung, Michael I. Jordan, Ion Stoica
|
NumS: Scalable Array Programming for the Cloud
| null | null | null | null |
cs.DC cs.LG cs.MS stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scientists increasingly rely on Python tools to perform scalable distributed
memory array operations using rich, NumPy-like expressions. However, many of
these tools rely on dynamic schedulers optimized for abstract task graphs,
which often encounter memory and network bandwidth-related bottlenecks due to
sub-optimal data and operator placement decisions. Tools built on the message
passing interface (MPI), such as ScaLAPACK and SLATE, have better scaling
properties, but these solutions require specialized knowledge to use. In this
work, we present NumS, an array programming library which optimizes NumPy-like
expressions on task-based distributed systems. This is achieved through a novel
scheduler called Load Simulated Hierarchical Scheduling (LSHS). LSHS is a local
search method which optimizes operator placement by minimizing maximum memory
and network load on any given node within a distributed system. Coupled with a
heuristic for load balanced data layouts, our approach is capable of attaining
communication lower bounds on some common numerical operations, and our
empirical study shows that LSHS enhances performance on Ray by decreasing
network load by a factor of 2x, requiring 4x less memory, and reducing
execution time by 10x on the logistic regression problem. On terabyte-scale
data, NumS achieves competitive performance to SLATE on DGEMM, up to 20x
speedup over Dask on a key operation for tensor factorization, and a 2x speedup
on logistic regression compared to Dask ML and Spark's MLlib.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 20:13:40 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2022 01:12:04 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Elibol",
"Melih",
""
],
[
"Benara",
"Vinamra",
""
],
[
"Yagati",
"Samyu",
""
],
[
"Zheng",
"Lianmin",
""
],
[
"Cheung",
"Alvin",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Stoica",
"Ion",
""
]
] |
new_dataset
| 0.999542 |
2207.01183
|
Sandy Ardianto
|
Sandy Ardianto, Hsueh-Ming Hang, Wen-Huang Cheng (National Yang Ming
Chiao Tung University)
|
Fast Vehicle Detection and Tracking on Fisheye Traffic Monitoring Video
using CNN and Bounding Box Propagation
|
to be published in International Conference on Image Processing
(ICIP) 2022, Bordeaux, France
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We design a fast car detection and tracking algorithm for fisheye traffic
monitoring video captured by cameras mounted at crossroads. We use the ICIP
2020 VIP Cup dataset and adopt
YOLOv5 as the object detection base model. The nighttime video of this dataset
is very challenging, and the detection accuracy (AP50) of the base model is
about 54%. We design a reliable car detection and tracking algorithm based on
the concept of bounding box propagation among frames, which provides 17.9
percentage points (pp) and 6.2 pp accuracy improvement over the base model for
the nighttime and daytime videos, respectively. To speed up, the grayscale
frame difference is used for the intermediate frames in a segment, which can
double the processing speed.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 03:55:19 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2022 15:04:18 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Ardianto",
"Sandy",
"",
"National Yang Ming\n Chiao Tung University"
],
[
"Hang",
"Hsueh-Ming",
"",
"National Yang Ming\n Chiao Tung University"
],
[
"Cheng",
"Wen-Huang",
"",
"National Yang Ming\n Chiao Tung University"
]
] |
new_dataset
| 0.999239 |
2207.03608
|
Hung-Min Hsu
|
Hung-Min Hsu, Yizhou Wang, Cheng-Yen Yang, Jenq-Neng Hwang, Hoang Le
Uyen Thuc, Kwang-Ju Kim
|
GaitTAKE: Gait Recognition by Temporal Attention and Keypoint-guided
Embedding
|
IEEE International Conference on Image Processing 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gait recognition, which refers to the recognition or identification of a
person based on their body shape and walking styles, derived from video data
captured from a distance, is widely used in crime prevention, forensic
identification, and social security. However, to the best of our knowledge,
most of the existing methods use appearance, posture and temporal features
without considering a learned temporal attention mechanism for global and local
information fusion. In this paper, we propose a novel gait recognition
framework, called Temporal Attention and Keypoint-guided Embedding (GaitTAKE),
which effectively fuses temporal-attention-based global and local appearance
feature and temporal aggregated human pose feature. Experimental results show
that our proposed method achieves a new SOTA in gait recognition with rank-1
accuracy of 98.0% (normal), 97.5% (bag) and 92.2% (coat) on the CASIA-B gait
dataset, and 90.4% accuracy on the OU-MVLP gait dataset.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 22:38:54 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 18:05:58 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Hsu",
"Hung-Min",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Yang",
"Cheng-Yen",
""
],
[
"Hwang",
"Jenq-Neng",
""
],
[
"Thuc",
"Hoang Le Uyen",
""
],
[
"Kim",
"Kwang-Ju",
""
]
] |
new_dataset
| 0.998969 |
2207.03800
|
Yongqi Wang
|
Yongqi Wang and Zhou Zhao
|
FastLTS: Non-Autoregressive End-to-End Unconstrained Lip-to-Speech
Synthesis
|
10 pages, 5 figures, accepted by ACMMM 2022
| null |
10.1145/3503161.3548194
| null |
cs.SD cs.CL cs.CV cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unconstrained lip-to-speech synthesis aims to generate the corresponding speech
from silent videos of talking faces with no restriction on head poses or
vocabulary. Current works mainly use sequence-to-sequence models to solve this
problem, either in an autoregressive architecture or a flow-based
non-autoregressive architecture. However, these models suffer from several
drawbacks: 1) Instead of directly generating audio, they use a two-stage
pipeline that first generates mel-spectrograms and then reconstructs audio
from the spectrograms. This causes cumbersome deployment and degradation of
speech quality due to error propagation; 2) The audio reconstruction algorithm
used by these models limits the inference speed and audio quality, while neural
vocoders are not available for these models since their output spectrograms are
not accurate enough; 3) The autoregressive model suffers from high inference
latency, while the flow-based model has high memory occupancy: neither of them
is efficient enough in both time and memory usage. To tackle these problems, we
propose FastLTS, a non-autoregressive end-to-end model which can directly
synthesize high-quality speech audio from unconstrained talking videos with
low latency, and has a relatively small model size. Moreover, departing from the
widely used 3D-CNN visual frontend for lip movement encoding, we are the first
to propose a transformer-based visual frontend for this task. Experiments
show that our model achieves $19.76\times$ speedup for audio waveform
generation compared with the current autoregressive model on input sequences of
3 seconds, and obtains superior audio quality.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 10:10:39 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2022 09:15:36 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Wang",
"Yongqi",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.994859 |
2207.05008
|
Mustafa Erolcan Er
|
Deniz Zeyrek, Mustafa Erolcan Er
|
A description of Turkish Discourse Bank 1.2 and an examination of common
dependencies in Turkish discourse
|
Presented in The International Conference on Agglutinative Language
Technologies as a challenge of Natural Language Processing (ALTNLP) 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We describe Turkish Discourse Bank 1.2, the latest version of a discourse
corpus annotated for explicitly or implicitly conveyed discourse relations,
their constitutive units, and senses in the Penn Discourse Treebank style. We
present an evaluation of the recently added tokens and examine three commonly
occurring dependency patterns that hold among the constitutive units of a pair
of adjacent discourse relations, namely, shared arguments, full embedding and
partial containment of a discourse relation. We present three major findings:
(a) implicitly conveyed relations occur more often than explicitly conveyed
relations in the data; (b) it is much more common for two adjacent implicit
discourse relations to share an argument than for two adjacent explicit
relations to do so; (c) both full embedding and partial containment of
discourse relations are pervasive in the corpus, which can be partly due to
subordinator connectives whose preposed subordinate clause tends to be selected
together with the matrix clause rather than being selected alone. Finally, we
briefly discuss the implications of our findings for Turkish discourse parsing.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 16:57:00 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2022 09:52:10 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Zeyrek",
"Deniz",
""
],
[
"Er",
"Mustafa Erolcan",
""
]
] |
new_dataset
| 0.999308 |
2207.05620
|
António Abreu
|
António J. Abreu, Luís A. Alexandre, João A. Santos, Filippo Basso
|
LudVision -- Remote Detection of Exotic Invasive Aquatic Floral Species
using Drone-Mounted Multispectral Data
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remote sensing is the process of detecting and monitoring the physical
characteristics of an area by measuring its reflected and emitted radiation at
a distance. It is being broadly used to monitor ecosystems, mainly for their
preservation. Ever-growing reports of invasive species have affected the
natural balance of ecosystems. Exotic invasive species have a critical impact
when introduced into new ecosystems and may lead to the extinction of native
species. In this study, we focus on Ludwigia peploides, considered by the
European Union as an aquatic invasive species. Its presence can negatively
impact the surrounding ecosystem and human activities such as agriculture,
fishing, and navigation. Our goal was to develop a method to identify the
presence of the species. We used images collected by a drone-mounted
multispectral sensor to achieve this, creating our LudVision data set. To
identify the targeted species on the collected images, we propose a new method
for detecting Ludwigia p. in multispectral images. The method is based on
existing state-of-the-art semantic segmentation methods modified to handle
multispectral data. The proposed method achieved a producer's accuracy of 79.9%
and a user's accuracy of 95.5%.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 15:43:21 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2022 07:02:05 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Abreu",
"António J.",
""
],
[
"Alexandre",
"Luís A.",
""
],
[
"Santos",
"João A.",
""
],
[
"Basso",
"Filippo",
""
]
] |
new_dataset
| 0.999307 |
2207.05836
|
Yinglun Zhu
|
Yinglun Zhu, Dylan J. Foster, John Langford, Paul Mineiro
|
Contextual Bandits with Large Action Spaces: Made Practical
|
To appear at ICML 2022
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A central problem in sequential decision making is to develop algorithms that
are practical and computationally efficient, yet support the use of flexible,
general-purpose models. Focusing on the contextual bandit problem, recent
progress provides provably efficient algorithms with strong empirical
performance when the number of possible alternatives ("actions") is small, but
guarantees for decision making in large, continuous action spaces have remained
elusive, leading to a significant gap between theory and practice. We present
the first efficient, general-purpose algorithm for contextual bandits with
continuous, linearly structured action spaces. Our algorithm makes use of
computational oracles for (i) supervised learning, and (ii) optimization over
the action space, and achieves sample complexity, runtime, and memory
independent of the size of the action space. In addition, it is simple and
practical. We perform a large-scale empirical evaluation, and show that our
approach typically enjoys superior performance and efficiency compared to
standard baselines.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 21:01:48 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Zhu",
"Yinglun",
""
],
[
"Foster",
"Dylan J.",
""
],
[
"Langford",
"John",
""
],
[
"Mineiro",
"Paul",
""
]
] |
new_dataset
| 0.999089 |
2207.05844
|
Nigamaa Nayakanti
|
Nigamaa Nayakanti, Rami Al-Rfou, Aurick Zhou, Kratarth Goel, Khaled S.
Refaat, Benjamin Sapp
|
Wayformer: Motion Forecasting via Simple & Efficient Attention Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion forecasting for autonomous driving is a challenging task because
complex driving scenarios result in a heterogeneous mix of static and dynamic
inputs. It is an open problem how best to represent and fuse information about
road geometry, lane connectivity, time-varying traffic light state, and history
of a dynamic set of agents and their interactions into an effective encoding.
To model this diverse set of input features, many approaches proposed to design
an equally complex system with a diverse set of modality specific modules. This
results in systems that are difficult to scale, extend, or tune in rigorous
ways to trade off quality and efficiency. In this paper, we present Wayformer,
a family of attention based architectures for motion forecasting that are
simple and homogeneous. Wayformer offers a compact model description consisting
of an attention based scene encoder and a decoder. In the scene encoder we
study the choice of early, late and hierarchical fusion of the input
modalities. For each fusion type we explore strategies to tradeoff efficiency
and quality via factorized attention or latent query attention. We show that
early fusion, despite its simplicity of construction, is not only modality
agnostic but also achieves state-of-the-art results on both the Waymo Open
Motion Dataset (WOMD) and Argoverse leaderboards, demonstrating the
effectiveness of our design philosophy.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 21:19:04 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Nayakanti",
"Nigamaa",
""
],
[
"Al-Rfou",
"Rami",
""
],
[
"Zhou",
"Aurick",
""
],
[
"Goel",
"Kratarth",
""
],
[
"Refaat",
"Khaled S.",
""
],
[
"Sapp",
"Benjamin",
""
]
] |
new_dataset
| 0.98965 |
2207.05849
|
Yinglun Zhu
|
Yinglun Zhu, Paul Mineiro
|
Contextual Bandits with Smooth Regret: Efficient Learning in Continuous
Action Spaces
|
To appear at ICML 2022
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing efficient general-purpose contextual bandit algorithms that work
with large -- or even continuous -- action spaces would facilitate application
to important scenarios such as information retrieval, recommendation systems,
and continuous control. While obtaining standard regret guarantees can be
hopeless, alternative regret notions have been proposed to tackle the large
action setting. We propose a smooth regret notion for contextual bandits, which
dominates previously proposed alternatives. We design a statistically and
computationally efficient algorithm -- for the proposed smooth regret -- that
works with general function approximation under standard supervised oracles. We
also present an adaptive algorithm that automatically adapts to any smoothness
level. Our algorithms can be used to recover the previous minimax/Pareto
optimal guarantees under the standard regret, e.g., in bandit problems with
multiple best arms and Lipschitz/Hölder bandits. We conduct large-scale
empirical evaluations demonstrating the efficacy of our proposed algorithms.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 21:27:09 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Zhu",
"Yinglun",
""
],
[
"Mineiro",
"Paul",
""
]
] |
new_dataset
| 0.977164 |
2207.05975
|
Manish Purohit
|
Sharat Ibrahimpur, Manish Purohit, Zoya Svitkina, Erik Vee, Joshua
Wang
|
Caching with Reserves
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Caching is a crucial component of many computer systems, so naturally it is a
well-studied topic in algorithm design. Much of traditional caching research
studies cache management for a single-user or single-processor environment. In
this paper, we propose two related generalizations of the classical caching
problem that capture issues that arise in a multi-user or multi-processor
environment. In the caching with reserves problem, a caching algorithm is
required to maintain at least $k_i$ pages belonging to user $i$ in the cache at
any time, for some given reserve capacities $k_i$. In the public-private
caching problem, the cache of total size $k$ is partitioned into subcaches, a
private cache of size $k_i$ for each user $i$ and a shared public cache usable
by any user. In both of these models, as in the classical caching framework,
the objective of the algorithm is to dynamically maintain the cache so as to
minimize the total number of cache misses.
We show that caching with reserves and public-private caching models are
equivalent up to constant factors, and thus focus on the former. Unlike
classical caching, both of these models turn out to be NP-hard even in the
offline setting, where the page sequence is known in advance. For the offline
setting, we design a 2-approximation algorithm, whose analysis carefully keeps
track of a potential function to bound the cost. In the online setting, we
first design an $O(\ln k)$-competitive fractional algorithm using the
primal-dual framework, and then show how to convert it online to a randomized
integral algorithm with the same guarantee.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 05:46:07 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Ibrahimpur",
"Sharat",
""
],
[
"Purohit",
"Manish",
""
],
[
"Svitkina",
"Zoya",
""
],
[
"Vee",
"Erik",
""
],
[
"Wang",
"Joshua",
""
]
] |
new_dataset
| 0.977967 |
2207.05979
|
Shogo Anda
|
Shogo Anda, Masato Kikuchi, Tadachika Ozono
|
Developing a Component Comment Extractor from Product Reviews on
E-Commerce Sites
|
The 14th International Conference on E-Service and Knowledge
Management (ESKM 2022), 6 pages, 6 figures, 5 tables
|
2022 11th International Congress on Advanced Applied Informatics
(IIAI-AAI), pp. 83--88, 2022
|
10.1109/IIAI-AAI55812.2022.00026
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consumers often read product reviews to inform their buying decision, as some
consumers want to know a specific component of a product. However, because
typical sentences on product reviews contain various details, users must
identify sentences about components they want to know amongst the many reviews.
Therefore, we aimed to develop a system that identifies and collects component
and aspect information of products in sentences. Our BERT-based classifiers
assign labels referring to components and aspects to sentences in reviews and
extract sentences with comments on specific components and aspects. We
determined proper labels based on the words identified through pattern
matching from product reviews to create the training data. Because we could not
use the words as labels, we carefully created labels covering the meanings of
the words. However, the training data was imbalanced on component and aspect
pairs. We introduced a data augmentation method using WordNet to reduce the
bias. Our evaluation demonstrates that the system can determine labels for road
bikes using pattern matching, covering more than 88% of the indicators of
components and aspects on e-commerce sites. Moreover, our data augmentation
method can improve the F1-measure on insufficient data from 0.66 to 0.76.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 06:25:55 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Anda",
"Shogo",
""
],
[
"Kikuchi",
"Masato",
""
],
[
"Ozono",
"Tadachika",
""
]
] |
new_dataset
| 0.976725 |
2207.06014
|
Heiko Paulheim
|
Jan Portisch and Heiko Paulheim
|
The DLCC Node Classification Benchmark for Analyzing Knowledge Graph
Embeddings
|
Accepted at International Semantic Web Conference (ISWC) 2022
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge graph embedding is a representation learning technique that
projects entities and relations in a knowledge graph to continuous vector
spaces. Embeddings have gained a lot of uptake and have been heavily used in
link prediction and other downstream prediction tasks. Most approaches are
evaluated on a single task or a single group of tasks to determine their
overall performance. The evaluation is then assessed in terms of how well the
embedding approach performs on the task at hand. Still, it is hardly evaluated
(and often not even deeply understood) what information the embedding
approaches are actually learning to represent.
To fill this gap, we present the DLCC (Description Logic Class Constructors)
benchmark, a resource to analyze embedding approaches in terms of which kinds
of classes they can represent. Two gold standards are presented, one based on
the real-world knowledge graph DBpedia and one synthetic gold standard. In
addition, an evaluation framework is provided that implements an experiment
protocol so that researchers can directly use the gold standard. To demonstrate
the use of DLCC, we compare multiple embedding approaches using the gold
standards. We find that many DL constructors on DBpedia are actually learned by
recognizing different correlated patterns than those defined in the gold
standard and that specific DL constructors, such as cardinality constraints,
are particularly hard to be learned for most embedding approaches.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 07:43:51 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Portisch",
"Jan",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
new_dataset
| 0.971924 |
2207.06102
|
Dun Li
|
Zhijie Sun, Dezhi Han, Dun Li, Xiangsheng Wang, Chin-Chen Chang and
Zhongdai Wu
|
A blockchain-based secure storage scheme for medical information
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Medical data involves a large amount of personal information and is highly
privacy sensitive. In the age of big data, the increasing informatization of
healthcare makes it vital that medical information is stored securely and
accurately. However, current medical information is subject to the risk of
privacy leakage and difficult to share. To address these issues, this paper
proposes a healthcare information security storage solution based on
Hyperledger Fabric and the Attribute-Based Access Control (ABAC) framework. The
scheme first utilizes attribute-based access control, which allows dynamic and
fine-grained access to medical information, and then stores the medical
information in the blockchain, which can be secured and tamper-proof by
formulating corresponding smart contracts. In addition, this solution also
incorporates IPFS technology to relieve the storage pressure of the blockchain.
Experiments show that the proposed scheme, which combines attribute-based
access control with blockchain technology, not only ensures the secure storage
and integrity of medical information but also achieves high throughput when
accessing medical information.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 10:19:55 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Sun",
"Zhijie",
""
],
[
"Han",
"Dezhi",
""
],
[
"Li",
"Dun",
""
],
[
"Wang",
"Xiangsheng",
""
],
[
"Chang",
"Chin-Chen",
""
],
[
"Wu",
"Zhongdai",
""
]
] |
new_dataset
| 0.99621 |
2207.06182
|
Mark Quinlan
|
Mark Quinlan, Jun Zhao, Andrew Simpson
|
Connected Vehicles: A Privacy Analysis
| null |
International Conference on Security, Privacy and Anonymity in
Computation, Communication and Storage. Springer, Cham, 2019
| null | null |
cs.CR cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Just as the world of consumer devices was forever changed by the introduction
of computer controlled solutions, the introduction of the engine control unit
(ECU) gave rise to the automobile's transformation from a transportation
product to a technology platform. A modern car is capable of processing,
analysing and transmitting data in ways that could not have been foreseen only
a few years ago. These cars often incorporate telematics systems, which are
used to provide navigation and internet connectivity over cellular networks, as
well as data-recording devices for insurance and product development purposes.
We examine the telematics system of a production vehicle, and aim to ascertain
some of the associated privacy-related threats. We also consider how this
analysis might underpin further research.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 13:26:12 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Quinlan",
"Mark",
""
],
[
"Zhao",
"Jun",
""
],
[
"Simpson",
"Andrew",
""
]
] |
new_dataset
| 0.989445 |
2207.06261
|
Filip Ilic
|
Filip Ilic, Thomas Pock, Richard P. Wildes
|
Is Appearance Free Action Recognition Possible?
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Intuition might suggest that motion and dynamic information are key to
video-based action recognition. In contrast, there is evidence that
state-of-the-art deep-learning video understanding architectures are biased
toward static information available in single frames. Presently, a methodology
and corresponding dataset to isolate the effects of dynamic information in
video are missing. Their absence makes it difficult to understand how well
contemporary architectures capitalize on dynamic vs. static information. We
respond with a novel Appearance Free Dataset (AFD) for action recognition. AFD
is devoid of static information relevant to action recognition in a single
frame. Modeling of the dynamics is necessary for solving the task, as the
action is only apparent through consideration of the temporal dimension. We
evaluated 11 contemporary action recognition architectures on AFD as well as
its related RGB video. Our results show a notable decrease in performance for
all architectures on AFD compared to RGB. We also conducted a complementary
study with humans that shows their recognition accuracy on AFD and RGB is very
similar and much better than the evaluated architectures on AFD. Our results
motivate a novel architecture that revives explicit recovery of optical flow,
within a contemporary design for best performance on AFD and RGB.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 15:04:53 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Ilic",
"Filip",
""
],
[
"Pock",
"Thomas",
""
],
[
"Wildes",
"Richard P.",
""
]
] |
new_dataset
| 0.998684 |
2207.06309
|
Yulin Shao
|
Pengfei Shen and Yulin Shao and Qi Cao and Lu Lu
|
Dynamic gNodeB Sleep Control for Energy-Conserving 5G Radio Access
Network
|
Keywords: Base station sleep control, 5G, radio access network,
Markov decision process, greedy policy, index policy
| null | null | null |
cs.IT cs.NI cs.SY eess.SY math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
5G radio access network (RAN) is consuming much more energy than legacy RAN
due to the denser deployments of gNodeBs (gNBs) and higher single-gNB power
consumption. In an effort to achieve an energy-conserving RAN, this paper
develops a dynamic on-off switching paradigm, where the ON/OFF states of gNBs
can be dynamically configured according to the evolvements of the associated
users. We formulate the dynamic sleep control for a cluster of gNBs as a Markov
decision process (MDP) and analyze various switching policies to reduce the
energy expenditure. The optimal policy of the MDP that minimizes the energy
expenditure can be derived from dynamic programming, but the computation is
expensive. To circumvent this issue, this paper puts forth a greedy policy and
an index policy for gNB sleep control. When there is no constraint on the
number of gNBs that can be turned off, we prove the dual-threshold structure of
the greedy policy and analyze its connections with the optimal policy. Inspired
by the dual-threshold structure and Whittle index, we develop an index policy
by decoupling the original MDP into multiple one-dimensional MDPs -- the
indexability of the decoupled MDP is proven and an algorithm to compute the
index is proposed. Extensive simulation results verify that the index policy
exhibits close-to-optimal performance in terms of the energy expenditure of the
gNB cluster. As far as the computational complexity is concerned, on the other
hand, the index policy is much more efficient than the optimal policy, which is
computationally prohibitive when the number of gNBs is large.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 16:00:09 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Shen",
"Pengfei",
""
],
[
"Shao",
"Yulin",
""
],
[
"Cao",
"Qi",
""
],
[
"Lu",
"Lu",
""
]
] |
new_dataset
| 0.991069 |
2207.06313
|
David Garcia
|
Jana Lasser, Segun Taofeek Aroyehun, Almog Simchon, Fabio Carrella,
David Garcia, Stephan Lewandowsky
|
Social media sharing by political elites: An asymmetric American
exceptionalism
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Increased sharing of untrustworthy information on social media platforms is
one of the main challenges of our modern information society. Because
information disseminated by political elites is known to shape citizen and
media discourse, it is particularly important to examine the quality of
information shared by politicians. Here we show that from 2016 onward, members
of the Republican party in the U.S. Congress have been increasingly sharing
links to untrustworthy sources. The proportion of untrustworthy information
posted by Republicans versus Democrats is diverging at an accelerating rate,
and this divergence has worsened since President Biden was elected. This
divergence between parties seems to be unique to the U.S. as it cannot be
observed in other western democracies such as Germany and the United Kingdom,
where left-right disparities are smaller and have remained largely constant.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 16:03:47 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Lasser",
"Jana",
""
],
[
"Aroyehun",
"Segun Taofeek",
""
],
[
"Simchon",
"Almog",
""
],
[
"Carrella",
"Fabio",
""
],
[
"Garcia",
"David",
""
],
[
"Lewandowsky",
"Stephan",
""
]
] |
new_dataset
| 0.974573 |
2207.06349
|
Dan Stowell
|
Alberto García Arroba Parrilla and Dan Stowell
|
Polyphonic sound event detection for highly dense birdsong scenes
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
One hour before sunrise, one can experience the dawn chorus where birds from
different species sing together. In this scenario, high levels of polyphony,
i.e., many overlapping sound sources, are likely to occur, resulting in a
complex acoustic outcome. Sound Event Detection (SED) tasks analyze acoustic
scenarios in order to identify the occurring events and their respective
temporal information. However, highly dense scenarios can be hard to process
and have not been studied in depth. Here we show, using a Convolutional
Recurrent Neural Network (CRNN), how birdsong polyphonic scenarios can be
detected when dealing with higher polyphony and how effectively this type of
model can handle a very dense scene with up to 10 overlapping birds. We found
that models trained with denser examples (i.e., higher polyphony) learn at a
similar rate as models that used simpler samples in their training set.
Additionally, the model trained with the densest samples maintained a
consistent score for all polyphonies, while the model trained with the least
dense samples degraded as the polyphony increased. Our results demonstrate that
highly dense acoustic scenarios can be dealt with using CRNNs. We expect that
this study serves as a starting point for working on highly populated bird
scenarios such as dawn chorus or other dense acoustic problems.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 17:02:29 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Parrilla",
"Alberto García Arroba",
""
],
[
"Stowell",
"Dan",
""
]
] |
new_dataset
| 0.991525 |
2207.06360
|
Damien Dablain
|
Damien Dablain, Lilian Huang and Brandon Sepulvado
|
Developing an NLP-based Recommender System for the Ethical, Legal, and
Social Implications of Synthetic Biology
| null | null | null | null |
cs.IR cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Synthetic biology is an emerging field that involves the engineering and
re-design of organisms for purposes such as food security, health, and
environmental protection. As such, it poses numerous ethical, legal, and social
implications (ELSI) for researchers and policy makers. Various efforts to
ensure socially responsible synthetic biology are underway. Policy making is
one regulatory avenue, and other initiatives have sought to embed social
scientists and ethicists on synthetic biology projects. However, given the
nascency of synthetic biology, the number of heterogeneous domains it spans,
and the open nature of many ethical questions, it has proven challenging to
establish widespread concrete policies, and including social scientists and
ethicists on synthetic biology teams has met with mixed success.
This text proposes a different approach, asking instead whether it is possible
to develop a well-performing recommender model based on natural language
processing (NLP) to connect synthetic biologists with information on the ELSI
of their specific research? This recommender was developed as part of a larger
project building a Synthetic Biology Knowledge System (SBKS) to accelerate
discovery and exploration of the synthetic biology design space. Our approach
aims to distill for synthetic biologists relevant ethical and social scientific
information and embed it into synthetic biology research workflows.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2022 20:18:53 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Dablain",
"Damien",
""
],
[
"Huang",
"Lilian",
""
],
[
"Sepulvado",
"Brandon",
""
]
] |
new_dataset
| 0.989198 |
2207.06369
|
Luis Veiga
|
Pedro Agostinho and David Dias and Lu\'is Veiga
|
SmartPubSub: Content-based Pub-Sub on IPFS
| null | null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The InterPlanetary File System (IPFS) is a hypermedia distribution protocol
enabling the creation of completely distributed applications. One of the most
efficient and effective ways to distribute information is through
notifications, with a producer of content (publisher) sharing content with
other interested parties (subscribers). IPFS already implements topic-based
publish-subscribe systems under an experimental flag. The goal of this work is
to build on this by developing a content-based pub-sub system (with
subscriptions as predicates about event content) to disseminate information on
top of IPFS in an efficient and decentralized way, leveraging its
infrastructure. We design two protocols: ScoutSubs, which is completely
decentralized, and FastDelivery, which is centered on the publisher. By
comparing the full decentralization of ScoutSubs with the centralization of
FastDelivery at data sources, we show the distinct advantages of running these
two protocols simultaneously.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2022 17:20:29 GMT"
}
] | 2022-07-14T00:00:00 |
[
[
"Agostinho",
"Pedro",
""
],
[
"Dias",
"David",
""
],
[
"Veiga",
"Luís",
""
]
] |
new_dataset
| 0.999036 |
2101.02615
|
Cao Vien Phung
|
Cao Vien Phung, Mounir Bensalem and Admela Jukan
|
Benchmarking Buffer Size in IoT Devices Deploying REST HTTP
|
This paper is uploaded here for research community, thus it is for
non-commercial purposes
| null |
10.23919/MIPRO55190.2022.9803729
| null |
cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A few potential IoT communication protocols at the application layer have
been proposed, including MQTT, CoAP and REST HTTP, with the latter being the
protocol of choice for software developers due to its compatibility with the
existing systems. We present a theoretical model of the expected buffer size on
the REST HTTP client buffer in IoT devices under lossy wireless conditions, and
validate the study experimentally. The results show that increasing the buffer
size in IoT devices does not always improve performance in lossy environments,
hence demonstrating the importance of benchmarking the buffer size in IoT
systems deploying REST HTTP.
|
[
{
"version": "v1",
"created": "Thu, 7 Jan 2021 16:31:47 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Jun 2021 08:31:37 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Nov 2021 10:34:07 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Phung",
"Cao Vien",
""
],
[
"Bensalem",
"Mounir",
""
],
[
"Jukan",
"Admela",
""
]
] |
new_dataset
| 0.995958 |
2112.15439
|
Peng Zheng
|
Deng-Ping Fan, Ziling Huang, Peng Zheng, Hong Liu, Xuebin Qin, and Luc
Van Gool
|
Facial-Sketch Synthesis: A New Challenge
|
Accepted to Machine Intelligence Research (MIR)
| null |
10.1007/s11633-022-1349-9
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper aims to conduct a comprehensive study on facial-sketch synthesis
(FSS). However, due to the high costs of obtaining hand-drawn sketch datasets,
a complete benchmark for assessing the development of FSS algorithms over the
last decade has been lacking. We first introduce a high-quality dataset for
FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three
types of sketch styles, image backgrounds, lighting conditions, skin colors,
and facial attributes. FS2K differs from previous FSS datasets in difficulty,
diversity, and scalability and should thus facilitate the progress of FSS
research. Second, we present the largest-scale FSS investigation by reviewing
89 classical methods, including 25 handcrafted feature-based facial-sketch
synthesis approaches, 29 general translation methods, and 35 image-to-sketch
approaches. In addition, we conduct comprehensive experiments on 19 existing
cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN.
With only two straightforward components, i.e., facial-aware masking and
style-vector expansion, FSGAN surpasses the performance of all previous
state-of-the-art models on the proposed FS2K dataset by a large margin.
Finally, we conclude with lessons learned over the past years and point out
several unsolved challenges. Our code is available at
https://github.com/DengPingFan/FSGAN.
|
[
{
"version": "v1",
"created": "Fri, 31 Dec 2021 13:19:21 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jan 2022 01:09:03 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Mar 2022 12:30:05 GMT"
},
{
"version": "v4",
"created": "Mon, 23 May 2022 10:30:22 GMT"
},
{
"version": "v5",
"created": "Wed, 15 Jun 2022 13:44:27 GMT"
},
{
"version": "v6",
"created": "Mon, 11 Jul 2022 22:07:30 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Fan",
"Deng-Ping",
""
],
[
"Huang",
"Ziling",
""
],
[
"Zheng",
"Peng",
""
],
[
"Liu",
"Hong",
""
],
[
"Qin",
"Xuebin",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.999532 |
2201.05812
|
Yuanxin Wu
|
Maoran Zhu, Yuanxin Wu
|
ChevOpt: Continuous-time State Estimation by Chebyshev Polynomial
Optimization
|
12 pages, 16 figures
|
IEEE Trans. on Signal Processing, 2022
|
10.1109/TSP.2022.3183435
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a new framework for continuous-time maximum a posteriori
estimation based on the Chebyshev polynomial optimization (ChevOpt) is
proposed, which transforms the nonlinear continuous-time state estimation into
a problem of constant parameter optimization. Specifically, the time-varying
system state is represented by a Chebyshev polynomial and the unknown Chebyshev
coefficients are optimized by minimizing the weighted sum of the prior,
dynamics and measurements. The proposed ChevOpt is an optimal continuous-time
estimator in the least-squares sense and requires batch processing. A
recursive sliding-window version is proposed as well to meet the requirements
of real-time applications. Compared with the well-known Gaussian filters,
ChevOpt better resolves the nonlinearities in both dynamics and measurements.
Numerical results on demonstrative examples show that ChevOpt achieves
remarkably improved accuracy over the extended/unscented Kalman filters and
the extended batch/fixed-lag smoothers, approaching the Cramér-Rao lower bound.
|
[
{
"version": "v1",
"created": "Sat, 15 Jan 2022 09:43:30 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2022 13:19:03 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Zhu",
"Maoran",
""
],
[
"Wu",
"Yuanxin",
""
]
] |
new_dataset
| 0.981002 |
2202.14035
|
Jonne S\"alev\"a
|
Jonne S\"alev\"a and Constantine Lignos
|
ParaNames: A Massively Multilingual Entity Name Corpus
|
Resource available at https://github.com/bltlab/paranames
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce ParaNames, a multilingual parallel name resource consisting of
118 million names spanning 400 languages. Names are provided for 13.6
million entities which are mapped to standardized entity types (PER/LOC/ORG).
Using Wikidata as a source, we create the largest resource of this type
to date. We describe our approach to filtering and standardizing the data to
provide the best quality possible. ParaNames is useful for multilingual
language processing, both in defining tasks for name
translation/transliteration and as supplementary data for tasks such as named
entity recognition and linking. We demonstrate an application of ParaNames by
training a multilingual model for canonical name translation to and from
English. Our resource is released under a Creative Commons license (CC BY 4.0)
at https://github.com/bltlab/paranames.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 18:58:06 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 17:58:14 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Jul 2022 17:26:01 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Sälevä",
"Jonne",
""
],
[
"Lignos",
"Constantine",
""
]
] |
new_dataset
| 0.995234 |
2204.08308
|
Huiyu Duan
|
Huiyu Duan, Wei Shen, Xiongkuo Min, Danyang Tu, Jing Li and Guangtao
Zhai
|
Saliency in Augmented Reality
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of multimedia technology, Augmented Reality (AR)
has become a promising next-generation mobile platform. The primary theory
underlying AR is human visual confusion, which allows users to perceive
real-world scenes and augmented contents (virtual-world scenes) simultaneously
by superimposing them. To achieve good Quality of Experience (QoE), it
is important to understand the interaction between two scenarios, and
harmoniously display AR contents. However, studies on how this superimposition
will influence human visual attention are lacking. Therefore, in this
paper, we mainly analyze the interaction effect between background (BG) scenes
and AR contents, and study the saliency prediction problem in AR. Specifically,
we first construct a Saliency in AR Dataset (SARD), which contains 450 BG
images, 450 AR images, as well as 1350 superimposed images generated by
superimposing BG and AR images in pairs at three mixing levels. A large-scale
eye-tracking experiment among 60 subjects is conducted to collect eye movement
data. To better predict the saliency in AR, we propose a vector quantized
saliency prediction method and generalize it for AR saliency prediction. For
comparison, three benchmark methods are proposed and evaluated together with
our proposed method on our SARD. Experimental results demonstrate the
superiority of our proposed method on both of the common saliency prediction
problem and the AR saliency prediction problem over benchmark methods. Our
dataset and code are available at: https://github.com/DuanHuiyu/ARSaliency.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2022 13:25:07 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 08:41:10 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Duan",
"Huiyu",
""
],
[
"Shen",
"Wei",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Tu",
"Danyang",
""
],
[
"Li",
"Jing",
""
],
[
"Zhai",
"Guangtao",
""
]
] |
new_dataset
| 0.997616 |
2205.07707
|
Svetlana Puzynina
|
Val\'erie Berth\'e, Svetlana Puzynina
|
On the rigidity of Arnoux-Rauzy words
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An infinite word generated by a substitution is rigid if all the
substitutions which fix this word are powers of the same substitution. Sturmian
words as well as characteristic Arnoux-Rauzy words are known to be rigid. In
the present paper, we prove that all Arnoux-Rauzy words are rigid. The proof
relies on two main ingredients: firstly, the fact that the primitive
substitutions that fix an Arnoux-Rauzy word share a common power, and secondly,
the notion of normal form of an episturmian substitution (i.e., a substitution
that fixes an Arnoux-Rauzy word). The main difficulty is then of a
combinatorial nature and relies on the normalization process when taking powers
of episturmian substitutions: the normal form of a square is not necessarily
equal to the square of the normal forms.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 14:19:55 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 16:07:39 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Berthé",
"Valérie",
""
],
[
"Puzynina",
"Svetlana",
""
]
] |
new_dataset
| 0.993289 |
2205.13326
|
Elia Moscoso Thompson
|
Elia Moscoso Thompson, Andrea Ranieri, Silvia Biasotti, Miguel
Chicchon, Ivan Sipiran, Minh-Khoi Pham, Thang-Long Nguyen-Ho, Hai-Dang
Nguyen, Minh-Triet Tran
|
SHREC 2022: pothole and crack detection in the road pavement using
images and RGB-D data
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper describes the methods submitted for evaluation to the SHREC 2022
track on pothole and crack detection in the road pavement. A total of 7
different runs for the semantic segmentation of the road surface are compared,
6 from the participants plus a baseline method. All methods exploit Deep
Learning techniques and their performance is tested using the same environment
(i.e., a single Jupyter notebook). A training set, composed of 3836 semantic
segmentation image/mask pairs and 797 RGB-D video clips collected with the
latest depth cameras was made available to the participants. The methods are
then evaluated on the 496 image/mask pairs in the validation set, on the 504
pairs in the test set and finally on 8 video clips. The analysis of the results
is based on quantitative metrics for image segmentation and qualitative
analysis of the video clips. The participation and the results show that the
scenario is of great interest and that the use of RGB-D data is still
challenging in this context.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 13:01:55 GMT"
},
{
"version": "v2",
"created": "Fri, 27 May 2022 13:47:29 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jul 2022 15:58:04 GMT"
},
{
"version": "v4",
"created": "Thu, 7 Jul 2022 12:33:32 GMT"
},
{
"version": "v5",
"created": "Tue, 12 Jul 2022 15:28:15 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Thompson",
"Elia Moscoso",
""
],
[
"Ranieri",
"Andrea",
""
],
[
"Biasotti",
"Silvia",
""
],
[
"Chicchon",
"Miguel",
""
],
[
"Sipiran",
"Ivan",
""
],
[
"Pham",
"Minh-Khoi",
""
],
[
"Nguyen-Ho",
"Thang-Long",
""
],
[
"Nguyen",
"Hai-Dang",
""
],
[
"Tran",
"Minh-Triet",
""
]
] |
new_dataset
| 0.999444 |
2205.15083
|
Luzhi Wang
|
Di Jin, Luzhi Wang, Yizhen Zheng, Xiang Li, Fei Jiang, Wei Lin, Shirui
Pan
|
CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph
Similarity Learning
|
7 pages, 5 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Graph similarity learning refers to calculating the similarity score between
two graphs, which is required in many realistic applications, such as visual
tracking, graph classification, and collaborative filtering. While most
existing graph neural networks yield effective representations of a single
graph, little effort has been made toward jointly learning two graph
representations and calculating their similarity score. In addition, existing
unsupervised graph similarity learning methods are mainly clustering-based,
which ignores the valuable information embodied in graph pairs. To this end, we
propose a contrastive graph matching network (CGMN) for self-supervised graph
similarity learning in order to calculate the similarity between any two input
graph objects. Specifically, we generate two augmented views for each graph in
a pair. Then, we employ two strategies, namely cross-view
interaction and cross-graph interaction, for effective node representation
learning. The former serves to strengthen the consistency of node
representations in two views. The latter is utilized to identify node
differences between different graphs. Finally, we transform node
representations into graph-level representations via pooling operations for
graph similarity computation. We have evaluated CGMN on eight real-world
datasets, and the experiment results show that the proposed new approach is
superior to the state-of-the-art methods in graph similarity learning
downstream tasks.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 13:20:26 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 08:44:37 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Jin",
"Di",
""
],
[
"Wang",
"Luzhi",
""
],
[
"Zheng",
"Yizhen",
""
],
[
"Li",
"Xiang",
""
],
[
"Jiang",
"Fei",
""
],
[
"Lin",
"Wei",
""
],
[
"Pan",
"Shirui",
""
]
] |
new_dataset
| 0.993371 |
2206.01071
|
Francesco Foscarin
|
Carlos Cancino-Chac\'on, Silvan David Peter, Emmanouil Karystinaios,
Francesco Foscarin, Maarten Grachten, Gerhard Widmer
|
Partitura: A Python Package for Symbolic Music Processing
| null |
Proceedings of the Music Encoding Conference (MEC), 2022, Halifax,
Canada
| null | null |
cs.SD cs.DL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Partitura is a lightweight Python package for handling symbolic musical
information. It provides easy access to features commonly used in music
information retrieval tasks, like note arrays (lists of timed pitched events)
and 2D piano roll matrices, as well as other score elements such as time and
key signatures, performance directives, and repeat structures. Partitura can
load musical scores (in MEI, MusicXML, Kern, and MIDI formats), MIDI
performances, and score-to-performance alignments. The package includes some
tools for music analysis, such as automatic pitch spelling, key signature
identification, and voice separation. Partitura is an open-source project and
is available at https://github.com/CPJKU/partitura/.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 14:39:32 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Cancino-Chacón",
"Carlos",
""
],
[
"Peter",
"Silvan David",
""
],
[
"Karystinaios",
"Emmanouil",
""
],
[
"Foscarin",
"Francesco",
""
],
[
"Grachten",
"Maarten",
""
],
[
"Widmer",
"Gerhard",
""
]
] |
new_dataset
| 0.999867 |
2206.02603
|
Franco Oberti
|
Franco Oberti, Ernesto Sanchez, Alessandro Savino, Filippo Parisi, and
Stefano Di Carlo
|
CAN-MM: Multiplexed Message Authentication Code for Controller Area
Network message authentication in road vehicles
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The automotive market is an increasingly profitable target for cyberattacks
amid the constant shift toward fully interconnected vehicles. Electronic
Control Units
(ECUs) installed on cars often operate in a critical and hostile environment.
Hence, both carmakers and governments have decided to support a series of
initiatives to mitigate risks and threats belonging to the automotive domain.
The Controller Area Network (CAN) is the primary communication protocol in the
automotive field, and the integrity of the communication over this network is
assured through Message Authentication Codes (MAC). However, limitations in
throughput and frame size limit the application of this technique to specific
versions of the CAN protocol, leaving several vehicles still unprotected. This
paper presents CAN Multiplexed MAC (CAN-MM), a new approach exploiting
frequency modulation to multiplex MAC data with standard CAN communication.
CAN-MM allows transmitting MAC payloads while maintaining full backward
compatibility with all versions of the standard CAN protocol. Moreover,
multiplexing allows
sending DATA and MAC simultaneously.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 13:21:22 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 13:58:00 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Oberti",
"Franco",
""
],
[
"Sanchez",
"Ernesto",
""
],
[
"Savino",
"Alessandro",
""
],
[
"Parisi",
"Filippo",
""
],
[
"Di Carlo",
"Stefano",
""
]
] |
new_dataset
| 0.999611 |
2207.02031
|
Zhe Li
|
Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu
|
AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric
Capture
|
Accepted by ECCV 2022, project page:
http://www.liuyebin.com/avatarcap/avatarcap.html, code:
https://github.com/lizhe00/AvatarCap
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To address the ill-posed problem caused by partial observations in monocular
human volumetric capture, we present AvatarCap, a novel framework that
introduces animatable avatars into the capture pipeline for high-fidelity
reconstruction in both visible and invisible regions. Our method firstly
creates an animatable avatar for the subject from a small number (~20) of 3D
scans as a prior. Then given a monocular RGB video of this subject, our method
integrates information from both the image observation and the avatar prior,
and accordingly reconstructs high-fidelity 3D textured models with dynamic
details regardless of the visibility. To learn an effective avatar for
volumetric capture from only a few samples, we propose GeoTexAvatar, which
leverages both geometry and texture supervisions to constrain the
pose-dependent dynamics in a decomposed implicit manner. An avatar-conditioned
volumetric capture method that involves a canonical normal fusion and a
reconstruction network is further proposed to integrate both image observations
and avatar dynamics for high-fidelity reconstruction in both observed and
invisible regions. Overall, our method enables monocular human volumetric
capture with detailed and pose-dependent dynamics, and the experiments show
that our method outperforms the state of the art. Code is available at
https://github.com/lizhe00/AvatarCap.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 13:21:01 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 07:53:48 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Li",
"Zhe",
""
],
[
"Zheng",
"Zerong",
""
],
[
"Zhang",
"Hongwen",
""
],
[
"Ji",
"Chaonan",
""
],
[
"Liu",
"Yebin",
""
]
] |
new_dataset
| 0.99881 |
2207.04284
|
Patrick Ebel
|
Patrick Ebel, Moritz Berger, Christoph Lingenfelder, Andreas Vogelsang
|
How Do Drivers Self-Regulate their Secondary Task Engagements? The
Effect of Driving Automation on Touchscreen Interactions and Glance Behavior
|
14th International ACM Conference on Automotive User Interfaces and
Interactive Vehicular Applications
| null |
10.1145/3543174.3545173
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With ever-improving driver assistance systems and large touchscreens becoming
the main in-vehicle interface, drivers are more tempted than ever to engage in
distracting non-driving-related tasks. However, little research exists on how
driving automation affects drivers' self-regulation when interacting with
center stack touchscreens. To investigate this, we employ multilevel models on
a real-world driving dataset consisting of 10,139 sequences. Our results show
significant differences in drivers' interaction and glance behavior in response
to varying levels of driving automation, vehicle speed, and road curvature.
During partially automated driving, drivers are not only more likely to engage
in secondary touchscreen tasks, but their mean glance duration toward the
touchscreen also increases by 12% (Level 1) and 20% (Level 2) compared to
manual driving. We further show that the effect of driving automation on
drivers' self-regulation is larger than that of vehicle speed and road
curvature. The derived knowledge can facilitate the safety evaluation of
infotainment systems and the development of context-aware driver monitoring
systems.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2022 15:00:38 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 06:53:48 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Ebel",
"Patrick",
""
],
[
"Berger",
"Moritz",
""
],
[
"Lingenfelder",
"Christoph",
""
],
[
"Vogelsang",
"Andreas",
""
]
] |
new_dataset
| 0.990305 |
2207.04535
|
Ashutosh Agarwal
|
Ashutosh Agarwal and Chetan Arora
|
Depthformer : Multiscale Vision Transformer For Monocular Depth
Estimation With Local Global Information Fusion
| null |
International Conference on Image Processing (ICIP), 2022
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Attention-based models such as transformers have shown outstanding
performance on dense prediction tasks, such as semantic segmentation, owing to
their capability of capturing long-range dependency in an image. However, the
benefit of transformers for monocular depth prediction has seldom been explored
so far. This paper benchmarks various transformer-based models for the depth
estimation task on an indoor NYUV2 dataset and an outdoor KITTI dataset. We
propose a novel attention-based architecture, Depthformer for monocular depth
estimation that uses multi-head self-attention to produce the multiscale
feature maps, which are effectively combined by our proposed decoder network.
We also propose a Transbins module that divides the depth range into bins whose
center value is estimated adaptively per image. The final depth estimated is a
linear combination of bin centers for each pixel. Transbins module takes
advantage of the global receptive field using the transformer module in the
encoding stage. Experimental results on NYUV2 and KITTI depth estimation
benchmarks demonstrate that our proposed method improves the state of the art
by 3.3% and 3.3%, respectively, in terms of Root Mean Squared Error (RMSE).
Code is
available at https://github.com/ashutosh1807/Depthformer.git.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2022 20:49:11 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 07:39:10 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Agarwal",
"Ashutosh",
""
],
[
"Arora",
"Chetan",
""
]
] |
new_dataset
| 0.99788 |
2207.05118
|
Laura Dilley
|
L. Dilley, W. Welna, F. Foster (Michigan State University)
|
QAnon Propaganda on Twitter as Information Warfare: Influencers,
Networks, and Narratives
|
60 pages, 14 figures
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
QAnon refers to a set of far-right, conspiratorial ideologies that have risen
in popularity in the U.S. since their initial promotion in 2017 on the 4chan
internet message board. A central narrative element of QAnon is that a powerful
group of elite, liberal members of the Democratic Party engage in morally
reprehensible practices, but that former U.S. President Donald J. Trump was
prosecuting them. Five studies investigated the influence and network
connectivity of accounts promoting QAnon on Twitter from August, 2020 through
January, 2021. Selection of Twitter accounts emphasized on-line influencers and
"persons of interest" known or suspected of participation in QAnon propaganda
promotion activities. Evidence of large-scale coordination among accounts
promoting QAnon was observed, providing rigorous, quantitative evidence of
"astroturfing" in QAnon propaganda promotion on Twitter, as opposed to strictly
"grassroots" activities of citizens acting independently. Further, evidence was
obtained supporting that networks of extreme far-right adherents engaged in
organized QAnon propaganda promotion, as revealed by network overlap among
accounts promoting far-right extremist (e.g., anti-Semitic) content and
insurrectionist themes; New Age, occult, and "esoteric" themes; and internet
puzzle games like Cicada 3301 and other "alternate reality games." Based on
well-grounded theories and findings from the social sciences, it is argued that
QAnon propaganda on Twitter in the months circa the 2020 U.S. Presidential
election likely reflected joint participation of multiple actors, including
nation-states like Russia, in innovative misuse of social media toward
undermining democratic processes by promoting "magical" thinking, ostracism of
Democrats and liberals, and salience of White extinction narratives common
among otherwise ideologically diverse groups on the extreme far-right.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 18:23:30 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Dilley",
"L.",
"",
"Michigan State University"
],
[
"Welna",
"W.",
"",
"Michigan State University"
],
[
"Foster",
"F.",
"",
"Michigan State University"
]
] |
new_dataset
| 0.998078 |
2207.05144
|
Maaz Amjad
|
Maaz Amjad, Sabur Butt, Hamza Imam Amjad, Grigori Sidorov, Alisa
Zhila, Alexander Gelbukh
|
UrduFake@FIRE2021: Shared Track on Fake News Identification in Urdu
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This study reports on the second shared task, named UrduFake@FIRE2021, on
fake news detection in the Urdu language. This is a binary
classification problem in which the task is to classify a given news article
into two classes: (i) real news, or (ii) fake news. In this shared task, 34
teams from 7 different countries (China, Egypt, Israel, India, Mexico,
Pakistan, and UAE) registered to participate in the shared task, 18 teams
submitted their experimental results and 11 teams submitted their technical
reports. The proposed systems were based on various count-based features and
used different classifiers as well as neural network architectures. The
stochastic gradient descent (SGD) algorithm outperformed other classifiers and
achieved an F-score of 0.679.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 19:15:04 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Amjad",
"Maaz",
""
],
[
"Butt",
"Sabur",
""
],
[
"Amjad",
"Hamza Imam",
""
],
[
"Sidorov",
"Grigori",
""
],
[
"Zhila",
"Alisa",
""
],
[
"Gelbukh",
"Alexander",
""
]
] |
new_dataset
| 0.999835 |
2207.05192
|
Sharib Ali Dr.
|
Ziang Xu, Sharib Ali, Soumya Gupta, Simon Leedham, James E East, Jens
Rittscher
|
Patch-level instance-group discrimination with pretext-invariant
learning for colitis scoring
|
11
| null | null | null |
cs.CV cs.AI cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Inflammatory bowel disease (IBD), in particular ulcerative colitis (UC), is
graded by endoscopists and this assessment is the basis for risk stratification
and therapy monitoring. Presently, endoscopic characterisation is largely
operator dependent, leading to sometimes undesirable clinical outcomes for
patients with IBD. We focus on the Mayo Endoscopic Scoring (MES) system which
is widely used but requires the reliable identification of subtle changes in
mucosal inflammation. Most existing deep learning classification methods cannot
detect these fine-grained changes which make UC grading such a challenging
task. In this work, we introduce a novel patch-level instance-group
discrimination with pretext-invariant representation learning (PLD-PIRL) for
self-supervised learning (SSL). Our experiments demonstrate both improved
accuracy and robustness compared to the baseline supervised network and several
state-of-the-art SSL methods. Compared to the baseline (ResNet50) supervised
classification our proposed PLD-PIRL obtained an improvement of 4.75% on
hold-out test data and 6.64% on unseen center test data for top-1 accuracy.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 21:06:29 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Xu",
"Ziang",
""
],
[
"Ali",
"Sharib",
""
],
[
"Gupta",
"Soumya",
""
],
[
"Leedham",
"Simon",
""
],
[
"East",
"James E",
""
],
[
"Rittscher",
"Jens",
""
]
] |
new_dataset
| 0.99348 |
2207.05200
|
Walter Zimmer
|
Walter Zimmer, Jialong Wu, Xingcheng Zhou, Alois C. Knoll
|
Real-Time And Robust 3D Object Detection with Roadside LiDARs
|
arXiv admin note: substantial text overlap with arXiv:2204.00132
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This work aims to address the challenges in autonomous driving by focusing on
the 3D perception of the environment using roadside LiDARs. We design a 3D
object detection model that can detect traffic participants in roadside LiDARs
in real-time. Our model uses an existing 3D detector as a baseline and improves
its accuracy. To prove the effectiveness of our proposed modules, we train and
evaluate the model on three different vehicle and infrastructure datasets. To
show the domain adaptation ability of our detector, we train it on an
infrastructure dataset from China and perform transfer learning on a different
dataset recorded in Germany. We do several sets of experiments and ablation
studies for each module in the detector that show that our model outperforms
the baseline by a significant margin, while the inference speed is at 45 Hz (22
ms). We make a significant contribution with our LiDAR-based 3D detector that
can be used for smart city applications to provide connected and automated
vehicles with a far-reaching view. Vehicles that are connected to the roadside
sensors can get information about other vehicles around the corner to improve
their path and maneuver planning and to increase road traffic safety.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 21:33:42 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Zimmer",
"Walter",
""
],
[
"Wu",
"Jialong",
""
],
[
"Zhou",
"Xingcheng",
""
],
[
"Knoll",
"Alois C.",
""
]
] |
new_dataset
| 0.999786 |
2207.05289
|
Chao-Wei Huang
|
Chao-Wei Huang, Shang-Chi Tsai, Yun-Nung Chen
|
PLM-ICD: Automatic ICD Coding with Pretrained Language Models
|
Accepted to the ClinicalNLP 2022 workshop
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Automatically classifying electronic health records (EHRs) into diagnostic
codes has been challenging to the NLP community. State-of-the-art methods
treated this problem as multilabel classification and proposed
various architectures to model it. However, these systems did not
leverage pretrained language models, which have achieved superb
performance on natural language understanding tasks. Prior work has
shown that pretrained language models underperformed on this task with the
regular finetuning scheme. Therefore, this paper aims at analyzing the causes
of the underperformance and developing a framework for automatic ICD coding
with pretrained language models. We spotted three main issues through the
experiments: 1) large label space, 2) long input sequences, and 3) domain
mismatch between pretraining and fine-tuning. We propose PLMICD, a framework
that tackles the challenges with various strategies. The experimental results
show that our proposed framework can overcome the challenges and achieves
state-of-the-art performance in terms of multiple metrics on the benchmark
MIMIC data. The source code is available at https://github.com/MiuLab/PLM-ICD
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 03:56:28 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Huang",
"Chao-Wei",
""
],
[
"Tsai",
"Shang-Chi",
""
],
[
"Chen",
"Yun-Nung",
""
]
] |
new_dataset
| 0.981058 |
2207.05317
|
Junho Kim
|
Junho Kim, Hojun Jang, Changwoon Choi, and Young Min Kim
|
CPO: Change Robust Panorama to Point Cloud Localization
|
Accepted to ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present CPO, a fast and robust algorithm that localizes a 2D panorama with
respect to a 3D point cloud of a scene possibly containing changes. To robustly
handle scene changes, our approach deviates from conventional feature point
matching, and focuses on the spatial context provided from panorama images.
Specifically, we propose efficient color histogram generation and subsequent
robust localization using score maps. By utilizing the unique equivariance of
spherical projections, we propose very fast color histogram generation for a
large number of camera poses without explicitly rendering images for all
candidate poses. We accumulate the regional consistency of the panorama and
point cloud as 2D/3D score maps, and use them to weigh the input color values
to further increase robustness. The weighted color distribution quickly finds
good initial poses and achieves stable convergence for gradient-based
optimization. CPO is lightweight and achieves effective localization in all
tested scenarios, showing stable performance despite scene changes, repetitive
structures, or featureless regions, which are typical challenges for visual
localization with perspective cameras.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 05:10:32 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Kim",
"Junho",
""
],
[
"Jang",
"Hojun",
""
],
[
"Choi",
"Changwoon",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.951829 |
2207.05331
|
Sadman Sakib Enan
|
Sadman Sakib Enan, Michael Fulton and Junaed Sattar
|
Robotic Detection of a Human-Comprehensible Gestural Language for
Underwater Multi-Human-Robot Collaboration
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a motion-based robotic communication framework that
enables non-verbal communication among autonomous underwater vehicles (AUVs)
and human divers. We design a gestural language for AUV-to-AUV communication
which can be easily understood by divers observing the conversation, unlike
typical radio-frequency, light, or audio-based AUV communication. To allow AUVs
to visually understand a gesture from another AUV, we propose a deep network
(RRCommNet) which exploits a self-attention mechanism to learn to recognize
each message by extracting maximally discriminative spatio-temporal features.
We train this network on diverse simulated and real-world data. Our
experimental evaluations, both in simulation and in closed-water robot trials,
demonstrate that the proposed RRCommNet architecture is able to decipher
gesture-based messages with an average accuracy of 88-94% on simulated data,
and 73-83% on real data (depending on the version of the model used). Further, by
performing a message transcription study with human participants, we also show
that the proposed language can be understood by humans, with an overall
transcription accuracy of 88%. Finally, we discuss the inference runtime of
RRCommNet on embedded GPU hardware, for real-time use on board AUVs in the
field.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 06:04:12 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Enan",
"Sadman Sakib",
""
],
[
"Fulton",
"Michael",
""
],
[
"Sattar",
"Junaed",
""
]
] |
new_dataset
| 0.989579 |
2207.05358
|
Lu Yu
|
Lu Yu, Wei Xiang, Juan Fang, Yi-Ping Phoebe Chen, Lianhua Chi
|
eX-ViT: A Novel eXplainable Vision Transformer for Weakly Supervised
Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently vision transformer models have become prominent models for a range
of vision tasks. These models, however, are usually opaque with weak feature
interpretability. Moreover, there is no method currently built for an
intrinsically interpretable transformer, which is able to explain its reasoning
process and provide a faithful explanation. To close these crucial gaps, we
propose a novel vision transformer dubbed the eXplainable Vision Transformer
(eX-ViT), an intrinsically interpretable transformer model that is able to
jointly discover robust interpretable features and perform the prediction.
Specifically, eX-ViT is composed of the Explainable Multi-Head Attention
(E-MHA) module, the Attribute-guided Explainer (AttE) module and the
self-supervised attribute-guided loss. The E-MHA tailors explainable attention
weights that are able to learn semantically interpretable representations from
local patches in terms of model decisions with noise robustness. Meanwhile,
AttE is proposed to encode discriminative attribute features for the target
object through diverse attribute discovery, which constitutes faithful evidence
for the model's predictions. In addition, a self-supervised attribute-guided
loss is developed for our eX-ViT, which aims at learning enhanced
representations through the attribute discriminability mechanism and attribute
diversity mechanism, to localize diverse and discriminative attributes and
generate more robust explanations. As a result, we can uncover faithful and
robust interpretations with diverse attributes through the proposed eX-ViT.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 07:43:29 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Yu",
"Lu",
""
],
[
"Xiang",
"Wei",
""
],
[
"Fang",
"Juan",
""
],
[
"Chen",
"Yi-Ping Phoebe",
""
],
[
"Chi",
"Lianhua",
""
]
] |
new_dataset
| 0.981093 |
2207.05393
|
Xavier Sevillano
|
Juan G\'omez-G\'omez, Ester Vida\~na-Vila, Xavier Sevillano
|
Western Mediterranean wetlands bird species classification: evaluating
small-footprint deep learning approaches on a new annotated dataset
|
17 pages, 8 figures, 3 tables
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The deployment of an expert system running over a wireless acoustic sensor
network made up of bioacoustic monitoring devices that recognise bird species
from their sounds would enable the automation of many tasks of ecological
value, including the analysis of bird population composition or the detection
of endangered species in areas of environmental interest. Endowing these
devices with accurate audio classification capabilities is possible thanks to
the latest advances in artificial intelligence, among which deep learning
techniques excel. However, a key issue to make bioacoustic devices affordable
is the use of small footprint deep neural networks that can be embedded in
resource and battery constrained hardware platforms. For this reason, this work
presents a critical comparative analysis between two heavy and large footprint
deep neural networks (VGG16 and ResNet50) and a lightweight alternative,
MobileNetV2. Our experimental results reveal that MobileNetV2 achieves an
average F1-score less than 5\% lower than that of ResNet50 (0.789 vs. 0.834),
performing better than VGG16 with a footprint size nearly 40 times smaller.
Moreover, to compare the models, we have created and made public the Western
Mediterranean Wetland Birds dataset, consisting of 201.6 minutes and 5,795
audio excerpts of 20 endemic bird species of the Aiguamolls de l'Empord\`a
Natural Park.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 08:48:12 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Gómez-Gómez",
"Juan",
""
],
[
"Vidaña-Vila",
"Ester",
""
],
[
"Sevillano",
"Xavier",
""
]
] |
new_dataset
| 0.994957 |
2207.05454
|
Andrea Gangemi
|
Danilo Bazzanella, Andrea Gangemi
|
Bitcoin: a new Proof-of-Work system with reduced variance
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proof-of-Work (PoW) is a popular consensus protocol used by Bitcoin since its
inception. PoW has the well-known flaw of assigning all the reward to the
single miner (or pool) that inserts the new block. This has the consequence of
making the variance of the reward and thus the mining enterprise risk extremely
high. To address this problem, Shi in 2016 proposed a theoretical algorithm
that would substantially reduce the issue. We introduce a variant of
Proof-of-Work that improves on Shi's idea and can be easily implemented in
practice. In order to insert a block, the network must not find a single nonce,
but must find a few of them. This small change allows for a fairer distribution
of rewards and at the same time has the effect of regularizing the insertion
time of blocks. This would facilitate the emergence of small pools or
autonomous miners.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 10:48:37 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Bazzanella",
"Danilo",
""
],
[
"Gangemi",
"Andrea",
""
]
] |
new_dataset
| 0.998802 |
2207.05475
|
Vinod Patidar
|
Vinod Patidar and Gurpreet Kaur
|
A novel conservative chaos driven dynamic DNA coding for image
encryption
|
29 pages, 5 figures, 15 tables
| null | null | null |
cs.CR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a novel conservative chaotic standard map-driven
dynamic DNA coding (encoding, addition, subtraction and decoding) for the image
encryption. The proposed image encryption algorithm is a dynamic DNA coding
algorithm, i.e., for the encryption of each pixel, different rules for encoding,
addition/subtraction, decoding, etc. are randomly selected based on the
pseudorandom sequences generated with the help of the conservative chaotic
standard map. We propose a novel way to generate pseudo-random sequences
through the conservative chaotic standard map and also test them rigorously
through the most stringent test suite of pseudo-randomness, the NIST test
suite, before using them in the proposed image encryption algorithm. Our image
encryption algorithm incorporates unique feed-forward and feedback mechanisms
to generate and modify the dynamic one-time pixels that are further used for
the encryption of each pixel of the plain image, therefore, bringing in the
desired sensitivity on plaintext as well as ciphertext. All the controlling
pseudorandom sequences used in the algorithm are generated for a different
value of the parameter (part of the secret key) with inter-dependency through
the iterates of the chaotic map (in the generation process) and therefore
possess extreme key sensitivity too. The performance and security analysis has
been executed extensively through histogram analysis, correlation analysis,
information entropy analysis, DNA sequence-based analysis, perceptual quality
analysis, key sensitivity analysis, plaintext sensitivity analysis, etc. The
results are promising and prove the robustness of the algorithm against various
common cryptanalytic attacks.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 11:40:09 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Patidar",
"Vinod",
""
],
[
"Kaur",
"Gurpreet",
""
]
] |
new_dataset
| 0.98964 |
2207.05498
|
Rodolfo Zevallos
|
Rodolfo Zevallos, Luis Camacho and Nelsi Melgarejo
|
Huqariq: A Multilingual Speech Corpus of Native Languages of Peru for
Speech Recognition
|
Language Resources and Evaluation Conference (LREC 2022)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Huqariq corpus is a multilingual collection of speech from native
Peruvian languages. The transcribed corpus is intended for the research and
development of speech technologies to preserve endangered languages in Peru.
Huqariq is primarily designed for the development of automatic speech
recognition, language identification and text-to-speech tools. In order to
achieve corpus collection sustainably, we employ the crowdsourcing methodology.
Huqariq includes four native languages of Peru, and it is expected that by the
end of the year 2022, it can reach up to 20 native languages out of the 48
native languages in Peru. The corpus has 220 hours of transcribed audio
recorded by more than 500 volunteers, making it the largest speech corpus for
native languages in Peru. In order to verify the quality of the corpus, we
present speech recognition experiments using 220 hours of fully transcribed
audio.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 12:37:12 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Zevallos",
"Rodolfo",
""
],
[
"Camacho",
"Luis",
""
],
[
"Melgarejo",
"Nelsi",
""
]
] |
new_dataset
| 0.997238 |
2207.05539
|
Ivan Machado
|
Railana Santana and Luana Martins and T\'assio Virg\'inio and Larissa
Soares and Heitor Costa and Ivan Machado
|
Refactoring Assertion Roulette and Duplicate Assert test smells: a
controlled experiment
| null |
XXV Ibero-American Conference on Software Engineering (CIbSE 2022)
| null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Test smells can reduce the developers' ability to interact with the test
code. Refactoring test code offers a safe strategy to handle test smells.
However, the manual refactoring activity is not a trivial process, and it is
often tedious and error-prone. This study aims to evaluate RAIDE, a tool for
automatic identification and refactoring of test smells. We present an
empirical assessment of RAIDE, in which we analyzed its capability at
refactoring Assertion Roulette and Duplicate Assert test smells and compared
the results against both manual refactoring and a state-of-the-art approach.
The results show that RAIDE provides a faster and more intuitive approach for
handling test smells than using an automated tool for smells detection combined
with manual refactoring.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 13:59:43 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Santana",
"Railana",
""
],
[
"Martins",
"Luana",
""
],
[
"Virgínio",
"Tássio",
""
],
[
"Soares",
"Larissa",
""
],
[
"Costa",
"Heitor",
""
],
[
"Machado",
"Ivan",
""
]
] |
new_dataset
| 0.996272 |
2207.05610
|
Steven Obua
|
Steven Obua
|
Abstraction Logic: A New Foundation for (Computer) Mathematics
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Abstraction logic is a new logic, serving as a foundation of mathematics. It
combines features of both predicate logic and higher-order logic: abstraction
logic can be viewed both as higher-order logic minus static types as well as
predicate logic plus operators and variable binding. We argue that abstraction
logic is the best foundational logic possible because it maximises both
simplicity and practical expressivity. This argument is supported by the
observation that abstraction logic has simpler terms and a simpler notion of
proof than all other general logics. At the same time, abstraction logic can
formalise both intuitionistic and classical abstraction logic, and is sound and
complete for these logics and all other logics extending deduction logic with
equality.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 15:24:12 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Obua",
"Steven",
""
]
] |
new_dataset
| 0.998686 |
2207.05613
|
Zejun Zhang
|
Zejun Zhang and Zhenchang Xing and Xin Xia and Xiwei Xu and Liming Zhu
|
Making Python Code Idiomatic by Automatic Refactoring Non-Idiomatic
Python Code with Pythonic Idioms
|
12 pages, accepted to ESEC/FSE'2022
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compared to other programming languages (e.g., Java), Python has more idioms
to make Python code concise and efficient. Although pythonic idioms are well
accepted in the Python community, Python programmers are often faced with many
challenges in using them, for example, being unaware of certain pythonic idioms
or not knowing how to use them properly. Based on an analysis of 7,638 Python
repositories on GitHub, we find that non-idiomatic Python code that can be
implemented with pythonic idioms occurs frequently and widely. Unfortunately,
there is no tool for automatically refactoring such non-idiomatic code into
idiomatic code. In this paper, we design and implement an automatic refactoring
tool to make Python code idiomatic. We identify nine pythonic idioms by
systematically contrasting the abstract syntax grammar of Python and Java. Then
we define the syntactic patterns for detecting non-idiomatic code for each
pythonic idiom. Finally, we devise atomic AST-rewriting operations and
refactoring steps to refactor non-idiomatic code into idiomatic code. We test
and review over 4,115 refactorings applied to 1,065 Python projects from
GitHub, and submit 90 pull requests for the 90 randomly sampled refactorings to
84 projects. These evaluations confirm the high-accuracy, practicality and
usefulness of our refactoring tool on real-world Python code. Our refactoring
tool can be accessed at 47.242.131.128:5000.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 15:30:46 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Zhang",
"Zejun",
""
],
[
"Xing",
"Zhenchang",
""
],
[
"Xia",
"Xin",
""
],
[
"Xu",
"Xiwei",
""
],
[
"Zhu",
"Liming",
""
]
] |
new_dataset
| 0.998852 |
2207.05618
|
Jonathan Dupuy
|
Wilhem Barbier and Jonathan Dupuy
|
Htex: Per-Halfedge Texturing for Arbitrary Mesh Topologies
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce per-halfedge texturing (Htex) a GPU-friendly method for
texturing arbitrary polygon-meshes without an explicit parameterization. Htex
builds upon the insight that halfedges encode an intrinsic triangulation for
polygon meshes, where each halfedge spans a unique triangle with direct
adjacency information. Rather than storing a separate texture per face of the
input mesh as is done by previous parameterization-free texturing methods, Htex
stores a square texture for each halfedge and its twin. We show that this
simple change from face to halfedge induces two important properties for high
performance parameterization-free texturing. First, Htex natively supports
arbitrary polygons without requiring dedicated code for, e.g., non-quad faces.
Second, Htex leads to a straightforward and efficient GPU implementation that
uses only three texture-fetches per halfedge to produce continuous texturing
across the entire mesh. We demonstrate the effectiveness of Htex by rendering
production assets in real time.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 15:42:44 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Barbier",
"Wilhem",
""
],
[
"Dupuy",
"Jonathan",
""
]
] |
new_dataset
| 0.984451 |
2207.05624
|
Ali Munir
|
Sepehr Abbasi, Shiva Ketabi, Ali Munir, Mahmoud Bahnasy, Yashar
Ganjali
|
DWTCP: Ultra Low Latency Congestion Control Protocol for Data Centers
|
19 pages, 17 figures
| null | null | null |
cs.NI cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Congestion control algorithms rely on a variety of congestion signals (packet
loss, Explicit Congestion Notification, delay, etc.) to achieve fast
convergence, high utilization, and fairness among flows. A key limitation of
these congestion signals is that they are either late in feedback or they incur
significant overheads. An ideal congestion control must discover any available
bandwidth in the network, detect congestion as soon as link utilization
approaches full capacity, and react timely to avoid queuing and packet drops,
without significant overheads. To this end, this work proposes Scout service
that leverages priority queues to infer bandwidth availability and link
busyness at the host. The key observation here is that as the high priority
queue (HPQ) gets busier, the low priority queue (LPQ) is served less.
Therefore, the state of the link can be observed from the LPQ and any
congestion can be detected several RTTs earlier than observing the HPQ. We
propose a new transport protocol, Double-Window Transmission Control Protocol
(DWTCP) that builds upon the Scout service to dynamically adjust its congestion
window. Our testbed and simulation-based evaluation demonstrates that Scout
enables a data center transport to achieve high throughput, near-zero queues,
lower latency, and high fairness.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 15:46:19 GMT"
}
] | 2022-07-13T00:00:00 |
[
[
"Abbasi",
"Sepehr",
""
],
[
"Ketabi",
"Shiva",
""
],
[
"Munir",
"Ali",
""
],
[
"Bahnasy",
"Mahmoud",
""
],
[
"Ganjali",
"Yashar",
""
]
] |
new_dataset
| 0.999157 |
2010.04968
|
Keren Fu
|
Keren Fu, Yao Jiang, Ge-Peng Ji, Tao Zhou, Qijun Zhao, Deng-Ping Fan
|
Light Field Salient Object Detection: A Review and Benchmark
| null |
Computational Visual Media, 2022, Vol. 8, No. 4, Pages: 509-534
|
10.1007/s41095-021-0256-2
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Salient object detection (SOD) is a long-standing research topic in computer
vision and has drawn an increasing amount of research interest in the past
decade. This paper provides the first comprehensive review and benchmark for
light field SOD, which has long been lacking in the saliency community.
Firstly, we introduce preliminary knowledge on light fields, including theory
and data forms, and then review existing studies on light field SOD, covering
ten traditional models, seven deep learning-based models, one comparative
study, and one brief review. Existing datasets for light field SOD are also
summarized with detailed information and statistical analyses. Secondly, we
benchmark nine representative light field SOD models together with several
cutting-edge RGB-D SOD models on four widely used light field datasets, from
which insightful discussions and analyses, including a comparison between light
field SOD and RGB-D SOD models, are achieved. Besides, due to the inconsistency
of datasets in their current forms, we further generate complete data and
supplement focal stacks, depth maps and multi-view images for the inconsistent
datasets, making them consistent and unified. Our supplemental data makes a
universal benchmark possible. Lastly, because light field SOD is quite a
special problem attributed to its diverse data representations and high
dependency on acquisition hardware, making it differ greatly from other
saliency detection tasks, we provide nine hints into the challenges and future
directions, and outline several open issues. We hope our review and
benchmarking could help advance research in this field. All the materials
including collected models, datasets, benchmarking results, and supplemented
light field datasets will be publicly available on our project site
https://github.com/kerenfu/LFSOD-Survey.
|
[
{
"version": "v1",
"created": "Sat, 10 Oct 2020 10:30:40 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Oct 2020 13:40:45 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Jan 2021 06:35:26 GMT"
},
{
"version": "v4",
"created": "Sat, 24 Jul 2021 14:23:26 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Fu",
"Keren",
""
],
[
"Jiang",
"Yao",
""
],
[
"Ji",
"Ge-Peng",
""
],
[
"Zhou",
"Tao",
""
],
[
"Zhao",
"Qijun",
""
],
[
"Fan",
"Deng-Ping",
""
]
] |
new_dataset
| 0.993818 |
2010.14457
|
Miroslav Mitev
|
Miroslav Mitev, Mahdi Shekiba-Herfeh, Arsenia Chorti, Martin Reed
|
Multi-factor Physical Layer Security Authentication in Short Blocklength
Communication
| null | null |
10.1109/ACCESS.2022.3187967
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lightweight and low latency security schemes at the physical layer that have
recently attracted a lot of attention include: (i) physical unclonable
functions (PUFs), (ii) localization based authentication, and, (iii) secret key
generation (SKG) from wireless fading coefficients. In this paper, we focus on
short blocklengths and propose a fast, privacy-preserving, multi-factor
authentication protocol that uniquely combines PUFs, proximity estimation and
SKG. We focus on delay constrained applications and demonstrate the performance
of the SKG scheme in the short blocklength by providing a numerical comparison
of three families of channel codes, including half rate low density parity
check codes (LDPC), Bose Chaudhuri Hocquenghem (BCH), and, Polar Slepian Wolf
codes for n=512, 1024. The SKG keys are incorporated in a zero-round-trip-time
resumption protocol for fast re-authentication. All schemes of the proposed
mutual authentication protocol are shown to be secure through formal proofs
using Burrows, Abadi and Needham (BAN) and Mao and Boyd (MB) logic as well as
the Tamarin-prover.
|
[
{
"version": "v1",
"created": "Tue, 27 Oct 2020 17:17:13 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Feb 2021 14:54:00 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Mitev",
"Miroslav",
""
],
[
"Shekiba-Herfeh",
"Mahdi",
""
],
[
"Chorti",
"Arsenia",
""
],
[
"Reed",
"Martin",
""
]
] |
new_dataset
| 0.954431 |
2105.03242
|
Marvin Stuede
|
Marvin Stuede, Konrad Westermann, Moritz Schappler, Svenja
Spindeldreier
|
Sobi: An Interactive Social Service Robot for Long-Term Autonomy in Open
Environments
| null | null |
10.1109/ECMR50962.2021.9568838
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Long-term autonomy in service robotics is a current research topic,
especially for dynamic, large-scale environments that change over time. We
present Sobi, a mobile service robot developed as an interactive guide for open
environments, such as public places with indoor and outdoor areas. The robot
will serve as a platform for environmental modeling and human-robot
interaction. Its main hardware and software components, which we freely license
as a documented open source project, are presented. Another key focus is Sobi's
monitoring system for long-term autonomy, which restores system components in a
targeted manner in order to extend the total system lifetime without unplanned
intervention. We demonstrate first results of the long-term autonomous
capabilities in a 16-day indoor deployment, in which the robot patrols a total
of 66.6 km with an average of 5.5 hours of travel time per weekday, charging
autonomously in between. In a user study with 12 participants, we evaluate the
appearance and usability of the user interface, which allows users to
interactively query information about the environment and directions.
|
[
{
"version": "v1",
"created": "Fri, 7 May 2021 13:15:24 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jul 2021 08:27:45 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Stuede",
"Marvin",
""
],
[
"Westermann",
"Konrad",
""
],
[
"Schappler",
"Moritz",
""
],
[
"Spindeldreier",
"Svenja",
""
]
] |
new_dataset
| 0.999716 |
2107.02692
|
Armin Moin
|
Armin Moin, Andrei Mituca, Moharram Challenger, Atta Badii and Stephan
G\"unnemann
|
ML-Quadrat & DriotData: A Model-Driven Engineering Tool and a Low-Code
Platform for Smart IoT Services
|
ICSE'22 Tool Demo
| null |
10.1109/ICSE-Companion55297.2022.9793752
| null |
cs.SE cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we present ML-Quadrat, an open-source research prototype that
is based on the Eclipse Modeling Framework (EMF) and the state of the art in
the literature of Model-Driven Software Engineering (MDSE) for smart
Cyber-Physical Systems (CPS) and the Internet of Things (IoT). Its envisioned
users are mostly software developers who might not have deep knowledge and
skills in the heterogeneous IoT platforms and the diverse Artificial
Intelligence (AI) technologies, specifically regarding Machine Learning (ML).
ML-Quadrat is released under the terms of the Apache 2.0 license on Github.
Additionally, we demonstrate an early tool prototype of DriotData, a web-based
Low-Code platform targeting citizen data scientists and citizen/end-user
software developers. DriotData exploits and adopts ML-Quadrat in the industry
by offering an extended version of it as a subscription-based service to
companies, mainly Small- and Medium-Sized Enterprises (SME). The current
preliminary version of DriotData has three web-based model editors: text-based,
tree-/form-based and diagram-based. The latter is designed for domain experts
in the problem or use case domains (namely the IoT vertical domains) who might
not have knowledge and skills in the field of IT. Finally, a short video
demonstrating the tools is available on YouTube: https://youtu.be/VAuz25w0a5k
|
[
{
"version": "v1",
"created": "Tue, 6 Jul 2021 15:52:09 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Nov 2021 14:45:28 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Feb 2022 17:07:51 GMT"
},
{
"version": "v4",
"created": "Wed, 16 Feb 2022 13:21:36 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Moin",
"Armin",
""
],
[
"Mituca",
"Andrei",
""
],
[
"Challenger",
"Moharram",
""
],
[
"Badii",
"Atta",
""
],
[
"Günnemann",
"Stephan",
""
]
] |
new_dataset
| 0.991804 |
2108.00980
|
Guillaume Durandau
|
Guillaume Durandau, Wolfgang Rampeltshammer, Herman van der Kooij,
Massimo Sartori
|
Neuromechanical model-based adaptive control of bi-lateral ankle
exoskeletons: biological joint torque and electromyogram reduction across
walking conditions
|
16 pages, 12 figures, 1 table
| null |
10.1109/TRO.2022.3170239
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To enable the broad adoption of wearable robotic exoskeletons in medical and
industrial settings, it is crucial they can adaptively support large
repertoires of movements. We propose a new human-machine interface to
simultaneously drive bilateral ankle exoskeletons during a range of 'unseen'
walking conditions and transitions that were not used for establishing the
control interface. The proposed approach used person-specific neuromechanical
models to estimate biological ankle joint torques in real-time from measured
electromyograms (EMGs) and joint angles. A low-level controller based on a
disturbance observer translated biological torque estimates into exoskeleton
commands. We call this 'neuromechanical model-based control' (NMBC). NMBC
enabled six individuals to voluntarily control a bilateral ankle exoskeleton
across six walking conditions, including all intermediate transitions, i.e.,
two walking speeds, each performed at three ground elevations, with no need for
predefined torque profiles, a priori chosen neuromuscular reflex rules, or
state machines, as is common in the literature. A single-subject case study was
carried out on a dexterous locomotion task involving moonwalking. NMBC
consistently reduced biological ankle torques, as well as eight ankle muscle
EMGs, both within (22% torque; 12% EMG) and between walking conditions (24%
torque; 14% EMG) compared to non-assisted conditions. Torque and EMG reductions in
novel walking conditions indicated that the exoskeleton operated symbiotically,
as exomuscles controlled by the operator's neuromuscular system. This opens new
avenues for the systematic adoption of wearable robots as part of
out-of-the-lab medical and occupational settings.
|
[
{
"version": "v1",
"created": "Mon, 2 Aug 2021 15:28:59 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 16:46:37 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Durandau",
"Guillaume",
""
],
[
"Rampeltshammer",
"Wolfgang",
""
],
[
"van der Kooij",
"Herman",
""
],
[
"Sartori",
"Massimo",
""
]
] |
new_dataset
| 0.981978 |
2109.08913
|
Mingchen Zhang
|
Mingchen Zhang, Xiaojun Yuan
|
Intelligent Reflecting Surface Aided MIMO with Cascaded LoS Links:
Channel Modelling and Full Multiplexing Region
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work studies the modelling and the optimization of intelligent
reflecting surface (IRS) assisted multiple-input multiple-output (MIMO) systems
through cascaded line-of-sight (LoS) links. In Part I of this work, we build up
a new IRS-aided MIMO channel model, named the cascaded LoS MIMO channel. The
proposed channel model consists of a transmitter (Tx) and a receiver (Rx) both
equipped with uniform linear arrays, and an IRS is used to enable
communications between the transmitter and the receiver through the LoS links
seen by the IRS. When modeling the reflection of electromagnetic waves at the
IRS, we take into account the curvature of the wavefront on different
reflecting elements. Based on the established model, we study the spatial
multiplexing capability of the cascaded LoS MIMO system. We introduce the
notion of full multiplexing region (FMR) for the cascaded LoS MIMO channel,
where the FMR is the union of Tx-IRS and IRS-Rx distance pairs that enable full
multiplexing communication. Under a special passive beamforming strategy named
reflective focusing, we derive an inner bound of the FMR, and provide the
corresponding orientation settings of the antenna arrays that enable full
multiplexing. Based on the proposed channel model and reflective focusing, the
mutual information maximization problem is discussed in Part II.
|
[
{
"version": "v1",
"created": "Sat, 18 Sep 2021 11:54:39 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 06:27:20 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Jul 2022 14:31:08 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Zhang",
"Mingchen",
""
],
[
"Yuan",
"Xiaojun",
""
]
] |
new_dataset
| 0.993868 |
2110.02584
|
Jaesung Tae
|
Jaesung Tae, Hyeongju Kim, Taesu Kim
|
EdiTTS: Score-based Editing for Controllable Text-to-Speech
|
4 pages, 3 figures, 3 tables, INTERSPEECH 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present EdiTTS, an off-the-shelf speech editing methodology based on
score-based generative modeling for text-to-speech synthesis. EdiTTS allows for
targeted, granular editing of audio, both in terms of content and pitch,
without the need for any additional training, task-specific optimization, or
architectural modifications to the score-based model backbone. Specifically, we
apply coarse yet deliberate perturbations in the Gaussian prior space to induce
desired behavior from the diffusion model while applying masks and softening
kernels to ensure that iterative edits are applied only to the target region.
Through listening tests and speech-to-text back transcription, we show that
EdiTTS outperforms existing baselines and produces robust samples that satisfy
user-imposed requirements.
|
[
{
"version": "v1",
"created": "Wed, 6 Oct 2021 08:51:10 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 03:39:33 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Jul 2022 17:22:14 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Tae",
"Jaesung",
""
],
[
"Kim",
"Hyeongju",
""
],
[
"Kim",
"Taesu",
""
]
] |
new_dataset
| 0.977314 |
2110.15063
|
Hanlei Zhang
|
Hanlei Zhang, Xiaoteng Li, Hua Xu, Panpan Zhang, Kang Zhao, Kai Gao
|
TEXTOIR: An Integrated and Visualized Platform for Text Open Intent
Recognition
|
Published in ACL 2021, demo paper
| null |
10.18653/v1/2021.acl-demo.20
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
TEXTOIR is the first integrated and visualized platform for text open intent
recognition. It is composed of two main modules: open intent detection and open
intent discovery. Each module integrates most of the state-of-the-art
algorithms and benchmark intent datasets. It also contains an overall framework
connecting the two modules in a pipeline scheme. In addition, this platform has
visualized tools for data and model management, training, evaluation and
analysis of the performance from different aspects. TEXTOIR provides useful
toolkits and convenient visualized interfaces for each sub-module (Toolkit
code: https://github.com/thuiar/TEXTOIR), and designs a framework to implement
a complete process to both identify known intents and discover open intents
(Demo code: https://github.com/thuiar/TEXTOIR-DEMO).
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 02:08:18 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Zhang",
"Hanlei",
""
],
[
"Li",
"Xiaoteng",
""
],
[
"Xu",
"Hua",
""
],
[
"Zhang",
"Panpan",
""
],
[
"Zhao",
"Kang",
""
],
[
"Gao",
"Kai",
""
]
] |
new_dataset
| 0.966067 |
2202.12838
|
Praveen Kumar Rajendran
|
Praveen Kumar Rajendran, Sumit Mishra, Luiz Felipe Vecchietti, Dongsoo
Har
|
RelMobNet: End-to-end relative camera pose estimation using a robust
two-stage training
|
15 pages, 7 figures, 2 tables - RelMobNet revised draft
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Relative camera pose estimation, i.e. estimating the translation and rotation
vectors using a pair of images taken in different locations, is an important
part of systems in augmented reality and robotics. In this paper, we present an
end-to-end relative camera pose estimation network using a siamese architecture
that is independent of camera parameters. The network is trained using the
Cambridge Landmarks data with four individual scene datasets and a dataset
combining the four scenes. To improve generalization, we propose a novel
two-stage training that alleviates the need of a hyperparameter to balance the
translation and rotation loss scale. The proposed method is compared with
one-stage training CNN-based methods such as RPNet and RCPNet, and the results
demonstrate that the proposed model improves translation vector estimation by
16.11%, 28.88%, and 52.27% on the Kings College, Old Hospital, and St Marys
Church scenes, respectively. To assess texture invariance, we investigate the
generalization of the proposed method by augmenting the datasets to different
scene styles, as ablation studies, using generative adversarial networks. Also,
we present a qualitative assessment of epipolar lines of our network
predictions and ground truth poses.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 17:27:26 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Jul 2022 15:31:47 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Rajendran",
"Praveen Kumar",
""
],
[
"Mishra",
"Sumit",
""
],
[
"Vecchietti",
"Luiz Felipe",
""
],
[
"Har",
"Dongsoo",
""
]
] |
new_dataset
| 0.998657 |
2202.13403
|
Gerald Schwiebert
|
Gerald Schwiebert, Cornelius Weber, Leyuan Qu, Henrique Siqueira,
Stefan Wermter
|
A Multimodal German Dataset for Automatic Lip Reading Systems and
Transfer Learning
|
Accepted to LREC 2022
|
Proceedings of the 13th Conference on Language Resources and
Evaluation (LREC 2022), pages 6829-6836
|
10.25592/uhhfdm.10047
| null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large datasets as required for deep learning of lip reading do not exist in
many languages. In this paper we present the dataset GLips (German Lips)
consisting of 250,000 publicly available videos of the faces of speakers of the
Hessian Parliament, which was processed for word-level lip reading using an
automatic pipeline. The format is similar to that of the English language LRW
(Lip Reading in the Wild) dataset, with each video encoding one word of
interest in a context of 1.16 seconds duration, which yields compatibility for
studying transfer learning between both datasets. By training a deep neural
network, we investigate whether lip reading has language-independent features,
so that datasets of different languages can be used to improve lip reading
models. We demonstrate learning from scratch and show that transfer learning
from LRW to GLips and vice versa improves learning speed and performance, in
particular for the validation set.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 17:37:35 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2022 13:41:55 GMT"
},
{
"version": "v3",
"created": "Wed, 11 May 2022 10:21:56 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Schwiebert",
"Gerald",
""
],
[
"Weber",
"Cornelius",
""
],
[
"Qu",
"Leyuan",
""
],
[
"Siqueira",
"Henrique",
""
],
[
"Wermter",
"Stefan",
""
]
] |
new_dataset
| 0.999805 |
2203.17041
|
Han Xiao
|
Libing Wang, Han Xiao, Donglei Du, Dachuan Xu
|
On the Population Monotonicity of Independent Set Games
| null |
Operations Research Letters (2022)
|
10.1016/j.orl.2022.06.009
| null |
cs.GT cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An independent set game is a cooperative game defined on graphs and dealing
with profit-sharing in maximum independent set problems. A population monotonic
allocation scheme is a rule specifying how to share the profit of each
coalition among its participants such that every participant is better off when
the coalition expands. In this paper, we provide a necessary and sufficient
characterization for population monotonic allocation schemes in independent set
games. Moreover, our characterization can be verified efficiently.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 14:05:29 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Jul 2022 12:21:41 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Wang",
"Libing",
""
],
[
"Xiao",
"Han",
""
],
[
"Du",
"Donglei",
""
],
[
"Xu",
"Dachuan",
""
]
] |
new_dataset
| 0.988033 |
2205.11680
|
Hanyang Liu
|
Hanyang Liu, Sunny S. Lou, Benjamin C. Warner, Derek R. Harford,
Thomas Kannampallil, Chenyang Lu
|
HiPAL: A Deep Framework for Physician Burnout Prediction Using Activity
Logs in Electronic Health Records
|
11 pages including appendices. Accepted by KDD'22
|
KDD 2022
|
10.1145/3534678.3539056
| null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Burnout is a significant public health concern affecting nearly half of the
healthcare workforce. This paper presents the first end-to-end deep learning
framework for predicting physician burnout based on electronic health record
(EHR) activity logs, digital traces of physician work activities that are
available in any EHR system. In contrast to prior approaches that exclusively
relied on surveys for burnout measurement, our framework directly learns deep
representations of physician behaviors from large-scale clinician activity logs
to predict burnout. We propose the Hierarchical burnout Prediction based on
Activity Logs (HiPAL), featuring a pre-trained time-dependent activity
embedding mechanism tailored for activity logs and a hierarchical predictive
model, which mirrors the natural hierarchical structure of clinician activity
logs and captures physicians' evolving burnout risk at both short-term and
long-term levels. To utilize the large amount of unlabeled activity logs, we
propose a semi-supervised framework that learns to transfer knowledge extracted
from unlabeled clinician activities to the HiPAL-based prediction model. The
experiment on over 15 million clinician activity logs collected from the EHR at
a large academic medical center demonstrates the advantages of our proposed
framework in predictive performance of physician burnout and training
efficiency over state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 00:10:27 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2022 21:40:33 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Liu",
"Hanyang",
""
],
[
"Lou",
"Sunny S.",
""
],
[
"Warner",
"Benjamin C.",
""
],
[
"Harford",
"Derek R.",
""
],
[
"Kannampallil",
"Thomas",
""
],
[
"Lu",
"Chenyang",
""
]
] |
new_dataset
| 0.967576 |
2206.10728
|
Shalini Saini
|
Shalini Saini, Dhiral Panjwani, and Nitesh Saxena
|
Mobile Mental Health Apps: Alternative Intervention or Intrusion?
|
11 pages
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Mental health is an extremely important subject, especially in these
unprecedented times of the COVID-19 pandemic. Ubiquitous mobile phones can
equip users to supplement psychiatric treatment and manage their mental health.
Mobile Mental Health (MMH) apps emerge as an effective alternative to assist
with a broad range of psychological disorders, filling the much-needed
patient-provider accessibility gap. However, they also raise significant
concerns about sensitive information leakage. The absence of a transparent
privacy policy and lack of user awareness may pose a significant threat to
undermining the applicability of such tools. We conducted a multifold study of
- 1) Privacy Policies (Manually and with Polisis, an automated framework to
evaluate privacy policies); 2) App permissions; 3) Static Analysis for inherent
security issues; 4) Dynamic Analysis for threat surface and vulnerabilities
detection, and 5) Traffic Analysis.
Our results indicate that apps' exploitable flaws, dangerous permissions, and
insecure data handling pose a potential threat to the users' privacy and
security. The Dynamic analysis identified 145 vulnerabilities in 20 top-rated
MMH apps where attackers and malicious apps can access sensitive information.
45% of MMH apps use a unique identifier, Hardware Id, which can link a unique
id to a particular user and probe users' mental health. Traffic analysis shows
that sensitive mental health data can be leaked through insecure data
transmission. MMH apps need better scrutiny and regulation for more widespread
usage to meet the increasing need for mental health care without being
intrusive to the already vulnerable population.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 21:05:54 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Jul 2022 20:12:15 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Saini",
"Shalini",
""
],
[
"Panjwani",
"Dhiral",
""
],
[
"Saxena",
"Nitesh",
""
]
] |
new_dataset
| 0.989968 |
2206.13703
|
George Boateng
|
George Boateng, Samuel John, Andrew Glago, Samuel Boateng, Victor
Kumbol
|
Kwame for Science: An AI Teaching Assistant Based on Sentence-BERT for
Science Education in West Africa
|
5 pages, Accepted at the Fourth Workshop on Intelligent Textbooks
(iTextbooks) at the 23th International Conference on Artificial Intelligence
in Education (AIED 2022)
| null | null | null |
cs.CL cs.CY cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Africa has a high student-to-teacher ratio which limits students' access to
teachers. Consequently, students struggle to get answers to their questions. In
this work, we extended Kwame, our previous AI teaching assistant, adapted it
for science education, and deployed it as a web app. Kwame for Science answers
questions of students based on the Integrated Science subject of the West
African Senior Secondary Certificate Examination (WASSCE). Kwame for Science is
a Sentence-BERT-based question-answering web app that displays 3 paragraphs as
answers along with a confidence score in response to science questions.
Additionally, it displays the top 5 related past exam questions and their
answers in addition to the 3 paragraphs. Our preliminary evaluation of
Kwame for Science over a 2.5-week real-world deployment showed a top 3 accuracy
of 87.5% (n=56) with 190 users across 11 countries. Kwame for Science will
enable the delivery of scalable, cost-effective, and quality remote education
to millions of people across Africa.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2022 02:27:23 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jul 2022 00:13:42 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Boateng",
"George",
""
],
[
"John",
"Samuel",
""
],
[
"Glago",
"Andrew",
""
],
[
"Boateng",
"Samuel",
""
],
[
"Kumbol",
"Victor",
""
]
] |
new_dataset
| 0.991441 |
2207.04136
|
Marcel Hussing
|
Jorge A. Mendez, Marcel Hussing, Meghna Gummadi, Eric Eaton
|
CompoSuite: A Compositional Reinforcement Learning Benchmark
|
Published at 1st Conference on Lifelong Learning Agents, 2022; code:
https://github.com/Lifelong-ML/CompoSuite
| null | null | null |
cs.LG cs.AI cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present CompoSuite, an open-source simulated robotic manipulation
benchmark for compositional multi-task reinforcement learning (RL). Each
CompoSuite task requires a particular robot arm to manipulate one individual
object to achieve a task objective while avoiding an obstacle. This
compositional definition of the tasks endows CompoSuite with two remarkable
properties. First, varying the robot/object/objective/obstacle elements leads
to hundreds of RL tasks, each of which requires a meaningfully different
behavior. Second, RL approaches can be evaluated specifically for their ability
to learn the compositional structure of the tasks. This latter capability to
functionally decompose problems would enable intelligent agents to identify and
exploit commonalities between learning tasks to handle large varieties of
highly diverse problems. We benchmark existing single-task, multi-task, and
compositional learning algorithms on various training settings, and assess
their capability to compositionally generalize to unseen tasks. Our evaluation
exposes the shortcomings of existing RL approaches with respect to
compositionality and opens new avenues for investigation.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 22:01:52 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Mendez",
"Jorge A.",
""
],
[
"Hussing",
"Marcel",
""
],
[
"Gummadi",
"Meghna",
""
],
[
"Eaton",
"Eric",
""
]
] |
new_dataset
| 0.99894 |
2207.04140
|
Alberto Gotta
|
Michele Martelli, Antonio Virdis, Alberto Gotta, Pietro Cassar\`A,
Maria Di Summa
|
An Outlook on the Future Marine Traffic Management System for Autonomous
Ships
| null |
IEEE Access, Vol. 9, Page(s): 157316 - 157328, 25 November 2021
|
10.1109/ACCESS.2021.3130741
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
In the shipping digitalisation process, the peak will be reached with the
advent of a wholly autonomous and at the same time safe and reliable ship. Full
autonomy could be obtained by two linked Artificial-Intelligence systems
representing the ship navigator and the ship engineer that possess sensing and
analysis skills, situational awareness, planning, and control capabilities.
Many efforts have been made in developing onboard systems; however, the shore
facilities are not ready yet to deal with these new technologies. The paper
aims to present the innovative technologies and methodologies needed to develop
a futuristic Vessel Traffic System. The proposed system will aim at faultless
data acquisition and processing, provide input to decision-making systems, and
suggest evasive manoeuvres to deal with hazards and system failures without
human intervention onboard. The system is composed of three different and
interacting layers. The first is an artificially intelligent tool to detect and
control autonomous ships, thanks to situation recognition and obstacle
avoidance strategies. The second is an orchestration and management platform
designed to coordinate the sensing-actuation infrastructure and the AI
algorithms' results made available by multiple ships, mustering edge and
distributed computing techniques to fulfil the specific harsh requirements of
the sea environment. The final part is a holistic guidance-navigation-control
framework to manage autonomous ships navigation in a crowded area. Eventually,
a cyber-physical scenario, using both a ship digital-twin and a real
model-scale ship, is suggested to test and validate the innovative system
without the availability of a full-scale scenario.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 22:10:35 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Martelli",
"Michele",
""
],
[
"Virdis",
"Antonio",
""
],
[
"Gotta",
"Alberto",
""
],
[
"CassarÀ",
"Pietro",
""
],
[
"Di Summa",
"Maria",
""
]
] |
new_dataset
| 0.993307 |
2207.04165
|
Xiaoyi Zhang
|
Jieshan Chen, Amanda Swearngin, Jason Wu, Titus Barik, Jeffrey
Nichols, Xiaoyi Zhang
|
Extracting Replayable Interactions from Videos of Mobile App Usage
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Screen recordings of mobile apps are a popular and readily available way for
users to share how they interact with apps, such as in online tutorial videos,
user reviews, or as attachments in bug reports. Unfortunately, both people and
systems can find it difficult to reproduce touch-driven interactions from video
pixel data alone. In this paper, we introduce an approach to extract and replay
user interactions in videos of mobile apps, using only pixel information in
video frames. To identify interactions, we apply heuristic-based image
processing and convolutional deep learning to segment screen recordings,
classify the interaction in each segment, and locate the interaction point. To
replay interactions on another device, we match elements on app screens using
UI element detection. We evaluate the feasibility of our pixel-based approach
using two datasets: the Rico mobile app dataset and a new dataset of 64 apps
with both iOS and Android versions. We find that our end-to-end approach can
successfully replay a majority of interactions (iOS--84.1%, Android--78.4%) on
different devices, which is a step towards supporting a variety of scenarios,
including automatically annotating interactions in existing videos, automated
UI testing, and creating interactive app tutorials.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2022 00:45:05 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Chen",
"Jieshan",
""
],
[
"Swearngin",
"Amanda",
""
],
[
"Wu",
"Jason",
""
],
[
"Barik",
"Titus",
""
],
[
"Nichols",
"Jeffrey",
""
],
[
"Zhang",
"Xiaoyi",
""
]
] |
new_dataset
| 0.993915 |
2207.04172
|
David Pal
|
Jonathan Gu, David Pal, Kevin Ryan
|
Auction for Double-Wide Ads
|
17 pages, 6 figures
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an auction for online advertising where each ad occupies either
one square or two horizontally-adjacent squares of a grid of squares. Our
primary application are ads for products shown on retail websites such as
Instacart or Amazon where the products are naturally organized into a grid. We
propose efficient algorithms for computing the optimal layout of the ads and
pricing of the ads. The auction is a generalization of the generalized
second-price (GSP) auction used by internet search engines (e.g. Google,
Microsoft Bing, Yahoo!).
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2022 01:43:54 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Gu",
"Jonathan",
""
],
[
"Pal",
"David",
""
],
[
"Ryan",
"Kevin",
""
]
] |
new_dataset
| 0.998316 |
2207.04363
|
Zhong Zheng
|
Zhong Zheng, Xinyao Wang, Zesong Fei, Qingqing Wu, Bin Li, Lajos Hanzo
|
Secure UAV-to-Ground MIMO Communications: Joint Transceiver and Location
Optimization
|
15 pages, 11 figures. To appear in IEEE Transactions on Vehicular
Technology
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicles (UAVs) are foreseen to constitute promising airborne
communication devices as a benefit of their superior channel quality. But
UAV-to-ground (U2G) communications are vulnerable to eavesdropping. Hence, we
conceive a sophisticated physical layer security solution for improving the
secrecy rate of multi-antenna aided U2G systems. Explicitly, the secrecy rate
of the U2G MIMO wiretap channels is derived by using random matrix theory. The
resultant explicit expression is then applied in the joint optimization of the
MIMO transceiver and the UAV location relying on an alternating optimization
technique. Our numerical results show that the joint transceiver and location
optimization conceived facilitates secure communications even in the
challenging scenario, where the legitimate channel of confidential information
is inferior to the eavesdropping channel.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2022 01:43:12 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Zheng",
"Zhong",
""
],
[
"Wang",
"Xinyao",
""
],
[
"Fei",
"Zesong",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Li",
"Bin",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.998229 |
2207.04399
|
Litao Yu
|
Litao Yu, Jian Zhang
|
Horizontal and Vertical Attention in Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers are built upon multi-head scaled dot-product attention and
positional encoding, which aim to learn the feature representations and token
dependencies. In this work, we focus on enhancing the distinctive
representation by learning to augment the feature maps with the self-attention
mechanism in Transformers. Specifically, we propose the horizontal attention to
re-weight the multi-head output of the scaled dot-product attention before
dimensionality reduction, and propose the vertical attention to adaptively
re-calibrate channel-wise feature responses by explicitly modelling
inter-dependencies among different channels. We demonstrate the Transformer
models equipped with the two attentions have a high generalization capability
across different supervised learning tasks, with a very minor additional
computational cost overhead. The proposed horizontal and vertical attentions
are highly modular, which can be inserted into various Transformer models to
further improve the performance. Our code is available in the supplementary
material.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2022 07:08:18 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Yu",
"Litao",
""
],
[
"Zhang",
"Jian",
""
]
] |
new_dataset
| 0.998406 |
2207.04453
|
Mika H\"am\"al\"ainen
|
Teemu P\"oyh\"onen, Mika H\"am\"al\"ainen, Khalid Alnajjar
|
Multilingual Persuasion Detection: Video Games as an Invaluable Data
Source for NLP
|
DiGRA 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Role-playing games (RPGs) have a considerable amount of text in video game
dialogues. Quite often this text is semi-annotated by the game developers. In
this paper, we extract a multilingual dataset of persuasive dialogue from
several RPGs. We show the viability of this data in building a persuasion
detection system using a natural language processing (NLP) model called BERT.
We believe that video games have a lot of unused potential as a datasource for
a variety of NLP tasks. The code and data described in this paper are available
on Zenodo.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2022 12:38:02 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Pöyhönen",
"Teemu",
""
],
[
"Hämäläinen",
"Mika",
""
],
[
"Alnajjar",
"Khalid",
""
]
] |
new_dataset
| 0.998535 |
2207.04508
|
Abhinandan Jain
|
Abhinandan Jain, Pattie Maes and Misha Sra
|
Adaptive Virtual Neuroarchitecture
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Our surrounding environment impacts our cognitive-emotional processes on a
daily basis and shapes our physical, psychological and social wellbeing.
Although the effects of the built environment on our psycho-physiological
processes are well studied, virtual environment design with a potentially
similar impact on the user, has received limited attention. Based on the
influence of space design on a user and combining that with the dynamic
affordances of virtual spaces, we present the idea of adaptive virtual
neuroarchitecture (AVN), where virtual environments respond to the user and the
user's real world context while simultaneously influencing them both in
realtime. To show how AVN has been explored in current research, we present a
sampling of recent work that demonstrates reciprocal relationships using
physical affordances (space, objects), the user's state (physiological,
cognitive, emotional), and the virtual world used in the design of novel
virtual reality experiences. We believe AVN has the potential to help us learn
how to design spaces and environments that can enhance the wellbeing of their
inhabitants.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2022 17:14:37 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Jain",
"Abhinandan",
""
],
[
"Maes",
"Pattie",
""
],
[
"Sra",
"Misha",
""
]
] |
new_dataset
| 0.971931 |
2207.04614
|
Xiaowei Hu
|
Tianyu Wang, Xiaowei Hu, Pheng-Ann Heng, Chi-Wing Fu
|
Instance Shadow Detection with A Single-Stage Detector
|
Accepted to IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI). This is the journal version of arXiv:1911.07034 and
https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Single-Stage_Instance_Shadow_Detection_With_Bidirectional_Relation_Learning_CVPR_2021_paper.pdf
| null |
10.1109/TPAMI.2022.3185628
| null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper formulates a new problem, instance shadow detection, which aims to
detect shadow instances and the associated object instances that cast each shadow
in the input image. To approach this task, we first compile a new dataset with
the masks for shadow instances, object instances, and shadow-object
associations. We then design an evaluation metric for quantitative evaluation
of the performance of instance shadow detection. Further, we design a
single-stage detector to perform instance shadow detection in an end-to-end
manner, where the bidirectional relation learning module and the deformable
maskIoU head are proposed in the detector to directly learn the relation
between shadow instances and object instances and to improve the accuracy of
the predicted masks. Finally, we quantitatively and qualitatively evaluate our
method on the benchmark dataset of instance shadow detection and show the
applicability of our method on light direction estimation and photo editing.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 04:15:42 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Wang",
"Tianyu",
""
],
[
"Hu",
"Xiaowei",
""
],
[
"Heng",
"Pheng-Ann",
""
],
[
"Fu",
"Chi-Wing",
""
]
] |
new_dataset
| 0.99967 |
2207.04625
|
Yashael Faith Arthanto
|
Yashael Faith Arthanto, David Ojika, Joo-Young Kim
|
FSHMEM: Supporting Partitioned Global Address Space on FPGAs for
Large-Scale Hardware Acceleration Infrastructure
|
This paper will be published in the 2022 32nd International
Conference on Field Programmable Logic and Applications (FPL)
| null | null | null |
cs.DC cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
By providing highly efficient one-sided communication with globally shared
memory space, Partitioned Global Address Space (PGAS) has become one of the
most promising parallel computing models in high-performance computing (HPC).
Meanwhile, FPGA is getting attention as an alternative compute platform for HPC
systems with the benefit of custom computing and design flexibility. However,
the exploration of PGAS has not been conducted on FPGAs, unlike the traditional
message passing interface. This paper proposes FSHMEM, a software/hardware
framework that enables the PGAS programming model on FPGAs. We implement the
core functions of GASNet specification on FPGA for native PGAS integration in
hardware, while its programming interface is designed to be highly compatible
with legacy software. Our experiments show that FSHMEM achieves the peak
bandwidth of 3813 MB/s, which is more than 95% of the theoretical maximum,
outperforming the prior works by 9.5$\times$. It records 0.35$\mu s$ and 0.59$\mu s$
latency for remote write and read operations, respectively. Finally, we conduct
a case study on the two Intel D5005 FPGA nodes integrating Intel's deep
learning accelerator. The two-node system programmed by FSHMEM achieves
1.94$\times$ and 1.98$\times$ speedup for matrix multiplication and convolution
operation, respectively, showing its scalability potential for HPC
infrastructure.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 04:52:42 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Arthanto",
"Yashael Faith",
""
],
[
"Ojika",
"David",
""
],
[
"Kim",
"Joo-Young",
""
]
] |
new_dataset
| 0.998171 |
2207.04675
|
Jeonghun Baek
|
Jeonghun Baek, Yusuke Matsui, Kiyoharu Aizawa
|
COO: Comic Onomatopoeia Dataset for Recognizing Arbitrary or Truncated
Texts
|
Accepted at ECCV 2022. 25 pages, 16 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recognizing irregular texts has been a challenging topic in text recognition.
To encourage research on this topic, we provide a novel comic onomatopoeia
dataset (COO), which consists of onomatopoeia texts in Japanese comics. COO has
many arbitrary texts, such as extremely curved, partially shrunk texts, or
arbitrarily placed texts. Furthermore, some texts are separated into several
parts. Each part is a truncated text and is not meaningful by itself. These
parts should be linked to represent the intended meaning. Thus, we propose a
novel task that predicts the link between truncated texts. We conduct three
tasks to detect the onomatopoeia region and capture its intended meaning: text
detection, text recognition, and link prediction. Through extensive
experiments, we analyze the characteristics of the COO. Our data and code are
available at \url{https://github.com/ku21fan/COO-Comic-Onomatopoeia}.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 07:39:35 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Baek",
"Jeonghun",
""
],
[
"Matsui",
"Yusuke",
""
],
[
"Aizawa",
"Kiyoharu",
""
]
] |
new_dataset
| 0.999816 |
2207.04676
|
Zhuo Li
|
Zhuo Li, Runqiu Xiao, Hangting Chen, Zhenduo Zhao, Zihan Zhang,
Wenchao Wang
|
The HCCL System for the NIST SRE21
|
accepted by interspeech 2022
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the systems developed by the HCCL team for the NIST 2021
speaker recognition evaluation (NIST SRE21).We first explore various
state-of-the-art speaker embedding extractors combined with a novel circle loss
to obtain discriminative deep speaker embeddings. Considering that
cross-channel and cross-linguistic speaker recognition are the key challenges
of SRE21, we introduce several techniques to reduce the cross-domain mismatch.
Specifically, Codec and speech enhancement are directly applied to the raw
speech to eliminate the codecs and the environment noise mismatch. We denote
the methods that work directly on speech to eliminate the relatively explicit
mismatches collectively as data adaptation methods. Experiments show that data
adaptation methods achieve 15\% improvements over our baseline. Furthermore, some
popular back-ends domain adaptation algorithms are deployed on speaker
embeddings to alleviate speaker performance degradation caused by the implicit
mismatch. Score calibration is a major failure for us in SRE21. The reason is
that score calibration with too many parameters easily leads to overfitting
problems.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 07:42:26 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Li",
"Zhuo",
""
],
[
"Xiao",
"Runqiu",
""
],
[
"Chen",
"Hangting",
""
],
[
"Zhao",
"Zhenduo",
""
],
[
"Zhang",
"Zihan",
""
],
[
"Wang",
"Wenchao",
""
]
] |
new_dataset
| 0.993765 |
2207.04692
|
Owen Millwood
|
Owen Millwood, Jack Miskelly, Bohao Yang, Prosanta Gope, Elif Kavun,
Chenghua Lin
|
PUF-Phenotype: A Robust and Noise-Resilient Approach to Aid
Intra-Group-based Authentication with DRAM-PUFs Using Machine Learning
|
13 pages main text, 7 pages supplementary material (total 20 pages),
8 figures, submitted to IEEE Transactions on Information Forensics and
Security
| null | null | null |
cs.CR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
As the demand for highly secure and dependable lightweight systems increases
in the modern world, Physically Unclonable Functions (PUFs) continue to promise
a lightweight alternative to high-cost encryption techniques and secure key
storage. While the security features promised by PUFs are highly attractive for
secure system designers, they have been shown to be vulnerable to various
sophisticated attacks - most notably Machine Learning (ML) based modelling
attacks (ML-MA) which attempt to digitally clone the PUF behaviour and thus
undermine their security. More recent ML-MA have even exploited publicly known
helper data required for PUF error correction in order to predict PUF responses
without requiring knowledge of response data. In response to this, research is
beginning to emerge regarding the authentication of PUF devices with the
assistance of ML as opposed to traditional PUF techniques of storage and
comparison of pre-known Challenge-Response pairs (CRPs). In this article, we
propose a classification system using ML based on a novel `PUF-Phenotype'
concept to accurately identify the origin and determine the validity of noisy
memory derived (DRAM) PUF responses as an alternative to helper data-reliant
denoising techniques. To our best knowledge, we are the first to perform
classification over multiple devices per model to enable a group-based PUF
authentication scheme. We achieve up to 98\% classification accuracy using a
modified deep convolutional neural network (CNN) for feature extraction in
conjunction with several well-established classifiers. We also experimentally
verified the performance of our model on a Raspberry Pi device to determine the
suitability of deploying our proposed model in a resource-constrained
environment.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 08:13:08 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Millwood",
"Owen",
""
],
[
"Miskelly",
"Jack",
""
],
[
"Yang",
"Bohao",
""
],
[
"Gope",
"Prosanta",
""
],
[
"Kavun",
"Elif",
""
],
[
"Lin",
"Chenghua",
""
]
] |
new_dataset
| 0.998776 |
2207.04716
|
Martin Serror
|
Sven Zemanek and Immanuel Hacker and Konrad Wolsing and Eric Wagner
and Martin Henze and Martin Serror
|
PowerDuck: A GOOSE Data Set of Cyberattacks in Substations
|
Cyber Security Experimentation and Test Workshop (CSET 2022)
| null |
10.1145/3546096.3546102
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Power grids worldwide are increasingly victims of cyberattacks, where
attackers can cause immense damage to critical infrastructure. The growing
digitalization and networking in power grids combined with insufficient
protection against cyberattacks further exacerbate this trend. Hence, security
engineers and researchers must counter these new risks by continuously
improving security measures. Data sets of real network traffic during
cyberattacks play a decisive role in analyzing and understanding such attacks.
Therefore, this paper presents PowerDuck, a publicly available security data
set containing network traces of GOOSE communication in a physical substation
testbed. The data set includes recordings of various scenarios with and without
the presence of attacks. Furthermore, all network packets originating from the
attacker are clearly labeled to facilitate their identification. We thus
envision PowerDuck improving and complementing existing data sets of
substations, which are often generated synthetically, thus enhancing the
security of power grids.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 08:58:02 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Zemanek",
"Sven",
""
],
[
"Hacker",
"Immanuel",
""
],
[
"Wolsing",
"Konrad",
""
],
[
"Wagner",
"Eric",
""
],
[
"Henze",
"Martin",
""
],
[
"Serror",
"Martin",
""
]
] |
new_dataset
| 0.985557 |
2207.04796
|
Elisa Gugliotta
|
Elisa Gugliotta (1, 2, 3), Marco Dinarelli (1) ((1) Universit\'e
Grenoble Alpes, Laboratoires: LIG - Getalp Group (2) LIDILEM, (3) Sapienza
University of Rome)
|
TArC: Tunisian Arabish Corpus First complete release
|
In Proceedings of the Language Resources and Evaluation Conference
(LREC2022), Marseille. European Language Resources Association (pp.
1125-1136)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper we present the final result of a project on Tunisian Arabic
encoded in Arabizi, the Latin-based writing system for digital conversations.
The project led to the creation of two integrated and independent resources: a
corpus and an NLP tool created to annotate the former with various levels of
linguistic information: word classification, transliteration, tokenization,
POS-tagging, lemmatization. We discuss our choices in terms of computational
and linguistic methodology and the strategies adopted to improve our results.
We report on the experiments performed in order to outline our research path.
Finally, we explain why we believe in the potential of these resources for both
computational and linguistic research. Keywords: Tunisian Arabizi, Annotated
Corpus, Neural Network Architecture
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 11:46:59 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Gugliotta",
"Elisa",
""
],
[
"Dinarelli",
"Marco",
""
]
] |
new_dataset
| 0.987399 |
2207.04813
|
Fernando Alonso-Fernandez
|
Javier Galbally, Julian Fierrez-Aguilar, Joaquin Rodriguez-Gonzalez,
Fernando Alonso-Fernandez, Javier Ortega-Garcia, Marino Tapiador
|
On the vulnerability of fingerprint verification systems to fake
fingerprint attacks
|
Published at IEEE International Carnahan Conference on Security
Technology (ICCST)
| null | null | null |
cs.CV cs.CR eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new method to generate gummy fingers is presented. A medium-size fake
fingerprint database is described and two different fingerprint verification
systems are evaluated on it. Three different scenarios are considered in the
experiments, namely: enrollment and test with real fingerprints, enrollment and
test with fake fingerprints, and enrollment with real fingerprints and test
with fake fingerprints. Results for an optical and a thermal sweeping sensor
are given. Both systems are shown to be vulnerable to direct attacks.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 12:22:52 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Galbally",
"Javier",
""
],
[
"Fierrez-Aguilar",
"Julian",
""
],
[
"Rodriguez-Gonzalez",
"Joaquin",
""
],
[
"Alonso-Fernandez",
"Fernando",
""
],
[
"Ortega-Garcia",
"Javier",
""
],
[
"Tapiador",
"Marino",
""
]
] |
new_dataset
| 0.998658 |
2207.04880
|
Leonard Bruns
|
Leonard Bruns and Patric Jensfelt
|
SDFEst: Categorical Pose and Shape Estimation of Objects from RGB-D
using Signed Distance Fields
|
Accepted to IEEE Robotics and Automation Letters (and IROS 2022).
Project page: https://github.com/roym899/sdfest
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rich geometric understanding of the world is an important component of many
robotic applications such as planning and manipulation. In this paper, we
present a modular pipeline for pose and shape estimation of objects from RGB-D
images given their category. The core of our method is a generative shape
model, which we integrate with a novel initialization network and a
differentiable renderer to enable 6D pose and shape estimation from a single or
multiple views. We investigate the use of discretized signed distance fields as
an efficient shape representation for fast analysis-by-synthesis optimization.
Our modular framework enables multi-view optimization and extensibility. We
demonstrate the benefits of our approach over state-of-the-art methods in
several experiments on both synthetic and real data. We open-source our
approach at https://github.com/roym899/sdfest.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 13:53:50 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Bruns",
"Leonard",
""
],
[
"Jensfelt",
"Patric",
""
]
] |
new_dataset
| 0.999416 |
2207.04911
|
Evangelos Kolyvas
|
Evangelos Kolyvas, Spyros Voulgaris
|
CougaR: Fast and Eclipse-Resilient Dissemination for Blockchain Networks
|
12 pages, 12 figures, The 16th ACM International Conference on
Distributed and Event-Based Systems, ACM DEBS 2022
| null |
10.1145/3524860.3539805
| null |
cs.DC cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Despite their development for over a decade, a key problem blockchains are
still facing is scalability in terms of throughput, typically limited to a few
transactions per second. A fundamental factor limiting this metric is the
propagation latency of blocks through the underlying peer-to-peer network,
which is typically constructed by means of random connectivity. Disseminating
blocks fast improves not only the transaction throughput, but also the security
of the system as it reduces the probability of forks. In this paper we present
CougaR: a simple yet efficient, eclipse-resistant, decentralized protocol that
substantially reduces the block dissemination time in blockchain networks.
CougaR's key advantages stem from its link selection policy, which combines a
network latency criterion with randomness to offer fast and reliable block
dissemination to the entire network. Moreover, CougaR is eclipse-resistant by
design, as nodes are protected from having all their links directly or
indirectly imposed on them by others, which is the typical vulnerability
exploited to deploy eclipse attacks. We rigorously evaluate CougaR by an
extensive set of experiments, both against a wide spectrum of parameter
settings, and in comparison to the current state of the art.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 14:40:01 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Kolyvas",
"Evangelos",
""
],
[
"Voulgaris",
"Spyros",
""
]
] |
new_dataset
| 0.995312 |
2207.04945
|
Jie Qin
|
Jie Qin, Shuaihang Yuan, Jiaxin Chen, Boulbaba Ben Amor, Yi Fang, Nhat
Hoang-Xuan, Chi-Bien Chu, Khoi-Nguyen Nguyen-Ngoc, Thien-Tri Cao, Nhat-Khang
Ngo, Tuan-Luc Huynh, Hai-Dang Nguyen, Minh-Triet Tran, Haoyang Luo, Jianning
Wang, Zheng Zhang, Zihao Xin, Yang Wang, Feng Wang, Ying Tang, Haiqin Chen,
Yan Wang, Qunying Zhou, Ji Zhang, Hongyuan Wang
|
SHREC'22 Track: Sketch-Based 3D Shape Retrieval in the Wild
| null | null | null | null |
cs.CV cs.GR cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Sketch-based 3D shape retrieval (SBSR) is an important yet challenging task,
which has drawn more and more attention in recent years. Existing approaches
address the problem in a restricted setting, without appropriately simulating
real application scenarios. To mimic the realistic setting, in this track, we
adopt large-scale sketches drawn by amateurs of different levels of drawing
skills, as well as a variety of 3D shapes including not only CAD models but
also models scanned from real objects. We define two SBSR tasks and construct
two benchmarks consisting of more than 46,000 CAD models, 1,700 realistic
models, and 145,000 sketches in total. Four teams participated in this track
and submitted 15 runs for the two tasks, evaluated by 7 commonly-adopted
metrics. We hope that, the benchmarks, the comparative results, and the
open-sourced evaluation code will foster future research in this direction
among the 3D object retrieval community.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 15:26:52 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Qin",
"Jie",
""
],
[
"Yuan",
"Shuaihang",
""
],
[
"Chen",
"Jiaxin",
""
],
[
"Amor",
"Boulbaba Ben",
""
],
[
"Fang",
"Yi",
""
],
[
"Hoang-Xuan",
"Nhat",
""
],
[
"Chu",
"Chi-Bien",
""
],
[
"Nguyen-Ngoc",
"Khoi-Nguyen",
""
],
[
"Cao",
"Thien-Tri",
""
],
[
"Ngo",
"Nhat-Khang",
""
],
[
"Huynh",
"Tuan-Luc",
""
],
[
"Nguyen",
"Hai-Dang",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Luo",
"Haoyang",
""
],
[
"Wang",
"Jianning",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Xin",
"Zihao",
""
],
[
"Wang",
"Yang",
""
],
[
"Wang",
"Feng",
""
],
[
"Tang",
"Ying",
""
],
[
"Chen",
"Haiqin",
""
],
[
"Wang",
"Yan",
""
],
[
"Zhou",
"Qunying",
""
],
[
"Zhang",
"Ji",
""
],
[
"Wang",
"Hongyuan",
""
]
] |
new_dataset
| 0.999792 |
2207.04947
|
Ramya Tekumalla
|
Ramya Tekumalla and Juan M. Banda
|
TweetDIS: A Large Twitter Dataset for Natural Disasters Built using Weak
Supervision
|
12 pages
| null |
10.5281/zenodo.6628961
| null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Social media is often utilized as a lifeline for communication during natural
disasters. Traditionally, natural disaster tweets are filtered from the Twitter
stream using the name of the natural disaster and the filtered tweets are sent
for human annotation. The process of human annotation to create labeled sets
for machine learning models is laborious, time consuming, at times inaccurate,
and more importantly not scalable in terms of size and real-time use. In this
work, we curate a silver standard dataset using weak supervision. In order to
validate its utility, we train machine learning models on the weakly supervised
data to identify three different types of natural disasters, i.e., earthquakes,
hurricanes and floods. Our results demonstrate that models trained on the
silver standard dataset achieved performance greater than 90% when classifying
a manually curated, gold-standard dataset. To enable reproducible research and
additional downstream utility, we release the silver standard dataset for the
scientific community.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 15:30:09 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Tekumalla",
"Ramya",
""
],
[
"Banda",
"Juan M.",
""
]
] |
new_dataset
| 0.999748 |
2207.04984
|
Henry Pfister
|
S. Brandsen, Avijit Mandal, and Henry D. Pfister
|
Belief Propagation with Quantum Messages for Symmetric Classical-Quantum
Channels
|
Extended version of submission to the 2022 Information Theory
Workshop in Mumbai, India
| null | null | null |
cs.IT math.IT quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Belief propagation (BP) is a classical algorithm that approximates the
marginal distribution associated with a factor graph by passing messages
between adjacent nodes in the graph. It gained popularity in the 1990's as a
powerful decoding algorithm for LDPC codes. In 2016, Renes introduced a belief
propagation with quantum messages (BPQM) and described how it could be used to
decode classical codes defined by tree factor graphs that are sent over the
classical-quantum pure-state channel. In this work, we propose an extension of
BPQM to general binary-input symmetric classical-quantum (BSCQ) channels based
on the implementation of a symmetric "paired measurement". While this new
paired-measurement BPQM (PMBPQM) approach is suboptimal in general, it provides
a concrete BPQM decoder that can be implemented with local operations.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 16:14:49 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Brandsen",
"S.",
""
],
[
"Mandal",
"Avijit",
""
],
[
"Pfister",
"Henry D.",
""
]
] |
new_dataset
| 0.954137 |
2207.05006
|
Christopher Agia
|
Christopher Agia, Krishna Murthy Jatavallabhula, Mohamed Khodeir,
Ondrej Miksik, Vibhav Vineet, Mustafa Mukadam, Liam Paull, Florian Shkurti
|
TASKOGRAPHY: Evaluating robot task planning over large 3D scene graphs
|
Video:
https://www.youtube.com/watch?v=mM4v5hP4LdA&ab_channel=KrishnaMurthy .
Project page: https://taskography.github.io/ . 18 pages, 7 figures. In
proceedings of Conference on Robot Learning (CoRL) 2021. The first two
authors contributed equally
|
PMLR 164 (2022) 46-58
| null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
3D scene graphs (3DSGs) are an emerging description, unifying symbolic,
topological, and metric scene representations. However, typical 3DSGs contain
hundreds of objects and symbols even for small environments, rendering task
planning on the full graph impractical. We construct TASKOGRAPHY, the first
large-scale robotic task planning benchmark over 3DSGs. While most benchmarking
efforts in this area focus on vision-based planning, we systematically study
symbolic planning, to decouple planning performance from visual representation
learning. We observe that, among existing methods, neither classical nor
learning-based planners are capable of real-time planning over full 3DSGs.
Enabling real-time planning demands progress on both (a) sparsifying 3DSGs for
tractable planning and (b) designing planners that better exploit 3DSG
hierarchies. Towards the former goal, we propose SCRUB, a task-conditioned 3DSG
sparsification method; enabling classical planners to match and in some cases
surpass state-of-the-art learning-based planners. Towards the latter goal, we
propose SEEK, a procedure enabling learning-based planners to exploit 3DSG
structure, reducing the number of replanning queries required by current best
approaches by an order of magnitude. We will open-source all code and baselines
to spur further research along the intersections of robot task planning,
learning and 3DSGs.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 16:51:44 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Agia",
"Christopher",
""
],
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Khodeir",
"Mohamed",
""
],
[
"Miksik",
"Ondrej",
""
],
[
"Vineet",
"Vibhav",
""
],
[
"Mukadam",
"Mustafa",
""
],
[
"Paull",
"Liam",
""
],
[
"Shkurti",
"Florian",
""
]
] |
new_dataset
| 0.993228 |
2207.05049
|
Long Zhuo
|
Long Zhuo, Guangcong Wang, Shikai Li, Wayne Wu, Ziwei Liu
|
Fast-Vid2Vid: Spatial-Temporal Compression for Video-to-Video Synthesis
|
ECCV 2022, Project Page: https://fast-vid2vid.github.io/ , Code:
https://github.com/fast-vid2vid/fast-vid2vid
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-to-Video synthesis (Vid2Vid) has achieved remarkable results in
generating a photo-realistic video from a sequence of semantic maps. However,
this pipeline suffers from high computational cost and long inference latency,
which largely depends on two essential factors: 1) network architecture
parameters, 2) sequential data stream. Recently, the parameters of image-based
generative models have been significantly compressed via more efficient network
architectures. Nevertheless, existing methods mainly focus on slimming network
architectures and ignore the size of the sequential data stream. Moreover, due
to the lack of temporal coherence, image-based compression is not sufficient
for the compression of the video task. In this paper, we present a
spatial-temporal compression framework, \textbf{Fast-Vid2Vid}, which focuses on
data aspects of generative models. It makes the first attempt at the time dimension
to reduce computational resources and accelerate inference. Specifically, we
compress the input data stream spatially and reduce the temporal redundancy.
After the proposed spatial-temporal knowledge distillation, our model can
synthesize key-frames using the low-resolution data stream. Finally,
Fast-Vid2Vid interpolates intermediate frames by motion compensation with
slight latency. On standard benchmarks, Fast-Vid2Vid achieves near real-time
performance of 20 FPS and saves around 8x computational cost on a single V100
GPU.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 17:57:57 GMT"
}
] | 2022-07-12T00:00:00 |
[
[
"Zhuo",
"Long",
""
],
[
"Wang",
"Guangcong",
""
],
[
"Li",
"Shikai",
""
],
[
"Wu",
"Wayne",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.998248 |
2101.09571
|
Vadim Liventsev
|
Vadim Liventsev, Aki H\"arm\"a and Milan Petkovi\'c
|
BF++: a language for general-purpose program synthesis
|
8+2 pages (paper+references)
| null | null | null |
cs.AI cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Most state of the art decision systems based on Reinforcement Learning (RL)
are data-driven black-box neural models, where it is often difficult to
incorporate expert knowledge into the models or let experts review and validate
the learned decision mechanisms. Knowledge-insertion and model review are
important requirements in many applications involving human health and safety.
One way to bridge the gap between data and knowledge driven systems is program
synthesis: replacing a neural network that outputs decisions with a symbolic
program generated by a neural network or by means of genetic programming. We
propose a new programming language, BF++, designed specifically for automatic
programming of agents in a Partially Observable Markov Decision Process (POMDP)
setting and apply neural program synthesis to solve standard OpenAI Gym
benchmarks.
|
[
{
"version": "v1",
"created": "Sat, 23 Jan 2021 19:44:44 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jan 2021 12:25:25 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Feb 2021 20:24:02 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Jun 2021 13:01:09 GMT"
},
{
"version": "v5",
"created": "Thu, 25 Nov 2021 12:39:55 GMT"
},
{
"version": "v6",
"created": "Fri, 8 Jul 2022 10:30:50 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Liventsev",
"Vadim",
""
],
[
"Härmä",
"Aki",
""
],
[
"Petković",
"Milan",
""
]
] |
new_dataset
| 0.999089 |
2106.12102
|
Farid Yagubbayli
|
Farid Yagubbayli, Yida Wang, Alessio Tonioni, Federico Tombari
|
LegoFormer: Transformers for Block-by-Block Multi-view 3D Reconstruction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most modern deep learning-based multi-view 3D reconstruction techniques use
RNNs or fusion modules to combine information from multiple images after
independently encoding them. These two separate steps have loose connections
and do not allow easy information sharing among views. We propose LegoFormer, a
transformer model for voxel-based 3D reconstruction that uses the attention
layers to share information among views during all computational stages.
Moreover, instead of predicting each voxel independently, we propose to
parametrize the output with a series of low-rank decomposition factors. This
reformulation allows the prediction of an object as a set of independent
regular structures, which are then aggregated to obtain the final reconstruction.
Experiments conducted on ShapeNet demonstrate the competitive performance of
our model with respect to the state of the art while having increased
interpretability thanks to the self-attention layers. We also show promising
generalization results to real data.
|
[
{
"version": "v1",
"created": "Wed, 23 Jun 2021 00:15:08 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2022 16:49:26 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Yagubbayli",
"Farid",
""
],
[
"Wang",
"Yida",
""
],
[
"Tonioni",
"Alessio",
""
],
[
"Tombari",
"Federico",
""
]
] |
new_dataset
| 0.998554 |
2203.09311
|
Vesa Halava
|
Vesa Halava, Tero Harju, Teemu Pirttim\"aki
|
A recursive function coding number theoretic functions
| null | null | null | null |
cs.DM cs.FL math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
We show that there exists a fixed recursive function $e$ such that for all
functions $h\colon \mathbb{N}\to \mathbb{N}$, there exists an injective
function $c_h\colon \mathbb{N}\to \mathbb{N}$ such that $c_h(h(n))=e(c_h(n))$,
i.e., $h=c_h^{-1}ec_h$.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 13:25:05 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2022 13:02:53 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Halava",
"Vesa",
""
],
[
"Harju",
"Tero",
""
],
[
"Pirttimäki",
"Teemu",
""
]
] |
new_dataset
| 0.995417 |
2205.00731
|
Fangzhi Xu
|
Fangzhi Xu, Jun Liu, Qika Lin, Yudai Pan, Lingling Zhang
|
Logiformer: A Two-Branch Graph Transformer Network for Interpretable
Logical Reasoning
|
Accepted by SIGIR 2022
| null |
10.1145/3477495.3532016
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Machine reading comprehension has attracted wide attention, since it explores
the potential of model for text understanding. To further equip the machine
with the reasoning capability, the challenging task of logical reasoning is
proposed. Previous works on logical reasoning have proposed some strategies to
extract the logical units from different aspects. However, there still remains
a challenge to model the long distance dependency among the logical units.
Also, it is demanding to uncover the logical structures of the text and further
fuse the discrete logic to the continuous text embedding. To tackle the above
issues, we propose an end-to-end model Logiformer which utilizes a two-branch
graph transformer network for logical reasoning of text. Firstly, we introduce
different extraction strategies to split the text into two sets of logical
units, and construct the logical graph and the syntax graph respectively. The
logical graph models the causal relations for the logical branch while the
syntax graph captures the co-occurrence relations for the syntax branch.
Secondly, to model the long-distance dependencies, the node sequence from each
graph is fed into the fully connected graph transformer structures. The two
adjacency matrices are viewed as the attention biases for the graph transformer
layers, which map the discrete logical structures to the continuous text
embedding space. Thirdly, a dynamic gate mechanism and a question-aware
self-attention module are introduced before the answer prediction to update the
features. The reasoning process provides the interpretability by employing the
logical units, which are consistent with human cognition. The experimental
results show the superiority of our model, which outperforms the
state-of-the-art single model on two logical reasoning benchmarks.
|
[
{
"version": "v1",
"created": "Mon, 2 May 2022 08:34:59 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2022 06:28:37 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Xu",
"Fangzhi",
""
],
[
"Liu",
"Jun",
""
],
[
"Lin",
"Qika",
""
],
[
"Pan",
"Yudai",
""
],
[
"Zhang",
"Lingling",
""
]
] |
new_dataset
| 0.998757 |
2207.03558
|
Xiurong Jiang
|
Xiurong Jiang, Lin Zhu, Yifan Hou, Hui Tian
|
Mirror Complementary Transformer Network for RGB-thermal Salient Object
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RGB-thermal salient object detection (RGB-T SOD) aims to locate the common
prominent objects of an aligned visible and thermal infrared image pair and
accurately segment all the pixels belonging to those objects. It is promising
in challenging scenes such as nighttime and complex backgrounds due to the
insensitivity of thermal images to lighting conditions. Thus, the key problem
of RGB-T SOD is to make the features from the two modalities complement and
adjust each other flexibly, since it is inevitable that either modality of an
RGB-T image pair may fail in challenging scenes such as extreme lighting
conditions and thermal crossover. In this paper, we propose a novel mirror
complementary Transformer network (MCNet) for RGB-T SOD. Specifically, we
introduce a Transformer-based feature extraction module to effectively extract hierarchical
features of RGB and thermal images. Then, through the attention-based feature
interaction and serial multiscale dilated convolution (SDC) based feature
fusion modules, the proposed model achieves the complementary interaction of
low-level features and the semantic fusion of deep features. Finally, based on
the mirror complementary structure, the salient regions of the two modalities
can be accurately extracted even when one modality is invalid. To demonstrate the
robustness of the proposed model under challenging scenes in real world, we
build a novel RGB-T SOD dataset VT723 based on a large public semantic
segmentation RGB-T dataset used in the autonomous driving domain. Extensive
experiments on benchmark and VT723 datasets show that the proposed method
outperforms state-of-the-art approaches, including CNN-based and
Transformer-based methods. The code and dataset will be released later at
https://github.com/jxr326/SwinMCNet.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 20:26:09 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Jiang",
"Xiurong",
""
],
[
"Zhu",
"Lin",
""
],
[
"Hou",
"Yifan",
""
],
[
"Tian",
"Hui",
""
]
] |
new_dataset
| 0.996295 |
2207.03592
|
Akhil Arora
|
Vuk Vukovi\'c, Akhil Arora, Huan-Cheng Chang, Andreas Spitz, and
Robert West
|
Quote Erat Demonstrandum: A Web Interface for Exploring the Quotebank
Corpus
|
SIGIR 2022 (Demo), 5 pages, 2 figures
| null |
10.1145/3477495.3531696
| null |
cs.IR cs.CL cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
The use of attributed quotes is the most direct and least filtered pathway of
information propagation in news. Consequently, quotes play a central role in
the conception, reception, and analysis of news stories. Since quotes provide a
more direct window into a speaker's mind than regular reporting, they are a
valuable resource for journalists and researchers alike. While substantial
research efforts have been devoted to methods for the automated extraction of
quotes from news and their attribution to speakers, few comprehensive corpora
of attributed quotes from contemporary sources are available to the public.
Here, we present an adaptive web interface for searching Quotebank, a massive
collection of quotes from the news, which we make available at
https://quotebank.dlab.tools.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 21:41:03 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Vuković",
"Vuk",
""
],
[
"Arora",
"Akhil",
""
],
[
"Chang",
"Huan-Cheng",
""
],
[
"Spitz",
"Andreas",
""
],
[
"West",
"Robert",
""
]
] |
new_dataset
| 0.99809 |
2207.03616
|
Eric M\"orth
|
Eric M\"orth, Stefan Bruckner, Noeska N. Smit
|
ScrollyVis: Interactive visual authoring of guided dynamic narratives
for scientific scrollytelling
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual stories are an effective and powerful tool to convey specific
information to a diverse public. Scrollytelling is a recent visual storytelling
technique extensively used on the web, where content appears or changes as
users scroll up or down a page. By employing the familiar gesture of scrolling
as its primary interaction mechanism, it provides users with a sense of
control, exploration and discoverability while still offering a simple and
intuitive interface. In this paper, we present a novel approach for authoring,
editing, and presenting data-driven scientific narratives using scrollytelling.
Our method flexibly integrates common sources such as images, text, and video,
but also supports more specialized visualization techniques such as interactive
maps as well as scalar field and mesh data visualizations. We show that
scrolling navigation can be used to traverse dynamic narratives and demonstrate
how it can be combined with interactive parameter exploration. The resulting
system consists of an extensible web-based authoring tool capable of exporting
stand-alone stories that can be hosted on any web server. We demonstrate the
power and utility of our approach with case studies from several diverse
scientific fields and with a user study including 12 participants with diverse
professional backgrounds. Furthermore, an expert in creating interactive
articles assessed the usefulness of our approach and the quality of the created
stories.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 23:32:06 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Mörth",
"Eric",
""
],
[
"Bruckner",
"Stefan",
""
],
[
"Smit",
"Noeska N.",
""
]
] |
new_dataset
| 0.999495 |
2207.03618
|
Shannan Guan
|
Shannan Guan, Haiyan Lu, Linchao Zhu, Gengfa Fang
|
PoseGU: 3D Human Pose Estimation with Novel Human Pose Generator and
Unbiased Learning
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D pose estimation has recently gained substantial interest in the computer
vision domain. Existing 3D pose estimation methods rely heavily on large,
well-annotated 3D pose datasets, and they suffer poor model generalization on
unseen poses due to the limited diversity of 3D poses in training sets. In this
work, we propose PoseGU, a novel human pose generator that generates diverse
poses with access to only a small set of seed samples, while employing
Counterfactual Risk Minimization to pursue an unbiased evaluation objective.
Extensive experiments demonstrate that PoseGU outperforms almost all the
state-of-the-art 3D human pose methods under consideration over three popular
benchmark datasets. Empirical analysis also shows that PoseGU generates 3D
poses with improved data diversity and better generalization ability.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 23:43:53 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Guan",
"Shannan",
""
],
[
"Lu",
"Haiyan",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Fang",
"Gengfa",
""
]
] |
new_dataset
| 0.998774 |
2207.03622
|
Hazim Shakhatreh
|
Hazim Shakhatreh, Ahmad Sawalmeh, Ali H Alenezi, Sharief Abdel-Razeq,
Muhannad Almutiry, Ala Al-Fuqaha
|
Mobile-IRS Assisted Next Generation UAV Communication Networks
|
11 pages, 8 figures
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Prior research on intelligent reflection surface (IRS)-assisted unmanned
aerial vehicle (UAV) communications has focused on an IRS either at a fixed
location or mounted on a UAV. The assumption that the IRS is located at a fixed
position prevents mobile users from maximizing many wireless network benefits, such
as data rate and coverage. Furthermore, assuming that the IRS is placed on a
UAV is impractical for various reasons, including the IRS's weight and size and
the speed of wind in severe weather. Unlike previous studies, this study
assumes a single UAV and an IRS mounted on a mobile ground vehicle (M-IRS) to
be deployed in an Internet-of-Things (IoT) 6G wireless network to maximize the
average data rate. Such a methodology for providing wireless coverage using an
M-IRS assisted UAV system is expected to be adopted in smart cities. In this paper, we
formulate an optimization problem to find an efficient trajectory for the UAV,
an efficient path for the M-IRS, and users' power allocation coefficients that
maximize the average data rate for mobile ground users. Due to its
intractability, we propose efficient techniques that can help in finding the
solution to the optimization problem. First, we show that our dynamic power
allocation technique outperforms the fixed power allocation technique in terms
of network average sum rate. Then we employ the individual movement model
(Random Waypoint Model) in order to represent the users' movements inside the
coverage area. Finally, we propose an efficient approach using a Genetic
Algorithm (GA) for finding an efficient trajectory for the UAV, and an
efficient path for the M-IRS to provide wireless connectivity for mobile users
during their movement. We demonstrate through simulations that our methodology
can enhance the average data rate by 15\% on average compared with a static
IRS and by 25\% on average compared with a system without an IRS.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 00:06:06 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Shakhatreh",
"Hazim",
""
],
[
"Sawalmeh",
"Ahmad",
""
],
[
"Alenezi",
"Ali H",
""
],
[
"Abdel-Razeq",
"Sharief",
""
],
[
"Almutiry",
"Muhannad",
""
],
[
"Al-Fuqaha",
"Ala",
""
]
] |
new_dataset
| 0.973711 |
2207.03680
|
Minhao Zhang
|
Minhao Zhang, Ruoyu Zhang, Yanzeng Li, Lei Zou
|
Crake: Causal-Enhanced Table-Filler for Question Answering over Large
Scale Knowledge Base
|
NAACL 2022 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Semantic parsing solves knowledge base (KB) question answering (KBQA) by
composing a KB query, which generally involves node extraction (NE) and graph
composition (GC) to detect and connect related nodes in a query. Despite the
strong causal effects between NE and GC, previous works fail to directly model
such causalities in their pipeline, hindering the learning of subtask
correlations. Also, the sequence-generation process for GC in previous works
induces ambiguity and exposure bias, which further harms accuracy. In this
work, we formalize semantic parsing into two stages. In the first stage (graph
structure generation), we propose a causal-enhanced table-filler to overcome
the issues in sequence-modelling and to learn the internal causalities. In the
second stage (relation extraction), an efficient beam-search algorithm is
presented to scale complex queries on large-scale KBs. Experiments on LC-QuAD
1.0 indicate that our method surpasses previous state-of-the-art methods by a large
margin (17%) while remaining time- and space-efficient. The code and models are
available at https://github.com/AOZMH/Crake.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 04:21:26 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Zhang",
"Minhao",
""
],
[
"Zhang",
"Ruoyu",
""
],
[
"Li",
"Yanzeng",
""
],
[
"Zou",
"Lei",
""
]
] |
new_dataset
| 0.961228 |
2207.03697
|
Wen-Chin Huang
|
Wen Chin Huang, Dejan Markovic, Alexander Richard, Israel Dejene Gebru
and Anjali Menon
|
End-to-End Binaural Speech Synthesis
|
Accepted to INTERSPEECH 2022. Demo link:
https://unilight.github.io/Publication-Demos/publications/e2e-binaural-synthesis
| null | null | null |
cs.SD cs.AI cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present an end-to-end binaural speech synthesis system that
combines a low-bitrate audio codec with a powerful binaural decoder that is
capable of accurate speech binauralization while faithfully reconstructing
environmental factors like ambient noise or reverb. The network is a modified
vector-quantized variational autoencoder, trained with several carefully
designed objectives, including an adversarial loss. We evaluate the proposed
system on an internal binaural dataset with objective metrics and a perceptual
study. Results show that the proposed approach matches the ground truth data
more closely than previous methods. In particular, we demonstrate the
capability of the adversarial loss in capturing environment effects needed to
create an authentic auditory scene.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 05:18:36 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Huang",
"Wen Chin",
""
],
[
"Markovic",
"Dejan",
""
],
[
"Richard",
"Alexander",
""
],
[
"Gebru",
"Israel Dejene",
""
],
[
"Menon",
"Anjali",
""
]
] |
new_dataset
| 0.999371 |
2207.03704
|
Akio Kodaira
|
Akio Kodaira, Yiyang Zhou, Pengwei Zang, Wei Zhan, Masayoshi Tomizuka
|
SST-Calib: Simultaneous Spatial-Temporal Parameter Calibration between
LIDAR and Camera
|
7 pages, 4 figures, 4 tables, accepted by ITSC2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With information from multiple input modalities, sensor fusion-based
algorithms usually outperform their single-modality counterparts in robotics.
Camera and LIDAR, with complementary semantic and depth information, are the
typical choices for detection tasks in complicated driving environments. For
most camera-LIDAR fusion algorithms, however, the calibration of the sensor
suite will greatly impact the performance. More specifically, the detection
algorithm usually requires an accurate geometric relationship among multiple
sensors as the input, and it is often assumed that the contents from these
sensors are captured at the same time. Preparing such sensor suites involves
carefully designed calibration rigs and accurate synchronization mechanisms,
and the preparation process is usually done offline. In this work, a
segmentation-based framework is proposed to jointly estimate the geometrical
and temporal parameters in the calibration of a camera-LIDAR suite. A semantic
segmentation mask is first applied to both sensor modalities, and the
calibration parameters are optimized through pixel-wise bidirectional loss. We
specifically incorporated the velocity information from optical flow for
temporal parameters. Since supervision is only performed at the segmentation
level, no calibration label is needed within the framework. The proposed
algorithm is tested on the KITTI dataset, and the result shows an accurate
real-time calibration of both geometric and temporal parameters.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 06:21:52 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Kodaira",
"Akio",
""
],
[
"Zhou",
"Yiyang",
""
],
[
"Zang",
"Pengwei",
""
],
[
"Zhan",
"Wei",
""
],
[
"Tomizuka",
"Masayoshi",
""
]
] |
new_dataset
| 0.99021 |
2207.03708
|
Xiaojiang Peng
|
Xiaojiang Peng, Xiaomao Fan, Qingyang Wu, Jieyan Zhao, Pan Gao
|
Video-based Smoky Vehicle Detection with A Coarse-to-Fine Framework
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic smoky vehicle detection in videos is a superior solution to the
traditional expensive remote sensing one with ultraviolet-infrared light
devices for environmental protection agencies. However, it is challenging to
distinguish vehicle smoke from shadows and wet regions behind rear vehicles or
on cluttered roads, and the problem can be worse due to limited annotated data. In this
paper, we first introduce a real-world large-scale smoky vehicle dataset with
75,000 annotated smoky vehicle images, facilitating the effective training of
advanced deep learning models. To enable fair algorithm comparison, we also
build a smoky vehicle video dataset including 163 long videos with
segment-level annotations. Moreover, we present a new Coarse-to-fine Deep Smoky
vehicle detection (CoDeS) framework for efficient smoky vehicle detection. The
CoDeS first leverages a lightweight YOLO detector for fast smoke detection
with a high recall rate, then applies a smoke-vehicle matching strategy to
eliminate non-vehicle smoke, and finally uses an elaborately designed 3D model
to further refine the results in spatio-temporal space. Extensive experiments
in four metrics demonstrate that our framework is significantly superior to
those hand-crafted feature based methods and recent advanced methods. The code
and dataset will be released at https://github.com/pengxj/smokyvehicle.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 06:42:45 GMT"
}
] | 2022-07-11T00:00:00 |
[
[
"Peng",
"Xiaojiang",
""
],
[
"Fan",
"Xiaomao",
""
],
[
"Wu",
"Qingyang",
""
],
[
"Zhao",
"Jieyan",
""
],
[
"Gao",
"Pan",
""
]
] |
new_dataset
| 0.999745 |