id (string, 9–10 chars) | submitter (string, 2–52 chars, nullable) | authors (string, 4–6.51k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable) | journal-ref (string, 4–345 chars, nullable) | doi (string, 11–120 chars, nullable) | report-no (string, 2–243 chars, nullable) | categories (string, 5–98 chars) | license (string, 9 classes) | abstract (string, 33–3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2209.10314
|
Dimitrios Kanoulas
|
Chengxu Zhou, Christopher Peers, Yuhui Wan, Robert Richardson,
Dimitrios Kanoulas
|
TeLeMan: Teleoperation for Legged Robot Loco-Manipulation using Wearable
IMU-based Motion Capture
|
8 pages, 7 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Human life is invaluable. When dangerous or life-threatening tasks need to be
completed, robotic platforms could be ideal replacements for human operators. One
such task, which we focus on in this work, is Explosive Ordnance Disposal. Robot
telepresence has the potential to provide safety solutions, given that mobile
robots have shown robust capabilities when operating in several environments.
However, autonomy may be challenging and risky at this stage, compared to human
operation. Teleoperation could be a compromise between full robot autonomy and
human presence. In this paper, we present a relatively cheap solution for
telepresence and robot teleoperation, to assist with Explosive Ordnance
Disposal, using a legged manipulator (i.e., a quadruped robot equipped
with a manipulator arm and RGB-D sensing). We propose a novel system integration
for the non-trivial problem of quadruped manipulator whole-body control. Our
system is based on a wearable IMU-based motion capture system that is used for
teleoperation and a VR headset for visual telepresence. We experimentally
validate our method in the real world, on loco-manipulation tasks that require
whole-body robot control and visual telepresence.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 12:44:30 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Zhou",
"Chengxu",
""
],
[
"Peers",
"Christopher",
""
],
[
"Wan",
"Yuhui",
""
],
[
"Richardson",
"Robert",
""
],
[
"Kanoulas",
"Dimitrios",
""
]
] |
new_dataset
| 0.999599 |
2209.10322
|
EPTCS
|
Thomas Brihaye (University of Mons), Sophie Pinchinat (Universit\'e de
Rennes, IRISA), Alexandre Terefenko (Universit\'e de Rennes, IRISA,
University of Mons)
|
Adversarial Formal Semantics of Attack Trees and Related Problems
|
In Proceedings GandALF 2022, arXiv:2209.09333
|
EPTCS 370, 2022, pp. 162-177
|
10.4204/EPTCS.370.11
| null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Security is a subject of increasing attention in today's society, with the aim
of protecting critical resources from information disclosure, theft or damage. The
informal model of attack trees introduced by Schneier, and widespread in the
industry, is advocated in the 2008 NATO report to govern the evaluation of the
threat in risk analysis. Attack-defense trees have since been the subject of
many theoretical works addressing different formal approaches.
In 2017, M. Audinot et al. introduced a path semantics over a transition
system for attack trees. Inspired by the latter, we propose a two-player
interpretation of the attack-tree formalism. To do so, we replace transition
systems by concurrent game arenas, and our associated semantics consists of
strategies. We then show that the emptiness problem, known to be NP-complete
for the path semantics, is now PSPACE-complete. Additionally, we show that the
membership problem is coNP-complete for our two-player interpretation while it
collapses to P in the path semantics.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 12:46:20 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Brihaye",
"Thomas",
"",
"University of Mons"
],
[
"Pinchinat",
"Sophie",
"",
"Université de\n Rennes, IRISA"
],
[
"Terefenko",
"Alexandre",
"",
"Université de Rennes, IRISA,\n University of Mons"
]
] |
new_dataset
| 0.996387 |
2209.10340
|
Xuhui Liu
|
Bohan Zeng, Boyu Liu, Hong Li, Xuhui Liu, Jianzhuang Liu, Dapeng Chen,
Wei Peng, Baochang Zhang
|
FNeVR: Neural Volume Rendering for Face Animation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face animation, one of the hottest topics in computer vision, has achieved
promising performance with the help of generative models. However, it remains a
critical challenge to generate identity-preserving and photo-realistic images
due to the sophisticated motion deformation and complex facial detail modeling.
To address these problems, we propose a Face Neural Volume Rendering (FNeVR)
network to fully explore the potential of 2D motion warping and 3D volume
rendering in a unified framework. In FNeVR, we design a 3D Face Volume
Rendering (FVR) module to enhance the facial details for image rendering.
Specifically, we first extract 3D information with a well-designed
architecture, and then introduce an orthogonal adaptive ray-sampling module for
efficient rendering. We also design a lightweight pose editor, enabling FNeVR
to edit the facial pose in a simple yet effective way. Extensive experiments
show that our FNeVR obtains the best overall quality and performance on widely
used talking-head benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 13:18:59 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Zeng",
"Bohan",
""
],
[
"Liu",
"Boyu",
""
],
[
"Li",
"Hong",
""
],
[
"Liu",
"Xuhui",
""
],
[
"Liu",
"Jianzhuang",
""
],
[
"Chen",
"Dapeng",
""
],
[
"Peng",
"Wei",
""
],
[
"Zhang",
"Baochang",
""
]
] |
new_dataset
| 0.999278 |
2209.10381
|
Felix Juefei-Xu
|
Xuhong Ren, Jianlang Chen, Felix Juefei-Xu, Wanli Xue, Qing Guo, Lei
Ma, Jianjun Zhao, Shengyong Chen
|
DARTSRepair: Core-failure-set Guided DARTS for Network Robustness to
Common Corruptions
|
To appear in Pattern Recognition (PR)
| null |
10.1016/j.patcog.2022.108864
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Network architecture search (NAS), in particular the differentiable
architecture search (DARTS) method, has shown great power in learning excellent
model architectures on a specific dataset of interest. In contrast to using a
fixed dataset, in this work, we focus on a different but important scenario for
NAS: how to refine a deployed network's model architecture to enhance its
robustness with the guidance of a few collected and misclassified examples that
are degraded by some real-world unknown corruptions having a specific pattern
(e.g., noise, blur, etc.). To this end, we first conduct an empirical study to
validate that model architectures are indeed related to the
corruption patterns. Surprisingly, by just adding a few corrupted and
misclassified examples (e.g., $10^3$ examples) to the clean training dataset
(e.g., $5.0 \times 10^4$ examples), we can refine the model architecture and
enhance the robustness significantly. To make it more practical, the key
problem, i.e., how to select the proper failure examples for the effective NAS
guidance, should be carefully investigated. Then, we propose a novel
core-failure-set guided DARTS that embeds a K-center-greedy algorithm for DARTS
to select suitable corrupted failure examples to refine the model architecture.
We evaluate our method on DARTS-refined DNNs on the clean dataset as well as 15
corruption types, with the guidance of four specific real-world corruptions. Compared with the
state-of-the-art NAS as well as data-augmentation-based enhancement methods,
our final method can achieve higher accuracy on both corrupted datasets and the
original clean dataset. On some of the corruption patterns, we can achieve as
high as over 45% absolute accuracy improvements.
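For intuition, here is a minimal sketch of the K-center-greedy selection step mentioned above, under the assumption that failure examples are compared in some embedding space; the feature extractor and the DARTS refinement itself are omitted, and all names are illustrative:

```python
import numpy as np

def k_center_greedy(features, k, seed_idx=0):
    """Greedy K-center selection: repeatedly pick the point farthest
    from the current set of centers (the classic 2-approximation)."""
    centers = [seed_idx]
    # Distance of every point to its nearest chosen center so far.
    dist = np.linalg.norm(features - features[seed_idx], axis=1)
    for _ in range(k - 1):
        next_idx = int(np.argmax(dist))      # farthest point becomes a center
        centers.append(next_idx)
        new_dist = np.linalg.norm(features - features[next_idx], axis=1)
        dist = np.minimum(dist, new_dist)    # update nearest-center distances
    return centers

# Hypothetical usage: pick 1,000 diverse failure examples by their embeddings.
embeddings = np.random.randn(5000, 128)      # stand-in for failure-example features
core_failure_set = k_center_greedy(embeddings, k=1000)
```

The greedy farthest-point rule keeps the selected examples spread out, which is why it suits building a small but diverse core-failure-set.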
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 14:18:49 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Ren",
"Xuhong",
""
],
[
"Chen",
"Jianlang",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Xue",
"Wanli",
""
],
[
"Guo",
"Qing",
""
],
[
"Ma",
"Lei",
""
],
[
"Zhao",
"Jianjun",
""
],
[
"Chen",
"Shengyong",
""
]
] |
new_dataset
| 0.971612 |
2209.10399
|
Shuja Khalid
|
Shuja Khalid, Frank Rudzicz
|
wildNeRF: Complete view synthesis of in-the-wild dynamic scenes captured
using sparse monocular data
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel neural radiance model that is trainable in a
self-supervised manner for novel-view synthesis of dynamic unstructured scenes.
Our end-to-end trainable algorithm learns highly complex, real-world static
scenes within seconds and dynamic scenes with both rigid and non-rigid motion
within minutes. By differentiating between static and motion-centric pixels, we
create high-quality representations from a sparse set of images. We perform
extensive qualitative and quantitative evaluation on existing benchmarks and
set the state-of-the-art on performance measures on the challenging NVIDIA
Dynamic Scenes Dataset. Additionally, we evaluate our model performance on
challenging real-world datasets such as Cholec80 and SurgicalActions160.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 14:37:56 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Khalid",
"Shuja",
""
],
[
"Rudzicz",
"Frank",
""
]
] |
new_dataset
| 0.997586 |
2209.10421
|
Xiao Ke
|
Xiao Ke, Xiaoling Zhang, Tianwen Zhang, Jun Shi, Shunjun Wei
|
SAR Ship Detection Based on Swin Transformer and Feature Enhancement
Feature Pyramid Network
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the booming of Convolutional Neural Networks (CNNs), CNNs such as VGG-16
and ResNet-50 widely serve as backbones in SAR ship detection. However,
CNN-based backbones struggle to model long-range dependencies and lack
sufficiently high-quality semantic information in the feature maps of shallow
layers, which leads to poor detection performance in cases with complicated
backgrounds and small-sized ships. To address these problems, we propose a SAR ship
detection method based on Swin Transformer and Feature Enhancement Feature
Pyramid Network (FEFPN). The Swin Transformer serves as the backbone to model
long-range dependencies and generate hierarchical feature maps. FEFPN is
proposed to further improve the quality of the feature maps by gradually
enhancing their semantic information at all levels, especially in the shallow
layers. Experiments conducted on the SAR ship detection dataset (SSDD) reveal
the advantages of our proposed method.
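As background for the pyramid component, here is a generic FPN-style top-down fusion step in PyTorch; this is the textbook mechanism that FEFPN builds on, not the paper's exact enhancement module, and the channel sizes are made up:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """Generic FPN step: project a shallow backbone map laterally, then add
    the upsampled, semantically richer map from the level above."""
    def __init__(self, c_in, c_out=256):
        super().__init__()
        self.lateral = nn.Conv2d(c_in, c_out, kernel_size=1)
        self.smooth = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1)

    def forward(self, shallow_feat, deeper_feat):
        lat = self.lateral(shallow_feat)
        top = F.interpolate(deeper_feat, size=lat.shape[-2:], mode="nearest")
        return self.smooth(lat + top)

# Toy shapes: a shallow 96-channel map fused with a 256-channel deeper map.
fuse = TopDownFusion(c_in=96)
p3 = fuse(torch.randn(1, 96, 64, 64), torch.randn(1, 256, 32, 32))
```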
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 15:12:50 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Ke",
"Xiao",
""
],
[
"Zhang",
"Xiaoling",
""
],
[
"Zhang",
"Tianwen",
""
],
[
"Shi",
"Jun",
""
],
[
"Wei",
"Shunjun",
""
]
] |
new_dataset
| 0.965194 |
2209.10482
|
Luan Thanh Nguyen
|
Luan Thanh Nguyen, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen
|
SMTCE: A Social Media Text Classification Evaluation Benchmark and
BERTology Models for Vietnamese
|
Accepted at The 36th annual Meeting of Pacific Asia Conference on
Language, Information and Computation (PACLIC 36)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text classification is a typical natural language processing or computational
linguistics task with various interesting applications. As the number of users
on social media platforms increases, the rapid growth of data promotes emerging
studies on Social Media Text Classification (SMTC), i.e., social media text
mining on these valuable resources. In contrast to English, Vietnamese, a
low-resource language, has still not been studied and exploited thoroughly.
Inspired by the success of GLUE, we introduce the Social Media Text
Classification Evaluation (SMTCE) benchmark, as a collection of datasets and
models across a diverse set of SMTC tasks. With the proposed benchmark, we
implement and analyze the effectiveness of a variety of multilingual BERT-based
models (mBERT, XLM-R, and DistilmBERT) and monolingual BERT-based models
(PhoBERT, viBERT, vELECTRA, and viBERT4news) for tasks in the SMTCE benchmark.
Monolingual models outperform multilingual models and achieve state-of-the-art
results on all text classification tasks. The SMTCE benchmark thus provides an
objective assessment of multilingual and monolingual BERT-based models, which will
benefit future studies about BERTology in the Vietnamese language.
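As a usage illustration only (not code from the paper), the BERT-based models listed above are distributed through the Hugging Face transformers hub; a minimal sketch of loading the public PhoBERT checkpoint for classification, with `num_labels` as a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "vinai/phobert-base" is the public PhoBERT checkpoint; num_labels=2 is a placeholder.
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/phobert-base", num_labels=2)

inputs = tokenizer("Một câu ví dụ trên mạng xã hội.", return_tensors="pt")
logits = model(**inputs).logits   # one score per class
```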
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 16:33:46 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Nguyen",
"Luan Thanh",
""
],
[
"Van Nguyen",
"Kiet",
""
],
[
"Nguyen",
"Ngan Luu-Thuy",
""
]
] |
new_dataset
| 0.999515 |
2209.10518
|
Sam Johnston
|
Sam Johnston
|
Sustainable Venture Capital
|
Masters thesis. 114 pages, 18 figures
| null | null | null |
cs.CY econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Sustainability initiatives are set to benefit greatly from the growing
involvement of venture capital, in the same way that other technological
endeavours have been enabled and accelerated in the post-war period. With the
spoils increasingly being shared between shareholders and other stakeholders,
this requires a more nuanced view than the finance-first methodologies deployed
to date. Indeed, it is possible for a venture-backed sustainability startup to
deliver outstanding results to society in general without returning a cent to
investors, though the most promising outcomes deliver profit with purpose,
satisfying all stakeholders in ways that make existing 'extractive' venture
capital seem hollow.
To explore this nascent area, a review of related research was conducted and
social entrepreneurs & investors interviewed to construct a questionnaire
assessing the interests and intentions of current & future ecosystem
participants. Analysis of 114 responses received via several sampling methods
revealed statistically significant relationships between investing preferences
and genders, generations, sophistication, and other variables, all the way down
to the level of individual UN Sustainable Development Goals (SDGs).
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 01:17:39 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Johnston",
"Sam",
""
]
] |
new_dataset
| 0.9995 |
2209.10528
|
VinayKumar Chapala Mr
|
Vinay Kumar Chapala, S.M. Zafaruddin
|
Reconfigurable Intelligent Surface for Vehicular Communications: Exact
Performance Analysis with Phase Noise and Mobility
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research provides an approximation on the performance for
reconfigurable intelligent surface (RIS) assisted systems over generalized
fading channels with phase noise resulting from imperfect phase compensation at
the RIS. In this paper, we develop an exact analysis and upper bounds on the
performance of a RIS-assisted vehicular communication system, considering phase
noise and mobility over asymmetric fading channels, by coherently combining
received signals reflected by RIS elements and direct transmissions from the
source terminal. We employ a novel approach to represent the PDF and CDF of the
product of four channel coefficients in terms of a single univariate Fox-H
function. We use the derived PDF to develop an exact statistical analysis of
the end-to-end SNR for the RIS-assisted system using multi-variate Fox-H
function.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 17:38:41 GMT"
}
] | 2022-09-22T00:00:00 |
[
[
"Chapala",
"Vinay Kumar",
""
],
[
"Zafaruddin",
"S. M.",
""
]
] |
new_dataset
| 0.99415 |
1712.07046
|
Francesco Caravelli
|
Francesco Caravelli
|
Asymptotic behavior of memristive circuits
|
20 pages, 8 figures; proofs corrected, figures changed; results
substantially unchanged; to appear in Entropy
|
Entropy 21(8), 789 (2019)
|
10.3390/e21080789
| null |
cs.ET cond-mat.dis-nn physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The interest in memristors has risen due to their possible application both
as memory units and as computational devices in combination with CMOS. This is
in part due to their nonlinear dynamics, and a strong dependence on the circuit
topology. We provide evidence that purely memristive circuits can also be
employed for computational purposes. In the present paper we show that a
polynomial Lyapunov function in the memory parameters exists for the case of
DC-controlled memristors. Such a Lyapunov function can be asymptotically
approximated with binary variables, and mapped to quadratic combinatorial
optimization problems. This also shows a direct parallel between memristive
circuits and the Hopfield-Little model. In the case of Erdos-Renyi random
circuits, we show numerically that the distribution of the matrix elements of
the projectors can be roughly approximated with a Gaussian distribution, and
that it scales with the inverse square root of the number of elements. This
provides an approximate but direct connection with the physics of disordered
systems and, in particular, of mean-field spin glasses. Using this, together
with the fact that the interaction is controlled by a projector operator on the
loop space of the circuit, we estimate the number of stationary points of the
approximate Lyapunov function and provide a scaling formula as an upper bound in terms of
the circuit topology only.
|
[
{
"version": "v1",
"created": "Wed, 15 Nov 2017 06:17:31 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Aug 2019 14:28:22 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Aug 2019 13:36:47 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Caravelli",
"Francesco",
""
]
] |
new_dataset
| 0.982562 |
2006.00979
|
Matthew W. Hoffman
|
Matthew W. Hoffman, Bobak Shahriari, John Aslanides, Gabriel
Barth-Maron, Nikola Momchev, Danila Sinopalnikov, Piotr Stańczyk, Sabela
Ramos, Anton Raichuk, Damien Vincent, Léonard Hussenot, Robert Dadashi,
Gabriel Dulac-Arnold, Manu Orsini, Alexis Jacq, Johan Ferret, Nino Vieillard,
Seyed Kamyar Seyed Ghasemipour, Sertan Girgin, Olivier Pietquin, Feryal
Behbahani, Tamara Norman, Abbas Abdolmaleki, Albin Cassirer, Fan Yang, Kate
Baumli, Sarah Henderson, Abe Friesen, Ruba Haroun, Alex Novikov, Sergio
Gómez Colmenarejo, Serkan Cabi, Caglar Gulcehre, Tom Le Paine, Srivatsan
Srinivasan, Andrew Cowie, Ziyu Wang, Bilal Piot, Nando de Freitas
|
Acme: A Research Framework for Distributed Reinforcement Learning
|
This work presents a second version of the paper which coincides with
an increase in modularity, additional emphasis on offline, imitation and
learning from demonstrations algorithms, as well as various new agents
implemented as part of Acme
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep reinforcement learning (RL) has led to many recent and groundbreaking
advances. However, these advances have often come at the cost of both increased
scale in the underlying architectures being trained as well as increased
complexity of the RL algorithms used to train them. These increases have in
turn made it more difficult for researchers to rapidly prototype new ideas or
reproduce published RL algorithms. To address these concerns, this work
describes Acme, a framework for constructing novel RL algorithms that is
specifically designed to enable agents that are built using simple, modular
components that can be used at various scales of execution. While the primary
goal of Acme is to provide a framework for algorithm development, a secondary
goal is to provide simple reference implementations of important or
state-of-the-art algorithms. These implementations serve both as a validation
of our design decisions as well as an important contribution to reproducibility
in RL research. In this work we describe the major design decisions made within
Acme and give further details as to how its components can be used to implement
various algorithms. Our experiments provide baselines for a number of common
and state-of-the-art algorithms as well as showing how these algorithms can be
scaled up for much larger and more complex environments. This highlights one of
the primary advantages of Acme, namely that it can be used to implement large,
distributed RL algorithms that can run at massive scales while still
maintaining the inherent readability of that implementation.
This work presents a second version of the paper which coincides with an
increase in modularity, additional emphasis on offline, imitation and learning
from demonstrations algorithms, as well as various new agents implemented as
part of Acme.
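To make the loop structure concrete, here is a minimal sketch of Acme's actor-environment interaction using a trivial random actor; the Actor interface methods below follow the framework's documented core API, but treat the details as assumptions rather than a definitive usage of any shipped agent:

```python
import acme
import numpy as np
from acme import core, wrappers
import gym

class RandomActor(core.Actor):
    """Minimal actor implementing Acme's Actor interface with random actions."""
    def __init__(self, action_spec):
        self._spec = action_spec   # a bounded array spec from the environment
    def select_action(self, observation):
        return np.random.uniform(self._spec.minimum, self._spec.maximum,
                                 size=self._spec.shape).astype(self._spec.dtype)
    def observe_first(self, timestep):
        pass                       # a learning agent would log the first timestep
    def observe(self, action, next_timestep):
        pass                       # ... and every transition
    def update(self, wait=False):
        pass                       # ... and run learner steps here

# Wrap a Gym task so it exposes the dm_env interface Acme expects.
environment = wrappers.GymWrapper(gym.make("MountainCarContinuous-v0"))
actor = RandomActor(environment.action_spec())
acme.EnvironmentLoop(environment, actor).run(num_episodes=5)
```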
|
[
{
"version": "v1",
"created": "Mon, 1 Jun 2020 14:38:52 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 17:15:51 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Hoffman",
"Matthew W.",
""
],
[
"Shahriari",
"Bobak",
""
],
[
"Aslanides",
"John",
""
],
[
"Barth-Maron",
"Gabriel",
""
],
[
"Momchev",
"Nikola",
""
],
[
"Sinopalnikov",
"Danila",
""
],
[
"Stańczyk",
"Piotr",
""
],
[
"Ramos",
"Sabela",
""
],
[
"Raichuk",
"Anton",
""
],
[
"Vincent",
"Damien",
""
],
[
"Hussenot",
"Léonard",
""
],
[
"Dadashi",
"Robert",
""
],
[
"Dulac-Arnold",
"Gabriel",
""
],
[
"Orsini",
"Manu",
""
],
[
"Jacq",
"Alexis",
""
],
[
"Ferret",
"Johan",
""
],
[
"Vieillard",
"Nino",
""
],
[
"Ghasemipour",
"Seyed Kamyar Seyed",
""
],
[
"Girgin",
"Sertan",
""
],
[
"Pietquin",
"Olivier",
""
],
[
"Behbahani",
"Feryal",
""
],
[
"Norman",
"Tamara",
""
],
[
"Abdolmaleki",
"Abbas",
""
],
[
"Cassirer",
"Albin",
""
],
[
"Yang",
"Fan",
""
],
[
"Baumli",
"Kate",
""
],
[
"Henderson",
"Sarah",
""
],
[
"Friesen",
"Abe",
""
],
[
"Haroun",
"Ruba",
""
],
[
"Novikov",
"Alex",
""
],
[
"Colmenarejo",
"Sergio Gómez",
""
],
[
"Cabi",
"Serkan",
""
],
[
"Gulcehre",
"Caglar",
""
],
[
"Paine",
"Tom Le",
""
],
[
"Srinivasan",
"Srivatsan",
""
],
[
"Cowie",
"Andrew",
""
],
[
"Wang",
"Ziyu",
""
],
[
"Piot",
"Bilal",
""
],
[
"de Freitas",
"Nando",
""
]
] |
new_dataset
| 0.98449 |
2103.08640
|
Ching-Hsun Tseng
|
Ching-Hsun Tseng, Shin-Jye Lee, Jia-Nan Feng, Shengzhong Mao, Yu-Ping
Wu, Jia-Yu Shang, Mou-Chung Tseng, and Xiao-Jun Zeng
|
UPANets: Learning from the Universal Pixel Attention Networks
| null | null |
10.3390/e24091243
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In image classification, skip- and dense-connection-based networks have
dominated most leaderboards. Recently, following the successful development of
multi-head attention in natural language processing, the field has turned to
either Transformer-like models or hybrid CNNs with attention.
However, the former require tremendous resources to train, while the latter
offer a better balance in this direction. In this work, to make CNNs handle
both global and local information, we propose UPANets, which equip channel-wise
attention with a hybrid skip-dense-connection structure. Also, the extreme-connection
structure makes UPANets robust with a smoother loss landscape. In experiments,
UPANets surpassed most well-known and widely-used SOTAs with an accuracy of
96.47% on CIFAR-10, 80.29% on CIFAR-100, and 67.67% on Tiny ImageNet. Most
importantly, these results come with high parameter efficiency and were
obtained by training on only one consumer-grade GPU. We share the
implementation code of UPANets at https://github.com/hanktseng131415go/UPANets.
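For reference, here is a generic channel-wise attention (squeeze-and-excitation-style) block in PyTorch; it illustrates the attention ingredient named above, not necessarily the exact UPANets layer:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweight feature channels using globally pooled statistics."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                   # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                             # excite: per-channel rescaling

out = ChannelAttention(64)(torch.randn(2, 64, 32, 32))
```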
|
[
{
"version": "v1",
"created": "Mon, 15 Mar 2021 18:27:59 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Mar 2021 13:29:04 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Tseng",
"Ching-Hsun",
""
],
[
"Lee",
"Shin-Jye",
""
],
[
"Feng",
"Jia-Nan",
""
],
[
"Mao",
"Shengzhong",
""
],
[
"Wu",
"Yu-Ping",
""
],
[
"Shang",
"Jia-Yu",
""
],
[
"Tseng",
"Mou-Chung",
""
],
[
"Zeng",
"Xiao-Jun",
""
]
] |
new_dataset
| 0.9962 |
2103.08743
|
Dana Moshkovitz
|
Dana Moshkovitz
|
Strong Parallel Repetition for Unique Games on Small Set Expanders
|
Bug: The idea was that the [RS] reduction from small set expansion
(SSE) to unique games creates product structure without hurting completeness.
Hence, like in Raz's parallel repetition, even conditioning on past winning,
a typical round approximately simulates the original game. Sadly, SSE
requires simulation conditioned on falling into the small set, which is not
necessarily possible
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The strong parallel repetition problem for unique games is to efficiently
reduce the 1-delta vs. 1-C*delta gap problem of Boolean unique games (where C>1
is a sufficiently large constant) to the 1-epsilon vs. epsilon gap problem of
unique games over large alphabet. Due to its importance to the Unique Games
Conjecture, this problem garnered a great deal of interest from the research
community. There are positive results for certain easy unique games (e.g.,
unique games on expanders), and an impossibility result for hard unique games.
In this paper we show how to bypass the impossibility result by enlarging the
alphabet sufficiently before repetition. We consider the case of unique games
on small set expanders for two setups: (i) Strong small set expanders that
yield easy unique games. (ii) Weaker small set expanders underlying possibly
hard unique games as long as the game is mildly fortified. We show how to
fortify unique games in both cases, i.e., how to transform the game so
sufficiently large induced sub-games have bounded value. We then prove strong
parallel repetition for the fortified games. Prior to this work fortification
was known for projection games but seemed hopeless for unique games.
|
[
{
"version": "v1",
"created": "Mon, 15 Mar 2021 22:08:26 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jul 2021 15:52:33 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Sep 2022 15:40:37 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Moshkovitz",
"Dana",
""
]
] |
new_dataset
| 0.997538 |
2107.05868
|
Aldar C.-F. Chan
|
Aldar C-F. Chan, Raymond M. H. Chung
|
Security and Privacy of Wireless Beacon Systems
|
13 pages, 3 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Bluetooth Low Energy (BLE) beacons have been increasingly used in smart city
applications, such as location-based and proximity-based services, to enable
Internet of Things to interact with people in vicinity or enhance
context-awareness. Their widespread deployment in human-centric applications
makes them an attractive target to adversaries for social or economic reasons.
In fact, beacons are reportedly exposed to various security issues and privacy
concerns. A characterization of attacks against beacon systems is given to help
understand adversary motives, required adversarial capabilities, potential
impact and possible defence mechanisms for different threats, with a view to
facilitating security evaluation and protection formulation for beacon systems.
|
[
{
"version": "v1",
"created": "Tue, 13 Jul 2021 06:23:08 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 01:13:52 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Chan",
"Aldar C-F.",
""
],
[
"Chung",
"Raymond M. H.",
""
]
] |
new_dataset
| 0.998164 |
2109.15276
|
Jesse David Dinneen
|
Charles-Antoine Julien, Banafsheh Asadi, Jesse David Dinneen, Fei Shu
|
Library of Congress Subject Heading (LCSH) Browsing and Natural Language
Searching
|
conference paper (ASIST '16), 4 pages plus a poster
|
ASIST 2016: Proceedings of the 79th Annual Meeting of the
Association for Information Science & Technology, 53
|
10.1002/pra2.2016.14505301116
| null |
cs.IR cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Controlled topical vocabularies (CVs) are built into information systems to
aid browsing and retrieval of items that may be unfamiliar, but it is unclear
how this feature should be integrated with standard keyword searching. Few
systems or scholarly prototypes have attempted this, and none have used the
most widely used CV, the Library of Congress Subject Headings (LCSH), which
organizes monograph collections in academic libraries throughout the world.
This paper describes a working prototype of a Web application that concurrently
allows topic exploration using an outline tree view of the LCSH hierarchy and
natural language keyword searching of a real-world Science and Engineering
bibliographic collection. Pilot testing shows the system is functional, and
work to fit the complex LCSH structure into a usable hierarchy is ongoing. This
study contributes to knowledge of the practical design decisions required when
developing linked interactions between topical hierarchy browsing and natural
language searching, which promise to facilitate information discovery and
exploration.
|
[
{
"version": "v1",
"created": "Thu, 30 Sep 2021 17:22:35 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Julien",
"Charles-Antoine",
""
],
[
"Asadi",
"Banafsheh",
""
],
[
"Dinneen",
"Jesse David",
""
],
[
"Shu",
"Fei",
""
]
] |
new_dataset
| 0.963031 |
2110.08403
|
Chandra Maddila
|
Chandra Maddila, Suhas Shanbhogue, Apoorva Agrawal, Thomas Zimmermann,
Chetan Bansal, Nicole Forsgren, Divyanshu Agrawal, Kim Herzig, Arie van
Deursen
|
Nalanda: A Socio-Technical Graph for Building Software Analytics Tools
at Enterprise Scale
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Software development is information-dense knowledge work that requires
collaboration with other developers and awareness of artifacts such as work
items, pull requests, and files. With the speed of development increasing,
information overload is a challenge for people developing and maintaining these
systems. Finding information and people is difficult for software engineers,
especially when they work in large software systems or have just recently
joined a project. In this paper, we build a large-scale data platform named
Nalanda, which contains two subsystems: (1) a large-scale socio-technical graph
system, the Nalanda graph system, and (2) a large-scale recommendation system,
the Nalanda index system, which aims at satisfying the information needs of
software developers. The Nalanda graph is an enterprise-scale graph with data
from 6,500 repositories, comprising 37,410,706 nodes and
128,745,590 edges. On top of the Nalanda graph system, we built software
analytics applications including a newsfeed named MyNalanda, and based on
organic growth alone, it has Daily Active Users (DAU) of 290 and Monthly Active
Users (MAU) of 590. A preliminary user study shows that 74% of developers and
engineering managers surveyed are favorable toward continued use of the
platform for information discovery. The Nalanda index system constitutes two
indices: artifact index and expert index. It uses the socio-technical graph
(Nalanda graph system) to rank the results and provide better recommendations
to software developers. A large scale quantitative evaluation shows that the
Nalanda index system provides recommendations with an accuracy of 78% for the
top three recommendations.
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 22:55:23 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Oct 2021 22:22:42 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Mar 2022 18:20:22 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Sep 2022 21:01:20 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Maddila",
"Chandra",
""
],
[
"Shanbhogue",
"Suhas",
""
],
[
"Agrawal",
"Apoorva",
""
],
[
"Zimmermann",
"Thomas",
""
],
[
"Bansal",
"Chetan",
""
],
[
"Forsgren",
"Nicole",
""
],
[
"Agrawal",
"Divyanshu",
""
],
[
"Herzig",
"Kim",
""
],
[
"van Deursen",
"Arie",
""
]
] |
new_dataset
| 0.999642 |
2110.14706
|
Dario Mantegazza
|
Dario Mantegazza (1), Carlos Redondo (2), Fran Espada (2), Luca M.
Gambardella (1), Alessandro Giusti (1) and Jérôme Guzzi (1) ((1) Dalle
Molle Institute for Artificial Intelligence (IDSIA), USI-SUPSI, Lugano,
Switzerland, (2) Hovering Solutions Ltd, Madrid, Spain)
|
Sensing Anomalies as Potential Hazards: Datasets and Benchmarks
| null | null |
10.1007/978-3-031-15908-4_17
| null |
cs.RO cs.AI cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of detecting, in the visual sensing data stream of an
autonomous mobile robot, semantic patterns that are unusual (i.e., anomalous)
with respect to the robot's previous experience in similar environments. These
anomalies might indicate unforeseen hazards and, in scenarios where failure is
costly, can be used to trigger an avoidance behavior. We contribute three novel
image-based datasets acquired in robot exploration scenarios, comprising a
total of more than 200k labeled frames, spanning various types of anomalies. On
these datasets, we study the performance of an anomaly detection approach based
on autoencoders operating at different scales.
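A minimal sketch of the autoencoder-based detection principle evaluated here: reconstruct each frame and score it by reconstruction error, flagging high-error frames as anomalous. The tiny architecture and threshold are illustrative, not the paper's models:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Tiny convolutional autoencoder for illustration."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAutoencoder()
frames = torch.rand(8, 3, 64, 64)                 # a batch of camera frames
errors = ((model(frames) - frames) ** 2).mean(dim=(1, 2, 3))
threshold = 0.05                                  # calibrated on normal data in practice
is_anomalous = errors > threshold                 # high error = unfamiliar pattern
```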
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 18:47:06 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 15:21:21 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Mantegazza",
"Dario",
""
],
[
"Redondo",
"Carlos",
""
],
[
"Espada",
"Fran",
""
],
[
"Gambardella",
"Luca M.",
""
],
[
"Giusti",
"Alessandro",
""
],
[
"Guzzi",
"Jérôme",
""
]
] |
new_dataset
| 0.98501 |
2201.11940
|
Paul Zhang
|
Paul Zhang, Dmitriy Smirnov, Justin Solomon
|
Wassersplines for Neural Vector Field--Controlled Animation
| null | null | null | null |
cs.GR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Much of computer-generated animation is created by manipulating meshes with
rigs. While this approach works well for animating articulated objects like
animals, it has limited flexibility for animating less structured free-form
objects. We introduce Wassersplines, a novel trajectory inference method for
animating unstructured densities based on recent advances in continuous
normalizing flows and optimal transport. The key idea is to train a
neurally-parameterized velocity field that represents the motion between
keyframes. Trajectories are then computed by advecting keyframes through the
velocity field. We solve an additional Wasserstein barycenter interpolation
problem to guarantee strict adherence to keyframes. Our tool can stylize
trajectories through a variety of PDE-based regularizers to create different
visual effects. We demonstrate our tool on various keyframe interpolation
problems to produce temporally-coherent animations without meshing or rigging.
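The core step described above, advecting keyframe samples through a learned velocity field, can be sketched generically; the small MLP and explicit Euler integrator below are stand-ins for the paper's neural field and ODE solver:

```python
import torch
import torch.nn as nn

velocity = nn.Sequential(             # v(x, t): maps (x, y, t) to a 2D velocity
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 2),
)

def advect(points, t0=0.0, t1=1.0, steps=50):
    """Explicit-Euler advection of 2D sample points through the field."""
    x = points.clone()
    dt = (t1 - t0) / steps
    for i in range(steps):
        t = torch.full((x.shape[0], 1), t0 + i * dt)
        x = x + dt * velocity(torch.cat([x, t], dim=1))
    return x

keyframe = torch.rand(1000, 2)        # samples drawn from a keyframe density
moved = advect(keyframe)              # positions at the next keyframe time
```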
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 05:36:02 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 02:39:39 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Zhang",
"Paul",
""
],
[
"Smirnov",
"Dmitriy",
""
],
[
"Solomon",
"Justin",
""
]
] |
new_dataset
| 0.997064 |
2202.10084
|
\"Ozgecan \"Ozdogan
|
\"Ozgecan \"Ozdogan and Emil Bj\"ornson
|
Massive MIMO with Dual-Polarized Antennas
|
15 pages, 9 figures. To appear in IEEE Transactions on Wireless
Communications
| null |
10.1109/TWC.2022.3205471
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers a single-cell massive MIMO (multiple-input
multiple-output) system with dual-polarized antennas at both the base station
and users. We study a channel model that includes the key practical aspects
that arise when utilizing dual-polarization: channel cross-polar discrimination
(XPD) and cross-polar correlations (XPC) at the transmitter and receiver. We
derive the achievable uplink and downlink spectral efficiencies (SE) with and
without successive interference cancellation (SIC) when using the linear
minimum mean squared error (MMSE), zero-forcing (ZF), and maximum ratio (MR)
combining/precoding schemes. The expressions depend on the statistical
properties of the MMSE channel estimator obtained for the dual-polarized
channel model. Closed-form uplink and downlink SE expressions for MR
combining/precoding are derived. Using these expressions, we propose
power-control algorithms that maximize the uplink and downlink sum SEs under
uncorrelated fading but can be used to enhance performance also with correlated
fading. We compare the SEs achieved in dual-polarized and uni-polarized setups
numerically and evaluate the impact of XPD and XPC conditions. The simulations
reveal that dual-polarized setups achieve 40-60\% higher SEs and that the gains
remain even under severe XPD and XPC. Dual-polarized systems also benefit more
from advanced signal processing that compensates for imperfections.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 09:47:51 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 06:48:24 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Özdogan",
"Özgecan",
""
],
[
"Björnson",
"Emil",
""
]
] |
new_dataset
| 0.962451 |
2203.00235
|
Li You
|
Li You, Xiaoyu Qiang, Christos G. Tsinos, Fan Liu, Wenjin Wang, Xiqi
Gao, Björn Ottersten
|
Beam Squint-Aware Integrated Sensing and Communications for Hybrid
Massive MIMO LEO Satellite Systems
|
to appear in IEEE Journal on Selected Areas in Communications
|
IEEE Journal on Selected Areas in Communications, vol. 40, no. 10,
pp. 2994-3009, Oct. 2022
|
10.1109/JSAC.2022.3196114
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The space-air-ground-sea integrated network (SAGSIN) plays an important role
in offering global coverage. To improve the efficient utilization of spectral
and hardware resources in the SAGSIN, integrated sensing and communications
(ISAC) has drawn extensive attention. Most existing ISAC works focus on
terrestrial networks and can not be straightforwardly applied in satellite
systems due to the significantly different electromagnetic wave propagation
properties. In this work, we investigate the application of ISAC in massive
multiple-input multiple-output (MIMO) low earth orbit (LEO) satellite systems.
We first characterize the statistical wave propagation properties by
considering beam squint effects. Based on this analysis, we propose a beam
squint-aware ISAC technique for hybrid analog/digital massive MIMO LEO
satellite systems exploiting statistical channel state information. Simulation
results demonstrate that the proposed scheme can operate both the wireless
communications and the target sensing simultaneously with satisfactory
performance, and the beam-squint effects can be efficiently mitigated with the
proposed method in typical LEO satellite systems.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 05:20:23 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"You",
"Li",
""
],
[
"Qiang",
"Xiaoyu",
""
],
[
"Tsinos",
"Christos G.",
""
],
[
"Liu",
"Fan",
""
],
[
"Wang",
"Wenjin",
""
],
[
"Gao",
"Xiqi",
""
],
[
"Ottersten",
"Björn",
""
]
] |
new_dataset
| 0.953254 |
2203.05297
|
Haiyang Liu
|
Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You
Zhou, Elif Bozkurt, Bo Zheng
|
BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for
Conversational Gestures Synthesis
|
28 pages, 15 figures, Accepted by ECCV2022
| null | null | null |
cs.CV cs.CL cs.GR cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Achieving realistic, vivid, and human-like synthesized conversational
gestures conditioned on multi-modal data is still an unsolved problem due to
the lack of available datasets, models and standard evaluation metrics. To
address this, we build the Body-Expression-Audio-Text dataset, BEAT, which has
i) 76 hours of high-quality multi-modal data captured from 30 speakers talking
with eight different emotions and in four different languages, and ii) 32
million frame-level emotion and semantic relevance annotations. Our statistical
analysis on BEAT demonstrates the correlation of conversational gestures with
facial expressions, emotions, and semantics, in addition to the known
correlation with audio, text, and speaker identity. Based on this observation,
we propose a baseline model, Cascaded Motion Network (CaMN), which consists of
above six modalities modeled in a cascaded architecture for gesture synthesis.
To evaluate the semantic relevancy, we introduce a metric, Semantic Relevance
Gesture Recall (SRGR). Qualitative and quantitative experiments demonstrate
the metric's validity, the ground-truth data quality, and the baseline's
state-of-the-art performance. To the best of our knowledge, BEAT is the largest motion capture
dataset for investigating human gestures, which may contribute to a number of
different research fields, including controllable gesture synthesis,
cross-modality analysis, and emotional gesture recognition. The data, code and
model are available on https://pantomatrix.github.io/BEAT/.
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 11:19:52 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Mar 2022 16:19:50 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Mar 2022 04:59:49 GMT"
},
{
"version": "v4",
"created": "Tue, 19 Apr 2022 10:23:51 GMT"
},
{
"version": "v5",
"created": "Tue, 20 Sep 2022 05:44:29 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Liu",
"Haiyang",
""
],
[
"Zhu",
"Zihao",
""
],
[
"Iwamoto",
"Naoya",
""
],
[
"Peng",
"Yichen",
""
],
[
"Li",
"Zhengqing",
""
],
[
"Zhou",
"You",
""
],
[
"Bozkurt",
"Elif",
""
],
[
"Zheng",
"Bo",
""
]
] |
new_dataset
| 0.999778 |
2204.03038
|
Ruixuan Liu
|
Ruixuan Liu, Rui Chen and Changliu Liu
|
Safe Interactive Industrial Robots using Jerk-based Safe Set Algorithm
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The need to increase the flexibility of production lines is calling for
robots to collaborate with human workers. However, existing interactive
industrial robots only guarantee intrinsic safety (reduce collision impact),
but not interactive safety (collision avoidance), which greatly limits their
flexibility. The issue arises from two limitations in existing control software
for industrial robots: 1) lack of support for real-time trajectory
modification; 2) lack of intelligent safe control algorithms with guaranteed
collision avoidance under robot dynamics constraints. To address the first
issue, a jerk-bounded position controller (JPC) was developed previously. This
paper addresses the second limitation, on top of the JPC. Specifically, we
introduce a jerk-based safe set algorithm (JSSA) to ensure collision avoidance
while considering the robot dynamics constraints. The JSSA greatly extends the
scope of the original safe set algorithm, which has only been applied for
second-order systems with unbounded accelerations. The JSSA is implemented on
the FANUC LR Mate 200id/7L robot and validated with HRI tasks. Experiments show
that the JSSA can consistently keep the robot at a safe distance from the human
while executing the designated task.
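A 1D toy sketch of the safe-set idea lifted to the jerk level (not the paper's JSSA; the safety index, gains, and bounds are invented for illustration): define a safety index over the human-robot distance and its derivatives, and override the nominal jerk whenever the index signals danger, so that the index is forced to decrease:

```python
def safe_jerk(d, d_dot, d_ddot, j_nominal,
              d_min=0.5, k1=2.0, k2=1.0, eta=0.1, j_max=5.0):
    """Toy 1D jerk-level safe-set filter.

    Illustrative safety index: phi = d_min**2 - d**2 - k1*d_dot - k2*d_ddot,
    with phi < 0 meaning safe. Its time derivative is
        phi_dot = -2*d*d_dot - k1*d_ddot - k2*j,
    so enforcing phi_dot <= -eta when phi >= 0 yields a lower bound on the
    jerk j (jerk here directly steers d_ddot, i.e. pushes the robot away).
    """
    phi = d_min**2 - d**2 - k1 * d_dot - k2 * d_ddot
    if phi < 0:                       # already safe: keep the nominal command
        j = j_nominal
    else:                             # unsafe: force the index to decrease
        j_bound = (eta - 2 * d * d_dot - k1 * d_ddot) / k2
        j = max(j_nominal, j_bound)
    return max(-j_max, min(j_max, j))  # respect actuator jerk limits

print(safe_jerk(d=0.4, d_dot=-0.2, d_ddot=0.0, j_nominal=0.0))  # -> 0.26
```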
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 18:43:22 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 02:22:08 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Liu",
"Ruixuan",
""
],
[
"Chen",
"Rui",
""
],
[
"Liu",
"Changliu",
""
]
] |
new_dataset
| 0.984397 |
2204.03636
|
Yi Wei
|
Yi Wei, Linqing Zhao, Wenzhao Zheng, Zheng Zhu, Yongming Rao, Guan
Huang, Jiwen Lu, Jie Zhou
|
SurroundDepth: Entangling Surrounding Views for Self-Supervised
Multi-Camera Depth Estimation
|
Accepted to CoRL 2022. Project page:
https://surrounddepth.ivg-research.xyz Code:
https://github.com/weiyithu/SurroundDepth
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depth estimation from images serves as the fundamental step of 3D perception
for autonomous driving and is an economical alternative to expensive depth
sensors like LiDAR. Temporal photometric constraints enable
self-supervised depth estimation without labels, further facilitating its
application. However, most existing methods predict the depth solely based on
each monocular image and ignore the correlations among multiple surrounding
cameras, which are typically available for modern self-driving vehicles. In
this paper, we propose a SurroundDepth method to incorporate the information
from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views
and propose a cross-view transformer to effectively fuse the information from
multiple views. We apply cross-view self-attention to efficiently enable the
global interactions between multi-camera feature maps. Different from
self-supervised monocular depth estimation, we are able to predict real-world
scales given multi-camera extrinsic matrices. To achieve this goal, we adopt
the two-frame structure-from-motion to extract scale-aware pseudo depths to
pretrain the models. Further, instead of predicting the ego-motion of each
individual camera, we estimate a universal ego-motion of the vehicle and
transfer it to each view to achieve multi-view ego-motion consistency. In
experiments, our method achieves the state-of-the-art performance on the
challenging multi-camera depth estimation datasets DDAD and nuScenes.
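A generic sketch of cross-view attention over multi-camera feature maps, flattening each camera's map into tokens and letting all views attend to one another; this illustrates the mechanism, not SurroundDepth's exact cross-view transformer:

```python
import torch
import torch.nn as nn

B, N_CAM, C, H, W = 2, 6, 64, 16, 16            # toy sizes for 6 surround cameras
feats = torch.randn(B, N_CAM, C, H, W)

tokens = feats.flatten(3).permute(0, 1, 3, 2)   # (B, cams, H*W, C)
tokens = tokens.reshape(B, N_CAM * H * W, C)    # one joint token sequence per sample

attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
fused, _ = attn(tokens, tokens, tokens)         # every view attends to all views

fused = fused.reshape(B, N_CAM, H * W, C).permute(0, 1, 3, 2)
fused = fused.reshape(B, N_CAM, C, H, W)        # back to per-camera feature maps
```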
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 17:58:47 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 06:38:01 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Sep 2022 13:15:39 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Wei",
"Yi",
""
],
[
"Zhao",
"Linqing",
""
],
[
"Zheng",
"Wenzhao",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Rao",
"Yongming",
""
],
[
"Huang",
"Guan",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
]
] |
new_dataset
| 0.993009 |
2205.03373
|
Aldo Glielmo Dr.
|
Aldo Glielmo, Iuri Macocco, Diego Doimo, Matteo Carli, Claudio Zeni,
Romina Wild, Maria d'Errico, Alex Rodriguez, Alessandro Laio
|
DADApy: Distance-based Analysis of DAta-manifolds in Python
|
9 pages, 6 figures. Patterns (2022)
| null |
10.1016/j.patter.2022.100589
| null |
cs.LG physics.comp-ph stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
DADApy is a Python software package for analysing and characterising
high-dimensional data manifolds. It provides methods for estimating the
intrinsic dimension and the probability density, for performing density-based
clustering and for comparing different distance metrics. We review the main
functionalities of the package and exemplify its usage in toy cases and in a
real-world application. DADApy is freely available under the open-source Apache
2.0 license.
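A short usage sketch following the workflow described above; the method names below reflect DADApy's documented API at the time of writing, but treat them as assumptions to be checked against the current docs:

```python
import numpy as np
from dadapy import Data

X = np.random.rand(1000, 20)          # toy high-dimensional dataset

data = Data(X)
ids = data.compute_id_2NN()           # two-NN intrinsic dimension estimate
data.compute_density_kstarNN()        # non-parametric density estimation
data.compute_clustering_ADP()         # density-peak (ADP) clustering
print(ids, np.unique(data.cluster_assignment))
```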
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 08:41:59 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 20:05:45 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Glielmo",
"Aldo",
""
],
[
"Macocco",
"Iuri",
""
],
[
"Doimo",
"Diego",
""
],
[
"Carli",
"Matteo",
""
],
[
"Zeni",
"Claudio",
""
],
[
"Wild",
"Romina",
""
],
[
"d'Errico",
"Maria",
""
],
[
"Rodriguez",
"Alex",
""
],
[
"Laio",
"Alessandro",
""
]
] |
new_dataset
| 0.998906 |
2205.08389
|
Giuseppe Vecchio
|
Giuseppe Vecchio, Simone Palazzo, Dario C. Guastella, Ignacio
Carlucho, Stefano V. Albrecht, Giovanni Muscato and Concetto Spampinato
|
MIDGARD: A Simulation Platform for Autonomous Navigation in Unstructured
Environments
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present MIDGARD, an open-source simulation platform for autonomous robot
navigation in outdoor unstructured environments. MIDGARD is designed to enable
the training of autonomous agents (e.g., unmanned ground vehicles) in
photorealistic 3D environments, and to support the generalization skills of
learning-based agents through the variability in training scenarios. MIDGARD's
main features include a configurable, extensible, and difficulty-driven
procedural landscape generation pipeline, with fast and photorealistic scene
rendering based on Unreal Engine. Additionally, MIDGARD has built-in support
for OpenAI Gym, a programming interface for feature extension (e.g.,
integrating new types of sensors, customizing and exposing internal simulation
variables), and a variety of simulated agent sensors (e.g., RGB, depth and
instance/semantic segmentation). We evaluate MIDGARD's capabilities as a
benchmarking tool for robot navigation utilizing a set of state-of-the-art
reinforcement learning algorithms. The results demonstrate MIDGARD's
suitability as a simulation and training environment, as well as the
effectiveness of our procedural generation approach in controlling scene
difficulty, which directly reflects on accuracy metrics. MIDGARD build, source
code and documentation are available at https://midgardsim.org/.
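Given the built-in OpenAI Gym support mentioned above, interaction presumably follows the standard Gym loop; the environment id below is a made-up placeholder, not a documented MIDGARD name:

```python
import gym

# Hypothetical id: check the MIDGARD docs for the real registered name.
env = gym.make("Midgard-v0")

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random policy for illustration
    obs, reward, done, info = env.step(action)
env.close()
```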
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 14:10:21 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 10:10:11 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Vecchio",
"Giuseppe",
""
],
[
"Palazzo",
"Simone",
""
],
[
"Guastella",
"Dario C.",
""
],
[
"Carlucho",
"Ignacio",
""
],
[
"Albrecht",
"Stefano V.",
""
],
[
"Muscato",
"Giovanni",
""
],
[
"Spampinato",
"Concetto",
""
]
] |
new_dataset
| 0.996965 |
2205.10000
|
Paolo Fittipaldi
|
Paolo Fittipaldi (QI), Anastasios Giovanidis (NPA), Frédéric
Grosshans (QI)
|
A Linear Algebraic Framework for Quantum Internet Dynamic Scheduling
| null |
IEEE International Conference on Quantum Computing and Engineering
(QCE22), Sep 2022, Broomfield, CO, United States
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The future quantum internet aims to enable quantum communication between
arbitrary pairs of distant nodes through the sharing of end-to-end
entanglement, a universal resource for many quantum applications. As in
classical networks, quantum networks also have to resolve problems related to
routing and satisfaction of service at a sufficient rate. We deal here with the
problem of scheduling when multiple commodities must be served through a
quantum network based on first generation quantum repeaters, or quantum
switches. To this end, we introduce a novel discrete-time algebraic model for
arbitrary network topology, including transmission and memory losses, and
adapted to dynamic scheduling decisions. Our algebraic model allows the
scheduler to use the storage of temporary intermediate links to optimize the
performance, depending on the information availability, ranging from full
global information for a centralized scheduler to partial local information for
a distributed one. As an illustrative example, we compare a simple greedy
scheduling policy with several Max-Weight inspired scheduling policies and
illustrate the resulting achievable rate regions for two competing pairs of
clients through a network.
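For intuition, here is a generic Max-Weight-style decision rule of the kind the comparison above refers to; this is the standard textbook policy, not the paper's exact schedulers:

```python
import numpy as np

def max_weight_choice(backlogs, service_rates):
    """Pick the commodity maximizing queue length x achievable service rate."""
    weights = np.asarray(backlogs) * np.asarray(service_rates)
    return int(np.argmax(weights))

# Two competing client pairs: entanglement-request backlogs and link rates.
backlogs = [7, 3]          # pending requests per commodity
rates = [0.4, 0.9]         # chance of serving one request this time step
served = max_weight_choice(backlogs, rates)   # -> commodity 0 (7*0.4 > 3*0.9)
```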
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 07:33:55 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 08:46:44 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Fittipaldi",
"Paolo",
"",
"QI"
],
[
"Giovanidis",
"Anastasios",
"",
"NPA"
],
[
"Grosshans",
"Frédéric",
"",
"QI"
]
] |
new_dataset
| 0.957032 |
2205.13764
|
Chunhua Shen
|
Zhi Tian, Xiangxiang Chu, Xiaoming Wang, Xiaolin Wei, Chunhua Shen
|
Fully Convolutional One-Stage 3D Object Detection on LiDAR Range Images
|
Accepted to: Proc. Thirty-sixth Conference on Neural Information
Processing Systems (NeurIPS) 2022. 14 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a simple yet effective fully convolutional one-stage 3D object
detector for LiDAR point clouds of autonomous driving scenes, termed
FCOS-LiDAR. Unlike the dominant methods that use the bird's-eye view (BEV), our
proposed detector detects objects from the range view (RV, a.k.a. range image)
of the LiDAR points. Due to the range view's compactness and compatibility with
the LiDAR sensors' sampling process on self-driving cars, the range view-based
object detector can be realized by solely exploiting vanilla 2D
convolutions, departing from BEV-based methods, which often involve
complicated voxelization operations and sparse convolutions.
For the first time, we show that an RV-based 3D detector with standard 2D
convolutions alone can achieve comparable performance to state-of-the-art
BEV-based detectors while being significantly faster and simpler. More
importantly, almost all previous range view-based detectors only focus on
single-frame point clouds, since it is challenging to fuse multi-frame point
clouds into a single range view. In this work, we tackle this challenging issue
with a novel range view projection mechanism, and for the first time
demonstrate the benefits of fusing multi-frame point clouds for a range-view
based detector. Extensive experiments on nuScenes show the superiority of our
proposed method and we believe that our work can be strong evidence that an
RV-based 3D detector can compare favourably with the current mainstream
BEV-based detectors.
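For reference, here is the standard spherical projection that turns a LiDAR point cloud into the range-image representation the detector operates on; the image size and vertical field of view are illustrative values:

```python
import numpy as np

def to_range_image(points, h=64, w=1024,
                   fov_up=np.deg2rad(3.0), fov_down=np.deg2rad(-25.0)):
    """Project (N, 3) LiDAR points into an (h, w) range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                       # azimuth -> image column
    pitch = np.arcsin(z / np.maximum(r, 1e-8))   # elevation -> image row
    u = ((1.0 - (yaw / np.pi + 1.0) / 2.0) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                                # keep range as the pixel value
    return img

cloud = np.random.randn(10000, 3) * [10, 10, 1]  # toy point cloud
range_img = to_range_image(cloud)
```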
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 05:42:16 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 03:06:12 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Tian",
"Zhi",
""
],
[
"Chu",
"Xiangxiang",
""
],
[
"Wang",
"Xiaoming",
""
],
[
"Wei",
"Xiaolin",
""
],
[
"Shen",
"Chunhua",
""
]
] |
new_dataset
| 0.996306 |
2207.03051
|
Haitao Mao
|
Lixin Zou, Haitao Mao, Xiaokai Chu, Jiliang Tang, Wenwen Ye,
Shuaiqiang Wang, Dawei Yin
|
A Large Scale Search Dataset for Unbiased Learning to Rank
|
15 pages, 9 figures
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The unbiased learning to rank (ULTR) problem has been greatly advanced by
recent deep learning techniques and well-designed debias algorithms. However,
promising results on the existing benchmark datasets may not be extended to the
practical scenario due to the following disadvantages observed from those
popular benchmark datasets: (1) outdated semantic feature extraction, where
state-of-the-art large-scale pre-trained language models like BERT cannot be
exploited due to the absence of the original text; (2) incomplete display
features for in-depth study of ULTR, e.g., missing the displayed abstracts of
documents needed for analyzing click-related biases; (3) a lack of real-world
user feedback, leading to the prevalence of synthetic datasets in the empirical
study. To overcome the above disadvantages, we introduce the Baidu-ULTR
dataset. It involves 1.2 billion randomly sampled search sessions and 7,008
expert-annotated queries, which is orders of magnitude larger than the existing
ones. Baidu-ULTR provides:(1) the original semantic feature and a pre-trained
language model for easy usage; (2) sufficient display information such as
position, displayed height, and displayed abstract, enabling the comprehensive
study of different biases with advanced techniques such as causal discovery and
meta-learning; and (3) rich user feedback on search result pages (SERPs) like
dwelling time, allowing for user engagement optimization and promoting the
exploration of multi-task learning in ULTR. In this paper, we present the
design principle of Baidu-ULTR and the performance of benchmark ULTR algorithms
on this new data resource, favoring the exploration of ranking for long-tail
queries and pre-training tasks for ranking. The Baidu-ULTR dataset and
corresponding baseline implementation are available at
https://github.com/ChuXiaokai/baidu_ultr_dataset.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 02:37:25 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 19:34:38 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Zou",
"Lixin",
""
],
[
"Mao",
"Haitao",
""
],
[
"Chu",
"Xiaokai",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Ye",
"Wenwen",
""
],
[
"Wang",
"Shuaiqiang",
""
],
[
"Yin",
"Dawei",
""
]
] |
new_dataset
| 0.988718 |
2208.11284
|
Nithin Gopalakrishnan Nair
|
Nithin Gopalakrishnan Nair, Kangfu Mei, Vishal M. Patel
|
AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using
Denoising Diffusion Probabilistic Models
|
Accepted to IEEE WACV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Although many long-range imaging systems are designed to support extended
vision applications, a natural obstacle to their operation is degradation due
to atmospheric turbulence. Atmospheric turbulence causes significant
degradation to image quality by introducing blur and geometric distortion. In
recent years, various deep learning-based single image atmospheric turbulence
mitigation methods, including CNN-based and GAN inversion-based, have been
proposed in the literature which attempt to remove the distortion in the image.
However, some of these methods are difficult to train and often fail to
reconstruct facial features and produce unrealistic results especially in the
case of high turbulence. Denoising Diffusion Probabilistic Models (DDPMs) have
recently gained some traction because of their stable training process and
their ability to generate high quality images. In this paper, we propose the
first DDPM-based solution for the problem of atmospheric turbulence mitigation.
We also propose a fast sampling technique for reducing the inference times for
conditional DDPMs. Extensive experiments are conducted on synthetic and
real-world data to show the significance of our model. To facilitate further
research, all code and pretrained models are publicly available at
http://github.com/Nithin-GK/AT-DDPM
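To ground the DDPM terminology, here is the standard ancestral sampling update that DDPM-based restorers iterate; this is the textbook reverse step from the original DDPM formulation, with a stand-in for the trained noise-prediction network:

```python
import torch

def ddpm_reverse_step(x_t, t, eps_pred, betas):
    """One standard DDPM ancestral sampling step: x_t -> x_{t-1}."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps_pred) / torch.sqrt(alphas[t])
    if t == 0:
        return mean
    noise = torch.randn_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise   # sigma_t^2 = beta_t variant

betas = torch.linspace(1e-4, 0.02, 1000)         # common linear schedule
x = torch.randn(1, 3, 64, 64)                    # start from pure noise
eps = torch.zeros_like(x)                        # stand-in for the trained eps-network output
x_prev = ddpm_reverse_step(x, 999, eps, betas)
```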
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 03:13:04 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 06:13:41 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Nair",
"Nithin Gopalakrishnan",
""
],
[
"Mei",
"Kangfu",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
new_dataset
| 0.969761 |
2209.09010
|
Jingguang Tian
|
Jingguang Tian, Xinhui Hu, Xinkang Xu
|
The Royalflush System for VoxCeleb Speaker Recognition Challenge 2022
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this technical report, we describe the Royalflush submissions for the
VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22). Our submissions cover
track 1, for supervised speaker verification, and track 3, for semi-supervised
speaker verification. For track 1, we develop a
powerful U-Net-based speaker embedding extractor with a symmetric architecture.
The proposed system achieves 2.06% in EER and 0.1293 in MinDCF on the
validation set. Compared with the state-of-the-art ECAPA-TDNN, it obtains a
relative improvement of 20.7% in EER and 22.70% in MinDCF. For track 3, we
employ the joint training of source domain supervision and target domain
self-supervision to get a speaker embedding extractor. The subsequent
clustering process can obtain target domain pseudo-speaker labels. We adapt the
speaker embedding extractor using all source and target domain data in a
supervised manner, so that it can fully leverage information from both domains.
Moreover, clustering and supervised domain adaptation can be repeated until the
performance converges on the validation set. Our final submission is a fusion
of 10 models and achieves 7.75% EER and 0.3517 MinDCF on the validation set.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 13:35:36 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 12:46:18 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Tian",
"Jingguang",
""
],
[
"Hu",
"Xinhui",
""
],
[
"Xu",
"Xinkang",
""
]
] |
new_dataset
| 0.996326 |
2209.09076
|
Bing Han
|
Zhengyang Chen, Bing Han, Xu Xiang, Houjun Huang, Bei Liu, Yanmin Qian
|
SJTU-AISPEECH System for VoxCeleb Speaker Recognition Challenge 2022
|
System description of VoxSRC 2022
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report describes the SJTU-AISPEECH system for the VoxCeleb Speaker
Recognition Challenge 2022. For track 1, we implemented two kinds of systems,
an online system and an offline system, exploring different ResNet-based
backbones and loss functions. Our final fusion system achieved 3rd place in
track 1. For track 3, we implemented statistics adaptation and
joint-training-based domain adaptation. In the latter, we jointly trained on
the source and target domain datasets with different training objectives to
perform domain adaptation. We explored two different training objectives for
the target domain data: a self-supervised angular prototypical loss and a
semi-supervised classification loss with estimated pseudo labels. In addition,
we used the dynamic loss-gate and label correction (DLG-LC) strategy to
improve the quality of the pseudo labels when the target domain objective is a
classification loss. Our final fusion system achieved 4th place in track 3
(within 1% relative of 3rd place).
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 15:06:42 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 15:33:18 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Chen",
"Zhengyang",
""
],
[
"Han",
"Bing",
""
],
[
"Xiang",
"Xu",
""
],
[
"Huang",
"Houjun",
""
],
[
"Liu",
"Bei",
""
],
[
"Qian",
"Yanmin",
""
]
] |
new_dataset
| 0.998761 |
2209.09327
|
Quang Loc Le
|
Quang Loc Le, Jun Sun, Long H. Pham, and Shengchao Qin
|
S2TD: a Separation Logic Verifier that Supports Reasoning of the Absence
and Presence of Bugs
|
24 pages
| null | null | null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Heap-manipulating programs are known to be challenging to reason about. We
present a novel verifier for heap-manipulating programs called S2TD, which
encodes programs systematically in the form of Constrained Horn Clauses (CHC)
using a novel extension of separation logic (SL) with recursive predicates and
dangling predicates. S2TD actively explores cyclic proofs to address the path
explosion problem. S2TD differentiates itself from existing CHC-based verifiers
by focusing on heap-manipulating programs and employing cyclic proof to
efficiently verify or falsify them with counterexamples. Compared with existing
SL-based verifiers, S2TD precisely specifies the heaps of de-allocated pointers
to avoid false positives in reasoning about the presence of bugs. S2TD has been
evaluated using a comprehensive set of benchmark programs from the SV-COMP
repository. The results show that S2TD is more effective than state-of-the-art
program verifiers and is more efficient than most of them.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 20:07:54 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Le",
"Quang Loc",
""
],
[
"Sun",
"Jun",
""
],
[
"Pham",
"Long H.",
""
],
[
"Qin",
"Shengchao",
""
]
] |
new_dataset
| 0.966274 |
2209.09331
|
Robert Chuchro
|
Robert Chuchro
|
Training an Assassin AI for The Resistance: Avalon
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The Resistance: Avalon is a partially observable social deduction game. This
area of AI game playing is fairly undeveloped. Implementing an AI for this game
involves multiple components specific to each phase as well as role in the
game. In this paper, we plan to iteratively develop the required components for
each role/phase by first addressing the Assassination phase which can be
modeled as a machine learning problem. Using a publicly available dataset from
an online version of the game, we train classifiers that emulate an Assassin.
After trying various classification techniques, we are able to achieve
above-average human performance using a simple linear support vector classifier. The
eventual goal of this project is to pursue developing an intelligent and
complete Avalon player that can play through each phase of the game as any
role.
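Since the abstract singles out a simple linear support vector classifier, a minimal sketch of that setup is shown below; the CSV schema and feature names are hypothetical, not the paper's actual pipeline.

```python
# Hypothetical sketch: a linear SVM picking the Merlin candidate in the
# assassination phase. Feature names and CSV schema are illustrative.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("avalon_games.csv")          # assumed export of online games
X = df[["votes_with_spies", "times_on_quest", "proposal_rejections"]]
y = df["is_merlin"]                           # 1 if this player was Merlin

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```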
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 20:19:32 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Chuchro",
"Robert",
""
]
] |
new_dataset
| 0.999338 |
2209.09368
|
David Dale
|
David Dale
|
The first neural machine translation system for the Erzya language
|
Accepted to the Field Matters workshop at the COLING 2022 conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present the first neural machine translation system for translation
between the endangered Erzya language and Russian and the dataset collected by
us to train and evaluate it. The BLEU scores are 17 and 19 for translation to
Erzya and Russian, respectively, and more than half of the translations are
rated as acceptable by native speakers. We also adapt our model to translate
between Erzya and 10 other languages, but without additional parallel data, the
quality in these directions remains low. We release the translation models
along with the collected text corpus, a new language identification model, and
a multilingual sentence encoder adapted for the Erzya language. These resources
will be available at https://github.com/slone-nlp/myv-nmt.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 22:21:37 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Dale",
"David",
""
]
] |
new_dataset
| 0.995762 |
2209.09375
|
Catie Cuan
|
Catie Cuan, Edward Lee, Emre Fisher, Anthony Francis, Leila Takayama,
Tingnan Zhang, Alexander Toshev, and S\"oren Pirk
|
Gesture2Path: Imitation Learning for Gesture-aware Navigation
|
8 pages, 12 figures
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
As robots increasingly enter human-centered environments, they must not only
be able to navigate safely around humans, but also adhere to complex social
norms. Humans often rely on non-verbal communication through gestures and
facial expressions when navigating around other people, especially in densely
occupied spaces. Consequently, robots also need to be able to interpret
gestures as part of solving social navigation tasks. To this end, we present
Gesture2Path, a novel social navigation approach that combines image-based
imitation learning with model-predictive control. Gestures are interpreted
based on a neural network that operates on streams of images, while we use a
state-of-the-art model predictive control algorithm to solve point-to-point
navigation tasks. We deploy our method on real robots and showcase the
effectiveness of our approach for the four gesture-navigation scenarios:
left/right, follow me, and make a circle. Our experiments indicate that our
method is able to successfully interpret complex human gestures and to use them
as a signal to generate socially compliant trajectories for navigation tasks.
We validated our method based on in-situ ratings of participants interacting
with the robots.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 23:05:36 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Cuan",
"Catie",
""
],
[
"Lee",
"Edward",
""
],
[
"Fisher",
"Emre",
""
],
[
"Francis",
"Anthony",
""
],
[
"Takayama",
"Leila",
""
],
[
"Zhang",
"Tingnan",
""
],
[
"Toshev",
"Alexander",
""
],
[
"Pirk",
"Sören",
""
]
] |
new_dataset
| 0.999077 |
2209.09391
|
Alexander W. Winkler
|
Alexander Winkler, Jungdam Won, Yuting Ye
|
QuestSim: Human Motion Tracking from Sparse Sensors with Simulated
Avatars
| null |
SIGGRAPH Asia 2022 Conference Papers, December 6 to 9, 2022,
Daegu, Republic of Korea
|
10.1145/3550469.3555411
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time tracking of human body motion is crucial for interactive and
immersive experiences in AR/VR. However, very limited sensor data about the
body is available from standalone wearable devices such as HMDs (Head Mounted
Devices) or AR glasses. In this work, we present a reinforcement learning
framework that takes in sparse signals from an HMD and two controllers, and
simulates plausible and physically valid full body motions. Using high quality
full body motion as dense supervision during training, a simple policy network
can learn to output appropriate torques for the character to balance, walk, and
jog, while closely following the input signals. Our results demonstrate
surprisingly similar leg motions to ground truth without any observations of
the lower body, even when the input is only the 6D transformations of the HMD.
We also show that a single policy can be robust to diverse locomotion styles,
different body sizes, and novel environments.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 00:25:54 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Winkler",
"Alexander",
""
],
[
"Won",
"Jungdam",
""
],
[
"Ye",
"Yuting",
""
]
] |
new_dataset
| 0.997644 |
2209.09452
|
Seongju Lee
|
Seongju Lee, Yeonguk Yu, Seunghyeok Back, Hogeon Seo, Kyoobin Lee
|
SleePyCo: Automatic Sleep Scoring with Feature Pyramid and Contrastive
Learning
|
14 pages, 3 figures, 8 tables
| null | null | null |
cs.LG cs.AI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic sleep scoring is essential for the diagnosis and treatment of sleep
disorders and enables longitudinal sleep tracking in home environments.
Conventionally, learning-based automatic sleep scoring on single-channel
electroencephalogram (EEG) is actively studied because obtaining multi-channel
signals during sleep is difficult. However, learning representation from raw
EEG signals is challenging owing to the following issues: 1) sleep-related EEG
patterns occur on different temporal and frequency scales and 2) sleep stages
share similar EEG patterns. To address these issues, we propose a deep learning
framework named SleePyCo that incorporates 1) a feature pyramid and 2)
supervised contrastive learning for automatic sleep scoring. For the feature
pyramid, we propose a backbone network named SleePyCo-backbone to consider
multiple feature sequences on different temporal and frequency scales.
Supervised contrastive learning allows the network to extract class
discriminative features by minimizing the distance between intra-class features
and simultaneously maximizing that between inter-class features. Comparative
analyses on four public datasets demonstrate that SleePyCo consistently
outperforms existing frameworks based on single-channel EEG. Extensive ablation
experiments show that SleePyCo exhibits enhanced overall performance, with
significant improvements in discrimination between the N1 and rapid eye
movement (REM) stages.
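For readers who want the contrastive objective spelled out, below is a generic supervised contrastive loss in the style of Khosla et al. (2020); it is an illustrative sketch, not the authors' code, and the temperature and shapes are placeholders.

```python
# Generic supervised contrastive loss: pull together embeddings sharing a
# sleep-stage label, push apart the rest. Illustrative sketch only.
import torch
import torch.nn.functional as F

def supcon_loss(feats, labels, tau=0.07):
    # feats: (N, D) embeddings; labels: (N,) sleep-stage labels
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / tau
    self_mask = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos_mask, 0.0)  # keep positives only
    pos_counts = pos_mask.sum(1).clamp(min=1)
    return (-log_prob.sum(1) / pos_counts).mean()

loss = supcon_loss(torch.randn(8, 128), torch.randint(0, 5, (8,)))
```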
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 04:10:49 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Lee",
"Seongju",
""
],
[
"Yu",
"Yeonguk",
""
],
[
"Back",
"Seunghyeok",
""
],
[
"Seo",
"Hogeon",
""
],
[
"Lee",
"Kyoobin",
""
]
] |
new_dataset
| 0.998688 |
2209.09578
|
Philipp M\"uller
|
Philipp M\"uller, Michael Dietz, Dominik Schiller, Dominike Thomas,
Hali Lindsay, Patrick Gebhard, Elisabeth Andr\'e, Andreas Bulling
|
MultiMediate '22: Backchannel Detection and Agreement Estimation in
Group Interactions
|
ACM Multimedia 2022
| null |
10.1145/3503161.3551589
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Backchannels, i.e., short interjections by the listener, serve important
meta-conversational purposes like signifying attention or indicating agreement.
Despite their key role, automatic analysis of backchannels in group
interactions has been largely neglected so far. The MultiMediate challenge
addresses, for the first time, the tasks of backchannel detection and agreement
estimation from backchannels in group conversations. This paper describes the
MultiMediate challenge and presents a novel set of annotations consisting of
7234 backchannel instances for the MPIIGroupInteraction dataset. Each
backchannel was additionally annotated with the extent by which it expresses
agreement towards the current speaker. In addition to an analysis of the
collected annotations, we present baseline results for both challenge tasks.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 09:49:47 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Müller",
"Philipp",
""
],
[
"Dietz",
"Michael",
""
],
[
"Schiller",
"Dominik",
""
],
[
"Thomas",
"Dominike",
""
],
[
"Lindsay",
"Hali",
""
],
[
"Gebhard",
"Patrick",
""
],
[
"André",
"Elisabeth",
""
],
[
"Bulling",
"Andreas",
""
]
] |
new_dataset
| 0.974897 |
2209.09660
|
Carlos Perez Galvan Dr
|
Imanol Arzac-Garmendia, Mattia Vallerio, Carlos Perez-Galvan and
Francisco J. Navarro-Brull
|
Industrial Data Science for Batch Manufacturing Processes
| null | null | null | null |
cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Batch processes show several sources of variability, from raw materials'
properties to initial and evolving conditions that change during the different
events in the manufacturing process. In this chapter, we will illustrate with
an industrial example how to use machine learning to reduce this apparent
excess of data while maintaining the relevant information for process
engineers. Two common use cases will be presented: 1) AutoML analysis to
quickly find correlations in batch process data, and 2) trajectory analysis to
monitor and identify anomalous batches leading to process control improvements.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 11:59:13 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Arzac-Garmendia",
"Imanol",
""
],
[
"Vallerio",
"Mattia",
""
],
[
"Perez-Galvan",
"Carlos",
""
],
[
"Navarro-Brull",
"Francisco J.",
""
]
] |
new_dataset
| 0.983419 |
2209.09667
|
Kerstin Weinberg
|
Kai Friebertsh\"auser, Christian Wieners and Kerstin Weinberg
|
Dynamic fracture with continuum-kinematics-based peridynamics
| null | null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
This contribution presents a concept for dynamic fracture with
continuum-kinematics-based peridynamics. Continuum-kinematics-based
peridynamics is a geometrically exact formulation of peridynamics, which adds
surface- or volumetric-based interactions to the classical peridynamic bonds,
thus capturing the finite deformation kinematics correctly. The surfaces and
volumes considered for these non-local interactions are constructed using the
point families derived from the material points' horizon.
For fracture, the classical bond-stretch damage approach is not sufficient in
continuum-kinematics-based peridynamics. Here it is extended to the surface-
and volume-based interactions by additional failure variables considering the
loss of strength in the material points' internal force densities. By numerical
examples, it is shown that the approach can correctly handle crack growth,
impact damage, and spontaneous crack initiation under dynamic loading
conditions with large deformations.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 12:04:44 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Friebertshäuser",
"Kai",
""
],
[
"Wieners",
"Christian",
""
],
[
"Weinberg",
"Kerstin",
""
]
] |
new_dataset
| 0.992745 |
2209.09725
|
Alessandro Berti Mr
|
Alessandro Berti, Wil van der Aalst
|
OC-PM: Analyzing Object-Centric Event Logs and Process Models
| null | null |
10.1007/s10009-022-00668-w
| null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Object-centric process mining is a novel branch of process mining that aims
to analyze event data from mainstream information systems (such as SAP) more
naturally, without being forced to form mutually exclusive groups of events
with the specification of a case notion. The development of object-centric
process mining is related to exploiting object-centric event logs, which
includes exploring and filtering the behavior contained in the logs and
constructing process models that can encode the behavior of different classes
of objects and their interactions, which can be discovered from object-centric
event logs. This paper aims to provide a broad look at the exploration and
processing of object-centric event logs to discover information related to the
lifecycle of the different objects composing the event log. Also, comprehensive
tool support (OC-PM) implementing the proposed techniques is described in the
paper.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 13:59:12 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Berti",
"Alessandro",
""
],
[
"van der Aalst",
"Wil",
""
]
] |
new_dataset
| 0.982743 |
2209.09757
|
Alan Ramponi
|
Alan Ramponi
|
NLP for Language Varieties of Italy: Challenges and the Path Forward
|
16 pages, 3 figures, 4 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Italy is characterized by a one-of-a-kind linguistic diversity landscape in
Europe, which implicitly encodes local knowledge, cultural traditions, artistic
expression, and history of its speakers. However, over 30 language varieties in
Italy are at risk of disappearing within a few generations. Language technology
plays a key role in preserving endangered languages, but it currently struggles
with such varieties as they are under-resourced and mostly lack standardized
orthography, being mainly used in spoken settings. In this paper, we introduce
the linguistic context of Italy and discuss challenges facing the development
of NLP technologies for Italy's language varieties. We provide potential
directions and advocate for a shift in the paradigm from machine-centric to
speaker-centric NLP. Finally, we propose building a local community towards
responsible, participatory development of speech and language technologies for
languages and dialects of Italy.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 14:39:12 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Ramponi",
"Alan",
""
]
] |
new_dataset
| 0.996611 |
2209.09794
|
Andrey Belogolovy
|
Andrey Belogolovy, Deepak Dasalukunte, Richard Dorrance, Evgeny
Stupachenko, and Xue Zhang
|
Low latency communication over commercially available LTE and remote
driving
| null | null | null | null |
cs.NI cs.MM
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In addition to autonomous car operation, in many cases it is desirable to let
a human drive the vehicle remotely. To make remote operation possible, it is
critical to have low and predictable latency for transmitting video from the
car cameras and receiving control commands back. In this paper, we analyze the
problem and present a communication and video streaming system that addresses
the latency challenges and enables teleoperation of a real car over
commercially available LTE network; demonstrating sub-50ms roundtrip latencies
for 720p, 60 FPS video, with an average PSNR of 36 dB.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 15:28:48 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Belogolovy",
"Andrey",
""
],
[
"Dasalukunte",
"Deepak",
""
],
[
"Dorrance",
"Richard",
""
],
[
"Stupachenko",
"Evgeny",
""
],
[
"Zhang",
"Xue",
""
]
] |
new_dataset
| 0.993667 |
2209.09795
|
Tongjia Zheng
|
Tongjia Zheng, Zhenyuan Yuan, Mollik Nayyar, Alan R. Wagner, Minghui
Zhu, Hai Lin
|
Multi-Robot-Assisted Human Crowd Evacuation using Navigation Velocity
Fields
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work studies a robot-assisted crowd evacuation problem where we control
a small group of robots to guide a large human crowd to safe locations. The
challenge lies in how to model human-robot interactions and design robot
controls to indirectly control a human population that significantly outnumbers
the robots. To address the challenge, we treat the crowd as a continuum and
formulate the evacuation objective as driving the crowd density to target
locations. We propose a novel mean-field model which consists of a family of
microscopic equations that explicitly model how human motions are locally
guided by the robots and an associated macroscopic equation that describes how
the crowd density is controlled by the navigation velocity fields generated by
all robots. Then, we design density feedback controllers for the robots to
dynamically adjust their states such that the generated navigation velocity
fields drive the crowd density to a target density. Stability guarantees of the
proposed controllers are proven. Agent-based simulations are included to
evaluate the proposed evacuation algorithms.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 15:28:52 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Zheng",
"Tongjia",
""
],
[
"Yuan",
"Zhenyuan",
""
],
[
"Nayyar",
"Mollik",
""
],
[
"Wagner",
"Alan R.",
""
],
[
"Zhu",
"Minghui",
""
],
[
"Lin",
"Hai",
""
]
] |
new_dataset
| 0.999434 |
2209.09814
|
Shreyas Fadnavis
|
Shreyas Fadnavis, Amit Dhurandhar, Raquel Norel, Jenna M Reinen, Carla
Agurto, Erica Secchettin, Vittorio Schweiger, Giovanni Perini, Guillermo
Cecchi
|
PainPoints: A Framework for Language-based Detection of Chronic Pain and
Expert-Collaborative Text-Summarization
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Chronic pain is a pervasive disorder which is often very disabling and is
associated with comorbidities such as depression and anxiety. Neuropathic Pain
(NP) is a common sub-type that is often caused by nerve damage and has a
known pathophysiology. Another common sub-type is Fibromyalgia (FM) which is
described as musculoskeletal, diffuse pain that is widespread through the body.
The pathophysiology of FM is poorly understood, making it very hard to
diagnose. Standard medications and treatments for FM and NP differ from one
another, and a misdiagnosis can increase symptom severity. To
overcome this difficulty, we propose a novel framework, PainPoints, which
accurately detects the sub-type of pain and generates clinical notes via
summarizing the patient interviews. Specifically, PainPoints makes use of large
language models to perform sentence-level classification of the text obtained
from interviews of FM and NP patients with a reliable AUC of 0.83. Using a
sufficiency-based interpretability approach, we explain how the fine-tuned
model accurately picks up on the nuances that patients use to describe their
pain. Finally, we generate summaries of these interviews via expert
interventions by introducing a novel facet-based approach. PainPoints thus
enables practitioners to add/drop facets and generate a custom summary based on
the notion of "facet-coverage" which is also introduced in this work.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 06:08:13 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Fadnavis",
"Shreyas",
""
],
[
"Dhurandhar",
"Amit",
""
],
[
"Norel",
"Raquel",
""
],
[
"Reinen",
"Jenna M",
""
],
[
"Agurto",
"Carla",
""
],
[
"Secchettin",
"Erica",
""
],
[
"Schweiger",
"Vittorio",
""
],
[
"Perini",
"Giovanni",
""
],
[
"Cecchi",
"Guillermo",
""
]
] |
new_dataset
| 0.950331 |
2209.09835
|
Thilo Krachenfels
|
Niclas K\"uhnapfel, Robert Buhren, Hans Niklas Jacob, Thilo
Krachenfels, Christian Werling, Jean-Pierre Seifert
|
EM-Fault It Yourself: Building a Replicable EMFI Setup for Desktop and
Server Hardware
|
This is the authors' version of the article accepted for publication
at IEEE International Conference on Physical Assurance and Inspection of
Electronics (PAINE 2022)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electromagnetic fault injection (EMFI) has become a popular fault injection
(FI) technique due to its ability to inject faults precisely with respect to
timing and location. Recently, ARM, RISC-V, and even x86 processing units in
different packages were shown to be vulnerable to EMFI attacks. However, past
publications lack a detailed description of the entire attack setup, hindering
researchers and companies from easily replicating the presented attacks on
their devices. In this work, we first show how to build an automated EMFI setup
with high scanning resolution and good repeatability that is large enough to
attack modern desktop and server CPUs. We structurally lay out all details on
mechanics, hardware, and software along with this paper. Second, we use our
setup to attack a deeply embedded security co-processor in modern AMD systems
on a chip (SoCs), the AMD Secure Processor (AMD-SP). Using a previously
published code execution exploit, we run two custom payloads on the AMD-SP that
utilize the SoC to different degrees. We then visualize these fault locations
on SoC photographs allowing us to reason about the SoC's components under
attack. Finally, we show that the signature verification process of one of the
first executed firmware parts is susceptible to EMFI attacks, undermining the
security architecture of the entire SoC. To the best of our knowledge, this is
the first reported EMFI attack against an AMD desktop CPU.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 16:27:34 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Kühnapfel",
"Niclas",
""
],
[
"Buhren",
"Robert",
""
],
[
"Jacob",
"Hans Niklas",
""
],
[
"Krachenfels",
"Thilo",
""
],
[
"Werling",
"Christian",
""
],
[
"Seifert",
"Jean-Pierre",
""
]
] |
new_dataset
| 0.997053 |
2209.09857
|
Furkan Ulger Mr.
|
Furkan Ulger, Seniha Esen Yuksel, Atila Yilmaz, and Dincer Gokcen
|
Fine-grained Classification of Solder Joints with {\alpha}-skew
Jensen-Shannon Divergence
|
Submitted to IEEE Transactions on Components, Packaging and
Manufacturing Technology
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Solder joint inspection (SJI) is a critical process in the production of
printed circuit boards (PCB). Detection of solder errors during SJI is quite
challenging as the solder joints have very small sizes and can take various
shapes. In this study, we first show that solders have low feature diversity,
and that the SJI can be carried out as a fine-grained image classification task
which focuses on hard-to-distinguish object classes. To improve the
fine-grained classification accuracy, penalizing confident model predictions by
maximizing entropy has been found useful in the literature. In line with this,
we propose using the {\alpha}-skew Jensen-Shannon divergence ({\alpha}-JS) to
penalize confidence in model predictions. We compare {\alpha}-JS
regularization with existing entropy-regularization-based methods as well as
methods based on attention mechanisms, segmentation techniques, transformer
models, and specific loss functions for fine-grained image classification
tasks. We show that the proposed approach achieves the highest F1-score and
competitive accuracy for different models on the fine-grained solder joint
classification task. Finally, we visualize the activation maps and
show that with entropy regularization, more precise class-discriminative
regions are localized, which are also more resilient to noise. Code will be
made available here upon acceptance.
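As a worked example, one common parameterization of the {\alpha}-skew Jensen-Shannon divergence is JS_{\alpha}(P, Q) = (1-\alpha) KL(P || M) + \alpha KL(Q || M) with M = (1-\alpha)P + \alpha Q; the paper's exact formulation may differ. The sketch below uses it as a confidence penalty against the uniform distribution; the class count, \alpha, and weighting are illustrative assumptions.

```python
# Illustrative alpha-skew Jensen-Shannon confidence penalty; the paper's
# exact definition and hyperparameters may differ.
import torch
import torch.nn.functional as F

def alpha_js(p, q, alpha=0.5, eps=1e-12):
    """p, q: (N, C) probability vectors (rows sum to 1)."""
    m = (1 - alpha) * p + alpha * q
    kl_pm = (p * ((p + eps) / (m + eps)).log()).sum(dim=1)
    kl_qm = (q * ((q + eps) / (m + eps)).log()).sum(dim=1)
    return (1 - alpha) * kl_pm + alpha * kl_qm

logits = torch.randn(4, 6)                    # 6 solder-joint classes
probs = F.softmax(logits, dim=1)
uniform = torch.full_like(probs, 1.0 / 6)
penalty = alpha_js(probs, uniform, alpha=0.3).mean()
# total loss = cross_entropy + lambda * penalty, for some weight lambda
```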
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 17:06:51 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Ulger",
"Furkan",
""
],
[
"Yuksel",
"Seniha Esen",
""
],
[
"Yilmaz",
"Atila",
""
],
[
"Gokcen",
"Dincer",
""
]
] |
new_dataset
| 0.995951 |
2209.09861
|
Peter Xenopoulos
|
Peter Xenopoulos, Claudio Silva
|
ESTA: An Esports Trajectory and Action Dataset
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sports, due to their global reach and impact-rich prediction tasks, are an
exciting domain to deploy machine learning models. However, data from
conventional sports is often unsuitable for research use due to its size,
veracity, and accessibility. To address these issues, we turn to esports, a
growing domain that encompasses video games played in a capacity similar to
conventional sports. Since esports data is acquired through server logs rather
than peripheral sensors, esports provides a unique opportunity to obtain a
massive collection of clean and detailed spatiotemporal data, similar to those
collected in conventional sports. To parse esports data, we develop awpy, an
open-source esports game log parsing library that can extract player
trajectories and actions from game logs. Using awpy, we parse 8.6m actions,
7.9m game frames, and 417k trajectories from 1,558 game logs from professional
Counter-Strike tournaments to create the Esports Trajectory and Actions (ESTA)
dataset. ESTA is one of the largest and most granular publicly available sports
data sets to date. We use ESTA to develop benchmarks for win prediction using
player-specific information. The ESTA data is available at
https://github.com/pnxenopoulos/esta and awpy is made public through PyPI.
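As a hypothetical illustration of consuming ESTA-style frame data for the win-prediction benchmark, the sketch below assumes a JSON schema (key names like `rounds` and `ct_alive`) that may not match the released files; consult the repository for the real format.

```python
# Hypothetical sketch of frame-level win prediction on ESTA-style data.
# All key names and the feature set are assumptions, not the real schema.
import json
from sklearn.ensemble import GradientBoostingClassifier

with open("esta_game.json") as f:
    game = json.load(f)

X, y = [], []
for rnd in game["rounds"]:                     # assumed key names
    for frame in rnd["frames"]:
        X.append([frame["ct_alive"], frame["t_alive"],
                  frame["bomb_planted"], frame["seconds_left"]])
        y.append(rnd["winning_side"] == "CT")

clf = GradientBoostingClassifier().fit(X, y)   # frame-level win probability
print(clf.predict_proba(X[:1]))
```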
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 17:13:50 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Xenopoulos",
"Peter",
""
],
[
"Silva",
"Claudio",
""
]
] |
new_dataset
| 0.999894 |
2209.09871
|
Fateme Nikseresht
|
Moeen Mostafavi, Mahsa Pahlavikhah Varnosfaderani, Fateme Nikseresht,
Seyed Ahmad Mansouri
|
emojiSpace: Spatial Representation of Emojis
|
5 pages, 5 tables
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the absence of nonverbal cues during messaging communication, users
express part of their emotions using emojis. Thus, having emojis in the
vocabulary of text messaging language models can significantly improve many
natural language processing (NLP) applications such as online communication
analysis. On the other hand, word embedding models are usually trained on a
very large corpus of text such as Wikipedia or Google News datasets that
include very few samples with emojis. In this study, we create emojiSpace,
which is a combined word-emoji embedding using the word2vec model from the
Gensim library in Python. We trained emojiSpace on a corpus of more than 4
billion tweets and evaluated it by implementing sentiment analysis on a Twitter
dataset containing more than 67 million tweets as an extrinsic task. For this
task, we compared the performance of two different classifiers: random forest
(RF) and linear support vector machine (SVM). For evaluation, we compared
emojiSpace performance with two other pre-trained embeddings and demonstrated
that emojiSpace outperforms both.
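A minimal sketch of building such a combined word-emoji embedding with Gensim's word2vec is shown below, assuming tweets are pre-tokenized so that each emoji is its own token; the toy corpus and hyperparameters are illustrative, not the paper's configuration.

```python
# Sketch: word2vec over tokenized tweets where emojis are ordinary tokens.
# Corpus and parameters are illustrative (Gensim >= 4 API).
from gensim.models import Word2Vec

tweets = [
    ["great", "game", "tonight", "🔥", "🔥"],
    ["so", "sad", "about", "the", "news", "😢"],
]  # in practice: billions of tokenized tweets

model = Word2Vec(sentences=tweets, vector_size=300, window=5,
                 min_count=1, sg=1, workers=4)
print(model.wv.most_similar("🔥", topn=3))
```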
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 13:57:31 GMT"
}
] | 2022-09-21T00:00:00 |
[
[
"Mostafavi",
"Moeen",
""
],
[
"Varnosfaderani",
"Mahsa Pahlavikhah",
""
],
[
"Nikseresht",
"Fateme",
""
],
[
"Mansouri",
"Seyed Ahmad",
""
]
] |
new_dataset
| 0.999701 |
2201.09101
|
Minbo Ma
|
Minbo Ma, Peng Xie, Fei Teng, Tianrui Li, Bin Wang, Shenggong Ji,
Junbo Zhang
|
HiSTGNN: Hierarchical Spatio-temporal Graph Neural Networks for Weather
Forecasting
|
Some sections will be modified because of errors in the experiments and
presentation
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Weather forecasting is an attractive yet challenging task due to its influence
on human life and the complexity of atmospheric motion. Supported by massive
historical observed time series data, the task is suitable for data-driven
approaches, especially deep neural networks. Recently, the Graph Neural
Networks (GNNs) based methods have achieved excellent performance for
spatio-temporal forecasting. However, the canonical GNNs-based methods only
individually model the local graph of meteorological variables per station or
the global graph of whole stations, lacking information interaction between
meteorological variables in different stations. In this paper, we propose a
novel Hierarchical Spatio-Temporal Graph Neural Network (HiSTGNN) to model
cross-regional spatio-temporal correlations among meteorological variables in
multiple stations. An adaptive graph learning layer and spatial graph
convolution are employed to construct self-learning graph and study hidden
dependencies among the nodes of the variable-level and station-level graphs. To
capture temporal patterns, a dilated inception network is designed as the
backbone of the gated temporal convolution to model long and varied
meteorological trends. Moreover, a dynamic interaction learning module is
proposed to enable bidirectional information passing in the hierarchical
graph. Experimental results on three
real-world meteorological datasets demonstrate the superior performance of
HiSTGNN over 7 baselines; it reduces errors by 4.2% to 11.6%, especially
compared to the state-of-the-art weather forecasting method.
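As an illustration of the self-learning graph idea, a generic adaptive adjacency layer (in the spirit of Graph WaveNet) is sketched below; this is not the authors' implementation, and the node count and dimensions are placeholders.

```python
# Generic adaptive graph learning: the adjacency is a learned, row-
# normalized function of node embeddings. Illustrative sketch only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraph(nn.Module):
    def __init__(self, num_nodes: int, emb_dim: int = 16):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.e2 = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self):
        # learned, asymmetric adjacency normalized row-wise
        return F.softmax(F.relu(self.e1 @ self.e2.t()), dim=1)

adj = AdaptiveGraph(num_nodes=10)()
x = torch.randn(32, 10, 64)                 # (batch, nodes, features)
out = torch.einsum("ij,bjf->bif", adj, x)   # one graph-convolution hop
```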
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 17:30:46 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 06:55:12 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Ma",
"Minbo",
""
],
[
"Xie",
"Peng",
""
],
[
"Teng",
"Fei",
""
],
[
"Li",
"Tianrui",
""
],
[
"Wang",
"Bin",
""
],
[
"Ji",
"Shenggong",
""
],
[
"Zhang",
"Junbo",
""
]
] |
new_dataset
| 0.992448 |
2203.07724
|
Kaican Li
|
Kaican Li, Kai Chen, Haoyu Wang, Lanqing Hong, Chaoqiang Ye, Jianhua
Han, Yukuai Chen, Wei Zhang, Chunjing Xu, Dit-Yan Yeung, Xiaodan Liang,
Zhenguo Li, Hang Xu
|
CODA: A Real-World Road Corner Case Dataset for Object Detection in
Autonomous Driving
|
ECCV 2022
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Contemporary deep-learning object detection methods for autonomous driving
usually assume predefined categories of common traffic participants, such as
pedestrians and cars. Most existing detectors are unable to detect uncommon
objects and corner cases (e.g., a dog crossing a street), which may lead to
severe accidents in some situations, making the timeline for the real-world
application of reliable autonomous driving uncertain. One main reason that
impedes the development of truly reliable self-driving systems is the lack of
public datasets for evaluating the performance of object detectors on corner
cases. Hence, we introduce a challenging dataset named CODA that exposes this
critical problem of vision-based detectors. The dataset consists of 1500
carefully selected real-world driving scenes, each containing four object-level
corner cases (on average), spanning more than 30 object categories. On CODA,
the performance of standard object detectors trained on large-scale autonomous
driving datasets significantly drops to no more than 12.8% in mAR. Moreover, we
experiment with the state-of-the-art open-world object detector and find that
it also fails to reliably identify the novel objects in CODA, suggesting that a
robust perception system for autonomous driving is probably still far from
reach. We expect our CODA dataset to facilitate further research in reliable
detection for real-world autonomous driving. Our dataset will be released at
https://coda-dataset.github.io.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 08:32:56 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 15:32:48 GMT"
},
{
"version": "v3",
"created": "Sat, 17 Sep 2022 04:52:35 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Li",
"Kaican",
""
],
[
"Chen",
"Kai",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Hong",
"Lanqing",
""
],
[
"Ye",
"Chaoqiang",
""
],
[
"Han",
"Jianhua",
""
],
[
"Chen",
"Yukuai",
""
],
[
"Zhang",
"Wei",
""
],
[
"Xu",
"Chunjing",
""
],
[
"Yeung",
"Dit-Yan",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Xu",
"Hang",
""
]
] |
new_dataset
| 0.999832 |
2203.08856
|
Victor Lutfalla
|
Jarkko Kari, Victor Lutfalla
|
Planar Rosa : a family of quasiperiodic substitution discrete plane
tilings with $2n$-fold rotational symmetry
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Planar Rosa, a family of rhombus tilings with a $2n$-fold
rotational symmetry that are generated by a primitive substitution and that are
also discrete plane tilings, meaning that they are obtained as a projection of
a higher dimensional discrete plane. The discrete plane condition is a relaxed
version of the cut-and-project condition. We also prove that the Sub Rosa
substitution tilings with $2n$-fold rotational symmetry defined by Kari and
Rissanen do not satisfy even the weaker discrete plane condition. We prove
these results for all even $n\geq 4$. This completes our previously published
results for odd values of $n$.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 18:25:04 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 09:55:59 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Kari",
"Jarkko",
""
],
[
"Lutfalla",
"Victor",
""
]
] |
new_dataset
| 0.999463 |
2203.10225
|
Xingda Wei
|
Xingda Wei, Fangming Lu, Tianxia Wang, Jinyu Gu, Yuhan Yang, Rong
Chen, and Haibo Chen
|
No Provisioned Concurrency: Fast RDMA-codesigned Remote Fork for
Serverless Computing
|
To appear in OSDI'23
| null | null | null |
cs.OS cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Serverless platforms essentially face a tradeoff between container startup
time and provisioned concurrency (i.e., cached instances), which is further
exacerbated by the frequent need for remote container initialization. This
paper presents MITOSIS, an operating system primitive that provides fast remote
fork, which exploits a deep codesign of the OS kernel with RDMA. By leveraging
the fast remote read capability of RDMA and partial state transfer across
serverless containers, MITOSIS bridges the performance gap between local and
remote container initialization. MITOSIS is the first to fork over 10,000 new
containers from one instance across multiple machines within a second, while
allowing the new containers to efficiently transfer the pre-materialized states
of the forked one. We have implemented MITOSIS on Linux and integrated it with
FN, a popular serverless platform. Under load spikes in real-world serverless
workloads, MITOSIS reduces the function tail latency by 89% with orders of
magnitude lower memory usage. For serverless workflow that requires state
transfer, MITOSIS improves its execution time by 86%.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 02:49:55 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Aug 2022 03:35:43 GMT"
},
{
"version": "v3",
"created": "Sat, 17 Sep 2022 01:52:44 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Wei",
"Xingda",
""
],
[
"Lu",
"Fangming",
""
],
[
"Wang",
"Tianxia",
""
],
[
"Gu",
"Jinyu",
""
],
[
"Yang",
"Yuhan",
""
],
[
"Chen",
"Rong",
""
],
[
"Chen",
"Haibo",
""
]
] |
new_dataset
| 0.991195 |
2204.01147
|
Van Chuong Nguyen
|
Chuong Nguyen, Lingfan Bao, and Quan Nguyen
|
Continuous Jumping for Legged Robots on Stepping Stones via Trajectory
Optimization and Model Predictive Control
|
Accepted to the 61st IEEE Conference on Decision and Control (CDC
2022)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Performing highly agile dynamic motions, such as jumping or running on uneven
stepping stones, has remained a challenging problem in legged robot locomotion.
This paper presents a framework that combines trajectory optimization and model
predictive control to perform robust and consecutive jumping on stepping
stones. In our approach, we first utilize trajectory optimization based on
full-nonlinear dynamics of the robot to generate periodic jumping trajectories
for various jumping distances. A jumping controller based on model predictive
control (MPC) is then designed to realize smooth jumping transitions, enabling the
robot to achieve continuous jumps on stepping stones. Thanks to the
incorporation of MPC as a real-time feedback controller, the proposed framework
is also validated to be robust to uneven platforms with unknown height
perturbations and model uncertainty on the robot dynamics.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 19:49:54 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 18:11:42 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Nguyen",
"Chuong",
""
],
[
"Bao",
"Lingfan",
""
],
[
"Nguyen",
"Quan",
""
]
] |
new_dataset
| 0.95479 |
2206.02096
|
Minghao Xu
|
Minghao Xu, Zuobai Zhang, Jiarui Lu, Zhaocheng Zhu, Yangtian Zhang,
Chang Ma, Runcheng Liu, Jian Tang
|
PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence
Understanding
|
Accepted by NeurIPS 2022 Dataset and Benchmark Track. arXiv v2:
source code released; arXiv v1: release all benchmark results
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We are now witnessing significant progress of deep learning methods in a
variety of tasks (or datasets) of proteins. However, there is a lack of a
standard benchmark to evaluate the performance of different methods, which
hinders the progress of deep learning in this field. In this paper, we propose
such a benchmark called PEER, a comprehensive and multi-task benchmark for
Protein sEquence undERstanding. PEER provides a set of diverse protein
understanding tasks including protein function prediction, protein localization
prediction, protein structure prediction, protein-protein interaction
prediction, and protein-ligand interaction prediction. We evaluate different
types of sequence-based methods for each task including traditional feature
engineering approaches, different sequence encoding methods as well as
large-scale pre-trained protein language models. In addition, we also
investigate the performance of these methods under the multi-task learning
setting. Experimental results show that large-scale pre-trained protein
language models achieve the best performance for most individual tasks, and
jointly training multiple tasks further boosts the performance. The datasets
and source codes of this benchmark are all available at
https://github.com/DeepGraphLearning/PEER_Benchmark
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 05:21:56 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 17:31:38 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Xu",
"Minghao",
""
],
[
"Zhang",
"Zuobai",
""
],
[
"Lu",
"Jiarui",
""
],
[
"Zhu",
"Zhaocheng",
""
],
[
"Zhang",
"Yangtian",
""
],
[
"Ma",
"Chang",
""
],
[
"Liu",
"Runcheng",
""
],
[
"Tang",
"Jian",
""
]
] |
new_dataset
| 0.973074 |
2206.03062
|
Haodong Yuan
|
Haodong Yuan, Yudong Zhang, Shengyin Fan, Xue Li and Jian Wang
|
Object Scan Context: Object-centric Spatial Descriptor for Place
Recognition within 3D Point Cloud Map
|
7 pages, 11 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Place recognition technology endows a SLAM algorithm with the ability to
eliminate accumulated errors and to relocalize itself. Existing methods on
point cloud-based place recognition often leverage the matching of global
descriptors which are lidar-centric. These methods have the following two major
defects: place recognition cannot be performed when the two point clouds are
far apart, and only the rotation angle can be calculated, without the offset
in the X and Y directions. To solve these two problems, we propose a
novel global descriptor, which is built around the Main Object, in this way,
descriptors are no longer dependent on the observation position. We analyze the
theory that this method can solve the above two problems, and conduct extensive
experiments on KITTI Odometry and KITTI360, which show that our method has
obvious advantages over state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 07:27:28 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Sep 2022 03:21:44 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Yuan",
"Haodong",
""
],
[
"Zhang",
"Yudong",
""
],
[
"Fan",
"Shengyin",
""
],
[
"Li",
"Xue",
""
],
[
"Wang",
"Jian",
""
]
] |
new_dataset
| 0.995831 |
2206.09426
|
Yue Zhao
|
Songqiao Han, Xiyang Hu, Hailiang Huang, Mingqi Jiang, Yue Zhao
|
ADBench: Anomaly Detection Benchmark
|
NeurIPS 2022. All authors contribute equally and are listed
alphabetically. Code available at https://github.com/Minqi824/ADBench
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a long list of anomaly detection algorithms developed in the last few
decades, how do they perform with regard to (i) varying levels of supervision,
(ii) different types of anomalies, and (iii) noisy and corrupted data? In this
work, we answer these key questions by conducting (to the best of our knowledge) the
most comprehensive anomaly detection benchmark with 30 algorithms on 57
benchmark datasets, named ADBench. Our extensive experiments (98,436 in total)
identify meaningful insights into the role of supervision and anomaly types,
and unlock future directions for researchers in algorithm selection and design.
With ADBench, researchers can easily conduct comprehensive and fair evaluations
for newly proposed methods on the datasets (including our contributed ones from
natural language and computer vision domains) against the existing baselines.
To foster accessibility and reproducibility, we fully open-source ADBench and
the corresponding results.
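A minimal ADBench-style evaluation loop over PyOD detectors might look like the sketch below; the synthetic data and the two detectors are illustrative, and ADBench's own runner (linked above) is far more comprehensive.

```python
# Sketch: score two unsupervised PyOD detectors by ROC-AUC on toy data.
import numpy as np
from sklearn.metrics import roc_auc_score
from pyod.models.iforest import IForest
from pyod.models.knn import KNN

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
X[:25] += 6.0                                   # injected anomalies
y = np.r_[np.ones(25), np.zeros(475)]

for det in (IForest(random_state=0), KNN()):
    det.fit(X)                                  # unsupervised fit
    scores = det.decision_function(X)           # higher = more anomalous
    print(type(det).__name__, round(roc_auc_score(y, scores), 3))
```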
|
[
{
"version": "v1",
"created": "Sun, 19 Jun 2022 15:02:17 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Sep 2022 02:43:48 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Han",
"Songqiao",
""
],
[
"Hu",
"Xiyang",
""
],
[
"Huang",
"Hailiang",
""
],
[
"Jiang",
"Mingqi",
""
],
[
"Zhao",
"Yue",
""
]
] |
new_dataset
| 0.987749 |
2207.01424
|
Yang Li
|
Yang Li, Shixin Zhu, Pi Li
|
On MDS Codes With Galois Hulls of Arbitrary Dimensions
|
21 pages,5 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Galois hulls of linear codes are a generalization of the Euclidean and
Hermitian hulls of linear codes. In this paper, we study the Galois hulls of
(extended) GRS codes and present several new constructions of MDS codes with
Galois hulls of arbitrary dimensions via (extended) GRS codes. Two general
methods of constructing MDS codes with Galois hulls of arbitrary dimensions by
Hermitian or general Galois self-orthogonal (extended) GRS codes are given.
Using these methods, some MDS codes with larger dimensions and Galois hulls of
arbitrary dimensions can be obtained, and relatively strict conditions can
also lead to many new classes of MDS codes with Galois hulls of arbitrary
dimensions.
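For context, the standard definitions below (notation may differ slightly from the paper's) show how the Galois hull generalizes the Euclidean and Hermitian hulls of a linear code $\mathcal{C} \subseteq \mathbb{F}_{p^e}^n$:

```latex
% Standard definitions; notation may differ slightly from the paper's.
\[
  \langle \mathbf{x}, \mathbf{y} \rangle_h \;=\; \sum_{i=1}^{n} x_i\, y_i^{p^h},
  \qquad 0 \le h < e,
\]
\[
  \mathrm{Hull}_h(\mathcal{C}) \;=\; \mathcal{C} \cap \mathcal{C}^{\perp_h}.
\]
% Taking h = 0 recovers the Euclidean hull; for even e, taking h = e/2
% recovers the Hermitian hull.
```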
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 14:06:10 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Jul 2022 11:25:05 GMT"
},
{
"version": "v3",
"created": "Sun, 7 Aug 2022 07:52:21 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Sep 2022 12:24:18 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Li",
"Yang",
""
],
[
"Zhu",
"Shixin",
""
],
[
"Li",
"Pi",
""
]
] |
new_dataset
| 0.997655 |
2208.02129
|
Dingding Cai
|
Dingding Cai, Janne Heikkil\"a, Esa Rahtu
|
SC6D: Symmetry-agnostic and Correspondence-free 6D Object Pose
Estimation
|
3DV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an efficient symmetry-agnostic and correspondence-free
framework, referred to as SC6D, for 6D object pose estimation from a single
monocular RGB image. SC6D requires neither the 3D CAD model of the object nor
any prior knowledge of the symmetries. The pose estimation is decomposed into
three sub-tasks: a) object 3D rotation representation learning and matching; b)
estimation of the 2D location of the object center; and c) scale-invariant
distance estimation (the translation along the z-axis) via classification. SC6D
is evaluated on three benchmark datasets, T-LESS, YCB-V, and ITODD, and results
in state-of-the-art performance on the T-LESS dataset. Moreover, SC6D is
computationally much more efficient than the previous state-of-the-art method
SurfEmb. The implementation and pre-trained models are publicly available at
https://github.com/dingdingcai/SC6D-pose.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 15:08:27 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 09:43:38 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Sep 2022 07:24:50 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Cai",
"Dingding",
""
],
[
"Heikkilä",
"Janne",
""
],
[
"Rahtu",
"Esa",
""
]
] |
new_dataset
| 0.999343 |
2208.02515
|
Juncheng Li
|
Juncheng Li, Xin He, Longhui Wei, Long Qian, Linchao Zhu, Lingxi Xie,
Yueting Zhuang, Qi Tian, Siliang Tang
|
Fine-Grained Semantically Aligned Vision-Language Pre-Training
|
Accepted by NeurIPS 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale vision-language pre-training has shown impressive advances in a
wide range of downstream tasks. Existing methods mainly model the cross-modal
alignment by the similarity of the global representations of images and texts,
or advanced cross-modal attention upon image and text features. However, they
fail to explicitly learn the fine-grained semantic alignment between visual
regions and textual phrases, as only global image-text alignment information is
available. In this paper, we introduce LOUPE, a fine-grained semantically
aLigned visiOn-langUage PrE-training framework, which learns fine-grained
semantic alignment from the novel perspective of game-theoretic interactions.
To efficiently compute the game-theoretic interactions, we further propose an
uncertainty-aware neural Shapley interaction learning module. Experiments show
that LOUPE achieves state-of-the-art performance on a variety of
vision-language tasks. Furthermore, without any object-level human annotations
and fine-tuning, LOUPE achieves competitive performance on object detection and
visual grounding. More importantly, LOUPE opens a new promising direction of
learning fine-grained semantics from large-scale raw image-text pairs. The
repository of this work is at https://github.com/YYJMJC/LOUPE.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 07:51:48 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 14:50:15 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Li",
"Juncheng",
""
],
[
"He",
"Xin",
""
],
[
"Wei",
"Longhui",
""
],
[
"Qian",
"Long",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Xie",
"Lingxi",
""
],
[
"Zhuang",
"Yueting",
""
],
[
"Tian",
"Qi",
""
],
[
"Tang",
"Siliang",
""
]
] |
new_dataset
| 0.973798 |
2208.02918
|
Rogerio Bonatti
|
Arthur Bucker, Luis Figueredo, Sami Haddadin, Ashish Kapoor, Shuang
Ma, Sai Vemprala, Rogerio Bonatti
|
LATTE: LAnguage Trajectory TransformEr
| null | null | null | null |
cs.RO cs.AI cs.CL cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Natural language is one of the most intuitive ways to express human intent.
However, translating instructions and commands towards robotic motion
generation and deployment in the real world is far from being an easy task. The
challenge of combining a robot's inherent low-level geometric and kinodynamic
constraints with a human's high-level semantic instructions is traditionally
solved using task-specific solutions with little generalizability between
hardware platforms, often with the use of static sets of target actions and
commands. This work instead proposes a flexible language-based framework that
allows a user to modify generic robotic trajectories. Our method leverages
pre-trained language models (BERT and CLIP) to encode the user's intent and
target objects directly from a free-form text input and scene images, fuses
geometrical features generated by a transformer encoder network, and finally
outputs trajectories using a transformer decoder, without the need of priors
related to the task or robot information. We significantly extend our own
previous work presented in Bucker et al. by expanding the trajectory
parametrization space to 3D and velocity as opposed to just XY movements. In
addition, we now train the model to use actual images of the objects in the
scene for context (as opposed to textual descriptions), and we evaluate the
system in a diverse set of scenarios beyond manipulation, such as aerial and
legged robots. Our simulated and real-life experiments demonstrate that our
transformer model can successfully follow human intent, modifying the shape and
speed of trajectories within multiple environments. Codebase available at:
https://github.com/arthurfenderbucker/LaTTe-Language-Trajectory-TransformEr.git
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 22:43:21 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 16:50:10 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Sep 2022 18:36:41 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Bucker",
"Arthur",
""
],
[
"Figueredo",
"Luis",
""
],
[
"Haddadin",
"Sami",
""
],
[
"Kapoor",
"Ashish",
""
],
[
"Ma",
"Shuang",
""
],
[
"Vemprala",
"Sai",
""
],
[
"Bonatti",
"Rogerio",
""
]
] |
new_dataset
| 0.999724 |
2209.00508
|
Dongkwan Kim
|
Dongkwan Kim, Jiho Jin, Jaimeen Ahn, Alice Oh
|
Models and Benchmarks for Representation Learning of Partially Observed
Subgraphs
|
CIKM 2022 Short Paper (Camera-ready + Appendix)
| null | null | null |
cs.LG cs.AI cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Subgraphs are rich substructures in graphs, and their nodes and edges can be
partially observed in real-world tasks. Under partial observation, existing
node- or subgraph-level message-passing produces suboptimal representations. In
this paper, we formulate a novel task of learning representations of partially
observed subgraphs. To solve this problem, we propose Partial Subgraph InfoMax
(PSI) framework and generalize existing InfoMax models, including DGI,
InfoGraph, MVGRL, and GraphCL, into our framework. These models maximize the
mutual information between the partial subgraph's summary and various
substructures from nodes to full subgraphs. In addition, we suggest a novel
two-stage model with $k$-hop PSI, which reconstructs the representation of the
full subgraph and improves its expressiveness from different local-global
structures. Under training and evaluation protocols designed for this problem,
we conduct experiments on three real-world datasets and demonstrate that PSI
models outperform baselines.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 14:51:37 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 04:29:55 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Kim",
"Dongkwan",
""
],
[
"Jin",
"Jiho",
""
],
[
"Ahn",
"Jaimeen",
""
],
[
"Oh",
"Alice",
""
]
] |
new_dataset
| 0.970787 |
2209.08129
|
Kriste Krstovski
|
Kriste Krstovski, Angela Soomin Ryu, Bruce Kogut
|
Evons: A Dataset for Fake and Real News Virality Analysis and Prediction
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a novel collection of news articles originating from fake and real
news media sources for the analysis and prediction of news virality. Unlike
existing fake news datasets which either contain claims or news article
headline and body, in this collection each article is supported with a Facebook
engagement count which we consider as an indicator of the article virality. In
addition, we also provide the article description and thumbnail image with which
the article was shared on Facebook. These images were automatically annotated
with object tags and color attributes. Using cloud based vision analysis tools,
thumbnail images were also analyzed for faces and detected faces were annotated
with facial attributes. We empirically investigate the use of this collection
on an example task of article virality prediction.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 18:52:44 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Krstovski",
"Kriste",
""
],
[
"Ryu",
"Angela Soomin",
""
],
[
"Kogut",
"Bruce",
""
]
] |
new_dataset
| 0.999838 |
2209.08194
|
Haoyu Ma
|
Haoyu Ma, Zhe Wang, Yifei Chen, Deying Kong, Liangjian Chen, Xingwei
Liu, Xiangyi Yan, Hao Tang, Xiaohui Xie
|
PPT: token-Pruned Pose Transformer for monocular and multi-view human
pose estimation
|
ECCV 2022. Code is available at https://github.com/HowieMa/PPT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the vision transformer and its variants have played an increasingly
important role in both monocular and multi-view human pose estimation.
Considering image patches as tokens, transformers can model the global
dependencies within the entire image or across images from other views.
However, global attention is computationally expensive. As a consequence, it is
difficult to scale up these transformer-based methods to high-resolution
features and many views.
In this paper, we propose the token-Pruned Pose Transformer (PPT) for 2D
human pose estimation, which can locate a rough human mask and perform
self-attention only within the selected tokens. Furthermore, we extend our PPT to
multi-view human pose estimation. Built upon PPT, we propose a new cross-view
fusion strategy, called human area fusion, which considers all human foreground
pixels as corresponding candidates. Experimental results on COCO and MPII
demonstrate that our PPT can match the accuracy of previous pose transformer
methods while reducing the computation. Moreover, experiments on Human 3.6M and
Ski-Pose demonstrate that our Multi-view PPT can efficiently fuse cues from
multiple views and achieve new state-of-the-art results.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 23:22:47 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Ma",
"Haoyu",
""
],
[
"Wang",
"Zhe",
""
],
[
"Chen",
"Yifei",
""
],
[
"Kong",
"Deying",
""
],
[
"Chen",
"Liangjian",
""
],
[
"Liu",
"Xingwei",
""
],
[
"Yan",
"Xiangyi",
""
],
[
"Tang",
"Hao",
""
],
[
"Xie",
"Xiaohui",
""
]
] |
new_dataset
| 0.999105 |
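The core mechanism in the PPT record above is pruning background tokens so that self-attention runs only on tokens inside a rough human mask. As a hedged sketch (the per-token scores and keep ratio are assumptions; the paper derives its mask differently), top-k token selection looks like this:

```python
import torch

def prune_tokens(tokens, scores, keep_ratio=0.3):
    """Keep only the highest-scoring tokens; attention then runs on k << N tokens.

    tokens: (B, N, C) patch embeddings; scores: (B, N) per-token relevance,
    e.g. attention received from keypoint queries (illustrative only).
    """
    B, N, C = tokens.shape
    k = max(1, int(N * keep_ratio))
    idx = scores.topk(k, dim=1).indices          # (B, k) selected token ids
    idx = idx.unsqueeze(-1).expand(-1, -1, C)    # (B, k, C) gather index
    return tokens.gather(1, idx)

tokens = torch.randn(2, 196, 768)                # 14x14 patches, ViT-Base width
scores = torch.rand(2, 196)
print(prune_tokens(tokens, scores).shape)        # torch.Size([2, 58, 768])
```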
2209.08199
|
Yu-Chung Hsiao
|
Yu-Chung Hsiao, Fedir Zubach, Maria Wang, Jindong (JD) Chen
|
ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots
| null | null | null | null |
cs.CL cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new task and dataset, ScreenQA, for screen content understanding
via question answering. The existing screen datasets are focused either on
structure and component-level understanding, or on a much higher-level
composite tasks such as navigation and task completion. We attempt to bridge the
gap between these two by annotating 80,000+ question-answer pairs over the RICO
dataset, in the hope of benchmarking screen reading comprehension capacity.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 23:49:00 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Hsiao",
"Yu-Chung",
""
],
[
"Zubach",
"Fedir",
""
],
[
"Wang",
"Maria",
""
],
[
"Chen",
"Jindong",
"",
"JD"
]
] |
new_dataset
| 0.99963 |
2209.08277
|
Hanxin Zhu
|
Hanxin Zhu, Henan Wang and Zhibo Chen
|
MiNL: Micro-images based Neural Representation for Light Fields
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional representations for light fields can be separated into two types:
explicit representation and implicit representation. Unlike explicit
representation that represents light fields as Sub-Aperture Images (SAIs) based
arrays or Micro-Images (MIs) based lenslet images, implicit representation
treats light fields as neural networks, which is inherently a continuous
representation in contrast to discrete explicit representation. However, at
present almost all the implicit representations for light fields utilize SAIs
to train an MLP to learn a pixel-wise mapping from 4D spatial-angular
coordinate to pixel colors, which is neither compact nor of low complexity.
Instead, in this paper we propose MiNL, a novel MI-wise implicit neural
representation for light fields that trains an MLP + CNN to learn a mapping from
2D MI coordinates to MI colors. Given a micro-image's coordinate, MiNL
outputs the corresponding micro-image's RGB values. Light field encoding in
MiNL is just training a neural network to regress the micro-images, and the
decoding process is a simple feedforward operation. Compared with common
pixel-wise implicit representations, MiNL is more compact and efficient, with
faster decoding speed (\textbf{$\times$80$\sim$180} speed-up) as well as better
visual quality (\textbf{1$\sim$4dB} PSNR improvement on average).
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 08:06:38 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Zhu",
"Hanxin",
""
],
[
"Wang",
"Henan",
""
],
[
"Chen",
"Zhibo",
""
]
] |
new_dataset
| 0.9904 |
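MiNL's central design, per the abstract above, is a network mapping a 2D micro-image coordinate to that micro-image's RGB values. The following toy sketch (layer sizes, micro-image resolution, and activations are all assumptions) shows the MLP + CNN shape of such a mapping:

```python
import torch
import torch.nn as nn

class MiNLSketch(nn.Module):
    """Maps a normalized 2D micro-image coordinate to a 16x16 RGB micro-image."""
    def __init__(self, feat=256):
        super().__init__()
        self.feat = feat
        self.mlp = nn.Sequential(
            nn.Linear(2, feat), nn.ReLU(),
            nn.Linear(feat, feat * 4 * 4), nn.ReLU(),
        )
        self.cnn = nn.Sequential(  # upsamples a 4x4 feature map to the micro-image
            nn.ConvTranspose2d(feat, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, uv):                        # uv: (B, 2) MI coordinates
        x = self.mlp(uv).view(-1, self.feat, 4, 4)
        return self.cnn(x)                        # (B, 3, 16, 16) RGB values

model = MiNLSketch()
micro_images = model(torch.rand(8, 2))
print(micro_images.shape)                         # torch.Size([8, 3, 16, 16])
```

Decoding the whole light field is then a single batched forward pass over all micro-image coordinates, which is where the claimed speed-up over pixel-wise MLPs comes from.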
2209.08316
|
Lisa Alazraki
|
Lisa Alazraki, Ali Ghachem, Neophytos Polydorou, Foaad Khosmood and
Abbas Edalat
|
An Empathetic AI Coach for Self-Attachment Therapy
| null |
2021 IEEE Third International Conference on Cognitive Machine
Intelligence (CogMI), 2021, pp. 78-87
|
10.1109/CogMI52975.2021.00019
| null |
cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a new dataset and a computational strategy for a
digital coach that aims to guide users in practicing the protocols of
self-attachment therapy. Our framework augments a rule-based conversational
agent with a deep-learning classifier for identifying the underlying emotion in
a user's text response, as well as a deep-learning assisted retrieval method
for producing novel, fluent and empathetic utterances. We also craft a set of
human-like personas that users can choose to interact with. Our goal is to
achieve a high level of engagement during virtual therapy sessions. We evaluate
the effectiveness of our framework in a non-clinical trial with N=16
participants, all of whom have had at least four interactions with the agent
over the course of five days. We find that our platform is consistently rated
higher for empathy, user engagement and usefulness than the simple rule-based
framework. Finally, we provide guidelines to further improve the design and
performance of the application, in accordance with the feedback received.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 12:01:35 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Alazraki",
"Lisa",
""
],
[
"Ghachem",
"Ali",
""
],
[
"Polydorou",
"Neophytos",
""
],
[
"Khosmood",
"Foaad",
""
],
[
"Edalat",
"Abbas",
""
]
] |
new_dataset
| 0.993511 |
2209.08356
|
Nikolay Ivanov
|
Nikolay Ivanov and Qiben Yan
|
Et tu, Blockchain? Outsmarting Smart Contracts via Social Engineering
|
14th annual Graduate Academic Conference (GAC). arXiv admin note:
text overlap with arXiv:2105.00132
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We reveal six zero-day social engineering attacks in Ethereum, and subdivide
them into two classes: Address Manipulation and Homograph. We demonstrate the
attacks by embedding them in the source code of five popular smart contracts
with a combined market capitalization of over \$29 billion, and show that the
attacks
have the ability to remain dormant during the testing phase and activate only
after production deployment. We analyze 85,656 open source smart contracts and
find 1,027 contracts that can be directly used for performing social
engineering attacks. For responsible disclosure, we contact seven smart
contract security firms. In the spirit of open research, we make the source
codes of the attack benchmark, tools, and datasets available to the public.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 15:55:31 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Ivanov",
"Nikolay",
""
],
[
"Yan",
"Qiben",
""
]
] |
new_dataset
| 0.990527 |
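One of the two attack classes in the record above, Homograph, relies on visually confusable Unicode characters in smart-contract source. A minimal detector sketch (a heuristic, not the authors' tooling) simply flags non-ASCII characters along with their Unicode names:

```python
import unicodedata

def find_homograph_risks(source: str):
    """Flag non-ASCII characters that may render like ASCII look-alikes.

    Heuristic sketch only, not the paper's analysis tooling.
    """
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ord(ch) > 127:
                findings.append((lineno, col, ch, unicodedata.name(ch, "UNKNOWN")))
    return findings

solidity = "address оwner = msg.sender; // the 'о' above is Cyrillic, not Latin\n"
for lineno, col, ch, name in find_homograph_risks(solidity):
    print(f"line {lineno}, col {col}: U+{ord(ch):04X} {name}")
```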
2209.08359
|
Dat Quoc Nguyen
|
Mai Hoang Dao, Thinh Hung Truong, Dat Quoc Nguyen
|
From Disfluency Detection to Intent Detection and Slot Filling
|
In Proceedings of INTERSPEECH 2022
| null |
10.21437/Interspeech.2022-10161
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the first empirical study investigating the influence of
disfluency detection on downstream tasks of intent detection and slot filling.
We perform this study for Vietnamese -- a low-resource language that has no
previous study as well as no public dataset available for disfluency detection.
First, we extend the fluent Vietnamese intent detection and slot filling
dataset PhoATIS by manually adding contextual disfluencies and annotating them.
Then, we conduct experiments using strong baselines for disfluency detection
and joint intent detection and slot filling, which are based on pre-trained
language models. We find that: (i) disfluencies produce negative effects on the
performances of the downstream intent detection and slot filling tasks, and
(ii) in the disfluency context, the pre-trained multilingual language model
XLM-R helps produce better intent detection and slot filling performance than
the pre-trained monolingual language model PhoBERT, which is the opposite of
what is generally found in the fluency context.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 16:03:57 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Dao",
"Mai Hoang",
""
],
[
"Truong",
"Thinh Hung",
""
],
[
"Nguyen",
"Dat Quoc",
""
]
] |
new_dataset
| 0.998459 |
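The pipeline studied in the record above feeds a disfluency detector's output into intent detection and slot filling. Below is a minimal sketch of the interface between the two stages; the 'D'/'O' tag scheme is an assumed reparandum-style labeling, not the paper's exact annotation format:

```python
def remove_disfluencies(tokens, tags):
    """Drop tokens tagged as disfluent ('D') before intent/slot prediction."""
    return [tok for tok, tag in zip(tokens, tags) if tag != "D"]

# reparandum "i want" + interregnum "uh i mean" are tagged 'D'; the repair is kept
tokens = ["i", "want", "uh", "i", "mean", "i", "need", "a", "flight", "to", "hanoi"]
tags   = ["D", "D",    "D",  "D", "D",    "O", "O",    "O", "O",      "O",  "O"]
print(" ".join(remove_disfluencies(tokens, tags)))
# -> "i need a flight to hanoi", which is then fed to the intent/slot model
```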
2209.08375
|
Farhad Aghili
|
Farhad Aghili
|
Six-DOF Spacecraft Dynamics Simulator For Testing Translation and
Attitude Control
| null | null |
10.1177/0278364908099464
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a method to control a manipulator system grasping a
rigid-body payload so that the motion of the combined system under
externally applied forces is the same as that of another free-floating
rigid-body (with different inertial properties). This allows zero-g emulation
of a scaled spacecraft prototype under test in a 1-g laboratory environment.
The controller, consisting of motion feedback and force/moment feedback,
adjusts the motion of the test spacecraft so as to match that of the flight
spacecraft, even if the latter has flexible appendages (such as solar panels)
and the former is rigid. The stability of the overall system is analytically
investigated, and the results show that the system remains stable provided that
the inertial properties of the two spacecraft are different and that an upper
bound on the norm of the inertia ratio of the payload to the manipulator is
respected.
Important practical issues such as calibration and sensitivity analysis to
sensor noise and quantization are also presented.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 17:35:08 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Aghili",
"Farhad",
""
]
] |
new_dataset
| 0.968431 |
2209.08392
|
Mir Lodro
|
Mir Lodro, Gabriele Gradoni, Christopher Smartt, David Thomas, and
Steve Greedy
|
2x2 MIMO Prototype for BER and EVM Measurements in Metal Enclosure
|
10 pages
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present a 2x2 near-field multi-input multiple-output (MIMO)
prototype for bit-error-rate (BER) and error vector magnitude (EVM)
measurements in a metal enclosure. The near-field MIMO prototype is developed
using software-defined-radios (SDRs) for over-the-air transmission of QPSK
modulated baseband waveforms. We carry out near-field MIMO BER and EVM
measurements in three different scenarios in a highly reflective metal
enclosure environment. In the first scenario, the line-of-sight (LOS)
communication link is investigated when the mode-stirrer is stationary. In
stationary channel conditions near-field MIMO BER and EVM measurements are
performed. In the second scenario, BER and EVM measurements are performed in
dynamic channel conditions when the mode-stirrer is set to move continuously.
In the third scenario, LOS communication near-field MIMO BER and EVM
measurements are performed in stationary channel conditions but now in the
presence of MIMO interference. In three different scenarios, near-field MIMO
BER and EVM measurements are investigated at different Tx USRP gain values and
in the presence of varying levels of MIMO interference.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 19:00:39 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Lodro",
"Mir",
""
],
[
"Gradoni",
"Gabriele",
""
],
[
"Smartt",
"Christopher",
""
],
[
"Thomas",
"David",
""
],
[
"Greedy",
"Steve",
""
]
] |
new_dataset
| 0.999839 |
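EVM, one of the two metrics measured in the record above, has a standard definition: the RMS magnitude of the error vector between received and ideal constellation points, normalized by the mean ideal symbol power. A small numerical sketch for QPSK (the noise level is chosen arbitrarily):

```python
import numpy as np

def evm_percent(received, ideal):
    """RMS error vector magnitude, normalized to the mean ideal symbol power."""
    err = np.mean(np.abs(received - ideal) ** 2)
    ref = np.mean(np.abs(ideal) ** 2)
    return 100.0 * np.sqrt(err / ref)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(2, 1000))                  # QPSK: 2 bits/symbol
ideal = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
noisy = ideal + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
print(f"EVM = {evm_percent(noisy, ideal):.2f}%")           # ~7% at this noise level
```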
2209.08443
|
Lingjiao Chen
|
Lingjiao Chen and Zhihua Jin and Sabri Eyuboglu and Christopher R\'e
and Matei Zaharia and James Zou
|
HAPI: A Large-scale Longitudinal Dataset of Commercial ML API
Predictions
|
Preprint, to appear in NeurIPS 2022
| null | null | null |
cs.SE cs.AI cs.DB cs.LG cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Commercial ML APIs offered by providers such as Google, Amazon and Microsoft
have dramatically simplified ML adoption in many applications. Numerous
companies and academics pay to use ML APIs for tasks such as object detection,
OCR and sentiment analysis. Different ML APIs tackling the same task can have
very heterogeneous performance. Moreover, the ML models underlying the APIs
also evolve over time. As ML APIs rapidly become a valuable marketplace and a
widespread way to consume machine learning, it is critical to systematically
study and compare different APIs with each other and to characterize how APIs
change over time. However, this topic is currently underexplored due to the
lack of data. In this paper, we present HAPI (History of APIs), a longitudinal
dataset of 1,761,417 instances of commercial ML API applications (involving
APIs from Amazon, Google, IBM, Microsoft and other providers) across diverse
tasks including image tagging, speech recognition and text mining from 2020 to
2022. Each instance consists of a query input for an API (e.g., an image or
text) along with the API's output prediction/annotation and confidence scores.
HAPI is the first large-scale dataset of ML API usages and is a unique resource
for studying ML-as-a-service (MLaaS). As examples of the types of analyses that
HAPI enables, we show that ML APIs' performance changes substantially over
time: several APIs' accuracies dropped on specific benchmark datasets. Even
when the API's aggregate performance stays steady, its error modes can shift
across different subtypes of data between 2020 and 2022. Such changes can
substantially impact the entire analytics pipelines that use some ML API as a
component. We further use HAPI to study commercial APIs' performance
disparities across demographic subgroups over time. HAPI can stimulate more
research in the growing field of MLaaS.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 01:52:16 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Chen",
"Lingjiao",
""
],
[
"Jin",
"Zhihua",
""
],
[
"Eyuboglu",
"Sabri",
""
],
[
"Ré",
"Christopher",
""
],
[
"Zaharia",
"Matei",
""
],
[
"Zou",
"James",
""
]
] |
new_dataset
| 0.99943 |
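Each HAPI instance pairs an API query input with the returned prediction and confidence, which makes longitudinal accuracy comparisons straightforward. A toy sketch of such a drift analysis follows; the tuple fields are illustrative, not the dataset's actual schema:

```python
from collections import defaultdict

# (api, year, input_id, prediction, label) per instance -- assumed field layout
instances = [
    ("vendorA/speech", 2020, "clip1", "hello word", "hello world"),
    ("vendorA/speech", 2022, "clip1", "hello world", "hello world"),
    ("vendorA/speech", 2020, "clip2", "good by",     "goodbye"),
    ("vendorA/speech", 2022, "clip2", "goodbye",     "goodbye"),
]

correct = defaultdict(lambda: [0, 0])        # (api, year) -> [hits, total]
for api, year, _, pred, label in instances:
    tally = correct[(api, year)]
    tally[0] += int(pred == label)
    tally[1] += 1

for (api, year), (hits, total) in sorted(correct.items()):
    print(f"{api} {year}: accuracy {hits / total:.2f}")
```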
2209.08445
|
Xiaolin Xu
|
Xiaolin Xu, Yuan Zong, Wenming Zheng, Yang Li, Chuangao Tang, Xingxun
Jiang, Haolin Jiang
|
SDFE-LV: A Large-Scale, Multi-Source, and Unconstrained Database for
Spotting Dynamic Facial Expressions in Long Videos
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a large-scale, multi-source, and unconstrained
database called SDFE-LV for spotting the onset and offset frames of a complete
dynamic facial expression from long videos, which is known as the topic of
dynamic facial expression spotting (DFES) and a vital prior step for lots of
facial expression analysis tasks. Specifically, SDFE-LV consists of 1,191 long
videos, each of which contains one or more complete dynamic facial expressions.
Moreover, each complete dynamic facial expression in its corresponding long
video was independently labeled for five times by 10 well-trained annotators.
To the best of our knowledge, SDFE-LV is the first unconstrained large-scale
database for the DFES task whose long videos are collected from multiple
real-world/closely real-world media sources, e.g., TV interviews,
documentaries, movies, and we-media short videos. Therefore, DFES tasks on
SDFE-LV database will encounter numerous difficulties in practice such as head
posture changes, occlusions, and illumination. We also provided a comprehensive
benchmark evaluation from different angles by using lots of recent
state-of-the-art deep spotting methods and hence researchers interested in DFES
can quickly and easily get started. Finally, with the deep discussions on the
experimental evaluation results, we attempt to point out several meaningful
directions to deal with DFES tasks and hope that DFES can be better advanced in
the future. In addition, SDFE-LV will be freely released for academic use only
as soon as possible.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 01:59:12 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Xu",
"Xiaolin",
""
],
[
"Zong",
"Yuan",
""
],
[
"Zheng",
"Wenming",
""
],
[
"Li",
"Yang",
""
],
[
"Tang",
"Chuangao",
""
],
[
"Jiang",
"Xingxun",
""
],
[
"Jiang",
"Haolin",
""
]
] |
new_dataset
| 0.999704 |
2209.08453
|
Minh Vu
|
Minh N. Vu, Huy Q. Mai, My T. Thai
|
EMaP: Explainable AI with Manifold-based Perturbations
|
29 pages
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In the last few years, many explanation methods based on the perturbations of
input data have been introduced to improve our understanding of decisions made
by black-box models. The goal of this work is to introduce a novel perturbation
scheme so that more faithful and robust explanations can be obtained. Our study
focuses on the impact of perturbing directions on the data topology. We show
that perturbing along the orthogonal directions of the input manifold better
preserves the data topology, both in the worst-case analysis of the discrete
Gromov-Hausdorff distance and in the average-case analysis via persistent
homology. Building on those results, we introduce the EMaP algorithm, which
realizes the orthogonal perturbation scheme. Our experiments show that EMaP not only
improves the explainers' performance but also helps them overcome a
recently-developed attack against perturbation-based methods.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 02:43:50 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Vu",
"Minh N.",
""
],
[
"Mai",
"Huy Q.",
""
],
[
"Thai",
"My T.",
""
]
] |
new_dataset
| 0.970894 |
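EMaP's key idea, per the abstract above, is to perturb along directions orthogonal to the input manifold. Below is a rough numerical sketch of that idea under the simplifying assumption that the local manifold is the span of the top principal components (the paper's manifold estimate differs):

```python
import numpy as np

def orthogonal_perturb(X, x, eps=0.1, n_components=5, rng=None):
    """Perturb x along directions orthogonal to the data's principal subspace."""
    rng = rng or np.random.default_rng()
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    tangent = Vt[:n_components]                  # top principal directions
    noise = rng.standard_normal(x.shape)
    # remove the tangent component, keeping only the orthogonal part
    noise -= tangent.T @ (tangent @ noise)
    noise *= eps / (np.linalg.norm(noise) + 1e-12)
    return x + noise

X = np.random.default_rng(0).standard_normal((200, 20))
x_perturbed = orthogonal_perturb(X, X[0])
print(np.linalg.norm(x_perturbed - X[0]))        # ~eps
```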
2209.08471
|
Chongyi Li
|
Qingyu Yang, Guang Yang, Jun Jiang, Chongyi Li, Ruicheng Feng,
Shangchen Zhou, Wenxiu Sun, Qingpeng Zhu, Chen Change Loy, Jinwei Gu
|
MIPI 2022 Challenge on RGBW Sensor Re-mosaic: Dataset and Report
|
ECCV 2022 Mobile Intelligent Photography and Imaging (MIPI)
Workshop--RGBW Sensor Re-mosaic Challenge Report. MIPI workshop website:
http://mipi-challenge.org/. arXiv admin note: substantial text overlap with
arXiv:2209.07060, arXiv:2209.07530, arXiv:2209.07057
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing and integrating advanced image sensors with novel algorithms in
camera systems are prevalent with the increasing demand for computational
photography and imaging on mobile platforms. However, the lack of high-quality
data for research and the rare opportunity for in-depth exchange of views from
industry and academia constrain the development of mobile intelligent
photography and imaging (MIPI). To bridge the gap, we introduce the first MIPI
challenge including five tracks focusing on novel image sensors and imaging
algorithms. In this paper, RGBW Joint Remosaic and Denoise, one of the five
tracks, working on the interpolation of RGBW CFA to Bayer at full resolution,
is introduced. The participants were provided with a new dataset including 70
(training) and 15 (validation) scenes of high-quality RGBW and Bayer pairs. In
addition, for each scene, RGBW of different noise levels was provided at 0dB,
24dB, and 42dB. All the data were captured using an RGBW sensor in both outdoor
and indoor conditions. The final results are evaluated using objective metrics
including PSNR, SSIM, LPIPS, and KLD. A detailed description of all models
developed in this challenge is provided in this paper. More details of this
challenge and the link to the dataset can be found at
https://github.com/mipi-challenge/MIPI2022.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 06:06:56 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Yang",
"Qingyu",
""
],
[
"Yang",
"Guang",
""
],
[
"Jiang",
"Jun",
""
],
[
"Li",
"Chongyi",
""
],
[
"Feng",
"Ruicheng",
""
],
[
"Zhou",
"Shangchen",
""
],
[
"Sun",
"Wenxiu",
""
],
[
"Zhu",
"Qingpeng",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Gu",
"Jinwei",
""
]
] |
new_dataset
| 0.999179 |
2209.08490
|
Changhao Chen
|
Zheming Tu, Changhao Chen, Xianfei Pan, Ruochen Liu, Jiarui Cui, Jun
Mao
|
EMA-VIO: Deep Visual-Inertial Odometry with External Memory Attention
|
Accepted by IEEE Sensors Journal
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Accurate and robust localization is a fundamental need for mobile agents.
Visual-inertial odometry (VIO) algorithms exploit information from camera
and inertial sensors to estimate position and orientation. Recent deep-learning
based VIO models attract attention as they provide pose information in a
data-driven way, without the need to design hand-crafted algorithms.
Existing learning-based VIO models rely on recurrent models to fuse multimodal
data and process sensor signals, which are hard to train and not efficient
enough. We propose a novel learning-based VIO framework with external memory
attention that effectively and efficiently combines visual and inertial
features for state estimation. Our proposed model is able to estimate pose
accurately and robustly, even in challenging scenarios such as overcast days
and water-filled ground, where it is difficult for traditional VIO algorithms
to extract visual features. Experiments validate that it outperforms both
traditional and learning-based VIO baselines in different scenes.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 07:05:36 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Tu",
"Zheming",
""
],
[
"Chen",
"Changhao",
""
],
[
"Pan",
"Xianfei",
""
],
[
"Liu",
"Ruochen",
""
],
[
"Cui",
"Jiarui",
""
],
[
"Mao",
"Jun",
""
]
] |
new_dataset
| 0.998391 |
2209.08512
|
Guangren Wang
|
Guangren Wang, Liang Cai, Fangyu Gai, Jianyu Niu
|
Phalanx: A Practical Byzantine Ordered Consensus Protocol
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Byzantine fault tolerance (BFT) consensus is a fundamental primitive for
distributed computation. However, BFT protocols suffer from ordering
manipulation, in which an adversary can perform front-running. Several
protocols have been proposed to resolve the manipulation problem, but they
still have limitations. Batch-based protocols such as Themis incur significant
performance loss because they rely on complex algorithms to find strongly
connected components (SCCs). Timestamp-based protocols such as Pompe have
simplified the ordering phase, but they are limited in fairness, since the
adversary can manipulate the ordering via transaction timestamps. In this
paper, we propose a Byzantine ordered consensus protocol called Phalanx, in
which transactions are committed by an anchor-based ordering strategy. The
anchor-based strategy aggregates the Lamport logical clocks that each
participant assigns to transactions and generates the final ordering without
complex SCC detection. Therefore, Phalanx achieves satisfying performance
and resists ordering manipulation better than timestamp-based strategies.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 09:13:53 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Wang",
"Guangren",
""
],
[
"Cai",
"Liang",
""
],
[
"Gai",
"Fangyu",
""
],
[
"Niu",
"Jianyu",
""
]
] |
new_dataset
| 0.999302 |
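The anchor-based strategy in the record above orders transactions by aggregating Lamport logical clocks reported by participants. A toy sketch of clock aggregation follows; median aggregation is an illustrative choice, not the full protocol, and all consensus machinery is omitted:

```python
from statistics import median

# clocks[tx] = Lamport logical-clock values assigned to tx by each participant
clocks = {
    "tx_a": [3, 4, 3, 5],
    "tx_b": [1, 1, 2, 9],   # one replica reports an outlier clock
    "tx_c": [6, 5, 7, 6],
}

def final_order(clocks):
    # The median aggregate bounds the influence of a minority of Byzantine
    # reports; ties are broken deterministically by transaction id.
    return sorted(clocks, key=lambda tx: (median(clocks[tx]), tx))

print(final_order(clocks))  # ['tx_b', 'tx_a', 'tx_c']
```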
2209.08516
|
Prasanna Kumar Routray
|
Prasanna Kumar Routray, Aditya Sanjiv Kanade, Jay Bhanushali,
Manivannan Muniyandi
|
VisTaNet: Attention Guided Deep Fusion for Surface Roughness
Classification
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Human texture perception is a weighted average of multi-sensory inputs:
visual and tactile. While the visual sensing mechanism extracts global
features, the tactile mechanism complements it by extracting local features.
The lack of coupled visuotactile datasets in the literature is a challenge for
studying multimodal fusion strategies analogous to human texture perception.
This paper presents a visual dataset that augments an existing tactile dataset.
We propose a novel deep fusion architecture that fuses visual and tactile data
using four types of fusion strategies: summation, concatenation, max-pooling,
and attention. Our model shows significant performance improvements (97.22%) in
surface roughness classification accuracy over tactile only (SVM - 92.60%) and
visual only (FENet-50 - 85.01%) architectures. Among the several fusion
techniques, attention-guided architecture results in better classification
accuracy. Our study shows that analogous to human texture perception, the
proposed model chooses a weighted combination of the two modalities (visual and
tactile), thus resulting in higher surface roughness classification accuracy;
it also maximizes the weight of the tactile modality where the visual modality
fails, and vice versa.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 09:37:06 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Routray",
"Prasanna Kumar",
""
],
[
"Kanade",
"Aditya Sanjiv",
""
],
[
"Bhanushali",
"Jay",
""
],
[
"Muniyandi",
"Manivannan",
""
]
] |
new_dataset
| 0.994235 |
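Among the four fusion strategies compared in the record above, the attention-guided one performs best. Below is a minimal sketch of attention-weighted fusion of two modality features; the gating network is an assumed design, not the paper's architecture:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Learned, input-dependent weighting of visual and tactile features."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))

    def forward(self, vis, tac):                        # both (B, dim)
        w = self.gate(torch.cat([vis, tac], dim=-1))    # (B, 2) modality weights
        return w[:, :1] * vis + w[:, 1:] * tac          # convex combination

fusion = AttentionFusion(dim=128)
fused = fusion(torch.randn(4, 128), torch.randn(4, 128))
print(fused.shape)  # torch.Size([4, 128])
```

The learned weights make the convex combination input-dependent, which is what lets the model lean on touch where vision fails and vice versa.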
2209.08538
|
Deeksha Arya
|
Deeksha Arya (1 and 2), Hiroya Maeda (3), Sanjay Kumar Ghosh (1),
Durga Toshniwal (1), Yoshihide Sekimoto (2) ((1) Indian Institute of
Technology Roorkee, India, (2) The University of Tokyo, Japan, (3) UrbanX
Technologies, Inc., Tokyo, Japan)
|
RDD2022: A multi-national image dataset for automatic Road Damage
Detection
|
16 pages, 20 figures, IEEE BigData Cup - Crowdsensing-based Road
damage detection challenge (CRDDC'2022)
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This data article describes the Road Damage Dataset, RDD2022, which comprises
47,420 road images from six countries: Japan, India, the Czech Republic,
Norway, the United States, and China. The images have been annotated with more
than 55,000 instances of road damage. Four types of road damage, namely
longitudinal cracks, transverse cracks, alligator cracks, and potholes, are
captured in the dataset. The annotated dataset is envisioned for developing
deep learning-based methods to detect and classify road damage automatically.
The dataset has been released as a part of the Crowd sensing-based Road Damage
Detection Challenge (CRDDC2022). The challenge CRDDC2022 invites researchers
from across the globe to propose solutions for automatic road damage detection
in multiple countries. The municipalities and road agencies may utilize the
RDD2022 dataset, and the models trained using RDD2022 for low-cost automatic
monitoring of road conditions. Further, computer vision and machine learning
researchers may use the dataset to benchmark the performance of different
algorithms for other image-based applications of the same type (classification,
object detection, etc.).
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 11:29:49 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Arya",
"Deeksha",
"",
"1 and 2"
],
[
"Maeda",
"Hiroya",
""
],
[
"Ghosh",
"Sanjay Kumar",
""
],
[
"Toshniwal",
"Durga",
""
],
[
"Sekimoto",
"Yoshihide",
""
]
] |
new_dataset
| 0.999851 |
2209.08544
|
Konstantinos Georgiou
|
Konstantinos Georgiou, Woojin Jang
|
Triangle Evacuation of 2 Agents in the Wireless Model
| null | null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
The input to the \emph{Triangle Evacuation} problem is a triangle $ABC$.
Given a starting point $S$ on the perimeter of the triangle, a feasible
solution to the problem consists of two unit-speed trajectories of mobile
agents that eventually visit every point on the perimeter of $ABC$. The cost of
a feasible solution (evacuation cost) is defined as the supremum, over all
points $T$, of the time at which $T$ is first visited by an
agent plus the distance of $T$ to the other agent at that time.
Similar evacuation type problems are well studied in the literature covering
the unit circle, the $\ell_p$ unit circle for $p\geq 1$, the square, and the
equilateral triangle. We extend this line of research to arbitrary non-obtuse
triangles. Motivated by the lack of symmetry of our search domain, we introduce
4 different algorithmic problems arising by letting the starting edge and/or
the starting point $S$ on that edge to be chosen either by the algorithm or the
adversary. To that end, we provide a tight analysis for the algorithm that has
been proved to be optimal for the previously studied search domains, as well as
we provide lower bounds for each of the problems. Both our upper and lower
bounds match and extend naturally the previously known results that were
established only for equilateral triangles.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 11:53:02 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Georgiou",
"Konstantinos",
""
],
[
"Jang",
"Woojin",
""
]
] |
new_dataset
| 0.977381 |
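The evacuation cost defined in the record above can be evaluated numerically for any concrete strategy. The sketch below discretizes the perimeter and scores one fixed strategy, two agents leaving the same vertex in opposite directions at unit speed (illustrative only; the paper's algorithms and bounds are more refined):

```python
import numpy as np

def perimeter_point(V, s):
    """Point at arc length s along triangle V (3x2 array), wrapping around."""
    sides = [(V[i], V[(i + 1) % 3]) for i in range(3)]
    lengths = [float(np.linalg.norm(b - a)) for a, b in sides]
    s = s % sum(lengths)
    for (a, b), L in zip(sides, lengths):
        if s <= L:
            return a + (s / L) * (b - a)
        s -= L
    return V[0]

def evacuation_cost(vertices, n=4000):
    """Cost of 'both agents start at vertex A and walk in opposite directions'."""
    V = np.asarray(vertices, dtype=float)
    P = sum(float(np.linalg.norm(V[(i + 1) % 3] - V[i])) for i in range(3))
    worst = 0.0
    for d in np.linspace(0.0, P, n):
        t = min(d, P - d)                 # first-visit time of the point T
        T = perimeter_point(V, d)
        # position of the *other* agent at time t
        other = perimeter_point(V, -t if d <= P - d else t)
        worst = max(worst, t + float(np.linalg.norm(other - T)))
    return worst

equilateral = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
print(f"evacuation cost of this strategy: {evacuation_cost(equilateral):.3f}")
```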
2209.08569
|
Wenjin Wang
|
Wenjin Wang, Zhengjie Huang, Bin Luo, Qianglong Chen, Qiming Peng,
Yinxu Pan, Weichong Yin, Shikun Feng, Yu Sun, Dianhai Yu, Yin Zhang
|
ERNIE-mmLayout: Multi-grained MultiModal Transformer for Document
Understanding
|
Accepted by ACM Multimedia 2022
| null | null | null |
cs.CV cs.AI cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent efforts of multimodal Transformers have improved Visually Rich
Document Understanding (VrDU) tasks via incorporating visual and textual
information. However, existing approaches mainly focus on fine-grained elements
such as words and document image patches, making it hard for them to learn from
coarse-grained elements, including natural lexical units like phrases and
salient visual regions like prominent image regions. In this paper, we attach
more importance to coarse-grained elements containing high-density information
and consistent semantics, which are valuable for document understanding. At
first, a document graph is proposed to model complex relationships among
multi-grained multimodal elements, in which salient visual regions are detected
by a cluster-based method. Then, a multi-grained multimodal Transformer called
mmLayout is proposed to incorporate coarse-grained information into existing
pre-trained fine-grained multimodal Transformers based on the graph. In
mmLayout, coarse-grained information is aggregated from fine-grained elements
and then, after further processing, fused back into the fine-grained ones for
the final prediction.
Furthermore, common sense enhancement is introduced to exploit the semantic
information of natural lexical units. Experimental results on four tasks,
including information extraction and document question answering, show that our
method can improve the performance of multimodal Transformers based on
fine-grained elements and achieve better performance with fewer parameters.
Qualitative analyses show that our method can capture consistent semantics in
coarse-grained elements.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 13:46:56 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Wang",
"Wenjin",
""
],
[
"Huang",
"Zhengjie",
""
],
[
"Luo",
"Bin",
""
],
[
"Chen",
"Qianglong",
""
],
[
"Peng",
"Qiming",
""
],
[
"Pan",
"Yinxu",
""
],
[
"Yin",
"Weichong",
""
],
[
"Feng",
"Shikun",
""
],
[
"Sun",
"Yu",
""
],
[
"Yu",
"Dianhai",
""
],
[
"Zhang",
"Yin",
""
]
] |
new_dataset
| 0.999161 |
2209.08630
|
Wei-Ting Chen
|
Wei-Ting Chen, I-Hsiang Chen, Chih-Yuan Yeh, Hao-Hsiang Yang, Hua-En
Chang, Jian-Jiun Ding, Sy-Yen Kuo
|
RVSL: Robust Vehicle Similarity Learning in Real Hazy Scenes Based on
Semi-supervised Learning
|
Accepted by ECCV 2022
| null | null | null |
cs.CV cs.AI cs.CY cs.GT eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, vehicle similarity learning, also called re-identification (ReID),
has attracted significant attention in computer vision. Several algorithms have
been developed and have achieved considerable success. However, most existing
methods perform poorly in hazy scenarios due to poor visibility. Although some
strategies can mitigate this problem, they still leave room for improvement,
given their limited performance in real-world scenarios and the lack of
real-world clear ground truth. Thus, to resolve this problem, inspired by
CycleGAN, we construct a training paradigm called \textbf{RVSL} which
integrates ReID and domain transformation techniques. The network is trained
in a semi-supervised fashion and does not require ID labels or the
corresponding clear ground truths to learn the hazy vehicle ReID task in
real-world haze scenes. To further constrain the unsupervised learning process
effectively, several losses are developed.
Experimental results on synthetic and real-world datasets indicate that the
proposed method can achieve state-of-the-art performance on hazy vehicle ReID
problems. It is worth mentioning that although the proposed method is trained
without real-world label information, it can achieve competitive performance
compared to existing supervised methods trained on complete label information.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 18:45:06 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Chen",
"Wei-Ting",
""
],
[
"Chen",
"I-Hsiang",
""
],
[
"Yeh",
"Chih-Yuan",
""
],
[
"Yang",
"Hao-Hsiang",
""
],
[
"Chang",
"Hua-En",
""
],
[
"Ding",
"Jian-Jiun",
""
],
[
"Kuo",
"Sy-Yen",
""
]
] |
new_dataset
| 0.973603 |
2209.08663
|
Parv Maheshwari
|
Shivam Sood, Jaskaran Singh Sodhi, Parv Maheshwari, Karan Uppal,
Debashish Chakravarty
|
Multiple Waypoint Navigation in Unknown Indoor Environments
|
Accepted at ICCR 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Indoor motion planning focuses on solving the problem of navigating an agent
through a cluttered environment. To date, quite a lot of work has been done in
this field, but these methods often fail to find the optimal balance between
computationally inexpensive online path planning, and optimality of the path.
Along with this, these works often prove optimality for single-start
single-goal worlds. To address these challenges, we present a multiple waypoint
path planner and controller stack for navigation in unknown indoor environments
where waypoints include the goal along with the intermediary points that the
robot must traverse before reaching the goal. Our approach makes use of a
global planner (to find the next best waypoint at any instant), a local planner
(to plan the path to a specific waypoint), and an adaptive Model Predictive
Control strategy (for robust system control and faster maneuvers). We evaluate
our algorithm on a set of randomly generated obstacle maps, intermediate
waypoints, and start-goal pairs, with results indicating a significant
reduction in computational costs, with high accuracies and robust control.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 21:54:06 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Sood",
"Shivam",
""
],
[
"Sodhi",
"Jaskaran Singh",
""
],
[
"Maheshwari",
"Parv",
""
],
[
"Uppal",
"Karan",
""
],
[
"Chakravarty",
"Debashish",
""
]
] |
new_dataset
| 0.99677 |
2209.08664
|
Junheng Li
|
Junheng Li and Quan Nguyen
|
Dynamic Walking of Bipedal Robots on Uneven Stepping Stones via
Adaptive-frequency MPC
|
6 pages, 7 figures, 1 table
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel Adaptive-frequency MPC framework for bipedal
locomotion over terrain with uneven stepping stones. In detail, we intend to
achieve adaptive foot placement and gait period for bipedal periodic walking
gait with this MPC, in order to traverse terrain with discontinuities without
slowing down. We pair this adaptive-frequency MPC with a kino-dynamics
trajectory optimization for optimal gait periods, center of mass (CoM)
trajectory, and foot placements. We use whole-body control (WBC) along with
adaptive-frequency MPC to track the optimal trajectories from the offline
optimization. In numerical validations, our adaptive-frequency MPC framework
with optimization has shown advantages over fixed-frequency MPC. The proposed
framework can control the bipedal robot to traverse through uneven stepping
stone terrains with perturbed stone heights, widths, and surface shapes while
maintaining an average speed of 1.5 m/s.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 22:00:22 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Li",
"Junheng",
""
],
[
"Nguyen",
"Quan",
""
]
] |
new_dataset
| 0.985646 |
2209.08679
|
Xinya Du
|
Xinya Du, Sha Li, Heng Ji
|
Dynamic Global Memory for Document-level Argument Extraction
|
ACL 2022 main conference (12 pages)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extracting informative arguments of events from news articles is a
challenging problem in information extraction, which requires a global
contextual understanding of each document. While recent works on
document-level extraction have gone beyond the single-sentence level and
increased the cross-sentence inference capability of end-to-end models, they
are still restricted by input sequence length constraints and usually ignore
the global context between events. To tackle this issue, we introduce a new
global neural generation-based
framework for document-level event argument extraction by constructing a
document memory store to record the contextual event information and leveraging
it to implicitly and explicitly help with decoding of arguments for later
events. Empirical results show that our framework outperforms prior methods
substantially and it is more robust to adversarially annotated examples with
our constrained decoding design. (Our code and resources are available at
https://github.com/xinyadu/memory_docie for research purpose.)
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 23:45:25 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Du",
"Xinya",
""
],
[
"Li",
"Sha",
""
],
[
"Ji",
"Heng",
""
]
] |
new_dataset
| 0.996332 |
2209.08688
|
Minshen Zhu
|
Alex Block, Jeremiah Blocki, Kuan Cheng, Elena Grigorescu, Xin Li, Yu
Zheng, Minshen Zhu
|
On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion
Errors
| null | null | null | null |
cs.IT cs.CC math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Locally Decodable Codes (LDCs) are error-correcting codes
$C:\Sigma^n\rightarrow \Sigma^m$ with super-fast decoding algorithms. They are
important mathematical objects in many areas of theoretical computer science,
yet the best constructions so far have codeword length $m$ that is
super-polynomial in $n$, for codes with constant query complexity and constant
alphabet size. In a very surprising result, Ben-Sasson et al. showed how to
construct a relaxed version of LDCs (RLDCs) with constant query complexity and
almost linear codeword length over the binary alphabet, and used them to obtain
significantly-improved constructions of Probabilistically Checkable Proofs. In
this work, we study RLDCs in the standard Hamming-error setting, and introduce
their variants in the insertion and deletion (Insdel) error setting. Insdel
LDCs were first studied by Ostrovsky and Paskin-Cherniavsky, and are further
motivated by recent advances in DNA random access bio-technologies, in which
the goal is to retrieve individual files from a DNA storage database. Our first
result is an exponential lower bound on the length of Hamming RLDCs making 2
queries, over the binary alphabet. This answers a question explicitly raised by
Gur and Lachish. Our result exhibits a "phase-transition"-type behavior on the
codeword length for constant-query Hamming RLDCs. We further define two
variants of RLDCs in the Insdel-error setting, a weak and a strong version. On
the one hand, we construct weak Insdel RLDCs with parameters matching
those of the Hamming variants. On the other hand, we prove exponential lower
bounds for strong Insdel RLDCs. These results demonstrate that, while these
variants are equivalent in the Hamming setting, they are significantly
different in the insdel setting. Our results also prove a strict separation
between Hamming RLDCs and Insdel RLDCs.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 00:40:32 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Block",
"Alex",
""
],
[
"Blocki",
"Jeremiah",
""
],
[
"Cheng",
"Kuan",
""
],
[
"Grigorescu",
"Elena",
""
],
[
"Li",
"Xin",
""
],
[
"Zheng",
"Yu",
""
],
[
"Zhu",
"Minshen",
""
]
] |
new_dataset
| 0.999384 |
2209.08709
|
Mao Ye
|
Mao Ye, Bo Liu, Stephen Wright, Peter Stone and Qiang Liu
|
BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach
| null | null | null | null |
cs.LG cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bilevel optimization (BO) is useful for solving a variety of important
machine learning problems including but not limited to hyperparameter
optimization, meta-learning, continual learning, and reinforcement learning.
Conventional BO methods need to differentiate through the low-level
optimization process with implicit differentiation, which requires expensive
calculations related to the Hessian matrix. There has been a recent quest for
first-order methods for BO, but the methods proposed to date tend to be
complicated and impractical for large-scale deep learning applications. In this
work, we propose a simple first-order BO algorithm that depends only on
first-order gradient information, requires no implicit differentiation, and is
practical and efficient for large-scale non-convex functions in deep learning.
We provide non-asymptotic convergence analysis of the proposed method to
stationary points for non-convex objectives and present empirical results that
show its superior practical performance.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 01:51:12 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Ye",
"Mao",
""
],
[
"Liu",
"Bo",
""
],
[
"Wright",
"Stephen",
""
],
[
"Stone",
"Peter",
""
],
[
"Liu",
"Qiang",
""
]
] |
new_dataset
| 0.98782 |
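BOME's selling point is avoiding implicit differentiation and Hessian computations. The sketch below shows a simplified first-order scheme in the same spirit, a value-function-style penalty with inner gradient steps on the lower-level problem; it is not the paper's exact update rule, and the constant penalty weight is an assumption:

```python
import torch

def f(x, y):  # upper-level objective
    return ((y - 1) ** 2).sum() + (x ** 2).sum()

def g(x, y):  # lower-level objective: y should track x
    return ((y - x) ** 2).sum()

x = torch.zeros(2, requires_grad=True)
y = torch.zeros(2, requires_grad=True)
opt_x = torch.optim.SGD([x], lr=0.1)
opt_y = torch.optim.SGD([y], lr=0.1)
lam = 1.0  # penalty weight (assumed constant here)

for step in range(300):
    for _ in range(5):                       # inner first-order steps on y
        opt_y.zero_grad()
        g(x.detach(), y).backward()
        opt_y.step()
    opt_x.zero_grad(); opt_y.zero_grad()
    loss = f(x, y) + lam * g(x, y)           # penalized single-level surrogate
    loss.backward()                          # first-order gradients only
    opt_x.step(); opt_y.step()

print(x.detach(), y.detach())  # both drift toward the bilevel solution x = y = 0.5
```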
2209.08712
|
Zilong Wang
|
Fei Guo, Zilong Wang, Guang Gong
|
Systematic Constructions of Bent-Negabent Functions, 2-Rotation
Symmetric Bent-Negabent Functions and Their Duals
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Bent-negabent functions have many important properties for their application
in cryptography since they have a flat absolute spectrum under both the
Walsh-Hadamard transform and the nega-Hadamard transform. In this paper, we present
four new systematic constructions of bent-negabent functions on $4k, 8k, 4k+2$
and $8k+2$ variables, respectively, by modifying the truth tables of two
classes of quadratic bent-negabent functions with simple form. The algebraic
normal forms and duals of these constructed functions are also determined. We
further identify necessary and sufficient conditions for those bent-negabent
functions which have the maximum algebraic degree. At last, by modifying the
truth tables of a class of quadratic 2-rotation symmetric bent-negabent
functions, we present a construction of 2-rotation symmetric bent-negabent
functions with any possible algebraic degrees. Considering that there are
probably no bent-negabent functions in the rotation symmetric class, it is the
first significant attempt to construct bent-negabent functions in the
generalized rotation symmetric class.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 02:03:53 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Guo",
"Fei",
""
],
[
"Wang",
"Zilong",
""
],
[
"Gong",
"Guang",
""
]
] |
new_dataset
| 0.986598 |
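The defining property in the record above is a flat absolute spectrum under both transforms. A brute-force spectrum checker for small truth tables (standard definitions; exponential in n, so toy sizes only) makes this concrete. For the classic quadratic bent function below, the Walsh spectrum comes out flat while the nega-Hadamard spectrum typically does not, which is what makes simultaneously bent and negabent constructions nontrivial:

```python
import numpy as np
from itertools import product

def spectra(f_vals, n):
    """Normalized Walsh-Hadamard and nega-Hadamard spectra of a truth table."""
    xs = list(product((0, 1), repeat=n))
    W, N = [], []
    for u in xs:
        w = nval = 0
        for f, x in zip(f_vals, xs):
            sign = (-1) ** (f ^ (sum(ui * xi for ui, xi in zip(u, x)) % 2))
            w += sign
            nval += sign * 1j ** sum(x)      # extra i^wt(x) factor for nega
        W.append(abs(w) / 2 ** (n / 2))
        N.append(abs(nval) / 2 ** (n / 2))
    return np.array(W), np.array(N)

n = 4
f_vals = [(x[0] & x[1]) ^ (x[2] & x[3]) for x in product((0, 1), repeat=n)]
W, N = spectra(f_vals, n)
print("bent:", np.allclose(W, 1), " negabent:", np.allclose(N, 1))
```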
2209.08716
|
Jiang Bian
|
Nicholas Gray, Megan Moraes, Jiang Bian, Allen Tian, Alex Wang, Haoyi
Xiong, Zhishan Guo
|
GLARE: A Dataset for Traffic Sign Detection in Sun Glare
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time machine learning detection algorithms are often found within
autonomous vehicle technology and depend on quality datasets. It is essential
that these algorithms work correctly in everyday conditions as well as under
strong sun glare. Reports indicate glare is one of the two most prominent
environment-related reasons for crashes. However, existing datasets, such as
LISA and the German Traffic Sign Recognition Benchmark, do not reflect the
existence of sun glare at all. This paper presents the GLARE traffic sign
dataset: a collection of images with U.S.-based traffic signs under heavy visual
interference from sunlight. GLARE contains 2,157 images of traffic signs with sun
glare, pulled from 33 videos of dashcam footage of roads in the United States.
It provides an essential enrichment to the widely used LISA Traffic Sign
dataset. Our experimental study shows that although several state-of-the-art
baseline methods demonstrate superior performance when trained and tested
against traffic sign datasets without sun glare, they greatly suffer when
tested against GLARE (e.g., ranging from 9% to 21% mean mAP, which is
significantly lower than the performances on LISA dataset). We also notice that
current architectures have better detection accuracy (e.g., on average 42% mean
mAP gain for mainstream algorithms) when trained on images of traffic signs in
sun glare.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 02:25:41 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Gray",
"Nicholas",
""
],
[
"Moraes",
"Megan",
""
],
[
"Bian",
"Jiang",
""
],
[
"Tian",
"Allen",
""
],
[
"Wang",
"Alex",
""
],
[
"Xiong",
"Haoyi",
""
],
[
"Guo",
"Zhishan",
""
]
] |
new_dataset
| 0.999878 |
2209.08725
|
Ka-Hei Hui
|
Ka-Hei Hui, Ruihui Li, Jingyu Hu, Chi-Wing Fu
|
Neural Wavelet-domain Diffusion for 3D Shape Generation
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a new approach for 3D shape generation, enabling direct
generative modeling on a continuous implicit representation in wavelet domain.
Specifically, we propose a compact wavelet representation with a pair of coarse
and detail coefficient volumes to implicitly represent 3D shapes via truncated
signed distance functions and multi-scale biorthogonal wavelets, and formulate
a pair of neural networks: a generator based on the diffusion model to produce
diverse shapes in the form of coarse coefficient volumes; and a detail
predictor to further produce compatible detail coefficient volumes for
enriching the generated shapes with fine structures and details. Both
quantitative and qualitative experimental results manifest the superiority of
our approach in generating diverse and high-quality shapes with complex
topology and structures, clean surfaces, and fine details, exceeding the 3D
generation capabilities of the state-of-the-art models.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 02:51:48 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Hui",
"Ka-Hei",
""
],
[
"Li",
"Ruihui",
""
],
[
"Hu",
"Jingyu",
""
],
[
"Fu",
"Chi-Wing",
""
]
] |
new_dataset
| 0.985317 |
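The compact representation in the record above is a pair of coarse and detail wavelet coefficient volumes of a truncated signed distance function. A small sketch with PyWavelets shows such a decomposition on a toy volume; the wavelet family, level, and volume size are assumptions:

```python
import numpy as np
import pywt

tsdf = np.random.randn(32, 32, 32).astype(np.float32)   # stand-in for a TSDF volume
coeffs = pywt.wavedecn(tsdf, wavelet="bior2.2", level=2)
coarse, details = coeffs[0], coeffs[1:]
print("coarse coefficient volume:", coarse.shape)
print("detail subbands per level:", sorted(details[0].keys()))  # 'aad', ..., 'ddd'
recon = pywt.waverecn(coeffs, wavelet="bior2.2")
print("max reconstruction error:",
      float(np.abs(recon[:32, :32, :32] - tsdf).max()))         # near machine precision
```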
2209.08750
|
Vishwa Shah
|
Vishwa Shah, Aditya Sharma, Gautam Shroff, Lovekesh Vig, Tirtharaj
Dash, Ashwin Srinivasan
|
Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces
|
13 pages, 4 figures, Accepted at 16th International Workshop on
Neural-Symbolic Learning and Reasoning as part of the 2nd International Joint
Conference on Learning & Reasoning (IJCLR 2022)
| null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Analogical Reasoning problems challenge both connectionist and symbolic AI
systems as these entail a combination of background knowledge, reasoning and
pattern recognition. While symbolic systems ingest explicit domain knowledge
and perform deductive reasoning, they are sensitive to noise and require inputs
to be mapped to preset symbolic features. Connectionist systems, on the other
hand, can directly ingest rich input spaces such as images, text or speech and
recognize patterns even with noisy inputs. However, connectionist models
struggle to include explicit domain knowledge for deductive reasoning. In this
paper, we propose a framework that combines the pattern recognition abilities
of neural networks with symbolic reasoning and background knowledge for solving
a class of Analogical Reasoning problems where the set of attributes and
possible relations across them are known apriori. We take inspiration from the
'neural algorithmic reasoning' approach [DeepMind 2020] and use
problem-specific background knowledge by (i) learning a distributed
representation based on a symbolic model of the problem (ii) training
neural-network transformations reflective of the relations involved in the
problem and finally (iii) training a neural network encoder from images to the
distributed representation in (i). These three elements enable us to perform
search-based reasoning using neural networks as elementary functions
manipulating distributed representations. We test this on visual analogy
problems in RAVENs Progressive Matrices, and achieve accuracy competitive with
human performance and, in certain cases, superior to initial end-to-end
neural-network based approaches. While recent neural models trained at scale
yield SOTA, our novel neuro-symbolic reasoning approach is a promising
direction for this problem, and is arguably more general, especially for
problems where domain knowledge is available.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 04:03:20 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Shah",
"Vishwa",
""
],
[
"Sharma",
"Aditya",
""
],
[
"Shroff",
"Gautam",
""
],
[
"Vig",
"Lovekesh",
""
],
[
"Dash",
"Tirtharaj",
""
],
[
"Srinivasan",
"Ashwin",
""
]
] |
new_dataset
| 0.999473 |
2209.08759
|
Hongliang Fei
|
Tan Yu and Jie Liu and Yi Yang and Yi Li and Hongliang Fei and Ping Li
|
Tree-based Text-Vision BERT for Video Search in Baidu Video Advertising
|
This revision is based on a manuscript submitted in October 2020, to
ICDE 2021. We thank the Program Committee for their valuable comments
| null | null | null |
cs.CV cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The advancement of communication technology and the popularity of smart
phones have fostered the booming of video ads. Baidu, as one of the leading
search engine companies in the world, receives billions of search queries per
day. How to pair video ads with user searches is the core task of Baidu
video advertising. Due to the modality gap, query-to-video retrieval is
much more challenging than traditional query-to-document retrieval and
image-to-image search. Traditionally, query-to-video retrieval is tackled
by query-to-title retrieval, which is not reliable when the quality of
titles is not high. With the rapid progress achieved in computer vision and
natural language processing in recent years, content-based search methods
have become promising for query-to-video retrieval. Benefiting from pretraining
on large-scale datasets, some visionBERT methods based on cross-modal attention
have achieved excellent performance in many vision-language tasks, not only in
academia but also in industry. Nevertheless, the expensive computation cost of
cross-modal attention makes it impractical for large-scale search in industrial
applications. In this work, we present a tree-based combo-attention network
(TCAN) which has been recently launched on Baidu's dynamic video advertising
platform. It provides a practical solution to deploy heavy cross-modal
attention for large-scale query-to-video search. After launching the tree-based
combo-attention network, the click-through rate improved by 2.29\% and the
conversion rate improved by 2.63\%.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 04:49:51 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Yu",
"Tan",
""
],
[
"Liu",
"Jie",
""
],
[
"Yang",
"Yi",
""
],
[
"Li",
"Yi",
""
],
[
"Fei",
"Hongliang",
""
],
[
"Li",
"Ping",
""
]
] |
new_dataset
| 0.981784 |
2209.08810
|
Xiaofei Zhang
|
Letian Zhang, Jinping Wang, Lu Jie, Nanjie Chen, Xiaojun Tan, Zhifei
Duan
|
LMBAO: A Landmark Map for Bundle Adjustment Odometry in LiDAR SLAM
|
9 pages, 3 tables, 6 figures
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
LiDAR odometry is one of the essential parts of LiDAR simultaneous
localization and mapping (SLAM). However, existing LiDAR odometry tends to
simply match each new scan iteratively against previous fixed-pose scans,
gradually accumulating errors. Furthermore, as an effective joint optimization
mechanism,
bundle adjustment (BA) cannot be directly introduced into real-time odometry
due to the intensive computation of large-scale global landmarks. Therefore,
this letter designs a new strategy named a landmark map for bundle adjustment
odometry (LMBAO) in LiDAR SLAM to solve these problems. First, BA-based
odometry is further developed with an active landmark maintenance strategy for
a more accurate local registration and avoiding cumulative errors.
Specifically, this paper keeps entire stable landmarks on the map instead of
just their feature points in the sliding window and deletes the landmarks
according to their active grade. Next, the sliding window length is reduced,
and marginalization is performed to retain the scans outside the window but
corresponding to active landmarks on the map, greatly simplifying the
computation and improving the real-time properties. In addition, experiments on
three challenging datasets show that our algorithm achieves real-time
performance in outdoor driving and outperforms state-of-the-art LiDAR SLAM
algorithms, including Lego-LOAM and VLOM.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 07:48:28 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Zhang",
"Letian",
""
],
[
"Wang",
"Jinping",
""
],
[
"Jie",
"Lu",
""
],
[
"Chen",
"Nanjie",
""
],
[
"Tan",
"Xiaojun",
""
],
[
"Duan",
"Zhifei",
""
]
] |
new_dataset
| 0.996604 |
2209.08814
|
Nithin Gopalakrishnan Nair
|
Nithin Gopalakrishnan Nair and Vishal M. Patel
|
T2V-DDPM: Thermal to Visible Face Translation using Denoising Diffusion
Probabilistic Models
|
Accepted at The IEEE conference series on Automatic Face and Gesture
Recognition 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Modern-day surveillance systems perform person recognition using deep
learning-based face verification networks. Most state-of-the-art facial
verification systems are trained using visible spectrum images. But, acquiring
images in the visible spectrum is impractical in scenarios of low-light and
nighttime conditions, and often images are captured in an alternate domain such
as the thermal infrared domain. Facial verification in thermal images is often
performed after retrieving the corresponding visible domain images. This is a
well-established problem often known as the Thermal-to-Visible (T2V) image
translation. In this paper, we propose a Denoising Diffusion Probabilistic
Model (DDPM) based solution for T2V translation specifically for facial images.
During training, the model learns the conditional distribution of visible
facial images given their corresponding thermal image through the diffusion
process. During inference, the visible domain image is obtained by starting
from Gaussian noise and performing denoising repeatedly. The existing inference
process for DDPMs is stochastic and time-consuming. Hence, we propose a novel
inference strategy for speeding up the inference time of DDPMs, specifically
for the problem of T2V image translation. We achieve the state-of-the-art
results on multiple datasets. The code and pretrained models are publicly
available at http://github.com/Nithin-GK/T2V-DDPM
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 07:59:32 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Nair",
"Nithin Gopalakrishnan",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
new_dataset
| 0.9882 |
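T2V-DDPM, per the record above, conditions a denoising diffusion model on the thermal image and recovers the visible image by iterative denoising from Gaussian noise. Below is a generic conditional DDPM ancestral-sampling sketch of that inference loop (standard DDPM update with variance beta_t; it does not implement the paper's accelerated inference strategy, and the noise predictor is a stand-in):

```python
import torch

@torch.no_grad()
def conditional_ddpm_sample(eps_model, thermal, betas):
    """Start from Gaussian noise and denoise step by step, conditioned on
    the thermal image. Illustrative ancestral sampling only."""
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(thermal)                      # x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps = eps_model(x, thermal, t)                 # predicted noise
        mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x                                           # visible-domain estimate

# toy usage with a dummy noise predictor standing in for the trained network
betas = torch.linspace(1e-4, 0.02, 50)
dummy_model = lambda x, cond, t: torch.zeros_like(x)
visible = conditional_ddpm_sample(dummy_model, torch.randn(1, 3, 32, 32), betas)
print(visible.shape)  # torch.Size([1, 3, 32, 32])
```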
2209.08824
|
Patrick St\"ockle
|
Patrick St\"ockle, Ionut Pruteanu, Bernd Grobauer, Alexander
Pretschner
|
Hardening with Scapolite: a DevOps-based Approach for Improved Authoring
and Testing of Security-Configuration Guides in Large-Scale Organizations
|
We submitted this article as a full-length paper. Unfortunately, the
CODASPY Program Committee decided that our paper can only be accepted in the
tool track. Thus, the published version only consists of 6 pages
|
Proceedings of the Twelveth ACM Conference on Data and Application
Security and Privacy (CODASPY '22), April 24--27, 2022, Baltimore, MD, USA
|
10.1145/3508398.3511525
| null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security Hardening is the process of configuring IT systems to ensure the
security of the systems' components and data they process or store. In many
cases, so-called security-configuration guides are used as a basis for security
hardening. These guides describe secure configuration settings for components
such as operating systems and standard applications. Rigorous testing of
security-configuration guides and automated mechanisms for their implementation
and validation are necessary since erroneous implementations or checks of
hardening guides may severely impact systems' security and functionality. At
Siemens, centrally maintained security-configuration guides carry
machine-readable information specifying both the implementation and validation
of each required configuration step. The guides are maintained within git
repositories; automated pipelines generate the artifacts for implementation and
checking, e.g., PowerShell scripts for Windows, and carry out testing of these
artifacts on AWS images. This paper describes our experiences with our
DevOps-inspired approach for authoring, maintaining, and testing
security-configuration guides. We want to share these experiences to help other
organizations with their security hardening and, thus, increase their systems'
security.
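To make the generate-then-check idea concrete, here is a toy Python sketch that turns a made-up machine-readable rule into a PowerShell check snippet; the rule format, the registry value, and the generator are invented for illustration and are not Scapolite's actual schema or pipeline.

```python
# Toy generator: machine-readable rule -> PowerShell check (illustration only).
rules = [
    {"id": "R1",
     "registry_key": r"HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse",
     "expected": 1},
]

def to_powershell_check(rule):
    # Emit a PowerShell snippet comparing the actual registry value against
    # the expected one from the security-configuration guide.
    key, name = rule["registry_key"].rsplit("\\", 1)
    return (
        f"$actual = (Get-ItemProperty -Path '{key}').'{name}'\n"
        f"if ($actual -ne {rule['expected']}) "
        f"{{ Write-Output 'FAIL {rule['id']}' }} "
        f"else {{ Write-Output 'PASS {rule['id']}' }}"
    )

for rule in rules:
    print(to_powershell_check(rule))
```

In a pipeline like the one described, such generated artifacts would then be executed against test images (e.g., on AWS) to validate the guide before release.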
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 08:14:42 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Stöckle",
"Patrick",
""
],
[
"Pruteanu",
"Ionut",
""
],
[
"Grobauer",
"Bernd",
""
],
[
"Pretschner",
"Alexander",
""
]
] |
new_dataset
| 0.986917 |
2209.08839
|
Madhu Raka
|
Swati Bhardwaj, Madhu Raka
|
Skew Cyclic Codes Over A Finite Ring: A Note on a Result of Mohammadi
et al
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this short note, we correct an error made by Mohammadi et al. in their
paper entitled "On Skew Cyclic Codes Over A Finite Ring" (Iranian J. Math.
Sci. Inform., Vol. 14 (1) (2019), 135-145).
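For background (this is the standard textbook definition, not the specific statement corrected in the note): a skew cyclic code is a cyclic-type code twisted by a ring automorphism, as sketched below in LaTeX.

```latex
% Standard definition of a skew cyclic code over a ring R with automorphism theta.
Let $\theta$ be an automorphism of a finite ring $R$. The skew polynomial ring
$R[x;\theta]$ has ordinary addition, with multiplication twisted by the rule
\[
  x \cdot a = \theta(a)\, x \qquad \text{for all } a \in R .
\]
A linear code $C \subseteq R^n$ is \emph{skew cyclic} ($\theta$-cyclic) if
\[
  (c_0, c_1, \ldots, c_{n-1}) \in C \implies
  (\theta(c_{n-1}), \theta(c_0), \ldots, \theta(c_{n-2})) \in C .
\]
```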
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 08:34:58 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Bhardwaj",
"Swati",
""
],
[
"Raka",
"Madhu",
""
]
] |
new_dataset
| 0.999691 |
2209.08887
|
Haofeng Li
|
Junjia Huang, Haofeng Li, Guanbin Li, Xiang Wan
|
Attentive Symmetric Autoencoder for Brain MRI Segmentation
|
MICCAI 2022, code:https://github.com/lhaof/ASA
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning methods based on image patch reconstruction have
witnessed great success in training auto-encoders, whose pre-trained weights
can be transferred to fine-tune other downstream tasks of image understanding.
However, existing methods seldom study the varying importance of reconstructed
patches and the symmetry of anatomical structures when applied to 3D
medical images. In this paper, we propose a novel Attentive Symmetric
Auto-encoder (ASA) based on Vision Transformer (ViT) for 3D brain MRI
segmentation tasks. We conjecture that forcing the auto-encoder to recover
informative image regions yields more discriminative representations than
recovering smooth image patches. We then adopt a gradient-based metric to
estimate the importance of each image patch. In the pre-training stage, the
proposed auto-encoder pays more attention to reconstructing the informative
patches according to the gradient metrics. Moreover, we resort to the prior of
brain structures and develop a Symmetric Position Encoding (SPE) method to
better exploit the correlations between long-range but spatially symmetric
regions to obtain effective features. Experimental results show that our
proposed attentive symmetric auto-encoder outperforms the state-of-the-art
self-supervised learning methods and medical image segmentation models on three
brain MRI segmentation benchmarks.
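A minimal PyTorch sketch of the gradient-based patch-importance idea, assuming non-overlapping 3D patches and a simple finite-difference gradient; the paper's actual metric and loss weighting may differ.

```python
# Sketch: per-patch importance from gradient magnitude, used to weight the
# reconstruction loss (assumed formulation, not the paper's exact one).
import torch
import torch.nn.functional as F

def patch_importance(vol, patch=16):
    """Per-patch importance ~ mean gradient magnitude (vol: B x 1 x D x H x W)."""
    grad = torch.zeros_like(vol)
    grad[..., :-1, :, :] += (vol[..., 1:, :, :] - vol[..., :-1, :, :]).abs()
    grad[..., :, :-1, :] += (vol[..., :, 1:, :] - vol[..., :, :-1, :]).abs()
    grad[..., :, :, :-1] += (vol[..., :, :, 1:] - vol[..., :, :, :-1]).abs()
    # Average the gradient magnitude within each non-overlapping 3D patch.
    return F.avg_pool3d(grad, kernel_size=patch, stride=patch)

def weighted_recon_loss(pred, target, weights, patch=16):
    # Up-weight the reconstruction error on informative (high-gradient) patches.
    per_patch_err = F.avg_pool3d((pred - target) ** 2,
                                 kernel_size=patch, stride=patch)
    return (weights * per_patch_err).mean()
```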
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 09:43:19 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Huang",
"Junjia",
""
],
[
"Li",
"Haofeng",
""
],
[
"Li",
"Guanbin",
""
],
[
"Wan",
"Xiang",
""
]
] |
new_dataset
| 0.975643 |
2209.08924
|
Haoxian Zhang
|
Haoxian Zhang, Yonggen Ling
|
HVC-Net: Unifying Homography, Visibility, and Confidence Learning for
Planar Object Tracking
|
Accepted to ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robust and accurate planar tracking over a whole video sequence is vitally
important for many vision applications. The key to planar object tracking is to
find object correspondences, modeled by homography, between the reference image
and the tracked image. Existing methods tend to obtain incorrect
correspondences under appearance variations, camera-object relative motions,
and occlusions. To alleviate this problem, we present a unified convolutional
neural network (CNN) model that jointly considers homography, visibility, and
confidence. First, we introduce correlation blocks that explicitly account for
the local appearance changes and camera-object relative motions as the base of
our model. Second, we jointly learn the homography and visibility that links
camera-object relative motions with occlusions. Third, we propose a confidence
module that actively monitors the estimation quality from the pixel correlation
distributions obtained in correlation blocks. All these modules are plugged
into a Lucas-Kanade (LK) tracking pipeline to obtain both accurate and robust
planar object tracking. Our approach outperforms the state-of-the-art methods
on public POT and TMT datasets. Its superior performance is also verified on a
real-world application, synthesizing high-quality in-video advertisements.
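The control flow of such an LK-style pipeline with a confidence gate might look like the following Python sketch; `predict_update` and `estimate_confidence` stand in for the paper's learned modules and are not its actual interfaces.

```python
# Conceptual LK-style tracking loop with a confidence gate (sketch only).
def track_frame(H_prev, predict_update, estimate_confidence, frame, template,
                steps=5, conf_thresh=0.5):
    """One tracking step: refine a 3x3 homography, then gate on confidence."""
    H = H_prev.copy()
    for _ in range(steps):
        # A learned module predicts an incremental homography and visibility.
        dH, visibility = predict_update(template, frame, H)
        H = H @ dH  # compose the incremental update (LK-style refinement)
    conf = estimate_confidence(template, frame, H)
    if conf < conf_thresh:
        return H_prev, False   # keep the previous estimate when unreliable
    return H / H[2, 2], True   # normalize so the homography scale is fixed
```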
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 11:11:56 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Zhang",
"Haoxian",
""
],
[
"Ling",
"Yonggen",
""
]
] |
new_dataset
| 0.996881 |
2209.08978
|
Chen Lyu
|
Zheng Ma, Yuexiu Gao, Lei Lyu, Chen Lyu
|
MMF3: Neural Code Summarization Based on Multi-Modal Fine-Grained
Feature Fusion
|
12 pages, 5 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Code summarization automatically generates natural language
descriptions for input code. Comprehensive code representation is critical
to the code summarization task. However, most existing approaches use
coarse-grained fusion methods to integrate multi-modal features. They
generally represent different modalities of a piece of code, such as an
Abstract Syntax Tree (AST) and a token sequence, as two embeddings and then
fuse them at the AST/code level. Such a coarse
integration makes it difficult to learn the correlations between fine-grained
code elements across modalities effectively. Aims: This study intends to
improve the model's prediction performance for high-quality code summarization
by accurately aligning and fully fusing semantic and syntactic structure
information of source code at node/token levels. Method: This paper proposes a
Multi-Modal Fine-grained Feature Fusion approach (MMF3) for neural code
summarization. We introduce a novel fine-grained fusion method, which allows
fine-grained fusion of multiple code modalities at the token and node levels.
Specifically, we use this method to fuse information from both token and AST
modalities and apply the fused features to code summarization. Results: We
conduct experiments on one Java dataset and one Python dataset, and evaluate generated
summaries using four metrics. The results show that: 1) the performance of our
model outperforms the current state-of-the-art models, and 2) the ablation
experiments show that our proposed fine-grained fusion method can effectively
improve the accuracy of generated summaries. Conclusion: MMF3 can mine the
relationships between cross-modal elements and perform accurate fine-grained
element-level alignment fusion accordingly. As a result, more clues can be
provided to improve the accuracy of the generated code summaries.
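As a rough sketch of what token/node-level fusion can look like, here is a cross-attention module in PyTorch; the dimensions, the gating rule, and the module layout are illustrative assumptions rather than the MMF3 architecture.

```python
# Illustrative cross-attention fusion of token and AST-node features.
import torch
import torch.nn as nn

class FineGrainedFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # Tokens attend over AST nodes, aligning elements across modalities.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, token_feats, node_feats):
        # token_feats: (B, T, dim); node_feats: (B, N, dim)
        aligned, _ = self.cross_attn(token_feats, node_feats, node_feats)
        # Gated element-level fusion of token features with aligned node features.
        g = torch.sigmoid(self.gate(torch.cat([token_feats, aligned], dim=-1)))
        return g * token_feats + (1 - g) * aligned
```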
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 12:51:48 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Ma",
"Zheng",
""
],
[
"Gao",
"Yuexiu",
""
],
[
"Lyu",
"Lei",
""
],
[
"Lyu",
"Chen",
""
]
] |
new_dataset
| 0.990653 |
2209.09019
|
Dongxu Li
|
Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, Steven
C.H. Hoi
|
LAVIS: A Library for Language-Vision Intelligence
|
Preprint of LAVIS technical report
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce LAVIS, an open-source deep learning library for LAnguage-VISion
research and applications. LAVIS aims to serve as a one-stop comprehensive
library that makes recent advancements in the language-vision field accessible
to researchers and practitioners, and that fosters future research and
development. It features a unified interface to easily access state-of-the-art
image-language, video-language models and common datasets. LAVIS supports
training, evaluation and benchmarking on a rich variety of tasks, including
multimodal classification, retrieval, captioning, visual question answering,
dialogue and pre-training. In the meantime, the library is also highly
extensible and configurable, facilitating future development and customization.
In this technical report, we describe design principles, key components and
functionalities of the library, and also present benchmarking results across
common language-vision tasks. The library is available at:
https://github.com/salesforce/LAVIS.
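A short usage sketch, following the examples in the LAVIS README at the time of writing; model names and the exact API may change across releases, so treat the identifiers below as illustrative.

```python
# Image captioning with LAVIS, per its README (identifiers may change).
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
raw_image = Image.open("example.jpg").convert("RGB")

# Load a captioning model together with its matching image preprocessor.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
print(model.generate({"image": image}))  # prints a generated caption
```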
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 18:04:10 GMT"
}
] | 2022-09-20T00:00:00 |
[
[
"Li",
"Dongxu",
""
],
[
"Li",
"Junnan",
""
],
[
"Le",
"Hung",
""
],
[
"Wang",
"Guangsen",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Hoi",
"Steven C. H.",
""
]
] |
new_dataset
| 0.999909 |