id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
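Each row below is one arXiv metadata record in the column order given above. As a minimal loading sketch (assuming the records have been exported to a JSON Lines file; the file name `arxiv_new_dataset.jsonl` and the use of pandas are assumptions, not part of this dump), the table can be read and filtered on the `prediction` and `probability` columns:

```python
# Minimal loading sketch; the file name "arxiv_new_dataset.jsonl" is hypothetical.
import pandas as pd

COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "abstract",
    "versions", "update_date", "authors_parsed", "prediction", "probability",
]

# One JSON object per line, matching the schema in the header row above.
df = pd.read_json("arxiv_new_dataset.jsonl", lines=True)[COLUMNS]

# Every record in this dump carries the single class "new_dataset";
# keep only the most confident predictions.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]
print(confident[["id", "title", "probability"]].head())
```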
2303.05309
|
Xize Cheng
|
Xize Cheng, Linjun Li, Tao Jin, Rongjie Huang, Wang Lin, Zehan Wang,
Huangdai Liu, Ye Wang, Aoxiong Yin, Zhou Zhao
|
MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup
for Visual Speech Translation and Recognition
|
https://github.com/Exgc/AVMuST-TED
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Multimedia communications facilitate global interaction among people.
However, despite researchers exploring cross-lingual translation techniques
such as machine translation and audio speech translation to overcome language
barriers, there is still a shortage of cross-lingual studies on visual speech.
This lack of research is mainly due to the absence of datasets containing
visual speech and translated text pairs. In this paper, we present
\textbf{AVMuST-TED}, the first dataset for \textbf{A}udio-\textbf{V}isual
\textbf{Mu}ltilingual \textbf{S}peech \textbf{T}ranslation, derived from
\textbf{TED} talks. Nonetheless, visual speech is not as distinguishable as
audio speech, making it difficult to develop a mapping from source speech
phonemes to the target language text. To address this issue, we propose
MixSpeech, a cross-modality self-learning framework that utilizes audio speech
to regularize the training of visual speech tasks. To further minimize the
cross-modality gap and its impact on knowledge transfer, we suggest adopting
mixed speech, which is created by interpolating audio and visual streams, along
with a curriculum learning strategy to adjust the mixing ratio as needed.
MixSpeech enhances speech translation in noisy environments, improving BLEU
scores for four languages on AVMuST-TED by +1.4 to +4.2. Moreover, it achieves
state-of-the-art performance in lip reading on CMLR (11.1\%), LRS2 (25.5\%),
and LRS3 (28.0\%).
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 14:58:29 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Cheng",
"Xize",
""
],
[
"Li",
"Linjun",
""
],
[
"Jin",
"Tao",
""
],
[
"Huang",
"Rongjie",
""
],
[
"Lin",
"Wang",
""
],
[
"Wang",
"Zehan",
""
],
[
"Liu",
"Huangdai",
""
],
[
"Wang",
"Ye",
""
],
[
"Yin",
"Aoxiong",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.999717 |
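The `versions` and `authors_parsed` fields are nested lists rather than flat strings, as the record above shows. Below is a small sketch of unpacking them, using values abbreviated by hand from the 2303.05309 entry (the `record` dict is illustrative, not a complete row):

```python
# Illustrative record, abbreviated from the 2303.05309 entry above.
record = {
    "id": "2303.05309",
    "versions": [{"version": "v1", "created": "Thu, 9 Mar 2023 14:58:29 GMT"}],
    "authors_parsed": [
        ["Cheng", "Xize", ""],
        ["Li", "Linjun", ""],
        ["Zhao", "Zhou", ""],
    ],
}

# authors_parsed holds [last name, first name, suffix] triples; rebuild display names.
names = [
    " ".join(part for part in (first, last, suffix) if part)
    for last, first, suffix in record["authors_parsed"]
]
print(names)  # ['Xize Cheng', 'Linjun Li', 'Zhou Zhao']

# The latest version entry is the last element of the versions list.
latest = record["versions"][-1]
print(record["id"], latest["version"], latest["created"])
```

The same pattern applies to every record below, since `authors_parsed` consistently stores [last name, first name, suffix] triples.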
2303.05321
|
Tiago Roxo
|
Tiago Roxo, Joana C. Costa, Pedro R. M. In\'acio, Hugo Proen\c{c}a
|
WASD: A Wilder Active Speaker Detection Dataset
| null | null | null | null |
cs.CV cs.SD eess.AS eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Current Active Speaker Detection (ASD) models achieve great results on
AVA-ActiveSpeaker (AVA), using only sound and facial features. Although this
approach is applicable in movie setups (AVA), it is not suited for less
constrained conditions. To demonstrate this limitation, we propose a Wilder
Active Speaker Detection (WASD) dataset, with increased difficulty by targeting
the two key components of current ASD: audio and face. Grouped into 5
categories, ranging from optimal conditions to surveillance settings, WASD
contains incremental challenges for ASD with tactical impairment of audio and
face data. We select state-of-the-art models and assess their performance in
two groups of WASD: Easy (cooperative settings) and Hard (audio and/or face are
specifically degraded). The results show that: 1) AVA trained models maintain a
state-of-the-art performance in WASD Easy group, while underperforming in the
Hard one, showing the 2) similarity between AVA and Easy data; and 3) training
in WASD does not improve models performance to AVA levels, particularly for
audio impairment and surveillance settings. This shows that AVA does not
prepare models for wild ASD and current approaches are subpar to deal with such
conditions. The proposed dataset also contains body data annotations to provide
a new source for ASD, and is available at https://github.com/Tiago-Roxo/WASD.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 15:13:22 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Roxo",
"Tiago",
""
],
[
"Costa",
"Joana C.",
""
],
[
"Inácio",
"Pedro R. M.",
""
],
[
"Proença",
"Hugo",
""
]
] |
new_dataset
| 0.999575 |
2303.05345
|
Alberto Maria Mongardini
|
Massimo La Morgia, Alessandro Mei, Alberto Maria Mongardini
|
TGDataset: a Collection of Over One Hundred Thousand Telegram Channels
|
10 pages, 4 figures
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Telegram is one of the most popular instant messaging apps in today's digital
age. In addition to providing a private messaging service, Telegram, with its
channels, represents a valid medium for rapidly broadcasting content to a large
audience (COVID-19 announcements), but, unfortunately, also for disseminating
radical ideologies and coordinating attacks (Capitol Hill riot). This paper
presents the TGDataset, a new dataset that includes 120,979 Telegram channels
and over 400 million messages, making it the largest collection of Telegram
channels to the best of our knowledge. After a brief introduction to the data
collection process, we analyze the languages spoken within our dataset and the
topics covered by English channels. Finally, we discuss some use cases in which
our dataset can be extremely useful for better understanding the Telegram
ecosystem, as well as for studying the diffusion of questionable news. In
addition to the raw dataset, we release the scripts we used to analyze the
dataset and
the list of channels belonging to the network of a new conspiracy theory called
Sabmyk.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 15:42:38 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"La Morgia",
"Massimo",
""
],
[
"Mei",
"Alessandro",
""
],
[
"Mongardini",
"Alberto Maria",
""
]
] |
new_dataset
| 0.999894 |
2303.05378
|
Sujan Kumar Gonugondla
|
Xiaokai Wei, Sujan Gonugondla, Wasi Ahmad, Shiqi Wang, Baishakhi Ray,
Haifeng Qian, Xiaopeng Li, Varun Kumar, Zijian Wang, Yuchen Tian, Qing Sun,
Ben Athiwaratkun, Mingyue Shang, Murali Krishna Ramanathan, Parminder Bhatia,
Bing Xiang
|
Greener yet Powerful: Taming Large Code Generation Models with
Quantization
|
10 pages, 7 figures, 10 tables
| null | null | null |
cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
ML-powered code generation aims to assist developers in writing code more
productively by intelligently generating code blocks based on natural
language prompts. Recently, large pretrained deep learning models have
substantially pushed the boundary of code generation and achieved impressive
performance. Despite their great power, the huge number of model parameters
poses a significant challenge to adopting them in a regular software development
environment, where a developer might use a standard laptop or mid-size server
to develop her code. Such large models incur significant resource usage (in
terms of memory, latency, and dollars) as well as a large carbon footprint.
Model compression is a promising approach to address these challenges.
Several techniques have been proposed to compress large pretrained models
typically used for vision or textual data. Out of the many available compression
techniques, we identified quantization as the most applicable to the code
generation task, as it does not require significant retraining cost. As
quantization represents model parameters with lower-bit integers (e.g., int8),
both the model size and runtime latency benefit from such an integer
representation. We extensively study the impact of quantized models on code
generation tasks across different dimensions: (i) resource usage and carbon
footprint, (ii) accuracy, and (iii) robustness. To this end, through systematic
experiments we find a quantization recipe that allows even a $6$B model to run
on a regular laptop without significant accuracy or robustness degradation. We
further find that the recipe is readily applicable to the code summarization
task as well.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 16:25:51 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Wei",
"Xiaokai",
""
],
[
"Gonugondla",
"Sujan",
""
],
[
"Ahmad",
"Wasi",
""
],
[
"Wang",
"Shiqi",
""
],
[
"Ray",
"Baishakhi",
""
],
[
"Qian",
"Haifeng",
""
],
[
"Li",
"Xiaopeng",
""
],
[
"Kumar",
"Varun",
""
],
[
"Wang",
"Zijian",
""
],
[
"Tian",
"Yuchen",
""
],
[
"Sun",
"Qing",
""
],
[
"Athiwaratkun",
"Ben",
""
],
[
"Shang",
"Mingyue",
""
],
[
"Ramanathan",
"Murali Krishna",
""
],
[
"Bhatia",
"Parminder",
""
],
[
"Xiang",
"Bing",
""
]
] |
new_dataset
| 0.979438 |
2303.05404
|
Matou\v{s} Vrba
|
Matou\v{s} Vrba, Viktor Walter, Martin Saska
|
On Onboard LiDAR-based Flying Object Detection
|
12 pages, 8 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new robust and accurate approach for the detection and localization of
flying objects with the purpose of highly dynamic aerial interception and agile
multi-robot interaction is presented in this paper. The approach is proposed
for use onboard an autonomous aerial vehicle equipped with a 3D LiDAR sensor
providing input data for the algorithm. It relies on a novel 3D occupancy voxel
mapping method for the target detection and a cluster-based multiple hypothesis
tracker to compensate uncertainty of the sensory data. When compared to
state-of-the-art methods of onboard detection of other flying objects, the
presented approach provides superior localization accuracy and robustness to
different environments and appearance changes of the target, as well as a
greater detection range. Furthermore, in combination with the proposed
multi-target tracker, sporadic false positives are suppressed, state estimation
of the target is provided and the detection latency is negligible. This makes
the detector suitable for tasks of agile multi-robot interaction, such as
autonomous aerial interception or formation control where precise, robust, and
fast relative localization of other robots is crucial. We demonstrate the
practical usability and performance of the system in simulated and real-world
experiments.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 16:44:34 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Vrba",
"Matouš",
""
],
[
"Walter",
"Viktor",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.992641 |
2303.05416
|
Kazi Injamamul Haque
|
Kazi Injamamul Haque and Zerrin Yumak
|
FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation
Synthesis Using Self-Supervised Speech Representation Learning
|
13 pages, 4 figures, code included
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents FaceXHuBERT, a text-less speech-driven 3D facial
animation generation method that captures personalized and subtle cues
in speech (e.g., identity, emotion, and hesitation). It is also very robust to
background noise and can handle audio recorded in a variety of situations (e.g.
multiple people speaking). Recent approaches employ end-to-end deep learning
taking into account both audio and text as input to generate facial animation
for the whole face. However, scarcity of publicly available expressive audio-3D
facial animation datasets poses a major bottleneck. The resulting animations
still have issues regarding accurate lip-synching, expressivity,
person-specific information, and generalizability. We effectively employ a
self-supervised pretrained HuBERT model in the training process, which allows us
to incorporate both lexical and non-lexical information in the audio without
using a large lexicon. Additionally, guiding the training with a binary emotion
condition and speaker identity distinguishes the tiniest subtle facial motion.
We carried out extensive objective and subjective evaluation in comparison to
ground-truth and state-of-the-art work. A perceptual user study demonstrates
that our approach produces superior results with respect to the realism of the
animation 78% of the time in comparison to the state-of-the-art. In addition,
our method is 4 times faster, eliminating the use of complex sequential models
such as transformers. We strongly recommend watching the supplementary video
before reading the paper. We also provide the implementation and evaluation
codes with a GitHub repository link.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 17:05:19 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Haque",
"Kazi Injamamul",
""
],
[
"Yumak",
"Zerrin",
""
]
] |
new_dataset
| 0.993293 |
2303.05465
|
Ishtiaq Ahmad Dr.
|
Shahid Rasool, Irfan Ullah, Abid Ali, and Ishtiaq Ahmad
|
3D UAV Trajectory Design for Fair and Energy-Efficient Communication: A
Deep Reinforcement Learning Technique
| null | null | null | null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In different situations, like disaster communication and network connectivity
for rural locations, unmanned aerial vehicles (UAVs) could indeed be utilized
as airborne base stations to improve both the functionality and coverage of
communication networks. Ground users can employ mobile UAVs to establish
communication channels and deliver packages. UAVs, on the other hand, have
restricted transmission capabilities and fuel supplies. They cannot always cover
the full region or continue to fly for a long time, especially over a large
territory. Controlling a swarm of UAVs to yield long-lasting communication
coverage while maintaining connectivity and limiting energy usage is therefore
difficult. We use modern deep reinforcement learning (DRL) to provide an
innovative and highly energy-efficient algorithm for UAV connectivity. The
proposed method: 1) enhances energy efficiency while taking into account
communications throughput, energy consumption, fairness, and
connectivity; 2) evaluates the environment and its dynamics; and 3) makes
judgments using strong deep neural networks. For performance evaluation, we
have performed comprehensive simulations. In terms of energy consumption and
fairness, simulation results show that the DRL-based algorithm consistently
outperforms two commonly used baseline techniques.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 12:28:19 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Rasool",
"Shahid",
""
],
[
"Ullah",
"Irfan",
""
],
[
"Ali",
"Abid",
""
],
[
"Ahmad",
"Ishtiaq",
""
]
] |
new_dataset
| 0.995065 |
2303.05512
|
Chuang Gan
|
Xuan Li, Yi-Ling Qiao, Peter Yichen Chen, Krishna Murthy
Jatavallabhula, Ming Lin, Chenfanfu Jiang, Chuang Gan
|
PAC-NeRF: Physics Augmented Continuum Neural Radiance Fields for
Geometry-Agnostic System Identification
|
ICLR 2023 Spotlight. Project page:
https://sites.google.com/view/PAC-NeRF
| null | null | null |
cs.CV cs.AI cs.GR cs.LG cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Existing approaches to system identification (estimating the physical
parameters of an object) from videos assume known object geometries. This
precludes their applicability in a vast majority of scenes where object
geometries are complex or unknown. In this work, we aim to identify parameters
characterizing a physical system from a set of multi-view videos without any
assumption on object geometry or topology. To this end, we propose "Physics
Augmented Continuum Neural Radiance Fields" (PAC-NeRF), to estimate both the
unknown geometry and physical parameters of highly dynamic objects from
multi-view videos. We design PAC-NeRF to only ever produce physically plausible
states by enforcing the neural radiance field to follow the conservation laws
of continuum mechanics. For this, we design a hybrid Eulerian-Lagrangian
representation of the neural radiance field, i.e., we use the Eulerian grid
representation for NeRF density and color fields, while advecting the neural
radiance fields via Lagrangian particles. This hybrid Eulerian-Lagrangian
representation seamlessly blends efficient neural rendering with the material
point method (MPM) for robust differentiable physics simulation. We validate
the effectiveness of our proposed framework on geometry and physical parameter
estimation over a vast range of materials, including elastic bodies,
plasticine, sand, Newtonian and non-Newtonian fluids, and demonstrate
significant performance gain on most tasks.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 18:59:50 GMT"
}
] | 2023-03-10T00:00:00 |
[
[
"Li",
"Xuan",
""
],
[
"Qiao",
"Yi-Ling",
""
],
[
"Chen",
"Peter Yichen",
""
],
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Lin",
"Ming",
""
],
[
"Jiang",
"Chenfanfu",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.99812 |
1707.07545
|
Daniele Francesco Santamaria
|
Domenico Cantone and Marianna Nicolosi-Asmundo and Daniele Francesco
Santamaria
|
A \textsf{C++} reasoner for the description logic $\shdlssx$ (Extended
Version)
|
15 pages. arXiv admin note: text overlap with arXiv:1702.03096,
arXiv:1804.11222
|
CEUR Workshop Proceedings, ISSN 1613-0073, 2017
| null |
Vol. 1949, pp. 276-280
|
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an ongoing implementation of a \ke\space based reasoner for a
decidable fragment of stratified elementary set theory expressing the
description logic $\dlssx$ (shortly $\shdlssx$). The reasoner checks the
consistency of $\shdlssx$-knowledge bases (KBs) represented in set-theoretic
terms. It is implemented in \textsf{C++} and supports $\shdlssx$-KBs serialized
in the OWL/XML format. To the best of our knowledge, this is the first attempt
to implement a reasoner for the consistency checking of a description logic
represented via a fragment of set theory that can also classify standard OWL
ontologies.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2017 16:54:41 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2017 12:28:29 GMT"
},
{
"version": "v3",
"created": "Sat, 2 Sep 2017 07:18:03 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Sep 2017 08:23:20 GMT"
},
{
"version": "v5",
"created": "Mon, 25 Sep 2017 14:02:09 GMT"
},
{
"version": "v6",
"created": "Thu, 5 Oct 2017 09:29:13 GMT"
},
{
"version": "v7",
"created": "Fri, 29 Jun 2018 17:03:43 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Cantone",
"Domenico",
""
],
[
"Nicolosi-Asmundo",
"Marianna",
""
],
[
"Santamaria",
"Daniele Francesco",
""
]
] |
new_dataset
| 0.977717 |
1709.02618
|
Daniele Francesco Santamaria
|
Claudia Cantale, Domenico Cantone, Manuela Lupica Rinato, Marianna
Nicolosi-Asmundo, and Daniele Francesco Santamaria
|
The Shape of a Benedictine Monastery: The SaintGall Ontology (Extended
Version)
|
10 pages, 10 figures
|
CEUR Workshop Proceedings, ISSN 1613-0073, 2017
| null |
, Vol. 2050, Paper 2
|
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an OWL 2 ontology representing the Saint Gall plan, one of the
most ancient documents arrived intact to us, which describes the ideal model of
a Benedictine monastic complex that inspired the design of many European
monasteries.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2017 09:51:31 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Sep 2017 18:21:38 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Sep 2017 05:30:23 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Sep 2017 11:20:18 GMT"
},
{
"version": "v5",
"created": "Fri, 29 Jun 2018 17:02:32 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Cantale",
"Claudia",
""
],
[
"Cantone",
"Domenico",
""
],
[
"Rinato",
"Manuela Lupica",
""
],
[
"Nicolosi-Asmundo",
"Marianna",
""
],
[
"Santamaria",
"Daniele Francesco",
""
]
] |
new_dataset
| 0.998099 |
2010.14648
|
Mohammad Abdulaziz
|
Mohammad Abdulaziz and Friedrich Kurz
|
Formally Verified SAT-Based AI Planning
| null | null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an executable formally verified SAT encoding of classical AI
planning. We use the theorem prover Isabelle/HOL to perform the verification.
We experimentally test the verified encoding and show that it can be used for
reasonably sized standard planning benchmarks. We also use it as a reference to
test a state-of-the-art SAT-based planner, showing that it sometimes falsely
claims that problems have no solutions of certain lengths.
|
[
{
"version": "v1",
"created": "Tue, 27 Oct 2020 22:23:04 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Nov 2020 12:19:38 GMT"
},
{
"version": "v3",
"created": "Mon, 23 Nov 2020 09:43:28 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Dec 2020 18:21:00 GMT"
},
{
"version": "v5",
"created": "Tue, 7 Mar 2023 19:09:59 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Abdulaziz",
"Mohammad",
""
],
[
"Kurz",
"Friedrich",
""
]
] |
new_dataset
| 0.999512 |
2012.01410
|
Daniele Francesco Santamaria
|
Domenico Cantone, Carmelo Fabio Longo, Marianna Nicolosi-Asmundo,
Daniele Francesco Santamaria, Corrado Santoro
|
Ontological Smart Contracts in OASIS: Ontology for Agents, Systems, and
Integration of Services (Extended Version)
|
This work has been accepted for publication at The 14th International
Symposium on Intelligent Distributed Computing, 16--18 September 2021 -
Online. Paper accepted on 8 September 2020
|
Intelligent Distributed Computing XIV, Studies in Computational
Intelligence 1026, 2021
|
10.1007/978-3-030-96627-0
|
Chapter 22, pp. 237--247
|
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this contribution we extend an ontology for modelling agents and their
interactions, called Ontology for Agents, Systems, and Integration of Services
(in short, OASIS), with conditionals and ontological smart contracts (in short,
OSCs). OSCs are ontological representations of smart contracts that allow to
establish responsibilities and authorizations among agents and set agreements,
whereas conditionals allow one to restrict and limit agent interactions, define
activation mechanisms that trigger agent actions, and define constraints and
contract terms on OSCs. Conditionals and OSCs, as defined in OASIS, are applied
to extend with ontological capabilities digital public ledgers such as the
blockchain and smart contracts implemented on it. We will also sketch the
architecture of a framework based on the OASIS definition of OSCs that exploits
the Ethereum platform and the Interplanetary File System.
|
[
{
"version": "v1",
"created": "Wed, 2 Dec 2020 18:58:26 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Sep 2021 14:39:54 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Sep 2021 19:56:58 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Cantone",
"Domenico",
""
],
[
"Longo",
"Carmelo Fabio",
""
],
[
"Nicolosi-Asmundo",
"Marianna",
""
],
[
"Santamaria",
"Daniele Francesco",
""
],
[
"Santoro",
"Corrado",
""
]
] |
new_dataset
| 0.952129 |
2108.12992
|
Masanari Kimura
|
Masanari Kimura, Takuma Nakamura, Yuki Saito
|
SHIFT15M: Fashion-specific dataset for set-to-set matching with several
distribution shifts
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the problem of set-to-set matching, which involves
matching two different sets of items based on some criteria, especially in the
case of high-dimensional items like images. Although neural networks have been
applied to solve this problem, most machine learning-based approaches assume
that the training and test data follow the same distribution, which is not
always true in real-world scenarios. To address this limitation, we introduce
SHIFT15M, a dataset that can be used to evaluate set-to-set matching models
when the distribution of data changes between training and testing. We conduct
benchmark experiments that demonstrate the performance drop of naive methods
due to distribution shift. Additionally, we provide software to handle the
SHIFT15M dataset in a simple manner, with the URL for the software to be made
available after publication of this manuscript. We believe the proposed SHIFT15M
dataset provides a valuable resource for evaluating set-to-set matching models
under distribution shift.
|
[
{
"version": "v1",
"created": "Mon, 30 Aug 2021 05:07:59 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 15:25:18 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Kimura",
"Masanari",
""
],
[
"Nakamura",
"Takuma",
""
],
[
"Saito",
"Yuki",
""
]
] |
new_dataset
| 0.999823 |
2111.00169
|
Nicholas Boucher
|
Nicholas Boucher, Ross Anderson
|
Trojan Source: Invisible Vulnerabilities
|
To appear in the 32nd USENIX Security Symposium. Revisions: Adds 4
languages, 2 encodings, threat model, & scanning details
| null | null | null |
cs.CR cs.PL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a new type of attack in which source code is maliciously encoded
so that it appears different to a compiler and to the human eye. This attack
exploits subtleties in text-encoding standards such as Unicode to produce
source code whose tokens are logically encoded in a different order from the
one in which they are displayed, leading to vulnerabilities that cannot be
perceived directly by human code reviewers. 'Trojan Source' attacks, as we call
them, pose an immediate threat both to first-party software and of supply-chain
compromise across the industry. We present working examples of Trojan Source
attacks in C, C++, C#, JavaScript, Java, Rust, Go, Python, SQL, Bash, Assembly,
and Solidity. We propose definitive compiler-level defenses, and describe other
mitigating controls that can be deployed in editors, repositories, and build
pipelines while compilers are upgraded to block this attack. We document an
industry-wide coordinated disclosure for these vulnerabilities; as they affect
most compilers, editors, and repositories, the exercise teaches how different
firms, open-source communities, and other stakeholders respond to vulnerability
disclosure.
|
[
{
"version": "v1",
"created": "Sat, 30 Oct 2021 04:05:46 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 15:39:03 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Boucher",
"Nicholas",
""
],
[
"Anderson",
"Ross",
""
]
] |
new_dataset
| 0.998307 |
2111.15613
|
Rajiv Kumar V
|
Chintan Tundia, Rajiv Kumar, Om Damani, G. Sivakumar
|
The MIS Check-Dam Dataset for Object Detection and Instance Segmentation
Tasks
| null | null |
10.5220/0010799600003124
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deep learning has led to many recent advances in object detection and
instance segmentation, among other computer vision tasks. These advancements
have led to wide application of deep learning based methods and related
methodologies in object detection tasks for satellite imagery. In this paper,
we introduce MIS Check-Dam, a new dataset of check-dams from satellite imagery
for building an automated system for the detection and mapping of check-dams,
focusing on the importance of irrigation structures used for agriculture. We
review some of the most recent object detection and instance segmentation
methods and assess their performance on our new dataset. We evaluate several
single-stage, two-stage, and attention-based methods under various network
configurations and backbone architectures. The dataset and the pre-trained
models are available at https://www.cse.iitb.ac.in/gramdrishti/.
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 18:04:02 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Tundia",
"Chintan",
""
],
[
"Kumar",
"Rajiv",
""
],
[
"Damani",
"Om",
""
],
[
"Sivakumar",
"G.",
""
]
] |
new_dataset
| 0.991403 |
2210.02697
|
Ruicheng Wang
|
Ruicheng Wang, Jialiang Zhang, Jiayi Chen, Yinzhen Xu, Puhao Li,
Tengyu Liu, He Wang
|
DexGraspNet: A Large-Scale Robotic Dexterous Grasp Dataset for General
Objects Based on Simulation
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Robotic dexterous grasping is the first step to enable human-like dexterous
object manipulation and thus a crucial robotic technology. However, dexterous
grasping is much more under-explored than object grasping with parallel
grippers, partially due to the lack of a large-scale dataset. In this work, we
present a large-scale robotic dexterous grasp dataset, DexGraspNet, generated
by our proposed highly efficient synthesis method that can be generally applied
to any dexterous hand. Our method leverages a deeply accelerated differentiable
force closure estimator and thus can efficiently and robustly synthesize stable
and diverse grasps on a large scale. We choose ShadowHand and generate 1.32
million grasps for 5355 objects, covering more than 133 object categories and
containing more than 200 diverse grasps for each object instance, with all
grasps having been validated by the Isaac Gym simulator. Compared to the
previous dataset from Liu et al. generated by GraspIt!, our dataset has not
only more objects and grasps, but also higher diversity and quality. Via
performing cross-dataset experiments, we show that training several algorithms
of dexterous grasp synthesis on our dataset significantly outperforms training
on the previous one. To access our data and code, including code for human and
Allegro grasp synthesis, please visit our project page:
https://pku-epic.github.io/DexGraspNet/.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 06:09:16 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 03:14:59 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Wang",
"Ruicheng",
""
],
[
"Zhang",
"Jialiang",
""
],
[
"Chen",
"Jiayi",
""
],
[
"Xu",
"Yinzhen",
""
],
[
"Li",
"Puhao",
""
],
[
"Liu",
"Tengyu",
""
],
[
"Wang",
"He",
""
]
] |
new_dataset
| 0.999869 |
2210.17517
|
Matthew Finlayson
|
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck,
Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter
Clark, Ashwin Kalyan
|
Lila: A Unified Benchmark for Mathematical Reasoning
|
EMNLP 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Mathematical reasoning skills are essential for general-purpose intelligent
systems to perform tasks from grocery shopping to climate modeling. Towards
evaluating and improving AI systems in this domain, we propose LILA, a unified
mathematical reasoning benchmark consisting of 23 diverse tasks along four
dimensions: (i) mathematical abilities e.g., arithmetic, calculus (ii) language
format e.g., question-answering, fill-in-the-blanks (iii) language diversity
e.g., no language, simple language (iv) external knowledge e.g., commonsense,
physics. We construct our benchmark by extending 20 existing datasets,
collecting task instructions and solutions in the form of Python programs,
thereby obtaining explainable solutions in addition to the correct answer. We
additionally introduce two evaluation datasets to measure out-of-distribution
performance and robustness to language perturbation. Finally, we introduce
BHASKARA, a general-purpose mathematical reasoning model trained on LILA.
Importantly, we find that multi-tasking leads to significant improvements
(average relative improvement of 21.83% F1 score vs. single-task models), while
the best-performing model only obtains 60.40%, indicating room for
improvement in general mathematical reasoning and understanding.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 17:41:26 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 16:47:46 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Mishra",
"Swaroop",
""
],
[
"Finlayson",
"Matthew",
""
],
[
"Lu",
"Pan",
""
],
[
"Tang",
"Leonard",
""
],
[
"Welleck",
"Sean",
""
],
[
"Baral",
"Chitta",
""
],
[
"Rajpurohit",
"Tanmay",
""
],
[
"Tafjord",
"Oyvind",
""
],
[
"Sabharwal",
"Ashish",
""
],
[
"Clark",
"Peter",
""
],
[
"Kalyan",
"Ashwin",
""
]
] |
new_dataset
| 0.999615 |
2301.00798
|
Purbesh Mitra
|
Purbesh Mitra and Sennur Ulukus
|
Timely Opportunistic Gossiping in Dense Networks
| null | null | null | null |
cs.IT cs.MA cs.NI eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider gossiping in a fully-connected wireless network consisting of $n$
nodes. The network receives Poisson updates from a source, which generates new
information. The nodes gossip their available information with the neighboring
nodes to maintain network timeliness. In this work, we propose two gossiping
schemes, one semi-distributed and the other one fully-distributed. In the
semi-distributed scheme, the freshest nodes use pilot signals to interact with
the network and gossip with the full available update rate $B$. In the
fully-distributed scheme, each node gossips for a fixed amount of time duration
with the full update rate $B$. Both schemes achieve $O(1)$ age scaling, and the
semi-distributed scheme has the best age performance for any symmetric
randomized gossiping policy. We compare the results with the recently proposed
ASUMAN scheme, which also gives $O(1)$ age performance, but the nodes need to
be age-aware.
|
[
{
"version": "v1",
"created": "Mon, 2 Jan 2023 18:43:42 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 02:59:56 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Mitra",
"Purbesh",
""
],
[
"Ulukus",
"Sennur",
""
]
] |
new_dataset
| 0.967435 |
2301.07028
|
Jeong Hun Lee
|
Jeong Hun Lee, Mike Y. Michelis, Robert Katzschmann, Zachary
Manchester
|
Aquarium: A Fully Differentiable Fluid-Structure Interaction Solver for
Robotics Applications
|
8 pages, 7 figures, accepted to IEEE ICRA 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present Aquarium, a differentiable fluid-structure interaction solver for
robotics that offers stable simulation, accurately coupled fluid-robot physics
in two dimensions, and full differentiability with respect to fluid and robot
states and parameters. Aquarium achieves stable simulation with accurate flow
physics by directly integrating over the incompressible Navier-Stokes equations
using a fully implicit Crank-Nicolson scheme with a second-order finite-volume
spatial discretization. The fluid and robot physics are coupled using the
immersed-boundary method by formulating the no-slip condition as an equality
constraint applied directly to the Navier-Stokes system. This choice of
coupling allows the fluid-structure interaction to be posed and solved as a
nonlinear optimization problem. This optimization-based formulation is then
exploited using the implicit-function theorem to compute derivatives.
Derivatives can then be passed to downstream gradient-based optimization or
learning algorithms. We demonstrate Aquarium's ability to accurately simulate
coupled fluid-robot physics with numerous 2D examples, including a cylinder in
free stream and a soft robotic fish tail with hardware validation. We also
demonstrate Aquarium's ability to provide analytical gradients by performing
gradient-based shape-and-gait optimization of an oscillating diamond foil to
maximize its generated thrust.
|
[
{
"version": "v1",
"created": "Tue, 17 Jan 2023 17:26:24 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 20:11:00 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Lee",
"Jeong Hun",
""
],
[
"Michelis",
"Mike Y.",
""
],
[
"Katzschmann",
"Robert",
""
],
[
"Manchester",
"Zachary",
""
]
] |
new_dataset
| 0.9514 |
2301.12093
|
Jerry Wang
|
Chenyi Wang, Huan Wang, Peiwen Pan
|
Local Contrast and Global Contextual Information Make Infrared Small
Object Salient Again
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infrared small object detection (ISOS) aims to segment small objects covering
only a few pixels from cluttered backgrounds in infrared images. It is highly
challenging because: 1) small objects lack sufficient intensity, shape, and
texture information; 2) small objects are easily lost in the process where
detection models, say deep neural networks, obtain high-level semantic features
and image-level receptive fields through successive downsampling. This paper
proposes a reliable detection model for ISOS, dubbed UCFNet, which can handle
well the two issues. It builds upon central difference convolution (CDC) and
fast Fourier convolution (FFC). On one hand, CDC can effectively guide the
network to learn the contrast information between small objects and the
background, as the contrast information is very essential in human visual
system dealing with the ISOS task. On the other hand, FFC can gain image-level
receptive fields and extract global information while preventing small objects
from being overwhelmed. Experiments on several public datasets demonstrate that
our method significantly outperforms state-of-the-art ISOS models and can
provide useful guidelines for designing better ISOS deep models. Code is
available at https://github.com/wcyjerry/BasicISOS.
|
[
{
"version": "v1",
"created": "Sat, 28 Jan 2023 05:18:13 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 04:02:40 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Mar 2023 16:30:18 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Wang",
"Chenyi",
""
],
[
"Wang",
"Huan",
""
],
[
"Pan",
"Peiwen",
""
]
] |
new_dataset
| 0.997347 |
2302.13053
|
Aashish Kolluri
|
Aashish Kolluri, Sarthak Choudhary, Bryan Hooi, Prateek Saxena
|
RETEXO: Scalable Neural Network Training over Distributed Graphs
| null | null | null | null |
cs.LG cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Graph neural networks offer a promising approach to supervised learning over
graph data. Graph data, especially when it is privacy-sensitive or too large to
train on centrally, is often stored partitioned across disparate processing
units (clients) which want to minimize the communication costs during
collaborative training. The fully-distributed setup takes such partitioning to
its extreme, wherein features of only a single node and its adjacent edges are
kept locally with one client processor. Existing GNNs are not architected for
training in such setups and incur prohibitive costs therein. We propose RETEXO,
a novel transformation of existing GNNs that improves the communication
efficiency during training in the fully-distributed setup. We experimentally
confirm that RETEXO offers up to 6 orders of magnitude better communication
efficiency even when training shallow GNNs, with a minimal trade-off in
accuracy for supervised node classification tasks.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 10:42:34 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 04:13:48 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Kolluri",
"Aashish",
""
],
[
"Choudhary",
"Sarthak",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Saxena",
"Prateek",
""
]
] |
new_dataset
| 0.989176 |
2302.14166
|
Buyu Liu
|
Buyu Liu, BaoJun, Jianping Fan, Xi Peng, Kui Ren and Jun Yu
|
GLOW: Global Layout Aware Attacks on Object Detection
|
ICCV
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Adversarial attacks aim to perturb images such that a predictor outputs
incorrect results. Due to the limited research in structured attacks, imposing
consistency checks on natural multi-object scenes is a promising yet practical
defense against conventional adversarial attacks. More desired attacks, to this
end, should be able to fool defenses with such consistency checks. Therefore,
we present the first approach GLOW that copes with various attack requests by
generating global layout-aware adversarial attacks, in which both categorical
and geometric layout constraints are explicitly established. Specifically, we
focus on the object detection task: given a victim image, GLOW first localizes
victim objects according to target labels and then generates multiple attack
plans, together with their context-consistency scores. Our proposed
GLOW, on the one hand, is capable of handling various types of requests,
including single or multiple victim objects, with or without specified victim
objects. On the other hand, it produces a consistency score for each attack
plan, reflecting the overall contextual consistency that both semantic category
and global scene layout are considered. In experiment, we design multiple types
of attack requests and validate our ideas on MS COCO and Pascal. Extensive
experimental results demonstrate that we achieve about a 30$\%$ average
relative improvement compared to state-of-the-art methods on conventional
single-object attack requests. Moreover, our method outperforms SOTAs
significantly on more generic attack requests, by about 20$\%$ on average.
Finally, our method produces superior performance under the challenging
zero-query black-box setting, about 20$\%$ better than SOTAs. Our code, model,
and attack requests will be made available.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 22:01:34 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 09:41:14 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Liu",
"Buyu",
""
],
[
"BaoJun",
"",
""
],
[
"Fan",
"Jianping",
""
],
[
"Peng",
"Xi",
""
],
[
"Ren",
"Kui",
""
],
[
"Yu",
"Jun",
""
]
] |
new_dataset
| 0.985442 |
2302.14554
|
Rajiv Kumar V
|
Chintan Tundia, Rajiv Kumar, Om Damani, G. Sivakumar
|
FPCD: An Open Aerial VHR Dataset for Farm Pond Change Detection
| null | null |
10.5220/0011797600003417
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Change detection for aerial imagery involves locating and identifying changes
associated with the areas of interest between co-registered bi-temporal or
multi-temporal images of a geographical location. Farm ponds are man-made
structures belonging to the category of minor irrigation structures used to
collect surface run-off water for future irrigation purposes. Detection of farm
ponds from aerial imagery and their evolution over time helps in land surveying
to analyze the agricultural shifts, policy implementation, seasonal effects and
climate changes. In this paper, we introduce a publicly available object
detection and instance segmentation (OD/IS) dataset for localizing farm ponds
from aerial imagery. We also collected and annotated the bi-temporal data over
a time-span of 14 years across 17 villages, resulting in a binary change
detection dataset called \textbf{F}arm \textbf{P}ond \textbf{C}hange
\textbf{D}etection Dataset (\textbf{FPCD}). We have benchmarked and analyzed
the performance of various object detection and instance segmentation methods
on our OD/IS dataset and the change detection methods over the FPCD dataset.
The datasets are publicly accessible at this page:
\textit{\url{https://huggingface.co/datasets/ctundia/FPCD}}
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 13:19:11 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Tundia",
"Chintan",
""
],
[
"Kumar",
"Rajiv",
""
],
[
"Damani",
"Om",
""
],
[
"Sivakumar",
"G.",
""
]
] |
new_dataset
| 0.999824 |
2303.01894
|
Wenxing Hu
|
Wenxing Hu, Minglei Tong
|
TRR360D: A dataset for 360 degree rotated rectangular box table
detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
To address the problem of scarcity and high annotation costs of rotated image
table detection datasets, this paper proposes a method for building a rotated
image table detection dataset. Based on the ICDAR2019MTD modern table detection
dataset, we refer to the annotation format of the DOTA dataset to create the
TRR360D rotated table detection dataset. The training set contains 600 rotated
images and 977 annotated instances, and the test set contains 240 rotated
images and 499 annotated instances. The AP50(T<90) evaluation metric is
defined, and this dataset is available for future researchers to study rotated
table detection algorithms and promote the development of table detection
technology. The TRR360D rotated table detection dataset was created by
constraining the starting point and annotation direction, and is publicly
available at https://github.com/vansin/TRR360D.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 12:47:30 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 01:49:30 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Mar 2023 11:23:18 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Hu",
"Wenxing",
""
],
[
"Tong",
"Minglei",
""
]
] |
new_dataset
| 0.999664 |
2303.02688
|
Will Rowan Mr
|
Will Rowan, Patrik Huber, Nick Pears, Andrew Keeling
|
Text2Face: A Multi-Modal 3D Face Model
|
Fixed formatting and a typo
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present the first 3D morphable modelling approach, whereby 3D face shape
can be directly and completely defined using a textual prompt. Building on work
in multi-modal learning, we extend the FLAME head model to a common
image-and-text latent space. This allows for direct 3D Morphable Model (3DMM)
parameter generation and therefore shape manipulation from textual
descriptions. Our method, Text2Face, has many applications; for example:
generating police photofits where the input is already in natural language. It
further enables multi-modal 3DMM image fitting to sketches and sculptures, as
well as images.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 15:06:54 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 11:28:21 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Rowan",
"Will",
""
],
[
"Huber",
"Patrik",
""
],
[
"Pears",
"Nick",
""
],
[
"Keeling",
"Andrew",
""
]
] |
new_dataset
| 0.99967 |
2303.03953
|
Taja Kuzman
|
Taja Kuzman, Igor Mozeti\v{c}, Nikola Ljube\v{s}i\'c
|
ChatGPT: Beginning of an End of Manual Linguistic Data Annotation? Use
Case of Automatic Genre Identification
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
ChatGPT has shown strong capabilities in natural language generation tasks,
which naturally leads researchers to explore where its abilities end. In this
paper, we examine whether ChatGPT can be used for zero-shot text
classification, more specifically, automatic genre identification. We compare
ChatGPT with a multilingual XLM-RoBERTa language model that was fine-tuned on
datasets, manually annotated with genres. The models are compared on test sets
in two languages: English and Slovenian. Results show that ChatGPT outperforms
the fine-tuned model when applied to the dataset which was not seen before by
either of the models. Even when applied to Slovenian, an under-resourced
language, ChatGPT's performance is no worse than when applied
to English. However, if the model is fully prompted in Slovenian, the
performance drops significantly, showing the current limitations of ChatGPT
usage on smaller languages. The presented results lead us to question
whether this is the beginning of the end of laborious manual annotation
campaigns even for smaller languages, such as Slovenian.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 14:59:33 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2023 09:35:09 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Kuzman",
"Taja",
""
],
[
"Mozetič",
"Igor",
""
],
[
"Ljubešić",
"Nikola",
""
]
] |
new_dataset
| 0.985218 |
2303.04221
|
Zoya Bylinskii
|
Tianyuan Cai, Aleena Gertrudes Niklaus, Michael Kraley, Bernard Kerr,
Zoya Bylinskii
|
THERIF: A Pipeline for Generating Themes for Readability with Iterative
Feedback
|
Extended version of CHI LBW'2023 paper
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital reading applications give readers the ability to customize fonts,
sizes, and spacings, all of which have been shown to improve the reading
experience for readers from different demographics. However, tweaking these
text features can be challenging, especially given their interactions on the
final look and feel of the text. Our solution is to offer readers preset
combinations of font, character, word and line spacing, which we bundle
together into reading themes. To arrive at a recommended set of reading themes,
we present our THERIF pipeline, which combines crowdsourced text adjustments,
ML-driven clustering of text formats, and design sessions. We show that after
four iterations of our pipeline, we converge on a set of three COR themes
(Compact, Open, and Relaxed) that meet diverse readers' preferences, when
evaluating the reading speeds, comprehension scores, and preferences of
hundreds of readers with and without dyslexia, using crowdsourced experiments.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 20:28:11 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Cai",
"Tianyuan",
""
],
[
"Niklaus",
"Aleena Gertrudes",
""
],
[
"Kraley",
"Michael",
""
],
[
"Kerr",
"Bernard",
""
],
[
"Bylinskii",
"Zoya",
""
]
] |
new_dataset
| 0.99233 |
2303.04242
|
Facundo Carrillo PhD
|
Facundo Carrillo, Elaine Hu
|
MEV in fixed gas price blockchains: Terra Classic as a case of study
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Maximum extractable value (MEV) has been extensively studied. In most papers,
the researchers have worked with the Ethereum blockchain almost exclusively.
Although Ethereum and other blockchains have dynamic gas prices, this is not
the case for all blockchains; many of them have fixed gas prices. Extending the
research to other blockchains with fixed gas prices could broaden the scope of
the existing studies on MEV. To our knowledge, MEV in fixed gas price
blockchains is not yet well understood. Therefore, we propose to
study Terra Classic as an example to understand how MEV activities affect
blockchains with fixed gas price. We first analysed the data from Terra Classic
before the UST de-peg event in May 2022 and described the nature of the
exploited arbitrage opportunities. We found more than 188K successful
arbitrages, and most of them used UST as the initial token. The capital to
perform the arbitrage was less than 1K UST in 50% of the cases, and 80% of the
arbitrages had less than four swaps. Then, we explored the characteristics that
attribute to higher MEV. We found that searchers who use more complex
mechanisms, i.e. different contracts and accounts, made higher profits.
Finally, we concluded that the most profitable searchers used a strategy of
running bots in a multi-instance environment, i.e. running bots with different
virtual machines. We measured the importance of the geographic distribution of
the virtual machines that run the bots. We found that having good geographic
coverage makes the difference between winning or losing the arbitrage
opportunities. That is because, unlike MEV extraction in Ethereum, bots in
fixed gas price blockchains are not battling a gas war; they are fighting a
latency war.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 21:28:13 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Carrillo",
"Facundo",
""
],
[
"Hu",
"Elaine",
""
]
] |
new_dataset
| 0.982703 |
2303.04265
|
Javad Manashti
|
Javad Manashti, Fran\c{c}ois Duhaime, Matthew F. Toews, Pouyan Pirnia,
Jn Kinsonn Telcy
|
Comparing PSDNet, pretrained networks, and traditional feature
extraction for predicting the particle size distribution of granular
materials from photographs
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This study aims to evaluate PSDNet, a series of convolutional neural networks
(ConvNets) trained with photographs to predict the particle size distribution
of granular materials. Nine traditional feature extraction methods and 15
pretrained ConvNets were also evaluated and compared. A dataset including 9600
photographs of 15 different granular materials was used. The influence of image
size and color band was verified by using six image sizes between 32 and 160
pixels, and both grayscale and color images as PSDNet inputs. In addition to
random training, validation, and testing datasets, a material removal method
was also used to evaluate the performances of each image analysis method. With
this method, each material was successively removed from the training and
validation datasets and used as the testing dataset. Results show that a
combination of all PSDNet color and grayscale features can lead to a root mean
square error (RMSE) on the percentages passing as low as 1.8 % with a random
testing dataset and 9.1% with the material removal method. For the random
datasets, a combination of all traditional features, and the features extracted
from InceptionResNetV2 led to RMSE on the percentages passing of 2.3 and 1.7 %,
respectively.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 22:29:38 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Manashti",
"Javad",
""
],
[
"Duhaime",
"François",
""
],
[
"Toews",
"Matthew F.",
""
],
[
"Pirnia",
"Pouyan",
""
],
[
"Telcy",
"Jn Kinsonn",
""
]
] |
new_dataset
| 0.994749 |
2303.04269
|
Javad Manashti
|
Javad Manashti, Pouyan Pirnia, Alireza Manashty, Sahar Ujan, Matthew
Toews, Fran\c{c}ois Duhaime
|
PSDNet: Determination of Particle Size Distributions Using Synthetic
Soil Images and Convolutional Neural Networks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This project aimed to determine the grain size distribution of granular
materials from images using convolutional neural networks. The application of
ConvNet and pretrained ConvNet models, including AlexNet, SqueezeNet,
GoogLeNet, InceptionV3, DenseNet201, MobileNetV2, ResNet18, ResNet50,
ResNet101, Xception, InceptionResNetV2, ShuffleNet, and NASNetMobile was
studied. Synthetic images of granular materials created with the discrete
element code YADE were used. All the models were trained and verified with
grayscale and color band datasets with image sizes ranging from 32 to 160
pixels. The proposed ConvNet model predicts the percentages of mass retained on
the finest sieve, coarsest sieve, and all sieves with root-mean-square errors
of 1.8 %, 3.3 %, and 2.8 %, respectively, and a coefficient of determination of
0.99. For pretrained networks, root-mean-square errors of 2.4 % and 2.8 % were
obtained for the finest sieve with feature extraction and transfer learning
models, respectively.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 22:42:13 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Manashti",
"Javad",
""
],
[
"Pirnia",
"Pouyan",
""
],
[
"Manashty",
"Alireza",
""
],
[
"Ujan",
"Sahar",
""
],
[
"Toews",
"Matthew",
""
],
[
"Duhaime",
"François",
""
]
] |
new_dataset
| 0.966115 |
2303.04289
|
Atli Sigurgeirsson
|
Atli Thor Sigurgeirsson, Simon King
|
Do Prosody Transfer Models Transfer Prosody?
|
Accepted in ICASSP 2023, 5 pages, 2 figures, 3 tables
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some recent models for Text-to-Speech synthesis aim to transfer the prosody
of a reference utterance to the generated target synthetic speech. This is done
by using a learned embedding of the reference utterance, which is used to
condition speech generation. During training, the reference utterance is
identical to the target utterance. Yet, during synthesis, these models are
often used to transfer prosody from a reference that differs from the text or
speaker being synthesized.
To address this inconsistency, we propose to use a different, but
prosodically-related, utterance during training too. We believe this should
encourage the model to learn to transfer only those characteristics that the
reference and target have in common. If prosody transfer methods do indeed
transfer prosody they should be able to be trained in the way we propose.
However, results show that a model trained under these conditions performs
significantly worse than one trained using the target utterance as a reference.
To explain this, we hypothesize that prosody transfer models do not learn a
transferable representation of prosody, but rather an utterance-level
representation which is highly dependent on both the reference speaker and
reference text.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 23:35:58 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Sigurgeirsson",
"Atli Thor",
""
],
[
"King",
"Simon",
""
]
] |
new_dataset
| 0.952543 |
2303.04292
|
Mojtaba Taherisadr
|
Mojtaba Taherisadr and Mohammad Abdullah Al Faruque and Salma Elmalaki
|
ERUDITE: Human-in-the-Loop IoT for an Adaptive Personalized Learning
System
|
It is under review in the IEEE IoT journal
| null | null | null |
cs.HC cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Thanks to the rapid growth in wearable technologies and recent advancement in
machine learning and signal processing, monitoring complex human contexts
becomes feasible, paving the way to develop human-in-the-loop IoT systems that
naturally evolve to adapt to the human and environment state autonomously.
Nevertheless, a central challenge in designing many of these IoT systems arises
from the requirement to infer the human mental state, such as intention,
stress, cognition load, or learning ability. While different human contexts can
be inferred from the fusion of different sensor modalities that can correlate
to a particular mental state, the human brain provides a richer sensor modality
that gives us more insights into the required human context. This paper
proposes ERUDITE, a human-in-the-loop IoT system for the learning environment
that exploits recent wearable neurotechnology to decode brain signals. Through
insights from concept learning theory, ERUDITE can infer the human state of
learning and understand when human learning increases or declines. By
quantifying human learning as an input sensory signal, ERUDITE can provide
adequate personalized feedback to humans in a learning environment to enhance
their learning experience. ERUDITE is evaluated across $15$ participants and
showed that by using the brain signals as a sensor modality to infer the human
learning state and providing personalized adaptation to the learning
environment, the participants' learning performance increased on average by
$26\%$. Furthermore, we showed that ERUDITE can be deployed on an edge-based
prototype to evaluate its practicality and scalability.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 23:54:35 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Taherisadr",
"Mojtaba",
""
],
[
"Faruque",
"Mohammad Abdullah Al",
""
],
[
"Elmalaki",
"Salma",
""
]
] |
new_dataset
| 0.976009 |
2303.04302
|
Felipe Barbosa
|
Felipe Manfio Barbosa, Fernando Santos Os\'orio
|
Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics
| null | null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the main paths towards the reduction of traffic accidents is the
increase in vehicle safety through driver assistance systems or even systems
with a complete level of autonomy. In these types of systems, tasks such as
obstacle detection and segmentation, especially the Deep Learning-based ones,
play a fundamental role in scene understanding for correct and safe navigation.
Besides that, the wide variety of sensors in vehicles nowadays provides a rich
set of alternatives for improvement in the robustness of perception in
challenging situations, such as navigation under lighting and weather adverse
conditions. Despite the current focus given to the subject, the literature
lacks studies on radar-based and radar-camera fusion-based perception. Hence,
this work aims to carry out a study on the current scenario of camera and
radar-based perception for ADAS and autonomous vehicles. Concepts and
characteristics related to both sensors, as well as to their fusion, are
presented. Additionally, we give an overview of the Deep Learning-based
detection and segmentation tasks, and the main datasets, metrics, challenges,
and open questions in vehicle perception.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 00:48:32 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Barbosa",
"Felipe Manfio",
""
],
[
"Osório",
"Fernando Santos",
""
]
] |
new_dataset
| 0.953604 |
2303.04376
|
Seunghoon Lee
|
Seunghoon Lee, Suhwan Cho, Dogyoon Lee, Minhyeok Lee, Sangyoun Lee
|
TSANET: Temporal and Scale Alignment for Unsupervised Video Object
Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised Video Object Segmentation (UVOS) refers to the challenging task
of segmenting the prominent object in videos without manual guidance. In other
words, the network detects the accurate region of the target object in a
sequence of RGB frames without prior knowledge. In recent works, two approaches
for UVOS have been discussed, which can be divided into appearance and
appearance-motion based methods. Appearance based methods utilize inter-frame
correlation information to capture the target object that commonly
appears in a sequence. However, these methods do not consider the motion of the
target object because they exploit the correlation information between randomly
paired frames. Appearance-motion based methods, on the other hand, fuse the
appearance features from RGB frames with the motion features from optical flow.
Motion cues provide useful information since salient objects typically show
distinctive motion in a sequence. However, these approaches are limited by a
dominant dependency on optical flow. In this paper, we propose a
novel framework for UVOS that can address aforementioned limitations of two
approaches in terms of both time and scale. Temporal Alignment Fusion aligns
the saliency information of adjacent frames with the target frame to leverage
the information of adjacent frames. Scale Alignment Decoder predicts the target
object mask precisely by aggregating differently scaled feature maps via
continuous mapping with implicit neural representation. We present experimental
results on public benchmark datasets, DAVIS 2016 and FBMS, which demonstrate
the effectiveness of our method. Furthermore, we outperform the
state-of-the-art methods on DAVIS 2016.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 04:59:43 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Lee",
"Seunghoon",
""
],
[
"Cho",
"Suhwan",
""
],
[
"Lee",
"Dogyoon",
""
],
[
"Lee",
"Minhyeok",
""
],
[
"Lee",
"Sangyoun",
""
]
] |
new_dataset
| 0.990851 |
2303.04378
|
Liangliang Yao
|
Liangliang Yao, Changhong Fu, Sihang Li, Guangze Zheng, and Junjie Ye
|
SGDViT: Saliency-Guided Dynamic Vision Transformer for UAV Tracking
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-based object tracking has boosted extensive autonomous applications
for unmanned aerial vehicles (UAVs). However, the dynamic changes in flight
maneuver and viewpoint encountered in UAV tracking pose significant
difficulties, e.g. , aspect ratio change, and scale variation. The conventional
cross-correlation operation, while commonly used, has limitations in
effectively capturing perceptual similarity and incorporates extraneous
background information. To mitigate these limitations, this work presents a
novel saliency-guided dynamic vision Transformer (SGDViT) for UAV tracking. The
proposed method designs a new task-specific object saliency mining network to
refine the cross-correlation operation and effectively discriminate foreground
and background information. Additionally, a saliency adaptation embedding
operation dynamically generates tokens based on initial saliency, thereby
reducing the computational complexity of the Transformer architecture. Finally,
a lightweight saliency filtering Transformer further refines saliency
information and increases the focus on appearance information. The efficacy and
robustness of the proposed approach have been thoroughly assessed through
experiments on three widely-used UAV tracking benchmarks and real-world
scenarios, with results demonstrating its superiority. The source code and demo
videos are available at https://github.com/vision4robotics/SGDViT.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 05:01:00 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Yao",
"Liangliang",
""
],
[
"Fu",
"Changhong",
""
],
[
"Li",
"Sihang",
""
],
[
"Zheng",
"Guangze",
""
],
[
"Ye",
"Junjie",
""
]
] |
new_dataset
| 0.996492 |
2303.04384
|
Zhenrong Zhang
|
Zhenrong Zhang, Pengfei Hu, Jiefeng Ma, Jun Du, Jianshu Zhang, Huihui
Zhu, Baocai Yin, Bing Yin and Cong Liu
|
SEMv2: Table Separation Line Detection Based on Conditional Convolution
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Table structure recognition is an indispensable element for enabling machines
to comprehend tables. Its primary purpose is to identify the internal structure
of a table. Nevertheless, due to the complexity and diversity of their
structure and style, it is highly challenging to parse the tabular data into a
structured format that machines can comprehend. In this work, we adhere to the
principle of the split-and-merge based methods and propose an accurate table
structure recognizer, termed SEMv2 (SEM: Split, Embed and Merge). Unlike the
previous works in the ``split'' stage, we aim to address the table separation
line instance-level discrimination problem and introduce a table separation
line detection strategy based on conditional convolution. Specifically, we
design the ``split'' in a top-down manner that detects the table separation
line instance first and then dynamically predicts the table separation line
mask for each instance. The final table separation line shape can be accurately
obtained by processing the table separation line mask in a row-wise/column-wise
manner. To comprehensively evaluate the SEMv2, we also present a more
challenging dataset for table structure recognition, dubbed iFLYTAB, which
encompasses multiple style tables in various scenarios such as photos, scanned
documents, etc. Extensive experiments on publicly available datasets (e.g.
SciTSR, PubTabNet and iFLYTAB) demonstrate the efficacy of our proposed
approach. The code and iFLYTAB dataset will be made publicly available upon
acceptance of this paper.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 05:15:01 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Zhang",
"Zhenrong",
""
],
[
"Hu",
"Pengfei",
""
],
[
"Ma",
"Jiefeng",
""
],
[
"Du",
"Jun",
""
],
[
"Zhang",
"Jianshu",
""
],
[
"Zhu",
"Huihui",
""
],
[
"Yin",
"Baocai",
""
],
[
"Yin",
"Bing",
""
],
[
"Liu",
"Cong",
""
]
] |
new_dataset
| 0.990735 |
2303.04451
|
Petr Vanc
|
Petr Vanc, Jan Kristof Behrens, Karla Stepanova, Vaclav Hlavac
|
Communicating human intent to a robotic companion by multi-type gesture
sentences
|
7 pages, 9 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Human-Robot collaboration in home and industrial workspaces is on the rise.
However, the communication between robots and humans is a bottleneck. Although
people use a combination of different types of gestures to complement speech,
only a few robotic systems utilize gestures for communication. In this paper,
we propose a gesture pseudo-language and show how multiple types of gestures
can be combined to express human intent to a robot (i.e., expressing both the
desired action and its parameters - e.g., pointing to an object and showing
that the object should be emptied into a bowl). The demonstrated gestures and
the perceived table-top scene (object poses detected by CosyPose) are processed
in real time to extract the human's intent. We utilize behavior trees to
generate reactive robot behavior that handles various possible states of the
world (e.g., a drawer has to be opened before an object is placed into it) and
recovers from errors (e.g., when the scene changes). Furthermore, our system
enables switching between direct teleoperation of the end-effector and
high-level operation using the proposed gesture sentences. The system is
evaluated on increasingly complex tasks using a real 7-DoF Franka Emika Panda
manipulator. Controlling the robot via action gestures lowered the execution
time by up to 60%, compared to direct teleoperation.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 09:02:12 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Vanc",
"Petr",
""
],
[
"Behrens",
"Jan Kristof",
""
],
[
"Stepanova",
"Karla",
""
],
[
"Hlavac",
"Vaclav",
""
]
] |
new_dataset
| 0.996712 |
2303.04670
|
Sankeerth Durvasula
|
Sankeerth Durvasula, Yushi Guan, Nandita Vijaykumar
|
EvConv: Fast CNN Inference on Event Camera Inputs For High-Speed Robot
Perception
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Event cameras capture visual information with a high temporal resolution and
a wide dynamic range. This enables capturing visual information at fine time
granularities (e.g., microseconds) in rapidly changing environments. This makes
event cameras highly useful for high-speed robotics tasks involving rapid
motion, such as high-speed perception, object tracking, and control. However,
convolutional neural network inference on event camera streams cannot currently
perform real-time inference at the high speeds at which event cameras operate -
current CNN inference times are typically closer in order of magnitude to the
frame rates of regular frame-based cameras. Real-time inference at event camera
rates is necessary to fully leverage the high frequency and high temporal
resolution that event cameras offer. This paper presents EvConv, a new approach
to enable fast inference on CNNs for inputs from event cameras. We observe that
consecutive inputs to the CNN from an event camera have only small differences
between them. Thus, we propose to perform inference on the difference between
consecutive input tensors, or the increment. This enables a significant
reduction in the number of floating-point operations required (and thus the
inference latency) because increments are very sparse. We design EvConv to
leverage the irregular sparsity in increments from event cameras and to retain
the sparsity of these increments across all layers of the network. We
demonstrate a reduction in the number of floating operations required in the
forward pass by up to 98%. We also demonstrate a speedup of up to 1.6X for
inference using CNNs for tasks such as depth estimation, object recognition,
and optical flow estimation, with almost no loss in accuracy.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 15:47:13 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Durvasula",
"Sankeerth",
""
],
[
"Guan",
"Yushi",
""
],
[
"Vijaykumar",
"Nandita",
""
]
] |
new_dataset
| 0.995285 |
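A minimal sketch of the delta-based inference idea described in the EvConv record above, under the simplifying assumption of a single bias-free convolution layer (the actual EvConv system handles full networks and irregular sparsity; tensor names and the toy event pattern here are illustrative only):

```python
# Hypothetical sketch (not the authors' EvConv code): incremental inference on
# the *difference* between consecutive inputs, exploiting the linearity of
# convolution. Sizes, names, and the injected "events" are illustrative only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
weight = torch.randn(8, 1, 3, 3)              # a single conv layer's kernel

x_prev = torch.randn(1, 1, 64, 64)            # frame built from earlier events
y_prev = F.conv2d(x_prev, weight, padding=1)  # cached output of the layer

# The new input differs from the previous one in only a few pixels (sparse events).
x_new = x_prev.clone()
x_new[0, 0, 10:12, 20:22] += 1.0

delta = x_new - x_prev                        # very sparse increment
sparsity = (delta != 0).float().mean().item()

# Because conv2d is linear (any bias cancels in the difference),
# conv(x_new) == conv(x_prev) + conv(delta); only the sparse delta is processed.
y_new_incremental = y_prev + F.conv2d(delta, weight, padding=1)
y_new_reference = F.conv2d(x_new, weight, padding=1)

print(f"nonzero fraction of increment: {sparsity:.4f}")
print("max abs error vs. full recompute:",
      (y_new_incremental - y_new_reference).abs().max().item())
```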
2303.04671
|
Chenfei Wu
|
Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang,
Nan Duan
|
Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation
Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ChatGPT is attracting a cross-field interest as it provides a language
interface with remarkable conversational competency and reasoning capabilities
across many domains. However, since ChatGPT is trained with languages, it is
currently not capable of processing or generating images from the visual world.
At the same time, Visual Foundation Models, such as Visual Transformers or
Stable Diffusion, although showing great visual understanding and generation
capabilities, are only experts on specific tasks with one-round fixed
inputs and outputs. To this end, we build a system called \textbf{Visual
ChatGPT}, incorporating different Visual Foundation Models, to enable the user
to interact with ChatGPT by 1) sending and receiving not only languages but
also images, 2) providing complex visual questions or visual editing
instructions that require the collaboration of multiple AI models over
multiple steps, and 3) providing feedback and asking for corrected results. We design
a series of prompts to inject the visual model information into ChatGPT,
considering models of multiple inputs/outputs and models that require visual
feedback. Experiments show that Visual ChatGPT opens the door to investigating
the visual roles of ChatGPT with the help of Visual Foundation Models. Our
system is publicly available at
\url{https://github.com/microsoft/visual-chatgpt}.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 15:50:02 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Wu",
"Chenfei",
""
],
[
"Yin",
"Shengming",
""
],
[
"Qi",
"Weizhen",
""
],
[
"Wang",
"Xiaodong",
""
],
[
"Tang",
"Zecheng",
""
],
[
"Duan",
"Nan",
""
]
] |
new_dataset
| 0.999056 |
2303.04753
|
Roberto C. Sundin
|
Roberto C. Sundin and David Umsonst
|
kollagen: A Collaborative SLAM Pose Graph Generator
|
Accepted for publication in 2023 IEEE International Conference on
Robotics and Automation (ICRA)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we address the lack of datasets for - and the issue of
reproducibility in - collaborative SLAM pose graph optimizers by providing a
novel pose graph generator. Our pose graph generator, kollagen, is based on a
random walk in a planar grid world, similar to the popular M3500 dataset for
single agent SLAM. It is simple to use and the user can set several parameters,
e.g., the number of agents, the number of nodes, loop closure generation
probabilities, and standard deviations of the measurement noise. Furthermore, a
qualitative execution time analysis of our pose graph generator showcases the
speed of the generator in the tunable parameters.
In addition to the pose graph generator, our paper provides two example
datasets that researchers can use out-of-the-box to evaluate their algorithms.
One of the datasets has 8 agents, each with 3500 nodes, and 67645 constraints
in the pose graphs, while the other has 5 agents, each with 10000 nodes, and
76134 constraints. In addition, we show that current state-of-the-art pose
graph optimizers are able to process our generated datasets and perform pose
graph optimization.
The data generator can be found at
https://github.com/EricssonResearch/kollagen.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 17:39:36 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Sundin",
"Roberto C.",
""
],
[
"Umsonst",
"David",
""
]
] |
new_dataset
| 0.995267 |
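An illustrative toy version of the generator idea in the kollagen record above (a random walk on a planar grid with noisy odometry edges and probabilistic loop closures); this is not the kollagen API, and the parameter names below are assumptions made for the example:

```python
# Illustrative sketch only (not the kollagen API): generate a toy pose graph by
# a random walk on a planar grid, with Gaussian-perturbed odometry edges and
# probabilistic loop closures when a grid cell is revisited.
import random

def generate_pose_graph(n_nodes=100, p_loop=0.3, sigma=0.05, seed=1):
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # grid moves
    pose = (0, 0)
    nodes = [pose]
    visited = {pose: 0}
    odometry, loop_closures = [], []

    for i in range(1, n_nodes):
        dx, dy = rng.choice(steps)
        pose = (pose[0] + dx, pose[1] + dy)
        # odometry edge with additive Gaussian noise on the relative motion
        odometry.append((i - 1, i,
                         dx + rng.gauss(0, sigma),
                         dy + rng.gauss(0, sigma)))
        nodes.append(pose)
        if pose in visited and rng.random() < p_loop:
            # loop-closure edge back to an earlier visit of the same cell
            loop_closures.append((visited[pose], i,
                                  rng.gauss(0, sigma), rng.gauss(0, sigma)))
        visited[pose] = i

    return nodes, odometry, loop_closures

nodes, odom, loops = generate_pose_graph()
print(len(nodes), "nodes,", len(odom), "odometry edges,", len(loops), "loop closures")
```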
2303.04778
|
Zhongyi Jiang
|
Zhongyi Jiang, Min Zhu, Dongzhuo Li, Qiuzi Li, Yanhua O. Yuan, Lu Lu
|
Fourier-MIONet: Fourier-enhanced multiple-input neural operators for
multiphase modeling of geological carbon sequestration
| null | null | null | null |
cs.LG physics.comp-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Geologic Carbon Storage (GCS) is an important technology that aims to reduce
the amount of carbon dioxide in the atmosphere. Multiphase flow in porous media
is essential to understand CO2 migration and pressure fields in the subsurface
associated with GCS. However, numerical simulation for such problems in 4D is
computationally challenging and expensive, due to the multiphysics and
multiscale nature of the highly nonlinear governing partial differential
equations (PDEs). It prevents us from considering multiple subsurface scenarios
and conducting real-time optimization. Here, we develop a Fourier-enhanced
multiple-input neural operator (Fourier-MIONet) to learn the solution operator
of the problem of multiphase flow in porous media. Fourier-MIONet utilizes the
recently developed framework of the multiple-input deep neural operators
(MIONet) and incorporates the Fourier neural operator (FNO) in the network
architecture. Once Fourier-MIONet is trained, it can predict the evolution of
saturation and pressure of the multiphase flow under various reservoir
conditions, such as permeability and porosity heterogeneity, anisotropy,
injection configurations, and multiphase flow properties. Compared to the
enhanced FNO (U-FNO), the proposed Fourier-MIONet has 90% fewer unknown
parameters, and it can be trained in significantly less time (about 3.5 times
faster) with much lower CPU memory (< 15%) and GPU memory (< 35%) requirements,
to achieve similar prediction accuracy. In addition to the lower computational
cost, Fourier-MIONet can be trained with only 6 snapshots of time to predict
the PDE solutions for 30 years. The excellent generalizability of
Fourier-MIONet is enabled by its adherence to the physical principle that the
solution to a PDE is continuous over time.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 18:20:56 GMT"
}
] | 2023-03-09T00:00:00 |
[
[
"Jiang",
"Zhongyi",
""
],
[
"Zhu",
"Min",
""
],
[
"Li",
"Dongzhuo",
""
],
[
"Li",
"Qiuzi",
""
],
[
"Yuan",
"Yanhua O.",
""
],
[
"Lu",
"Lu",
""
]
] |
new_dataset
| 0.991055 |
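For context on the Fourier-MIONet record above, a generic 1-D Fourier (spectral convolution) layer of the kind FNO-style architectures are built from; this is only the standard building block, not the Fourier-MIONet architecture itself, and channel counts and mode truncation are arbitrary demo values:

```python
# Generic FNO-style spectral convolution layer, shown only to illustrate what
# "Fourier-enhanced" refers to; this is not the Fourier-MIONet implementation.
import torch

class SpectralConv1d(torch.nn.Module):
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weights = torch.nn.Parameter(
            scale * torch.rand(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                        # x: (batch, in_ch, n_points)
        x_ft = torch.fft.rfft(x)                 # to Fourier space
        out_ft = torch.zeros(x.shape[0], self.weights.shape[1],
                             x_ft.shape[-1], dtype=torch.cfloat)
        # keep only the lowest `modes` frequencies and mix channels there
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.shape[-1])

layer = SpectralConv1d(in_ch=2, out_ch=2, modes=8)
print(layer(torch.randn(4, 2, 64)).shape)        # torch.Size([4, 2, 64])
```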
1811.11881
|
Karthik Abinav Sankararaman
|
Nicole Immorlica and Karthik Abinav Sankararaman and Robert Schapire
and Aleksandrs Slivkins
|
Adversarial Bandits with Knapsacks
|
The extended abstract appeared in FOCS 2019. The definitive version
was published in JACM '22. V8 is the latest version with all technical
changes. Subsequent versions fixes minor LATEX presentation issues
| null | null | null |
cs.DS cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We consider Bandits with Knapsacks (henceforth, BwK), a general model for
multi-armed bandits under supply/budget constraints. In particular, a bandit
algorithm needs to solve a well-known knapsack problem: find an optimal packing
of items into a limited-size knapsack. The BwK problem is a common
generalization of numerous motivating examples, which range from dynamic
pricing to repeated auctions to dynamic ad allocation to network routing and
scheduling. While the prior work on BwK focused on the stochastic version, we
pioneer the other extreme in which the outcomes can be chosen adversarially.
This is a considerably harder problem, compared to both the stochastic version
and the "classic" adversarial bandits, in that regret minimization is no longer
feasible. Instead, the objective is to minimize the competitive ratio: the
ratio of the benchmark reward to the algorithm's reward.
We design an algorithm with competitive ratio O(log T) relative to the best
fixed distribution over actions, where T is the time horizon; we also prove a
matching lower bound. The key conceptual contribution is a new perspective on
the stochastic version of the problem. We suggest a new algorithm for the
stochastic version, which builds on the framework of regret minimization in
repeated games and admits a substantially simpler analysis compared to prior
work. We then analyze this algorithm for the adversarial version and use it as
a subroutine to solve the latter.
|
[
{
"version": "v1",
"created": "Wed, 28 Nov 2018 23:43:11 GMT"
},
{
"version": "v10",
"created": "Mon, 6 Feb 2023 01:43:48 GMT"
},
{
"version": "v11",
"created": "Tue, 7 Mar 2023 04:06:03 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Dec 2018 02:13:00 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Mar 2019 17:12:51 GMT"
},
{
"version": "v4",
"created": "Fri, 22 Mar 2019 22:17:04 GMT"
},
{
"version": "v5",
"created": "Sun, 13 Oct 2019 05:01:32 GMT"
},
{
"version": "v6",
"created": "Fri, 6 Nov 2020 19:18:05 GMT"
},
{
"version": "v7",
"created": "Thu, 23 Sep 2021 23:52:00 GMT"
},
{
"version": "v8",
"created": "Tue, 19 Jul 2022 05:21:00 GMT"
},
{
"version": "v9",
"created": "Wed, 3 Aug 2022 06:11:18 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Immorlica",
"Nicole",
""
],
[
"Sankararaman",
"Karthik Abinav",
""
],
[
"Schapire",
"Robert",
""
],
[
"Slivkins",
"Aleksandrs",
""
]
] |
new_dataset
| 0.999106 |
2106.12735
|
Qiuyu Mao
|
Yingjie Wang, Qiuyu Mao, Hanqi Zhu, Jiajun Deng, Yu Zhang, Jianmin Ji,
Houqiang Li, Yanyong Zhang
|
Multi-Modal 3D Object Detection in Autonomous Driving: a Survey
|
Accepted by International Journal of Computer Vision (IJCV)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this survey, we first introduce the background of popular sensors used for
self-driving, their data properties, and the corresponding object detection
algorithms. Next, we discuss existing datasets that can be used for evaluating
multi-modal 3D object detection algorithms. Then we present a review of
multi-modal fusion based 3D detection networks, taking a close look at their
fusion stage, fusion input and fusion granularity, and how these design choices
evolve with time and technology. After the review, we discuss open challenges
as well as possible solutions. We hope that this survey can help researchers to
get familiar with the field and embark on investigations in the area of
multi-modal 3D object detection.
|
[
{
"version": "v1",
"created": "Thu, 24 Jun 2021 02:52:12 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Jun 2021 15:39:13 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Mar 2023 05:29:56 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Wang",
"Yingjie",
""
],
[
"Mao",
"Qiuyu",
""
],
[
"Zhu",
"Hanqi",
""
],
[
"Deng",
"Jiajun",
""
],
[
"Zhang",
"Yu",
""
],
[
"Ji",
"Jianmin",
""
],
[
"Li",
"Houqiang",
""
],
[
"Zhang",
"Yanyong",
""
]
] |
new_dataset
| 0.999078 |
2108.06158
|
Andrea Mastropietro
|
Paola Stolfi, Andrea Mastropietro, Giuseppe Pasculli, Paolo Tieri,
Davide Vergni
|
NIAPU: network-informed adaptive positive-unlabeled learning for disease
gene identification
|
This article has been accepted for publication in Bioinformatics,
Published by Oxford University Press
| null |
10.1093/bioinformatics/btac848
| null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Gene-disease associations are fundamental for understanding disease etiology
and developing effective interventions and treatments. Identifying genes not
yet associated with a disease due to a lack of studies is a challenging task in
which prioritization based on prior knowledge is an important element. The
computational search for new candidate disease genes may be eased by
positive-unlabeled learning, the machine learning setting in which only a
subset of instances are labeled as positive while the rest of the data set is
unlabeled. In this work, we propose a set of effective network-based features
to be used in a novel Markov diffusion-based multi-class labeling strategy for
putative disease gene discovery. The performances of the new labeling algorithm
and the effectiveness of the proposed features have been tested on ten
different disease data sets using three machine learning algorithms. The new
features have been compared against classical topological and
functional/ontological features and a set of network- and biological-derived
features already used in gene discovery tasks. The predictive power of the
integrated methodology in searching for new disease genes has been found to be
competitive against state-of-the-art algorithms.
|
[
{
"version": "v1",
"created": "Fri, 13 Aug 2021 10:25:47 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jun 2022 17:28:06 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Dec 2022 10:44:50 GMT"
},
{
"version": "v4",
"created": "Wed, 25 Jan 2023 12:13:50 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Stolfi",
"Paola",
""
],
[
"Mastropietro",
"Andrea",
""
],
[
"Pasculli",
"Giuseppe",
""
],
[
"Tieri",
"Paolo",
""
],
[
"Vergni",
"Davide",
""
]
] |
new_dataset
| 0.981565 |
2111.14022
|
Hengtao He
|
Hengtao He, Xianghao Yu, Jun Zhang, S.H. Song, Khaled B. Letaief
|
Cell-Free Massive MIMO Detection: A Distributed Expectation Propagation
Approach
|
31 Pages, 8 Figures, 2 Tables. This paper has been submitted to the
IEEE for possible publication. arXiv admin note: substantial text overlap
with arXiv:2108.07498
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cell-free massive MIMO is one of the core technologies for future wireless
networks. It is expected to bring enormous benefits, including ultra-high
reliability, data throughput, energy efficiency, and uniform coverage. As a
radically distributed system, the performance of cell-free massive MIMO
critically relies on efficient distributed processing algorithms. In this
paper, we propose a distributed expectation propagation (EP) detector for
cell-free massive MIMO, which consists of two modules: a nonlinear module at
the central processing unit (CPU) and a linear module at each access point
(AP). The turbo principle in iterative channel decoding is utilized to compute
and pass the extrinsic information between the two modules. An analytical
framework is provided to characterize the asymptotic performance of the
proposed EP detector with a large number of antennas. Furthermore, a
distributed iterative channel estimation and data detection (ICD) algorithm is
developed to handle the practical setting with imperfect channel state
information (CSI). Simulation results will show that the proposed method
outperforms existing detectors for cell-free massive MIMO systems in terms of
the bit-error rate and demonstrate that the developed theoretical analysis
accurately predicts system performance. Finally, it is shown that with
imperfect CSI, the proposed ICD algorithm improves the system performance
significantly and enables non-orthogonal pilots to reduce the pilot overhead.
|
[
{
"version": "v1",
"created": "Sun, 28 Nov 2021 02:07:43 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 13:47:12 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"He",
"Hengtao",
""
],
[
"Yu",
"Xianghao",
""
],
[
"Zhang",
"Jun",
""
],
[
"Song",
"S. H.",
""
],
[
"Letaief",
"Khaled B.",
""
]
] |
new_dataset
| 0.998088 |
2203.03749
|
Kenny Chen
|
Kenny Chen, Ryan Nemiroff, Brett T. Lopez
|
Direct LiDAR-Inertial Odometry: Lightweight LIO with Continuous-Time
Motion Correction
|
IEEE International Conference on Robotics and Automation (ICRA) 2023.
Video: https://www.youtube.com/watch?v=4-oXjG8ow10
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aggressive motions from agile flights or traversing irregular terrain induce
motion distortion in LiDAR scans that can degrade state estimation and mapping.
Some methods exist to mitigate this effect, but they are still too simplistic
or computationally costly for resource-constrained mobile robots. To this end,
this paper presents Direct LiDAR-Inertial Odometry (DLIO), a lightweight
LiDAR-inertial odometry algorithm with a new coarse-to-fine approach in
constructing continuous-time trajectories for precise motion correction. The
key to our method lies in the construction of a set of analytical equations
which are parameterized solely by time, enabling fast and parallelizable
point-wise deskewing. This method is feasible only because of the strong
convergence properties in our nonlinear geometric observer, which provides
provably correct state estimates for initializing the sensitive IMU integration
step. Moreover, by simultaneously performing motion correction and prior
generation, and by directly registering each scan to the map and bypassing
scan-to-scan, DLIO's condensed architecture is nearly 20% more computationally
efficient than the current state-of-the-art with a 12% increase in accuracy. We
demonstrate DLIO's superior localization accuracy, map quality, and lower
computational overhead as compared to four state-of-the-art algorithms through
extensive tests using multiple public benchmark and self-collected datasets.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 22:21:59 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 22:55:26 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Sep 2022 00:44:24 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Mar 2023 03:11:27 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Chen",
"Kenny",
""
],
[
"Nemiroff",
"Ryan",
""
],
[
"Lopez",
"Brett T.",
""
]
] |
new_dataset
| 0.999456 |
2203.07530
|
Levi Burner
|
Levi Burner, Nitin J. Sanket, Cornelia Ferm\"uller, Yiannis Aloimonos
|
TTCDist: Fast Distance Estimation From an Active Monocular Camera Using
Time-to-Contact
|
19 pages, 24 figures, 1 table. To be published in ICRA 2023
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Distance estimation from vision is fundamental for a myriad of robotic
applications such as navigation, manipulation, and planning. Inspired by the
mammal's visual system, which gazes at specific objects, we develop two novel
constraints relating time-to-contact, acceleration, and distance that we call
the $\tau$-constraint and $\Phi$-constraint. They allow an active (moving)
camera to estimate depth efficiently and accurately while using only a small
portion of the image. The constraints are applicable to range sensing, sensor
fusion, and visual servoing.
We successfully validate the proposed constraints with two experiments. The
first applies both constraints in a trajectory estimation task with a monocular
camera and an Inertial Measurement Unit (IMU). Our methods achieve 30-70% less
average trajectory error while running 25$\times$ and 6.2$\times$ faster than
the popular Visual-Inertial Odometry methods VINS-Mono and ROVIO respectively.
The second experiment demonstrates that when the constraints are used for
feedback with efference copies the resulting closed loop system's eigenvalues
are invariant to scaling of the applied control signal. We believe these
results indicate the $\tau$- and $\Phi$-constraints' potential as the basis of
robust and efficient algorithms for a multitude of robotic applications.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 22:34:10 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Sep 2022 16:45:42 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Mar 2023 17:24:32 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Burner",
"Levi",
""
],
[
"Sanket",
"Nitin J.",
""
],
[
"Fermüller",
"Cornelia",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
new_dataset
| 0.999689 |
2205.03699
|
Yuantong Li
|
Yuantong Li, Chi-hua Wang, Guang Cheng, Will Wei Sun
|
Rate-Optimal Contextual Online Matching Bandit
| null | null | null | null |
cs.LG cs.GT cs.MA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Two-sided online matching platforms have been employed in various markets.
However, agents' preferences in present markets are usually implicit and unknown
and must be learned from data. With the growing availability of side
information involved in the decision process, modern online matching
methodology demands the capability to track preference dynamics for agents
based on their contextual information. This motivates us to consider a novel
Contextual Online Matching Bandit prOblem (COMBO), which allows dynamic
preferences in matching decisions. Existing works focus on multi-armed bandit
with static preference, but this is insufficient: the two-sided preference
changes whenever one side's contextual information updates, resulting in
non-static matching. In this paper, we propose a Centralized Contextual -
Explore Then Commit (CC-ETC) algorithm to adapt to the COMBO. CC-ETC solves
online matching with dynamic preference. In theory, we show that CC-ETC
achieves a sublinear regret upper bound O(log(T)) and is a rate-optimal
algorithm by proving a matching lower bound. In the experiments, we demonstrate
that CC-ETC is robust to variant preference schemes, dimensions of contexts,
reward noise levels, and contexts variation levels.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 18:28:20 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 04:59:51 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Li",
"Yuantong",
""
],
[
"Wang",
"Chi-hua",
""
],
[
"Cheng",
"Guang",
""
],
[
"Sun",
"Will Wei",
""
]
] |
new_dataset
| 0.990783 |
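As background for the CC-ETC record above, a plain Explore-Then-Commit skeleton for a stochastic multi-armed bandit; the centralized, contextual matching logic of CC-ETC is not reproduced here, and the arm means and exploration budget below are arbitrary demo choices:

```python
# Plain Explore-Then-Commit (ETC) baseline: explore each arm m times, then
# commit to the empirically best arm for the rest of the horizon.
import random

def explore_then_commit(arm_means, horizon=10_000, m=50, seed=0):
    rng = random.Random(seed)
    k = len(arm_means)
    pulls, sums = [0] * k, [0.0] * k
    total_reward = 0.0

    for t in range(horizon):
        if t < k * m:                    # exploration phase: round-robin pulls
            arm = t % k
        else:                            # commit phase: best empirical mean
            arm = max(range(k), key=lambda a: sums[a] / pulls[a])
        reward = arm_means[arm] + rng.gauss(0, 1)   # noisy reward draw
        pulls[arm] += 1
        sums[arm] += reward
        total_reward += reward

    # regret relative to always playing the best arm in expectation
    return horizon * max(arm_means) - total_reward

print("empirical regret:", round(explore_then_commit([0.2, 0.5, 0.8]), 1))
```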
2208.07750
|
Liang Lv
|
Liang Lv, Yi Fang, Lin Dai, Yonghui Li, and Mohsen Guizani
|
Asymmetric Dual-Mode Constellation and Protograph LDPC Code Design for
Generalized Spatial MPPM Systems
|
accepted by IEEE transactions on communications
| null |
10.1109/TCOMM.2023.3253687
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To achieve reliable and efficient transmissions in free-space optical (FSO)
communication, this paper designs a new protograph low-density parity-check
(PLDPC) coded generalized spatial multipulse position modulation (GSMPPM)
system over weak turbulence channels. Specifically, we investigate the PLDPC
code, generalized space shift keying (GSSK) modulation, and MPPM constellation.
First, we propose a type of novel GSMPPM constellations that intelligently
integrates the GSSK into MPPM, referred to as asymmetric dual-mode (ADM)
constellations, so as to improve the performance of the PLDPC-coded GSMPPM
system. Furthermore, exploiting a protograph extrinsic information transfer
(PEXIT) algorithm, we construct a type of improved PLDPC code, referred to as
I-PLDPC code, which outperforms the existing PLDPC codes over weak turbulence
channels. Analytical and simulation results show that the proposed ADM
constellations and the proposed I-PLDPC code can obtain noticeable performance
gains over their counterparts. Therefore, the proposed PLDPC-coded GSMPPM
system with ADM constellations is competent to satisfy the high-reliability
requirement for FSO applications.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 13:48:24 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 03:03:19 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Lv",
"Liang",
""
],
[
"Fang",
"Yi",
""
],
[
"Dai",
"Lin",
""
],
[
"Li",
"Yonghui",
""
],
[
"Guizani",
"Mohsen",
""
]
] |
new_dataset
| 0.997235 |
2209.11368
|
Andrew SaLoutos
|
Andrew SaLoutos, Elijah Stanger-Jones, Menglong Guo, Hongmin Kim, and
Sangbae Kim
|
Design of a Multimodal Fingertip Sensor for Dynamic Manipulation
|
6 pages, 2 pages of references, supplementary video at
https://youtu.be/6Ph-cNJyJYQ. Appearing at ICRA 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce a spherical fingertip sensor for dynamic manipulation. It is
based on barometric pressure and time-of-flight proximity sensors and is
low-latency, compact, and physically robust. The sensor uses a trained neural
network to estimate the contact location and three-axis contact forces based on
data from the pressure sensors, which are embedded within the sensor's sphere
of polyurethane rubber. The time-of-flight sensors face in three different
outward directions, and an integrated microcontroller samples each of the
individual sensors at up to 200 Hz. To quantify the effect of system latency on
dynamic manipulation performance, we develop and analyze a metric called the
collision impulse ratio and characterize the end-to-end latency of our new
sensor. We also present experimental demonstrations with the sensor, including
measuring contact transitions, performing coarse mapping, maintaining a contact
force with a moving object, and reacting to avoid collisions.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2022 01:56:49 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 02:46:39 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"SaLoutos",
"Andrew",
""
],
[
"Stanger-Jones",
"Elijah",
""
],
[
"Guo",
"Menglong",
""
],
[
"Kim",
"Hongmin",
""
],
[
"Kim",
"Sangbae",
""
]
] |
new_dataset
| 0.999443 |
2210.11262
|
Yanfei Xiang
|
Yanfei Xiang, Xin Wang, Shu Hu, Bin Zhu, Xiaomeng Huang, Xi Wu, Siwei
Lyu
|
RMBench: Benchmarking Deep Reinforcement Learning for Robotic
Manipulator Control
|
8 pages, 3 figures, 2 tables; update code's link
| null |
10.48550/ARXIV.2210.11262
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning is applied to solve actual complex tasks from
high-dimensional, sensory inputs. The last decade has developed a long list of
reinforcement learning algorithms. Recent progress benefits from deep learning
for raw sensory signal representation. One question naturally arises: how well
do they perform concerning different robotic manipulation tasks? Benchmarks use
objective performance metrics to offer a scientific way to compare algorithms.
In this paper, we present RMBench, the first benchmark for robotic
manipulations, which have high-dimensional continuous action and state spaces.
We implement and evaluate reinforcement learning algorithms that directly use
observed pixels as inputs. We report their average performance and learning
curves to show their performance and stability of training. Our study concludes
that none of the studied algorithms can handle all tasks well, soft
Actor-Critic outperforms most algorithms in average reward and stability, and
an algorithm combined with data augmentation may facilitate learning policies.
Our code is publicly available at
https://github.com/xiangyanfei212/RMBench-2022, including all benchmark tasks
and studied algorithms.
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 13:34:26 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 05:06:03 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Mar 2023 12:12:26 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Xiang",
"Yanfei",
""
],
[
"Wang",
"Xin",
""
],
[
"Hu",
"Shu",
""
],
[
"Zhu",
"Bin",
""
],
[
"Huang",
"Xiaomeng",
""
],
[
"Wu",
"Xi",
""
],
[
"Lyu",
"Siwei",
""
]
] |
new_dataset
| 0.985973 |
2211.00715
|
Yuhao Jiang
|
Yuhao Jiang, Fuchen Chen, Daniel M. Aukes
|
Tunable Dynamic Walking via Soft Twisted Beam Vibration
|
8 pages, 5 figure, this paper has been submitted to IEEE Robotics and
Automation Letters, copyright may be transferred without notice, after which
this version may no longer be accessible, the supplemental video is available
at: https://youtu.be/HpvOvaIC1Z4
|
IEEE Robotics and Automation Letters, vol. 8, no. 4, pp.
1967-1974, April 2023
|
10.1109/LRA.2023.3244716
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel mechanism that propagates vibration through soft twisted
beams, taking advantage of dynamically-coupled anisotropic stiffness to
simplify the actuation of walking robots. Using dynamic simulation and
experimental approaches, we show that the coupled stiffness of twisted beams
with terrain contact can be controlled to generate a variety of complex
trajectories by changing the frequency of the input signal. This work reveals
how ground contact influences the system's dynamic behavior, supporting the
design of walking robots inspired by this phenomenon. We also show that the
proposed twisted beam produces a tunable walking gait from a single vibrational
input.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 19:28:07 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Jiang",
"Yuhao",
""
],
[
"Chen",
"Fuchen",
""
],
[
"Aukes",
"Daniel M.",
""
]
] |
new_dataset
| 0.986536 |
2212.04799
|
Cuiling Fan
|
Li Xu, Cuiling Fan, Sihem Mesnager, Rong Luo, Haode Yan
|
Subfield Codes of Several Few-Weight Linear Codes Parametrized by
Functions and Their Consequences
|
arXiv admin note: text overlap with arXiv:1804.06003,
arXiv:2207.07262 by other authors
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Subfield codes of linear codes over finite fields have recently received much
attention. Some of these codes are optimal and have applications in secrete
sharing, authentication codes and association schemes. In this paper, the
$q$-ary subfield codes $C_{f,g}^{(q)}$ of six different families of linear
codes $C_{f,g}$ parametrized by two functions $f, g$ over a finite field
$F_{q^m}$ are considered and studied, respectively. The parameters and
(Hamming) weight distribution of $C_{f,g}^{(q)}$ and their punctured codes
$\bar{C}_{f,g}^{(q)}$ are explicitly determined. The parameters of the duals of
these codes are also analyzed. Some of the resultant $q$-ary codes
$C_{f,g}^{(q)},$ $\bar{C}_{f,g}^{(q)}$ and their dual codes are optimal and
some have the best known parameters. The parameters and weight enumerators of
the first two families of linear codes $C_{f,g}$ are also settled, among which
the first family is an optimal two-weight linear code meeting the Griesmer
bound, and the dual codes of these two families are almost MDS codes. As a
byproduct of this paper, a family of $[2^{4m-2},2m+1,2^{4m-3}]$ quaternary
Hermitian self-dual codes is obtained for $m \geq 2$. As an application, we
show that three families of the derived linear codes give rise to several
infinite families of $t$-designs ($t \in \{2, 3\}$).
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 12:03:05 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 01:02:59 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Xu",
"Li",
""
],
[
"Fan",
"Cuiling",
""
],
[
"Mesnager",
"Sihem",
""
],
[
"Luo",
"Rong",
""
],
[
"Yan",
"Haode",
""
]
] |
new_dataset
| 0.999837 |
2212.12287
|
Paolo Amore
|
Paolo Amore
|
Circle packing in regular polygons
|
38 pages, 20 figures
| null |
10.1063/5.0140644
| null |
cs.CG cond-mat.soft
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the packing of a large number of congruent and non--overlapping
circles inside a regular polygon. We have devised efficient algorithms that
allow one to generate configurations of $N$ densely packed circles inside a
regular polygon and we have carried out intensive numerical experiments
spanning several polygons (the largest number of sides considered here being
$16$) and up to $200$ circles ($400$ circles in the special cases of the
equilateral triangle and the regular hexagon). Some of the configurations that
we have found possibly are not global maxima of the packing fraction,
particularly for $N \gg 1$, due to the great computational complexity of the
problem, but nonetheless they should provide good lower bounds for the packing
fraction at a given $N$. This is the first systematic numerical study of
packing in regular polygons, which previously had only been carried out for the
equilateral triangle, the square and the circle.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 12:31:16 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Amore",
"Paolo",
""
]
] |
new_dataset
| 0.985772 |
2302.03573
|
Yilun Du
|
Ethan Chun, Yilun Du, Anthony Simeonov, Tomas Lozano-Perez, Leslie
Kaelbling
|
Local Neural Descriptor Fields: Locally Conditioned Object
Representations for Manipulation
|
ICRA 2023, Project Page: https://elchun.github.io/lndf/
| null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
A robot operating in a household environment will see a wide range of unique
and unfamiliar objects. While a system could train on many of these, it is
infeasible to predict all the objects a robot will see. In this paper, we
present a method to generalize object manipulation skills acquired from a
limited number of demonstrations, to novel objects from unseen shape
categories. Our approach, Local Neural Descriptor Fields (L-NDF), utilizes
neural descriptors defined on the local geometry of the object to effectively
transfer manipulation demonstrations to novel objects at test time. In doing
so, we leverage the local geometry shared between objects to produce a more
general manipulation framework. We illustrate the efficacy of our approach in
manipulating novel objects in novel poses -- both in simulation and in the real
world.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 16:37:19 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 21:03:39 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Chun",
"Ethan",
""
],
[
"Du",
"Yilun",
""
],
[
"Simeonov",
"Anthony",
""
],
[
"Lozano-Perez",
"Tomas",
""
],
[
"Kaelbling",
"Leslie",
""
]
] |
new_dataset
| 0.995439 |
2302.03976
|
Matthew Johnson
|
Matthew A. Johnson and Stavros Volos and Ken Gordon and Sean T. Allen
and Christoph M. Wintersteiger and Sylvan Clebsch and John Starks and Manuel
Costa
|
Parma: Confidential Containers via Attested Execution Policies
|
12 pages, 6 figures, 2 tables
| null | null | null |
cs.CR cs.NI cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
Container-based technologies empower cloud tenants to develop highly portable
software and deploy services in the cloud at a rapid pace. Cloud privacy,
meanwhile, is important as a large number of container deployments operate on
privacy-sensitive data, but challenging due to the increasing frequency and
sophistication of attacks. State-of-the-art confidential container-based
designs leverage process-based trusted execution environments (TEEs), but face
security and compatibility issues that limits their practical deployment. We
propose Parma, an architecture that provides lift-and-shift deployment of
unmodified containers while providing strong security protection against a
powerful attacker who controls the untrusted host and hypervisor. Parma
leverages VM-level isolation to execute a container group within a unique
VM-based TEE. Besides container integrity and user data confidentiality and
integrity, Parma also offers container attestation and execution integrity
based on an attested execution policy. Parma execution policies provide an
inductive proof over all future states of the container group. This proof,
which is established during initialization, forms a root of trust that can be
used for secure operations within the container group without requiring any
modifications of the containerized workflow itself (aside from the inclusion of
the execution policy). We evaluate Parma on AMD SEV-SNP processors by running a
diverse set of workloads demonstrating that workflows exhibit 0-26% additional
overhead in performance over running outside the enclave, with a mean 13%
overhead on SPEC2017, while requiring no modifications to their program code.
Adding execution policies introduces less than 1% additional overhead.
Furthermore, we have deployed Parma as the underlying technology driving
Confidential Containers on Azure Container Instances.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 10:15:07 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2023 10:04:07 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Mar 2023 16:16:33 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Johnson",
"Matthew A.",
""
],
[
"Volos",
"Stavros",
""
],
[
"Gordon",
"Ken",
""
],
[
"Allen",
"Sean T.",
""
],
[
"Wintersteiger",
"Christoph M.",
""
],
[
"Clebsch",
"Sylvan",
""
],
[
"Starks",
"John",
""
],
[
"Costa",
"Manuel",
""
]
] |
new_dataset
| 0.999737 |
2302.13149
|
Ali Al-Kaswan
|
Ali Al-Kaswan and Maliheh Izadi and Arie van Deursen
|
STACC: Code Comment Classification using SentenceTransformers
| null | null | null | null |
cs.SE cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Code comments are a key resource for information about software artefacts.
Depending on the use case, only some types of comments are useful. Thus,
automatic approaches to classify these comments have been proposed. In this
work, we address this need by proposing, STACC, a set of
SentenceTransformers-based binary classifiers. These lightweight classifiers
are trained and tested on the NLBSE Code Comment Classification tool
competition dataset, and surpass the baseline by a significant margin,
achieving an average F1 score of 0.74 against the baseline of 0.31, which is an
improvement of 139%. A replication package, as well as the models themselves,
are publicly available.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 20:24:58 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 12:22:00 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Al-Kaswan",
"Ali",
""
],
[
"Izadi",
"Maliheh",
""
],
[
"van Deursen",
"Arie",
""
]
] |
new_dataset
| 0.996394 |
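A minimal sketch of the general recipe referenced in the STACC record above (embed comments with a SentenceTransformer, then fit a lightweight binary classifier); this is not the STACC implementation or its training data, and the model name, toy comments, and labels are illustrative assumptions:

```python
# Hedged sketch: SentenceTransformer embeddings + a simple binary classifier.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

comments = [
    "Returns the number of active sessions.",          # summary-style comment
    "TODO: remove this hack once issue #42 is fixed",  # not a summary
    "Computes the checksum of the given buffer.",
    "Copyright (c) 2020 Example Corp.",
]
labels = [1, 0, 1, 0]   # 1 = belongs to the target comment category

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(comments)                  # (n_samples, dim) array

clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
query = encoder.encode(["Gets the current user's locale."])
print("predicted class:", clf.predict(query)[0])
```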
2303.00628
|
Changhan Wang
|
Mohamed Anwar, Bowen Shi, Vedanuj Goswami, Wei-Ning Hsu, Juan Pino,
Changhan Wang
|
MuAViC: A Multilingual Audio-Visual Corpus for Robust Speech Recognition
and Robust Speech-to-Text Translation
| null | null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce MuAViC, a multilingual audio-visual corpus for robust speech
recognition and robust speech-to-text translation providing 1200 hours of
audio-visual speech in 9 languages. It is fully transcribed and covers 6
English-to-X translation as well as 6 X-to-English translation directions. To
the best of our knowledge, this is the first open benchmark for audio-visual
speech-to-text translation and the largest open benchmark for multilingual
audio-visual speech recognition. Our baseline results show that MuAViC is
effective for building noise-robust speech recognition and translation models.
We make the corpus available at https://github.com/facebookresearch/muavic.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 16:31:01 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 16:41:01 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Anwar",
"Mohamed",
""
],
[
"Shi",
"Bowen",
""
],
[
"Goswami",
"Vedanuj",
""
],
[
"Hsu",
"Wei-Ning",
""
],
[
"Pino",
"Juan",
""
],
[
"Wang",
"Changhan",
""
]
] |
new_dataset
| 0.99972 |
2303.01032
|
Qi Zheng
|
Qi Zheng, Daqing Liu, Chaoyue Wang, Jing Zhang, Dadong Wang, Dacheng
Tao
|
ESceme: Vision-and-Language Navigation with Episodic Scene Memory
|
Tech. report; typos corrected
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Vision-and-language navigation (VLN) simulates a visual agent that follows
natural-language navigation instructions in real-world scenes. Existing
approaches have made enormous progress in navigation in new environments, such
as beam search, pre-exploration, and dynamic or hierarchical history encoding.
To balance generalization and efficiency, we resort to memorizing visited
scenarios apart from the ongoing route while navigating. In this work, we
introduce a mechanism of Episodic Scene memory (ESceme) for VLN that wakes an
agent's memories of past visits when it enters the current scene. The episodic
scene memory allows the agent to envision a bigger picture of the next
prediction. This way, the agent learns to utilize dynamically updated
information instead of merely adapting to static observations. We provide a
simple yet effective implementation of ESceme by enhancing the accessible views
at each location and progressively completing the memory while navigating. We
verify the superiority of ESceme on short-horizon (R2R), long-horizon (R4R),
and vision-and-dialog (CVDN) VLN tasks. Our ESceme also wins first place on the
CVDN leaderboard. Code is available at \url{https://github.com/qizhust/esceme}.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 07:42:07 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 03:52:21 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Zheng",
"Qi",
""
],
[
"Liu",
"Daqing",
""
],
[
"Wang",
"Chaoyue",
""
],
[
"Zhang",
"Jing",
""
],
[
"Wang",
"Dadong",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.99967 |
2303.03105
|
Weikai Kong
|
Weikai Kong, Shuhong Ye, Chenglin Yao, Jianfeng Ren
|
Confidence-based Event-centric Online Video Question Answering on a
Newly Constructed ATBS Dataset
|
Accepted for publication at the 2023 IEEE International Conference on
Acoustics, Speech, and Signal Processing (ICASSP 2023)
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks facilitate video question answering (VideoQA), but the
real-world applications on video streams such as CCTV and live cast place
higher demands on the solver. To address the challenges of VideoQA on long
videos of unknown length, we define a new set of problems called Online
Open-ended Video Question Answering (O^2VQA). It requires an online
state-updating mechanism for the solver to decide if the collected information
is sufficient to conclude an answer. We then propose a Confidence-based
Event-centric Online Video Question Answering (CEO-VQA) model to solve this
problem. Furthermore, a dataset called Answer Target in Background Stream
(ATBS) is constructed to evaluate this newly developed online VideoQA
application. Compared to the baseline VideoQA method that watches the whole
video, the experimental results show that the proposed method achieves a
significant performance gain.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 13:16:17 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 05:39:39 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Kong",
"Weikai",
""
],
[
"Ye",
"Shuhong",
""
],
[
"Yao",
"Chenglin",
""
],
[
"Ren",
"Jianfeng",
""
]
] |
new_dataset
| 0.971449 |
2303.03396
|
Lu Bai
|
Lixin Cui, Ming Li, Yue Wang, Lu Bai, Edwin R. Hancock
|
AERK: Aligned Entropic Reproducing Kernels through Continuous-time
Quantum Walks
|
Corresponding author: Lu Bai, bailu@bnu.edu.cn; bailucs@cufe.edu.cn
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we develop an Aligned Entropic Reproducing Kernel (AERK) for
graph classification. We commence by performing the Continuous-time Quantum
Walk (CTQW) on each graph structure, and computing the Averaged Mixing Matrix
(AMM) to describe how the CTQW visits all vertices from a starting vertex. More
specifically, we show how this AMM matrix allows us to compute a quantum
Shannon entropy for each vertex of a graph. For pairwise graphs, the proposed
AERK kernel is defined by computing a reproducing kernel based similarity
between the quantum Shannon entropies of each pair of their aligned vertices.
The analysis of theoretical properties reveals that the proposed AERK kernel
cannot only address the shortcoming of neglecting the structural correspondence
information between graphs arising in most existing R-convolution graph
kernels, but also overcome the problem of neglecting the structural differences
between pairs of aligned vertices arising in existing vertex-based matching
kernels. Moreover, unlike existing classical graph kernels that only focus on
the global or local structural information of graphs, the proposed AERK kernel
can simultaneously capture both global and local structural information through
the quantum Shannon entropies, reflecting more precise kernel based similarity
measures between pairs of graphs. The above theoretical properties explain the
effectiveness of the proposed kernel. The experimental evaluation on standard
graph datasets demonstrates that the proposed AERK kernel is able to outperform
state-of-the-art graph kernels for graph classification tasks.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 16:48:39 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Cui",
"Lixin",
""
],
[
"Li",
"Ming",
""
],
[
"Wang",
"Yue",
""
],
[
"Bai",
"Lu",
""
],
[
"Hancock",
"Edwin R.",
""
]
] |
new_dataset
| 0.998222 |
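A rough sketch of computing a CTQW average mixing matrix and per-vertex Shannon entropies for a small graph, related to the AERK record above. The Hamiltonian choice (adjacency matrix) and the per-vertex entropy definition are assumptions for illustration; the vertex alignment and the reproducing kernel built on top of these entropies are not shown:

```python
# Assumed-form sketch: average mixing matrix of a continuous-time quantum walk
# as the sum of Hadamard-squared spectral projectors, then one Shannon entropy
# per vertex from the corresponding row distribution.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)        # toy undirected graph

evals, evecs = np.linalg.eigh(A)
M = np.zeros_like(A)
for lam in np.unique(np.round(evals, 8)):        # group (near-)equal eigenvalues
    idx = np.isclose(evals, lam)
    P = evecs[:, idx] @ evecs[:, idx].T          # spectral projector for lam
    M += P * P                                   # Hadamard square, accumulated

row_probs = M / M.sum(axis=1, keepdims=True)     # each row is a distribution
entropies = -(row_probs * np.log(row_probs + 1e-12)).sum(axis=1)
print(np.round(entropies, 4))                    # one entropy value per vertex
```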
2303.03510
|
Abdul Gafar Manuel Meque
|
Abdul Gafar Manuel Meque, Nisar Hussain, Grigori Sidorov, and
Alexander Gelbukh
|
Guilt Detection in Text: A Step Towards Understanding Complex Emotions
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a novel Natural Language Processing (NLP) task called Guilt
detection, which focuses on detecting guilt in text. We identify guilt as a
complex and vital emotion that has not been previously studied in NLP, and we
aim to provide a more fine-grained analysis of it. To address the lack of
publicly available corpora for guilt detection, we created VIC, a dataset
containing 4622 texts from three existing emotion detection datasets that we
binarized into guilt and no-guilt classes. We experimented with traditional
machine learning methods using bag-of-words and term frequency-inverse document
frequency features, achieving a 72% f1 score with the highest-performing model.
Our study provides a first step towards understanding guilt in text and opens
the door for future research in this area.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 21:36:19 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Meque",
"Abdul Gafar Manuel",
""
],
[
"Hussain",
"Nisar",
""
],
[
"Sidorov",
"Grigori",
""
],
[
"Gelbukh",
"Alexander",
""
]
] |
new_dataset
| 0.999084 |
2303.03614
|
Yuxiang Zeng
|
Zengyang Gong, Yuxiang Zeng, Lei Chen
|
A Fast Insertion Operator for Ridesharing over Time-Dependent Road
Networks
|
12 pages
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ridesharing has become a promising travel mode recently due to the economic
and social benefits. As an essential operator, the "insertion operator" has been
extensively studied over static road networks. When a new request appears, the
insertion operator is used to find the optimal positions in a worker's current
route to insert the origin and destination of this request and minimize the
travel time of this worker. Previous works study how to conduct the insertion
operation efficiently in static road networks; however, in reality, route
planning should be addressed by considering the dynamic traffic scenario (i.e.,
a time-dependent road network). Unfortunately, existing solutions to the
insertion operator become inefficient under this setting. Thus, this paper
studies the insertion operator over time-dependent road networks. Specifically,
to reduce the high time complexity $O(n^3)$ of the existing solution, we calculate
the compound travel time functions along the route to speed up the calculation of
the travel time between vertex pairs belonging to the route; as a result, the
time complexity of an insertion can be reduced to $O(n^2)$. Finally, we further
improve the method to a linear-time insertion algorithm by showing that it only
needs $O(1)$ time to find the best position in the current route to insert the
origin when linearly enumerating each possible position for the new request's
destination. Evaluations on two real-world and large-scale datasets show that
our methods can accelerate the existing insertion algorithm by up to 25 times.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 03:00:26 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Gong",
"Zengyang",
""
],
[
"Zeng",
"Yuxiang",
""
],
[
"Chen",
"Lei",
""
]
] |
new_dataset
| 0.951765 |
2303.03626
|
Mingming Fan
|
Emily Kuang and Ruihuan Chen and Mingming Fan
|
Enhancing Older Adults' Gesture Typing Experience Using the T9 Keyboard
on Small Touchscreen Devices
|
Proceedings of the 2023 CHI Conference on Human Factors in Computing
Systems (CHI '23), April 23--28, 2023, Hamburg, Germany
| null |
10.1145/3544548.3581105
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Older adults increasingly adopt small-screen devices, but limited motor
dexterity hinders their ability to type effectively. While a 9-key (T9)
keyboard allocates larger space to each key, it is shared by multiple
consecutive letters. Consequently, users must interrupt their gestures when
typing consecutive letters, leading to inefficiencies and poor user experience.
Thus, we proposed a novel keyboard that leverages the currently unused key 1 to
duplicate letters from the previous key, allowing the entry of consecutive
letters without interruptions. A user study with 12 older adults showed that it
significantly outperformed the T9 with wiggle gesture in typing speed, KSPC,
insertion errors, and deletes per word while achieving comparable performance
as the conventional T9. Repeating the typing tasks with 12 young adults found
that the advantages of the novel T9 were consistent or enhanced. We also
provide error analysis and design considerations for improving gesture typing
on T9 for older adults.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 03:18:49 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Kuang",
"Emily",
""
],
[
"Chen",
"Ruihuan",
""
],
[
"Fan",
"Mingming",
""
]
] |
new_dataset
| 0.962416 |
2303.03672
|
Md Awsafur Rahman
|
Md Awsafur Rahman, Bishmoy Paul, Tanvir Mahmud and Shaikh Anowarul
Fattah
|
CIFF-Net: Contextual Image Feature Fusion for Melanoma Diagnosis
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Melanoma is considered to be the deadliest variant of skin cancer causing
around 75\% of total skin cancer deaths. To diagnose Melanoma, clinicians
assess and compare multiple skin lesions of the same patient concurrently to
gather contextual information regarding the patterns, and abnormality of the
skin. So far this concurrent multi-image comparative method has not been
explored by existing deep learning-based schemes. In this paper, based on
contextual image feature fusion (CIFF), a deep neural network (CIFF-Net) is
proposed, which integrates patient-level contextual information into the
traditional approaches for improved Melanoma diagnosis by concurrent
multi-image comparative method. The proposed multi-kernel self attention (MKSA)
module offers better generalization of the extracted features by introducing
multi-kernel operations in the self attention mechanisms. To utilize both self
attention and contextual feature-wise attention, an attention guided module
named contextual feature fusion (CFF) is proposed that integrates extracted
features from different contextual images into a single feature vector.
Finally, in comparative contextual feature fusion (CCFF) module, primary and
contextual features are compared concurrently to generate comparative features.
Significant improvement in performance has been achieved on the ISIC-2020
dataset over the traditional approaches that validate the effectiveness of the
proposed contextual learning scheme.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 06:16:10 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Rahman",
"Md Awsafur",
""
],
[
"Paul",
"Bishmoy",
""
],
[
"Mahmud",
"Tanvir",
""
],
[
"Fattah",
"Shaikh Anowarul",
""
]
] |
new_dataset
| 0.990326 |
2303.03699
|
Ebrahim Farahmand
|
Amin Kargar-Barzi, Ebrahim Farahmand, Ali Mahani, and Muhammad
Shafique
|
CAE-CNNLoc: An Edge-based WiFi Fingerprinting Indoor Localization Using
Convolutional Neural Network and Convolutional Auto-Encoder
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
With the ongoing development of Indoor Location-Based Services, accurate
location information of users in indoor environments has been a challenging
issue in recent years. Due to the widespread use of WiFi networks, WiFi
fingerprinting has become one of the most practical methods of locating mobile
users. In addition to localization accuracy, some other critical factors such
as cost, latency, and users' privacy should be considered in indoor
localization systems. In this study, we propose a lightweight Convolutional
Neural Network (CNN)-based method for edge devices (such as smartphones) to
overcome the above issues by eliminating the need for a cloud/server in the
localization system. To enable the use of the proposed model on
resource-constrained edge devices, post-training optimization techniques
including quantization, pruning and clustering are used to compress the network
model. The proposed method is evaluated for three different open datasets,
i.e., UJIIndoorLoc, Tampere and UTSIndoorLoc, as well as for our collected
dataset named SBUK-D to verify its scalability. The results demonstrate the
superiority of the proposed method compared to state-of-the-art studies. We
also evaluate the performance efficiency of our localization method on an Android
smartphone to demonstrate its applicability to edge devices. For UJIIndoorLoc
dataset, our model with post-training optimizations obtains approximately 99%
building accuracy, over 98% floor accuracy, and 4 m positioning mean error with
the model size and inference time of 60 KB and 270 us, respectively, which
demonstrate high accuracy as well as amenability to the resource-constrained
edge devices.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 07:30:57 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Kargar-Barzi",
"Amin",
""
],
[
"Farahmand",
"Ebrahim",
""
],
[
"Mahani",
"Ali",
""
],
[
"Shafique",
"Muhammad",
""
]
] |
new_dataset
| 0.98327 |
2303.03716
|
Fabian Sturm
|
Fabian Sturm, Elke Hergenroether, Julian Reinhardt, Petar Smilevski
Vojnovikj, Melanie Siegel
|
Challenges of the Creation of a Dataset for Vision Based Human Hand
Action Recognition in Industrial Assembly
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents the Industrial Hand Action Dataset V1, an industrial
assembly dataset consisting of 12 classes with 459,180 images in the basic
version and 2,295,900 images after spatial augmentation. Compared to other
freely available datasets tested, it has an above-average duration and, in
addition, meets the technical and legal requirements for industrial assembly
lines. Furthermore, the dataset contains occlusions, hand-object interaction,
and various fine-grained human hand actions for industrial assembly tasks that
were not found in combination in examined datasets. The recorded ground truth
assembly classes were selected after extensive observation of real-world use
cases. A Gated Transformer Network, a state-of-the-art model from the
transformer domain with 18,269,959 trainable parameters, was adapted and, with a
test accuracy of 86.25% before hyperparameter tuning, demonstrated that it is
possible to train sequential deep learning models with this dataset.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 07:57:12 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Sturm",
"Fabian",
""
],
[
"Hergenroether",
"Elke",
""
],
[
"Reinhardt",
"Julian",
""
],
[
"Vojnovikj",
"Petar Smilevski",
""
],
[
"Siegel",
"Melanie",
""
]
] |
new_dataset
| 0.993972 |
2303.03745
|
Amit Moryossef
|
Amit Moryossef, Yanai Elazar, and Yoav Goldberg
|
At Your Fingertips: Extracting Piano Fingering Instructions from Videos
|
6 pages, paper from 2019
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Piano fingering -- knowing which finger to use to play each note in a musical
piece -- is a hard and important skill to master when learning to play the piano.
While some sheet music is available with expert-annotated fingering
information, most pieces lack this information, and people often resort to
learning the fingering from demonstrations in online videos. We consider the AI
task of automating the extraction of fingering information from videos. This is
a non-trivial task as fingers are often occluded by other fingers, and it is
often not clear from the video which of the keys were pressed, requiring the
synchronization of hand position information and knowledge about the notes that
were played. We show how to perform this task with high accuracy using a
combination of deep-learning modules, including a GAN-based approach for
fine-tuning on out-of-domain data. We extract the fingering information with an
f1 score of 97\%. We run the resulting system on 90 videos, resulting in
high-quality piano fingering information of 150K notes, the largest available
dataset of piano-fingering to date.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 09:09:13 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Moryossef",
"Amit",
""
],
[
"Elazar",
"Yanai",
""
],
[
"Goldberg",
"Yoav",
""
]
] |
new_dataset
| 0.969316 |
2303.03749
|
Andreas Lochbihler
|
Alexander Bernauer and Sofia Faro and R\'emy H\"ammerle and Martin
Huschenbett and Moritz Kiefer and Andreas Lochbihler and Jussi M\"aki and
Francesco Mazzoli and Simon Meier and Neil Mitchell and Ratko G. Veprek
|
Daml: A Smart Contract Language for Securely Automating Real-World
Multi-Party Business Workflows
| null | null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed ledger technologies, also known as blockchains for enterprises,
promise to significantly reduce the high cost of automating multi-party
business workflows. We argue that a programming language for writing such
on-ledger logic should satisfy three desiderata: (1) Provide concepts to
capture the legal rules that govern real-world business workflows. (2) Include
simple means for specifying policies for access and authorization. (3) Support
the composition of simple workflows into complex ones, even when the simple
workflows have already been deployed.
We present the open-source smart contract language Daml based on Haskell with
strict evaluation. Daml achieves these desiderata by offering novel primitives
for representing, accessing, and modifying data on the ledger, which are
mimicking the primitives of today's legal systems. Robust access and
authorization policies are specified as part of these primitives, and Daml's
built-in authorization rules enable delegation, which is key for workflow
composability. These properties make Daml well-suited for orchestrating
business workflows across multiple, otherwise heterogeneous parties.
Daml contracts run (1) on centralized ledgers backed by a database, (2) on
distributed deployments with Byzantine fault tolerant consensus, and (3) on top
of conventional blockchains, as a second layer via an atomic commit protocol.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 09:16:22 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Bernauer",
"Alexander",
""
],
[
"Faro",
"Sofia",
""
],
[
"Hämmerle",
"Rémy",
""
],
[
"Huschenbett",
"Martin",
""
],
[
"Kiefer",
"Moritz",
""
],
[
"Lochbihler",
"Andreas",
""
],
[
"Mäki",
"Jussi",
""
],
[
"Mazzoli",
"Francesco",
""
],
[
"Meier",
"Simon",
""
],
[
"Mitchell",
"Neil",
""
],
[
"Veprek",
"Ratko G.",
""
]
] |
new_dataset
| 0.997706 |
2303.03755
|
Elad Levi
|
Elad Levi, Eli Brosh, Mykola Mykhailych, Meir Perez
|
DLT: Conditioned layout generation with Joint Discrete-Continuous
Diffusion Layout Transformer
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Generating visual layouts is an essential ingredient of graphic design. The
ability to condition layout generation on a partial subset of component
attributes is critical to real-world applications that involve user
interaction. Recently, diffusion models have demonstrated high-quality
generative performances in various domains. However, it is unclear how to apply
diffusion models to the natural representation of layouts which consists of a
mix of discrete (class) and continuous (location, size) attributes. To address
the conditioning layout generation problem, we introduce DLT, a joint
discrete-continuous diffusion model. DLT is a transformer-based model which has
a flexible conditioning mechanism that allows for conditioning on any given
subset of all the layout component classes, locations, and sizes. Our method
outperforms state-of-the-art generative models on various layout generation
datasets with respect to different metrics and conditioning settings.
Additionally, we validate the effectiveness of our proposed conditioning
mechanism and the joint discrete-continuous diffusion process. This joint process can be
incorporated into a wide range of mixed discrete-continuous generative tasks.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 09:30:43 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Levi",
"Elad",
""
],
[
"Brosh",
"Eli",
""
],
[
"Mykhailych",
"Mykola",
""
],
[
"Perez",
"Meir",
""
]
] |
new_dataset
| 0.996483 |
2303.03797
|
Simon Bultmann
|
Simon Bultmann, Raphael Memmesheimer, and Sven Behnke
|
External Camera-based Mobile Robot Pose Estimation for Collaborative
Perception with Smart Edge Sensors
|
Accepted for ICRA 2023, 7 pages, 8 figures
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an approach for estimating a mobile robot's pose w.r.t. the
allocentric coordinates of a network of static cameras using multi-view RGB
images. The images are processed online, locally on smart edge sensors by deep
neural networks to detect the robot and estimate 2D keypoints defined at
distinctive positions of the 3D robot model. Robot keypoint detections are
synchronized and fused on a central backend, where the robot's pose is
estimated via multi-view minimization of reprojection errors. Through the pose
estimation from external cameras, the robot's localization can be initialized
in an allocentric map from a completely unknown state (kidnapped robot problem)
and robustly tracked over time. We conduct a series of experiments evaluating
the accuracy and robustness of the camera-based pose estimation compared to the
robot's internal navigation stack, showing that our camera-based method
achieves pose errors below 3 cm and 1{\deg} and does not drift over time, as
the robot is localized allocentrically. With the robot's pose precisely
estimated, its observations can be fused into the allocentric scene model. We
show a real-world application, where observations from mobile robot and static
smart edge sensors are fused to collaboratively build a 3D semantic map of a
$\sim$240 m$^2$ indoor environment.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 11:03:33 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Bultmann",
"Simon",
""
],
[
"Memmesheimer",
"Raphael",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.96702 |
2303.03839
|
Guillermo P\'erez
|
Swen Jacobs, Guillermo A. Perez, Philipp Schlehuber-Caissier
|
The Temporal Logic Synthesis Format TLSF v1.2
|
arXiv admin note: substantial text overlap with arXiv:1604.02284,
arXiv:1601.05228
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present an extension of the Temporal Logic Synthesis Format (TLSF). TLSF
builds on standard LTL, but additionally supports high-level constructs, such
as sets and functions, as well as parameters that allow a specification to
define a whole family of problems. Our extension introduces operators and a
new semantics option for LTLf, i.e., LTL on finite executions.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 12:09:39 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Jacobs",
"Swen",
""
],
[
"Perez",
"Guillermo A.",
""
],
[
"Schlehuber-Caissier",
"Philipp",
""
]
] |
new_dataset
| 0.999178 |
2303.03854
|
Zijian Wang
|
Zijian Wang, Boyuan Ouyang, Rafael Sacks
|
CBIM: object-level cloud collaboration platform for supporting
across-domain asynchronous design
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The conventional approach of designing BIM projects requires packaging of
building information as files to exchange designs. This study develops a series
of components to implement a previously established Cloud BIM (CBIM) platform
that facilitates fileless cloud collaboration across BIM disciplines. A CBIM
connector was developed to synchronize design changes from one discipline
client to a CBIM server. The server processes the modifications and propagates
relevant reference geometries to affected disciplines to assist their designs.
The success of the case study demonstrates a fileless approach for
multidisciplinary BIM collaboration and validates the practicality and
capabilities of the CBIM paradigm.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 12:39:07 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Wang",
"Zijian",
""
],
[
"Ouyang",
"Boyuan",
""
],
[
"Sacks",
"Rafael",
""
]
] |
new_dataset
| 0.99955 |
2303.03870
|
Aneesh Bhattacharya
|
Aneesh Bhattacharya, Uttaran Bhattacharya, Aniket Bera
|
DanceAnyWay: Synthesizing Mixed-Genre 3D Dance Movements Through Beat
Disentanglement
| null | null | null | null |
cs.SD cs.GR cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present DanceAnyWay, a hierarchical generative adversarial learning method
to synthesize mixed-genre dance movements of 3D human characters synchronized
with music. Our method learns to disentangle the dance movements at the beat
frames from the dance movements at all the remaining frames by operating at two
hierarchical levels. At the coarser "beat" level, it encodes the rhythm, pitch,
and melody information of the input music via dedicated feature representations
only at the beat frames. It leverages them to synthesize the beat poses of the
target dance using a sequence-to-sequence learning framework. At the finer
"repletion" level, our method encodes similar rhythm, pitch, and melody
information from all the frames of the input music via dedicated feature
representations and couples them with the synthesized beat poses from the
coarser level to synthesize the full target dance sequence using an adversarial
learning framework. By disentangling the broader dancing styles at the coarser
level from the specific dance movements at the finer level, our method can
efficiently synthesize dances composed of arbitrarily mixed genres and styles.
We evaluate the performance of our approach through extensive experiments on
both the mixed-genre TikTok dance dataset and the single-genre AIST++ dataset
and observe improvements of about 2% in motion quality metrics and 1.6% - 5.9%
in motion diversity metrics over the current baselines in the two datasets
respectively. We also conducted a user study to evaluate the visual quality of
our synthesized dances. We noted that, on average, the samples generated by our
method were about 9% more preferred by the participants and had a 12% better
five-point Likert-scale score over the best available current baseline in terms
of motion quality and diversity.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 22:20:24 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Bhattacharya",
"Aneesh",
""
],
[
"Bhattacharya",
"Uttaran",
""
],
[
"Bera",
"Aniket",
""
]
] |
new_dataset
| 0.998786 |
2303.03881
|
Ronit Purian PhD
|
Ronit Purian and Daniel Polani
|
Spatial, Social and Data Gaps in On-Demand Mobility Services: Towards a
Supply-Oriented MaaS
|
30 pages, 1 figure, 3 tables. September 30, 2021
| null | null | null |
cs.CY econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
After a decade of on-demand mobility services that change spatial behaviors
in metropolitan areas, the Shared Autonomous Vehicle (SAV) service is expected
to increase traffic congestion and unequal access to transport services. A
paradigm of scheduled supply that is aware of demand but not on-demand is
proposed, introducing coordination and social and behavioral understanding,
urban cognition and empowerment of agents, into a novel informational
framework. Daily routines and other patterns of spatial behaviors outline a
fundamental demand layer in a supply-oriented paradigm that captures urban
dynamics and spatial-temporal behaviors, mostly in groups. Rather than
real-time requests and instant responses that reward unplanned actions, and
beyond just reservation of travels in timetables, the intention is to capture
mobility flows in scheduled travels along the day considering time of day,
places, passengers etc. Regulating goal-directed behaviors and caring for
service resources and the overall system welfare is proposed to minimize
uncertainty, considering the capacity of mobility interactions to hold value,
i.e., Motility as a Service (MaaS). The principal-agent problem in the smart
city is a problem of collective action among service providers and users that
create expectations based on previous actions and reactions in mutual systems.
Planned behavior that accounts for service coordination is expected to
stabilize excessive rides and traffic load, and to induce a cognitive gain,
thus balancing information load and facilitating cognitive effort.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 10:04:41 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Purian",
"Ronit",
""
],
[
"Polani",
"Daniel",
""
]
] |
new_dataset
| 0.974516 |
2303.03915
|
Paulo Villegas
|
Hugo Lauren\c{c}on, Lucile Saulnier, Thomas Wang, Christopher Akiki,
Albert Villanova del Moral, Teven Le Scao, Leandro Von Werra, Chenghao Mou,
Eduardo Gonz\'alez Ponferrada, Huu Nguyen, J\"org Frohberg, Mario
\v{S}a\v{s}ko, Quentin Lhoest, Angelina McMillan-Major, Gerard Dupont, Stella
Biderman, Anna Rogers, Loubna Ben allal, Francesco De Toni, Giada Pistilli,
Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Javier de la
Rosa, Paulo Villegas, Tristan Thrush, Shayne Longpre, Sebastian Nagel, Leon
Weber, Manuel Mu\~noz, Jian Zhu, Daniel Van Strien, Zaid Alyafeai, Khalid
Almubarak, Minh Chien Vu, Itziar Gonzalez-Dios, Aitor Soroa, Kyle Lo, Manan
Dey, Pedro Ortiz Suarez, Aaron Gokaslan, Shamik Bose, David Adelani, Long
Phan, Hieu Tran, Ian Yu, Suhas Pai, Jenny Chim, Violette Lepercq, Suzana
Ilic, Margaret Mitchell, Sasha Alexandra Luccioni, Yacine Jernite
|
The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset
|
NeurIPS 2022, Datasets and Benchmarks Track
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As language models grow ever larger, the need for large-scale high-quality
text datasets has never been more pressing, especially in multilingual
settings. The BigScience workshop, a 1-year international and multidisciplinary
initiative, was formed with the goal of researching and training large language
models as a values-driven undertaking, putting issues of ethics, harm, and
governance in the foreground. This paper documents the data creation and
curation efforts undertaken by BigScience to assemble the Responsible
Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset
spanning 59 languages that was used to train the 176-billion-parameter
BigScience Large Open-science Open-access Multilingual (BLOOM) language model.
We further release a large initial subset of the corpus and analyses thereof,
and hope to empower large-scale monolingual and multilingual modeling projects
with both the data and the processing tools, as well as stimulate research
around this large multilingual corpus.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 14:25:44 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Laurençon",
"Hugo",
""
],
[
"Saulnier",
"Lucile",
""
],
[
"Wang",
"Thomas",
""
],
[
"Akiki",
"Christopher",
""
],
[
"del Moral",
"Albert Villanova",
""
],
[
"Scao",
"Teven Le",
""
],
[
"Von Werra",
"Leandro",
""
],
[
"Mou",
"Chenghao",
""
],
[
"Ponferrada",
"Eduardo González",
""
],
[
"Nguyen",
"Huu",
""
],
[
"Frohberg",
"Jörg",
""
],
[
"Šaško",
"Mario",
""
],
[
"Lhoest",
"Quentin",
""
],
[
"McMillan-Major",
"Angelina",
""
],
[
"Dupont",
"Gerard",
""
],
[
"Biderman",
"Stella",
""
],
[
"Rogers",
"Anna",
""
],
[
"allal",
"Loubna Ben",
""
],
[
"De Toni",
"Francesco",
""
],
[
"Pistilli",
"Giada",
""
],
[
"Nguyen",
"Olivier",
""
],
[
"Nikpoor",
"Somaieh",
""
],
[
"Masoud",
"Maraim",
""
],
[
"Colombo",
"Pierre",
""
],
[
"de la Rosa",
"Javier",
""
],
[
"Villegas",
"Paulo",
""
],
[
"Thrush",
"Tristan",
""
],
[
"Longpre",
"Shayne",
""
],
[
"Nagel",
"Sebastian",
""
],
[
"Weber",
"Leon",
""
],
[
"Muñoz",
"Manuel",
""
],
[
"Zhu",
"Jian",
""
],
[
"Van Strien",
"Daniel",
""
],
[
"Alyafeai",
"Zaid",
""
],
[
"Almubarak",
"Khalid",
""
],
[
"Vu",
"Minh Chien",
""
],
[
"Gonzalez-Dios",
"Itziar",
""
],
[
"Soroa",
"Aitor",
""
],
[
"Lo",
"Kyle",
""
],
[
"Dey",
"Manan",
""
],
[
"Suarez",
"Pedro Ortiz",
""
],
[
"Gokaslan",
"Aaron",
""
],
[
"Bose",
"Shamik",
""
],
[
"Adelani",
"David",
""
],
[
"Phan",
"Long",
""
],
[
"Tran",
"Hieu",
""
],
[
"Yu",
"Ian",
""
],
[
"Pai",
"Suhas",
""
],
[
"Chim",
"Jenny",
""
],
[
"Lepercq",
"Violette",
""
],
[
"Ilic",
"Suzana",
""
],
[
"Mitchell",
"Margaret",
""
],
[
"Luccioni",
"Sasha Alexandra",
""
],
[
"Jernite",
"Yacine",
""
]
] |
new_dataset
| 0.994384 |
2303.03926
|
Long Zhou
|
Ziqiang Zhang, Long Zhou, Chengyi Wang, Sanyuan Chen, Yu Wu, Shujie
Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, Furu
Wei
|
Speak Foreign Languages with Your Own Voice: Cross-Lingual Neural Codec
Language Modeling
|
We encourage readers to listen to the audio samples on our demo page:
\url{https://aka.ms/vallex}
| null | null | null |
cs.CL cs.AI cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a cross-lingual neural codec language model, VALL-E X, for
cross-lingual speech synthesis. Specifically, we extend VALL-E and train a
multi-lingual conditional codec language model to predict the acoustic token
sequences of the target language speech by using both the source language
speech and the target language text as prompts. VALL-E X inherits strong
in-context learning capabilities and can be applied for zero-shot cross-lingual
text-to-speech synthesis and zero-shot speech-to-speech translation tasks.
Experimental results show that it can generate high-quality speech in the
target language via just one speech utterance in the source language as a
prompt while preserving the unseen speaker's voice, emotion, and acoustic
environment. Moreover, VALL-E X effectively alleviates the foreign accent
problems, which can be controlled by a language ID. Audio samples are available
at \url{https://aka.ms/vallex}.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 14:31:55 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Zhang",
"Ziqiang",
""
],
[
"Zhou",
"Long",
""
],
[
"Wang",
"Chengyi",
""
],
[
"Chen",
"Sanyuan",
""
],
[
"Wu",
"Yu",
""
],
[
"Liu",
"Shujie",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Liu",
"Yanqing",
""
],
[
"Wang",
"Huaming",
""
],
[
"Li",
"Jinyu",
""
],
[
"He",
"Lei",
""
],
[
"Zhao",
"Sheng",
""
],
[
"Wei",
"Furu",
""
]
] |
new_dataset
| 0.981546 |
2303.03933
|
Baokun Wang
|
Jiafu Wu, Mufeng Yao, Dong Wu, Mingmin Chi, Baokun Wang, Ruofan Wu,
Xin Fu, Changhua Meng and Weiqiang Wang
|
DEDGAT: Dual Embedding of Directed Graph Attention Networks for
Detecting Financial Risk
| null | null | null | null |
cs.LG cs.AI cs.DC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Graph representation plays an important role in the field of financial risk
control, where the relationship among users can be constructed in a graph
manner. In practical scenarios, the relationships between nodes in risk control
tasks are bidirectional, e.g., merchants having both revenue and expense
behaviors. Graph neural networks designed for undirected graphs usually
aggregate discriminative node or edge representations with an attention
strategy, but cannot fully exploit the out-degree information when used for the
tasks built on directed graphs, which leads to the problem of directional
bias. To tackle this problem, we propose a Directed Graph ATtention network
called DGAT, which explicitly takes out-degree into attention calculation. In
addition to having directional requirements, the same node might have different
representations of its input and output, and thus we further propose a dual
embedding of DGAT, referred to as DEDGAT. Specifically, DEDGAT assigns
in-degree and out-degree representations to each node and uses these two
embeddings to calculate the attention weights of in-degree and out-degree
nodes, respectively. Experiments performed on the benchmark datasets show that
DGAT and DEDGAT obtain better classification performance compared to undirected
GAT. Also, the visualization results demonstrate that our methods can fully use
both in-degree and out-degree information.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 07:21:21 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Wu",
"Jiafu",
""
],
[
"Yao",
"Mufeng",
""
],
[
"Wu",
"Dong",
""
],
[
"Chi",
"Mingmin",
""
],
[
"Wang",
"Baokun",
""
],
[
"Wu",
"Ruofan",
""
],
[
"Fu",
"Xin",
""
],
[
"Meng",
"Changhua",
""
],
[
"Wang",
"Weiqiang",
""
]
] |
new_dataset
| 0.992911 |
2303.03983
|
Lovro Lugovi\'c
|
Lovro Lugovi\'c, Fabrizio Montesi
|
Real-World Choreographic Programming: An Experience Report
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Choreographic programming is a programming paradigm, whereby the overall
behaviour of a distributed system is coded as a choreography from a global
viewpoint. The choreography can then be automatically compiled (projected) to a
correct implementation for each participant.
Choreographic programming relieves the programmer from manually writing the
separate send and receive actions performed by participants and avoids the
problem of communication mismatches. However, the applicability of this
paradigm in the real world remains largely unexplored for two reasons. First,
while there have been several proposals of choreographic programming languages,
none of them have been used to implement a realistic, widely-used protocol.
Thus there is a lack of experience on how realistic choreographic programs are
structured and on the relevance of the features explored in theoretical models.
Second, applications of choreographic programming shown so far are intrusive
since each participant must use exactly the code projected from the
choreography. This prevents using the projected code with existing third-party
implementations of some participants.
We carry out the first development in choreographic programming of a
widespread real-world protocol: the Internet Relay Chat (IRC) protocol. Our
development is based on Choral, an object-oriented choreographic programming
language. Two of Choral's features are key to our implementation: higher-order
choreographies for modelling the complex interaction patterns due to IRC's
asynchronous nature; and user-definable communication semantics for achieving
interoperability with third-party implementations. We also discover a missing
piece: the capability of statically detecting that choices on alternative
distributed behaviours are appropriately communicated by means of message
types. We extend the Choral compiler with an elegant solution based on
subtyping.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 15:32:50 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Lugović",
"Lovro",
""
],
[
"Montesi",
"Fabrizio",
""
]
] |
new_dataset
| 0.972145 |
2303.03991
|
Xiaofeng Wang
|
Xiaofeng Wang, Zheng Zhu, Wenbo Xu, Yunpeng Zhang, Yi Wei, Xu Chi, Yun
Ye, Dalong Du, Jiwen Lu, Xingang Wang
|
OpenOccupancy: A Large Scale Benchmark for Surrounding Semantic
Occupancy Perception
|
project page: https://github.com/JeffWang987/OpenOccupancy
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic occupancy perception is essential for autonomous driving, as
automated vehicles require a fine-grained perception of the 3D urban
structures. However, existing relevant benchmarks lack diversity in urban
scenes, and they only evaluate front-view predictions. Towards a comprehensive
benchmarking of surrounding perception algorithms, we propose OpenOccupancy,
which is the first surrounding semantic occupancy perception benchmark. In the
OpenOccupancy benchmark, we extend the large-scale nuScenes dataset with dense
semantic occupancy annotations. Previous annotations rely on LiDAR points
superimposition, where some occupancy labels are missed due to sparse LiDAR
channels. To mitigate the problem, we introduce the Augmenting And Purifying
(AAP) pipeline to ~2x densify the annotations, where ~4000 human hours are
involved in the labeling process. Besides, camera-based, LiDAR-based and
multi-modal baselines are established for the OpenOccupancy benchmark.
Furthermore, considering the complexity of surrounding occupancy perception
lies in the computational burden of high-resolution 3D predictions, we propose
the Cascade Occupancy Network (CONet) to refine the coarse prediction, which
enhances the performance by ~30% relative to the baseline. We hope the
OpenOccupancy benchmark will boost the development of surrounding occupancy
perception algorithms.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 15:43:39 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Wang",
"Xiaofeng",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Xu",
"Wenbo",
""
],
[
"Zhang",
"Yunpeng",
""
],
[
"Wei",
"Yi",
""
],
[
"Chi",
"Xu",
""
],
[
"Ye",
"Yun",
""
],
[
"Du",
"Dalong",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Wang",
"Xingang",
""
]
] |
new_dataset
| 0.999565 |
2303.04086
|
Haimin Luo
|
Haimin Luo, Siyuan Zhang, Fuqiang Zhao, Haotian Jing, Penghao Wang,
Zhenxiao Yu, Dongxue Yan, Junran Ding, Boyuan Zhang, Qiang Hu, Shu Yin, Lan
Xu, JIngyi Yu
|
NEPHELE: A Neural Platform for Highly Realistic Cloud Radiance Rendering
| null | null | null | null |
cs.GR cs.CV cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have recently seen tremendous progress in neural rendering (NR) advances,
e.g., NeRF, for photo-real free-view synthesis. Yet, as a local technique based
on a single computer/GPU, even the best-engineered Instant-NGP or i-NGP cannot
reach real-time performance when rendering at a high resolution, and often
requires huge local computing resources. In this paper, we resort to cloud
rendering and present NEPHELE, a neural platform for highly realistic cloud
radiance rendering. In stark contrast with existing NR approaches, our NEPHELE
allows for more powerful rendering capabilities by combining multiple remote
GPUs and facilitates collaboration by allowing multiple people to view the same
NeRF scene simultaneously. We introduce i-NOLF to employ opacity light fields
for ultra-fast neural radiance rendering in a one-query-per-ray manner. We
further draw on the Lumigraph with geometry proxies for fast ray querying and
subsequently employ a small MLP to model the local opacity lumispheres for
high-quality rendering. We also adopt Perfect Spatial Hashing in i-NOLF to
enhance cache coherence. As a result, our i-NOLF achieves an order of magnitude
performance gain in terms of efficiency than i-NGP, especially for the
multi-user multi-viewpoint setting under cloud rendering scenarios. We further
tailor a task scheduler accompanied by our i-NOLF representation and
demonstrate the advance of our methodological design through a comprehensive
cloud platform, consisting of a series of cooperated modules, i.e., render
farms, task assigner, frame composer, and detailed streaming strategies. Using
such a cloud platform compatible with neural rendering, we further showcase the
capabilities of our cloud radiance rendering through a series of applications,
including cloud VR/AR rendering.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 17:47:33 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Luo",
"Haimin",
""
],
[
"Zhang",
"Siyuan",
""
],
[
"Zhao",
"Fuqiang",
""
],
[
"Jing",
"Haotian",
""
],
[
"Wang",
"Penghao",
""
],
[
"Yu",
"Zhenxiao",
""
],
[
"Yan",
"Dongxue",
""
],
[
"Ding",
"Junran",
""
],
[
"Zhang",
"Boyuan",
""
],
[
"Hu",
"Qiang",
""
],
[
"Yin",
"Shu",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"JIngyi",
""
]
] |
new_dataset
| 0.996826 |
2303.04092
|
Ruochen Zhang
|
Ruochen Zhang and Carsten Eickhoff
|
CroCoSum: A Benchmark Dataset for Cross-Lingual Code-Switched
Summarization
|
Work in Progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-lingual summarization (CLS) has attracted increasing interest in recent
years due to the availability of large-scale web-mined datasets and the
advancements of multilingual language models. However, given the rareness of
naturally occurring CLS resources, the majority of datasets are forced to rely
on translation which can contain overly literal artifacts. This restricts our
ability to observe naturally occurring CLS pairs that capture organic diction,
including instances of code-switching. This alternation between languages in
mid-message is a common phenomenon in multilingual settings yet has been
largely overlooked in cross-lingual contexts due to data scarcity. To address
this gap, we introduce CroCoSum, a dataset of cross-lingual code-switched
summarization of technology news. It consists of over 24,000 English source
articles and 18,000 human-curated Chinese news summaries, with more than 92% of
the summaries containing code-switched phrases. For reference, we evaluate the
performance of existing approaches including pipeline, end-to-end, and
zero-shot methods. We show that leveraging existing resources as a pretraining
step does not improve performance on CroCoSum, indicating the limited
generalizability of existing resources. Finally, we discuss the challenges of
evaluating cross-lingual summarizers on code-switched generation through
qualitative error analyses. Our collection and code can be accessed at
https://github.com/RosenZhang/CroCoSum.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 17:52:51 GMT"
}
] | 2023-03-08T00:00:00 |
[
[
"Zhang",
"Ruochen",
""
],
[
"Eickhoff",
"Carsten",
""
]
] |
new_dataset
| 0.999836 |
1308.5395
|
Bob Allen
|
Robert B. Allen
|
Toward an Interactive Directory for Norfolk, Nebraska: 1899-1900
| null |
IFLA Newspaper and Genealogy Section Meeting, Singapore, Aug 2013
| null | null |
cs.DL
|
http://creativecommons.org/licenses/by/3.0/
|
We describe steps toward an interactive directory for the town of Norfolk,
Nebraska for the years 1899 and 1900. This directory would extend the
traditional city directory by including a wider range of entities being
described, much richer information about the entities mentioned and linkages to
mentions of the entities in material such as digitized historical newspapers.
Such a directory would be useful to readers who browse the historical
newspapers by providing structured summaries of the entities mentioned. We
describe the occurrence of entities in two years of the Norfolk Weekly News,
focusing on several individuals to better understand the types of information
which can be gleaned from historical newspapers and other historical materials.
We also describe a prototype program which coordinates information about
entities from the traditional city directories, the federal census, and from
newspapers. We discuss the structured coding for these entities, noting that
richer coding would increasingly include descriptions of events and scenarios.
We propose that rich content about individuals and communities could eventually
be modeled with agents and woven into historical narratives.
|
[
{
"version": "v1",
"created": "Sun, 25 Aug 2013 10:40:34 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Allen",
"Robert B.",
""
]
] |
new_dataset
| 0.980531 |
2101.00090
|
Americo Rio
|
Am\'erico Rio, Fernando Brito e Abreu
|
PHP code smells in web apps: survival and anomalies
| null | null |
10.1016/j.jss.2023.111644
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Context: Code smells are considered symptoms of poor design, leading to
future problems, such as reduced maintainability. Except for anecdotal cases
(e.g., code dropout), a code smell survives until it gets explicitly refactored
or removed. This paper presents a longitudinal study on the survival of code
smells for web apps built with PHP.
Objectives: RQs: (i) does the survival of code smells depend on their scope? (ii)
have practitioners' attitudes towards code smell removal in web apps changed
throughout time? (iii) how long do code smells survive in web applications? (iv)
are there sudden variations (anomalies) in the density of code smells through
the evolution of web apps?
Method: We analyze the evolution of 6 code smells in 8 web applications
written in PHP at the server side, across several years, using the survival
analysis technique. We classify code smells according to scope in two
categories: scattered and localized. Scattered code smells are expected to be
more harmful since their influence is not circumscribed as in localized code
smells. We split the observations for each web app into two equal and
consecutive timeframes, to test the hypothesis that code smells awareness has
increased throughout time. As for the anomalies, we standardize their detection
criteria.
Results: We present some evidence that the survival of code smells depends on their
scope: the average survival rate decreases in some of them, while the opposite
is observed for the remainder. The survival of localized code smells is around
4 years, while the scattered ones live around 5 years. Around 60% of the smells
are removed, and some live through all the application life. We also show how a
graphical representation of anomalies found in the evolution of code smells
allows unveiling the story of a development project and make managers aware of
the need for enforcing regular refactoring practices.
|
[
{
"version": "v1",
"created": "Thu, 31 Dec 2020 22:05:24 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Rio",
"Américo",
""
],
[
"Abreu",
"Fernando Brito e",
""
]
] |
new_dataset
| 0.999203 |
2101.02530
|
Alexander Neergaard Zahid
|
Alexander Neergaard Olesen, Poul Jennum, Emmanuel Mignot and Helge B.
D. Sorensen
|
MSED: a multi-modal sleep event detection model for clinical sleep
analysis
|
10 pages, 4 figures. Accepted for publication in IEEE Transactions on
Biomedical Engineering
| null |
10.1109/TBME.2023.3252368
| null |
cs.CV cs.LG eess.SP stat.AP stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Clinical sleep analysis requires manual analysis of sleep patterns for correct
diagnosis of sleep disorders. However, several studies have shown significant
variability in manual scoring of clinically relevant discrete sleep events,
such as arousals, leg movements, and sleep disordered breathing (apneas and
hypopneas). We investigated whether an automatic method could be used for event
detection and if a model trained on all events (joint model) performed better
than corresponding event-specific models (single-event models). We trained a
deep neural network event detection model on 1653 individual recordings and
tested the optimized model on 1000 separate hold-out recordings. F1 scores for
the optimized joint detection model were 0.70, 0.63, and 0.62 for arousals, leg
movements, and sleep disordered breathing, respectively, compared to 0.65,
0.61, and 0.60 for the optimized single-event models. Index values computed
from detected events correlated positively with manual annotations ($r^2$ =
0.73, $r^2$ = 0.77, $r^2$ = 0.78, respectively). We furthermore quantified
model accuracy based on temporal difference metrics, which improved overall by
using the joint model compared to single-event models. Our automatic model
jointly detects arousals, leg movements and sleep disordered breathing events
with high correlation with human annotations. Finally, we benchmark against
previous state-of-the-art multi-event detection models and found an overall
increase in F1 score with our proposed model despite a 97.5% reduction in model
size. Source code for training and inference is available at
https://github.com/neergaard/msed.git.
|
[
{
"version": "v1",
"created": "Thu, 7 Jan 2021 13:08:44 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Mar 2023 21:16:18 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Olesen",
"Alexander Neergaard",
""
],
[
"Jennum",
"Poul",
""
],
[
"Mignot",
"Emmanuel",
""
],
[
"Sorensen",
"Helge B. D.",
""
]
] |
new_dataset
| 0.990733 |
2108.08679
|
Alexander Barg
|
Alexander Barg, Zitan Chen, and Itzhak Tamo
|
A construction of maximally recoverable codes
| null |
Designs, Codes and Cryptography, 2022, vol. 90, pp. 939-945
|
10.1007/s10623-022-01020-8
| null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct a family of linear maximally recoverable codes with locality $r$
and dimension $r+1.$ For codes of length $n$ with $r\approx n^\alpha,
0\le\alpha\le 1$ the code alphabet is of the order $n^{1+3\alpha},$ which
improves upon the previously known constructions of maximally recoverable
codes.
|
[
{
"version": "v1",
"created": "Thu, 19 Aug 2021 13:40:55 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Barg",
"Alexander",
""
],
[
"Chen",
"Zitan",
""
],
[
"Tamo",
"Itzhak",
""
]
] |
new_dataset
| 0.987356 |
2111.05974
|
Wei Xu
|
Wei Xu
|
User Centered Design (VII): From Automated Flight Deck to Intelligent
Flight Deck
|
in Chinese language
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Driven by the "user-centered design" philosophy, this paper first outlines
the human factors problems of the flight deck automation for large civil
aircraft and the human factors research carried out based on the
"human-centered automation" approach. This paper then reviews the previous
initial human factors research on intelligent civil flight deck based on the
"human-centered AI" approach and discusses the prospects for future human
factors research. Based on our proposed human factors engineering model for
intelligent human-computer interaction and the framework of joint cognitive
eco-systems, this paper proposes an initial human factors solution for the
single-pilot operations of large civil aircraft and presents preliminary
suggestions for future human factors research.
|
[
{
"version": "v1",
"created": "Wed, 10 Nov 2021 22:35:31 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jan 2022 04:59:06 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jan 2022 05:24:56 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Mar 2023 06:35:43 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Xu",
"Wei",
""
]
] |
new_dataset
| 0.995749 |
2112.06623
|
Clemens-Alexander Brust
|
Clemens-Alexander Brust and Tim Sonnekalb and Bernd Gruner
|
ROMEO: Exploring Juliet through the Lens of Assembly Language
|
21 pages, code available at https://gitlab.com/dlr-dw/romeo
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic vulnerability detection on C/C++ source code has benefitted from
the introduction of machine learning to the field, with many recent
publications targeting this combination. In contrast, assembly language or
machine code artifacts receive less attention, although there are compelling
reasons to study them. They are more representative of what is executed, more
easily incorporated in dynamic analysis, and in the case of closed-source code,
there is no alternative.
We evaluate the representative capability of assembly language compared to
C/C++ source code for vulnerability detection. Furthermore, we investigate the
role of call graph context in detecting function-spanning vulnerabilities.
Finally, we verify whether compiling a benchmark dataset compromises an
experiment's soundness by inadvertently leaking label information.
We propose ROMEO, a publicly available, reproducible and reusable binary
vulnerability detection benchmark dataset derived from the synthetic Juliet
test suite. Alongside, we introduce a simple text-based assembly language
representation that includes context for function-spanning vulnerability
detection and semantics to detect high-level vulnerabilities. It is constructed
by disassembling the .text segment of the respective binaries.
We evaluate an x86 assembly language representation of the compiled dataset,
combined with an off-the-shelf classifier. It compares favorably to
state-of-the-art methods, including those operating on the full C/C++ code.
Including context information using the call graph improves detection of
function-spanning vulnerabilities. There is no label information leaked during
the compilation process.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 13:06:48 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Jul 2022 08:30:33 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Mar 2023 13:49:24 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Brust",
"Clemens-Alexander",
""
],
[
"Sonnekalb",
"Tim",
""
],
[
"Gruner",
"Bernd",
""
]
] |
new_dataset
| 0.99492 |
2201.05980
|
Wei Xu
|
Wei Xu
|
User-Centered Design (VIII): A New Framework of Intelligent
Sociotechnical Systems and Prospects for Future Human Factors Research
|
in Chinese language
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional sociotechnical systems (STS) theory has been widely used, but
there are many new characteristics in the STS environment as we enter the
intelligence era, resulting in the limitations of traditional STS. Based on the
"user-centered design" philosophy, this paper proposes a new framework of
intelligent sociotechnical systems (iSTS) and outlines the new characteristics
of iSTS as well as its implications for the development of intelligent systems.
Future research on iSTS requires interdisciplinary collaboration, including
human factors engineering; this paper finally proposes suggestions from the two
aspects of human factors engineering methodology and approaches.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 06:18:04 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 07:47:47 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Mar 2023 06:26:21 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Xu",
"Wei",
""
]
] |
new_dataset
| 0.992345 |
2201.06093
|
Asaf Shabtai
|
Edan Habler, Ron Bitton, Dan Avraham, Dudu Mimran, Eitan Klevansky,
Oleg Brodt, Heiko Lehmann, Yuval Elovici, and Asaf Shabtai
|
Adversarial Machine Learning Threat Analysis and Remediation in Open
Radio Access Network (O-RAN)
| null | null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
O-RAN is a new, open, adaptive, and intelligent RAN architecture. Motivated
by the success of artificial intelligence in other domains, O-RAN strives to
leverage machine learning (ML) to automatically and efficiently manage network
resources in diverse use cases such as traffic steering, quality of experience
prediction, and anomaly detection. Unfortunately, it has been shown that
ML-based systems are vulnerable to an attack technique referred to as
adversarial machine learning (AML). This special kind of attack has already
been demonstrated in recent studies and in multiple domains. In this paper, we
present a systematic AML threat analysis for O-RAN. We start by reviewing
relevant ML use cases and analyzing the different ML workflow deployment
scenarios in O-RAN. Then, we define the threat model, identifying potential
adversaries, enumerating their adversarial capabilities, and analyzing their
main goals. Next, we explore the various AML threats associated with O-RAN and
review a large number of attacks that can be performed to realize these threats
and demonstrate an AML attack on a traffic steering model. In addition, we
analyze and propose various AML countermeasures for mitigating the identified
threats. Finally, based on the identified AML threats and countermeasures, we
present a methodology and a tool for performing risk assessment for AML attacks
for a specific ML use case in O-RAN.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 17:01:38 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Mar 2023 17:20:37 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Habler",
"Edan",
""
],
[
"Bitton",
"Ron",
""
],
[
"Avraham",
"Dan",
""
],
[
"Mimran",
"Dudu",
""
],
[
"Klevansky",
"Eitan",
""
],
[
"Brodt",
"Oleg",
""
],
[
"Lehmann",
"Heiko",
""
],
[
"Elovici",
"Yuval",
""
],
[
"Shabtai",
"Asaf",
""
]
] |
new_dataset
| 0.996411 |
2206.12455
|
Inwoo Hwang
|
Inwoo Hwang, Junho Kim, Young Min Kim
|
Ev-NeRF: Event Based Neural Radiance Field
|
Accepted to WACV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present Ev-NeRF, a Neural Radiance Field derived from event data. While
event cameras can measure subtle brightness changes in high frame rates, the
measurements in low lighting or extreme motion suffer from significant domain
discrepancy with complex noise. As a result, the performance of event-based
vision tasks does not transfer to challenging environments, where the event
cameras are expected to thrive over normal cameras. We find that the multi-view
consistency of NeRF provides a powerful self-supervision signal for eliminating
the spurious measurements and extracting the consistent underlying structure
despite highly noisy input. Instead of posed images of the original NeRF, the
input to Ev-NeRF is the event measurements accompanied by the movements of the
sensors. Using the loss function that reflects the measurement model of the
sensor, Ev-NeRF creates an integrated neural volume that summarizes the
unstructured and sparse data points captured for about 2-4 seconds. The
generated neural volume can also produce intensity images from novel views with
reasonable depth estimates, which can serve as a high-quality input to various
vision-based tasks. Our results show that Ev-NeRF achieves competitive
performance for intensity image reconstruction under extreme noise conditions
and high-dynamic-range imaging.
|
[
{
"version": "v1",
"created": "Fri, 24 Jun 2022 18:27:30 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Mar 2023 10:08:01 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Hwang",
"Inwoo",
""
],
[
"Kim",
"Junho",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.997939 |
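A loss "that reflects the measurement model of the sensor" is often built on the generic event-generation model below; this formulation is given as an assumption for illustration, and Ev-NeRF's exact loss may differ. An event of polarity p_k fires at pixel u when the log intensity changes by a contrast threshold C, so the rendered log intensities over an interval [t_1, t_2] should satisfy:

```latex
% Assumed generic event-generation model; Ev-NeRF's exact loss may differ.
\log \hat{I}_{\mathbf{u}}(t_2) - \log \hat{I}_{\mathbf{u}}(t_1)
  \;\approx\; C \sum_{e_k \in \mathcal{E}_{\mathbf{u}}(t_1,\,t_2)} p_k ,
\qquad
\mathcal{L} \;=\; \sum_{\mathbf{u}} \Big( \log \hat{I}_{\mathbf{u}}(t_2)
  - \log \hat{I}_{\mathbf{u}}(t_1) - C \sum_{e_k \in \mathcal{E}_{\mathbf{u}}(t_1,\,t_2)} p_k \Big)^{2} .
```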
2207.03333
|
Jishnu Jaykumar P
|
Jishnu Jaykumar P and Yu-Wei Chao and Yu Xiang
|
FewSOL: A Dataset for Few-Shot Object Learning in Robotic Environments
| null | null | null | null |
cs.CV cs.AI cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the Few-Shot Object Learning (FewSOL) dataset for object
recognition with a few images per object. We captured 336 real-world objects
with 9 RGB-D images per object from different views. Object segmentation masks,
object poses and object attributes are provided. In addition, synthetic images
generated using 330 3D object models are used to augment the dataset. We
investigated (i) few-shot object classification and (ii) joint object
segmentation and few-shot classification with the state-of-the-art methods for
few-shot learning and meta-learning using our dataset. The evaluation results
show that there is still a large margin for improvement in few-shot object
classification in robotic environments. Our dataset can be used to study a set
of few-shot object recognition problems such as classification, detection and
segmentation, shape reconstruction, pose estimation, keypoint correspondences
and attribute recognition. The dataset and code are available at
https://irvlutd.github.io/FewSOL.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 05:57:24 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 16:55:14 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Mar 2023 19:44:47 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"P",
"Jishnu Jaykumar",
""
],
[
"Chao",
"Yu-Wei",
""
],
[
"Xiang",
"Yu",
""
]
] |
new_dataset
| 0.999827 |
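For readers unfamiliar with the few-shot evaluation protocol mentioned above, here is a minimal, generic N-way K-shot episode sampler. The function name, defaults, and toy data layout are assumptions for illustration and are not taken from the FewSOL code base.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, q_queries=1):
    """Sample a generic N-way K-shot episode from a flat list of class labels."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for y in classes:
        picked = random.sample(by_class[y], k_shot + q_queries)
        support += [(i, y) for i in picked[:k_shot]]   # few labeled examples per class
        query += [(i, y) for i in picked[k_shot:]]     # held-out examples to classify
    return support, query

# Toy usage: 10 classes with 9 examples each, echoing the 9-views-per-object setup.
labels = [c for c in range(10) for _ in range(9)]
support, query = sample_episode(labels, n_way=5, k_shot=5, q_queries=4)
```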
2207.04242
|
Bin Ren
|
Bin Ren, Hao Tang, Yiming Wang, Xia Li, Wei Wang, Nicu Sebe
|
PI-Trans: Parallel-ConvMLP and Implicit-Transformation Based GAN for
Cross-View Image Translation
|
5 pages, 5 figures
|
2023 IEEE International Conference on Acoustics, Speech and Signal
Processing
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For semantic-guided cross-view image translation, it is crucial to learn
where to sample pixels from the source view image and where to reallocate them
guided by the target view semantic map, especially when there is little overlap
or drastic view difference between the source and target images. Hence, one not
only needs to encode the long-range dependencies among pixels in both the
source view image and target view semantic map but also needs to translate
these learned dependencies. To this end, we propose a novel generative
adversarial network, PI-Trans, which mainly consists of a novel
Parallel-ConvMLP module and an Implicit Transformation module at multiple
semantic levels. Extensive experimental results show that PI-Trans achieves the
best qualitative and quantitative performance by a large margin compared to the
state-of-the-art methods on two challenging datasets. The source code is
available at https://github.com/Amazingren/PI-Trans.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2022 10:35:44 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 09:54:55 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Ren",
"Bin",
""
],
[
"Tang",
"Hao",
""
],
[
"Wang",
"Yiming",
""
],
[
"Li",
"Xia",
""
],
[
"Wang",
"Wei",
""
],
[
"Sebe",
"Nicu",
""
]
] |
new_dataset
| 0.965506 |
2207.14465
|
Shijie Wang
|
Shijie Wang, Jianlong Chang, Zhihui Wang, Haojie Li, Wanli Ouyang, Qi
Tian
|
Fine-grained Retrieval Prompt Tuning
|
Accepted by AAAI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Fine-grained object retrieval aims to learn discriminative representation to
retrieve visually similar objects. However, existing top-performing works
usually impose pairwise similarities on the semantic embedding spaces or design
a localization sub-network to continually fine-tune the entire model in limited
data scenarios, thus resulting in convergence to suboptimal solutions. In this
paper, we develop Fine-grained Retrieval Prompt Tuning (FRPT), which steers a
frozen pre-trained model to perform the fine-grained retrieval task from the
perspectives of sample prompting and feature adaptation. Specifically, FRPT
only needs to learn fewer parameters in the prompt and adaptation instead of
fine-tuning the entire model, thus solving the issue of convergence to
suboptimal solutions caused by fine-tuning the entire model. Technically, a
discriminative perturbation prompt (DPP) is introduced and deemed as a sample
prompting process, which amplifies and even exaggerates some discriminative
elements contributing to category prediction via a content-aware inhomogeneous
sampling operation. In this way, DPP brings the fine-grained retrieval task,
aided by the perturbation prompts, closer to the task solved during the original
pre-training, thereby preserving the generalization and discrimination of the
representation extracted from input samples. Besides, a category-specific
awareness head is proposed and regarded as feature adaptation, which removes
the species discrepancies in features extracted by the pre-trained model using
category-guided instance normalization. Thus, the optimized features include
only the discrepancies among subcategories. Extensive
experiments demonstrate that our FRPT with fewer learnable parameters achieves
the state-of-the-art performance on three widely-used fine-grained datasets.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 04:10:04 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 08:40:26 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Mar 2023 09:45:11 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Wang",
"Shijie",
""
],
[
"Chang",
"Jianlong",
""
],
[
"Wang",
"Zhihui",
""
],
[
"Li",
"Haojie",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Tian",
"Qi",
""
]
] |
new_dataset
| 0.995856 |
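The core idea of steering a frozen pre-trained model with only a few learnable parameters can be sketched as below. The additive input-space prompt and the embedding head are simplifying assumptions for illustration; they do not reproduce FRPT's discriminative perturbation prompt or category-specific awareness head.

```python
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet50(weights=None)  # stand-in for a pre-trained model
backbone.fc = nn.Identity()                           # expose 2048-d features
for p in backbone.parameters():
    p.requires_grad = False                           # backbone stays frozen

# Learnable pieces only: an additive input-space prompt and a retrieval head (assumed forms).
prompt = nn.Parameter(torch.zeros(1, 3, 224, 224))
head = nn.Linear(2048, 128)
optimizer = torch.optim.AdamW([prompt, *head.parameters()], lr=1e-3)

def embed(images):
    """L2-normalized retrieval embeddings from prompted inputs."""
    feats = backbone(images + prompt)
    return nn.functional.normalize(head(feats), dim=-1)
```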
2208.07681
|
Chong Zhang
|
Chong Zhang, Lizhi Yang
|
Generating a Terrain-Robustness Benchmark for Legged Locomotion: A
Prototype via Terrain Authoring and Active Learning
|
7 pages, 7 figures. IEEE ICRA 2023
| null | null | null |
cs.RO cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Terrain-aware locomotion has become an emerging topic in legged robotics.
However, it is hard to generate diverse, challenging, and realistic
unstructured terrains in simulation, which limits the way researchers evaluate
their locomotion policies. In this paper, we prototype the generation of a
terrain dataset via terrain authoring and active learning, and the learned
samplers can stably generate diverse high-quality terrains. We expect the
generated dataset to serve as a terrain-robustness benchmark for legged locomotion.
The dataset, the code implementation, and some policy evaluations are released
at https://bit.ly/3bn4j7f.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 11:42:28 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2022 13:12:46 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Mar 2023 18:25:52 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Zhang",
"Chong",
""
],
[
"Yang",
"Lizhi",
""
]
] |
new_dataset
| 0.999782 |
2208.09686
|
Yuheng Shi
|
Yuheng Shi, Naiyan Wang, Xiaojie Guo
|
YOLOV: Making Still Image Object Detectors Great at Video Object
Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Video object detection (VID) is challenging because of the high variation of
object appearance as well as the diverse deterioration in some frames. On the
positive side, the detection in a certain frame of a video, compared with that
in a still image, can draw support from other frames. Hence, how to aggregate
features across different frames is pivotal to the VID problem. Most existing
aggregation algorithms are customized for two-stage detectors. However, these
detectors are usually computationally expensive due to their two-stage nature.
This work proposes a simple yet effective strategy to address the above
concerns, which incurs marginal overhead while yielding significant gains in accuracy.
Concretely, unlike the traditional two-stage pipeline, we select important
regions after the one-stage detection to avoid processing a massive number of low-quality
candidates. Besides, we evaluate the relationship between a target frame and
reference frames to guide the aggregation. We conduct extensive experiments and
ablation studies to verify the efficacy of our design, and reveal its
superiority over other state-of-the-art VID approaches in both effectiveness
and efficiency. Our YOLOX-based model can achieve promising performance
(\emph{e.g.}, 87.5\% AP50 at over 30 FPS on the ImageNet VID dataset on a
single 2080Ti GPU), making it attractive for large-scale or real-time
applications. The implementation is simple, we have made the demo codes and
models available at \url{https://github.com/YuHengsss/YOLOV}.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 14:12:06 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Mar 2023 09:22:53 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Shi",
"Yuheng",
""
],
[
"Wang",
"Naiyan",
""
],
[
"Guo",
"Xiaojie",
""
]
] |
new_dataset
| 0.998695 |
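In the spirit of the cross-frame aggregation described above, here is a generic attention-style sketch that weights reference-frame region features by their similarity to the target frame. The similarity measure, temperature, and residual form are assumptions, not the exact YOLOV implementation.

```python
import torch
import torch.nn.functional as F

def aggregate(target_feats, ref_feats, tau=0.07):
    """Weight reference-frame region features by cosine similarity to the target frame."""
    q = F.normalize(target_feats, dim=-1)          # (N_t, D) target-frame regions
    k = F.normalize(ref_feats, dim=-1)             # (N_r, D) regions pooled from reference frames
    attn = torch.softmax(q @ k.t() / tau, dim=-1)  # (N_t, N_r) similarity weights
    return target_feats + attn @ ref_feats         # residual aggregation

out = aggregate(torch.randn(5, 256), torch.randn(40, 256))
print(out.shape)  # torch.Size([5, 256])
```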
2209.15566
|
Angelo Bratta
|
Angelo Bratta, Avadesh Meduri, Michele Focchi, Ludovic Righetti, and
Claudio Semini
|
ContactNet: Online Multi-Contact Planning for Acyclic Legged Robot
Locomotion
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In legged locomotion, online trajectory optimization techniques generally
depend on heuristic-based contact planners in order to have low computation
times and achieve high replanning frequencies. In this work, we propose
ContactNet, a fast acyclic contact planner based on a multi-output regression
neural network. ContactNet ranks discretized stepping regions, making it
possible to quickly choose the best feasible solution, even in complex
environments. The low computation time, on the order of 1 ms, allows the contact
planner to run concurrently with a trajectory optimizer in a Model Predictive
Control (MPC) fashion. We demonstrate the effectiveness of the approach in
simulation in different complex scenarios with the quadruped robot Solo12.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 16:25:00 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 14:04:55 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Bratta",
"Angelo",
""
],
[
"Meduri",
"Avadesh",
""
],
[
"Focchi",
"Michele",
""
],
[
"Righetti",
"Ludovic",
""
],
[
"Semini",
"Claudio",
""
]
] |
new_dataset
| 0.99317 |
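A minimal sketch of a multi-output regressor that scores a fixed grid of candidate stepping regions, matching the high-level description above; the input features, network width, and number of regions are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ContactScorer(nn.Module):
    """Scores a fixed grid of candidate stepping regions from the robot state."""
    def __init__(self, state_dim=48, n_regions=121):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_regions),   # one score per discretized region
        )

    def forward(self, robot_state):
        return self.net(robot_state)

scores = ContactScorer()(torch.randn(1, 48))
best_region = scores.argmax(dim=-1)      # highest-ranked region (feasibility checks omitted)
```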
2210.00722
|
Puhao Li
|
Puhao Li, Tengyu Liu, Yuyang Li, Yiran Geng, Yixin Zhu, Yaodong Yang,
Siyuan Huang
|
GenDexGrasp: Generalizable Dexterous Grasping
|
Accepted to ICRA 2023 (camera-ready version)
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generating dexterous grasping has been a long-standing and challenging
robotic task. Despite recent progress, existing methods primarily suffer from
two issues. First, most prior arts focus on a specific type of robot hand,
lacking the generalizable capability of handling unseen ones. Second, prior
arts oftentimes fail to rapidly generate diverse grasps with a high success
rate. To jointly tackle these challenges with a unified solution, we propose
GenDexGrasp, a novel hand-agnostic grasping algorithm for generalizable
grasping. GenDexGrasp is trained on our proposed large-scale multi-hand
grasping dataset MultiDex synthesized with force closure optimization. By
leveraging the contact map as a hand-agnostic intermediate representation,
GenDexGrasp efficiently generates diverse and plausible grasping poses with a
high success rate and can transfer among diverse multi-fingered robotic hands.
Compared with previous methods, GenDexGrasp achieves a three-way trade-off
among success rate, inference speed, and diversity. Code is available at
https://github.com/tengyu-liu/GenDexGrasp.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 05:38:20 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 10:03:01 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Li",
"Puhao",
""
],
[
"Liu",
"Tengyu",
""
],
[
"Li",
"Yuyang",
""
],
[
"Geng",
"Yiran",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Huang",
"Siyuan",
""
]
] |
new_dataset
| 0.997714 |
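One simple way to realize a hand-agnostic contact map, given as an illustrative assumption rather than GenDexGrasp's exact definition: assign each object surface point a value that decays with its distance to the nearest hand surface point.

```python
import torch

def contact_map(obj_pts, hand_pts, scale=30.0):
    """Value near 1 where the object surface touches the hand, decaying with distance."""
    d = torch.cdist(obj_pts, hand_pts).min(dim=1).values  # nearest hand point per object point
    return torch.exp(-scale * d)

cmap = contact_map(torch.rand(2048, 3), torch.rand(512, 3))  # random point clouds as stand-ins
```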
2210.06002
|
Zhilei Liu
|
Chenggong Zhang and Zhilei Liu
|
Face Super-Resolution with Progressive Embedding of Multi-scale Face
Priors
|
Accepted by IJCB 2022
| null |
10.1109/IJCB54206.2022.10007954
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The face super-resolution (FSR) task is to reconstruct high-resolution face
images from low-resolution inputs. Recent works have achieved success on this
task by utilizing facial priors such as facial landmarks. Most existing methods
pay more attention to global shape and structure information, but less to local
texture information, which prevents them from recovering local details well. In this
this paper, we propose a novel recurrent convolutional network based framework
for face super-resolution, which progressively introduces both global shape and
local texture information. We take full advantage of the intermediate outputs
of the recurrent network: landmark information and facial action unit (AU)
information are extracted from the outputs of the first and second steps,
respectively, rather than from the low-resolution input. Moreover, we introduce
AU classification results as a novel quantitative metric for facial detail
restoration. Extensive experiments show that our proposed method significantly
outperforms state-of-the-art FSR methods in terms of image quality and facial
detail restoration.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 08:16:52 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Zhang",
"Chenggong",
""
],
[
"Liu",
"Zhilei",
""
]
] |
new_dataset
| 0.997934 |
2210.06132
|
Swaroop Joshi
|
Jaskaran Singh Bhatia, Parthasarathy P D, Snigdha Tiwari, Dhruv
Nagpal, Swaroop Joshi
|
Integrating Accessibility in a Mobile App Development Course
|
7 pages, 1 figure, submitted to ACM SIGCSE 2023
| null |
10.1145/3545945.3569825
| null |
cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The growing interest in accessible software is reflected in computing educators'
and education researchers' efforts to include accessibility in core computing
education. We integrated accessibility into a junior/senior-level Android app
development course at a large private university in India. The course
introduced three accessibility-related topics using various interventions:
Accessibility Awareness (a guest lecture by a legal expert), Technical
Knowledge (lectures on Android accessibility guidelines and testing practices
and graded components for implementing accessibility in programming
assignments), and Empathy (an activity that required students to blindfold
themselves and interact with their phones using a screen-reader). We evaluated
their impact on student learning using three instruments: (A) A pre/post-course
questionnaire, (B) Reflective questions on each of the four programming
assignments, and (C) Midterm and Final exam questions. Our findings demonstrate
that: (A) significantly more ($p<.05$) students considered disabilities when
designing an app after taking this course, (B) many students developed empathy
towards the challenges persons with disabilities face while using inaccessible
apps, and (C) all students could correctly identify at least one accessibility
issue in the user interface of a real-world app given its screenshot, and 90%
of them could provide a correct solution to fix it.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 12:44:33 GMT"
}
] | 2023-03-07T00:00:00 |
[
[
"Bhatia",
"Jaskaran Singh",
""
],
[
"D",
"Parthasarathy P",
""
],
[
"Tiwari",
"Snigdha",
""
],
[
"Nagpal",
"Dhruv",
""
],
[
"Joshi",
"Swaroop",
""
]
] |
new_dataset
| 0.992511 |