id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2302.05355
|
Fabio Massacci
|
Francesco Ciclosi, Silvia Vidor, and Fabio Massacci
|
Building cross-language corpora for human understanding of privacy
policies
| null | null | null | null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Making sure that users understand privacy policies that impact them is a key
challenge for a real GDPR deployment. Research studies are mostly carried out
in English, but in Europe and elsewhere, users speak a language that is not
English. Replicating studies in different languages requires the availability
of comparable cross-language privacy policy corpora. This work provides a
methodology for building comparable cross-language corpora in a national
language and a reference study language. We provide an application example of
our methodology comparing English and Italian, extending the corpus of one of
the first studies about users' understanding of technical terms in privacy
policies. We also investigate other open issues that can make replication
harder.
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 16:16:55 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Ciclosi",
"Francesco",
""
],
[
"Vidor",
"Silvia",
""
],
[
"Massacci",
"Fabio",
""
]
] |
new_dataset
| 0.960544 |
2302.05393
|
Pedro Sarmento
|
Pedro Sarmento, Adarsh Kumar, Yu-Hua Chen, CJ Carr, Zack Zukowski,
Mathieu Barthet
|
GTR-CTRL: Instrument and Genre Conditioning for Guitar-Focused Music
Generation with Transformers
|
This preprint is licensed under a Creative Commons Attribution 4.0
International License (CC BY 4.0). The Version of Record of this contribution
is published in Proceedings of EvoMUSART: International Conference on
Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar)
2023
|
EvoMUSART: International Conference on Computational Intelligence
in Music, Sound, Art and Design (Part of EvoStar) 2023
| null | null |
cs.SD cs.AI cs.MM eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, symbolic music generation with deep learning techniques has
witnessed steady improvements. Most works on this topic focus on MIDI
representations, but less attention has been paid to symbolic music generation
using guitar tablatures (tabs), which can be used to encode multiple
instruments. Tabs include information on expressive techniques and fingerings
for fretted string instruments in addition to rhythm and pitch. In this work,
we use the DadaGP dataset for guitar tab music generation, a corpus of over 26k
songs in GuitarPro and token formats. We introduce methods to condition a
Transformer-XL deep learning model to generate guitar tabs (GTR-CTRL) based on
desired instrumentation (inst-CTRL) and genre (genre-CTRL). Special control
tokens are appended at the beginning of each song in the training corpus. We
assess the performance of the model with and without conditioning. We propose
instrument presence metrics to assess the inst-CTRL model's response to a given
instrumentation prompt. We trained a BERT model for downstream genre
classification and used it to assess the results obtained with the genre-CTRL
model. Statistical analyses evidence significant differences between the
conditioned and unconditioned models. Overall, results indicate that the
GTR-CTRL methods provide more flexibility and control for guitar-focused
symbolic music generation than an unconditioned model.
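
A minimal sketch (ours, not the authors' released code) of the conditioning
scheme described above: genre and instrument control tokens prepended to each
song's token sequence before training. The token names and helper are
illustrative.

```python
# Illustrative GTR-CTRL-style conditioning: prepend special control tokens
# to each song's token sequence before training the Transformer-XL.
# Token names are hypothetical, not the paper's exact vocabulary.

def add_control_tokens(song_tokens, genre, instruments):
    """Prepend genre and instrument control tokens to a token sequence."""
    genre_token = f"<genre:{genre}>"
    inst_tokens = [f"<inst:{inst}>" for inst in sorted(instruments)]
    return [genre_token] + inst_tokens + song_tokens

tokens = ["note:E3", "duration:quarter", "note:A3"]
conditioned = add_control_tokens(tokens, "metal", {"distorted0", "bass"})
print(conditioned)
# ['<genre:metal>', '<inst:bass>', '<inst:distorted0>', 'note:E3', ...]
```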
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 17:43:03 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Sarmento",
"Pedro",
""
],
[
"Kumar",
"Adarsh",
""
],
[
"Chen",
"Yu-Hua",
""
],
[
"Carr",
"CJ",
""
],
[
"Zukowski",
"Zack",
""
],
[
"Barthet",
"Mathieu",
""
]
] |
new_dataset
| 0.999809 |
2302.05406
|
Pedro Colon-Hernandez
|
Pedro Colon-Hernandez, Henry Lieberman, Yida Xin, Claire Yin, Cynthia
Breazeal, Peter Chin
|
Adversarial Transformer Language Models for Contextual Commonsense
Inference
|
Submitted to Semantic Web Journal special edition.
https://semantic-web-journal.org/content/adversarial-transformer-language-models-contextual-commonsense-inference-1
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Contextualized or discourse-aware commonsense inference is the task of
generating coherent commonsense assertions (i.e., facts) from a given story
and a particular sentence from that story. Some problems with the task are:
lack of controllability for topics of the inferred facts; lack of commonsense
knowledge during training; and, possibly, hallucinated or false facts. In this
work, we utilize a transformer model for this task and develop techniques to
address these problems. We control the inference by introducing a new
technique we call "hinting". Hinting is a kind of language model prompting
that utilizes both hard prompts (specific words) and soft prompts (virtual
learnable templates). This serves as a control signal to advise the language
model "what to talk about". Next, we establish a methodology for performing
joint inference with multiple commonsense knowledge bases. Joint inference of
commonsense requires care, because it is imprecise and the level of generality
is more flexible; one must ensure that the results "still make sense" for the
context. To this end, we align the textual version of assertions from three
knowledge graphs (ConceptNet, ATOMIC2020, and GLUCOSE) with a story and a
target sentence. This combination allows us to train a single model to perform
joint inference with multiple knowledge graphs. We show experimental results
for the three knowledge graphs on joint inference. Our final contribution is
exploring a GAN architecture that generates the contextualized commonsense
assertions and scores their plausibility through a discriminator. The result
is an integrated system for contextual commonsense inference in stories that
can controllably generate plausible commonsense assertions and takes advantage
of joint inference between multiple commonsense knowledge bases.
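
A schematic PyTorch sketch of the "hinting" idea, assuming that embeddings of
hard hint words are concatenated with learnable soft prompt vectors; the
dimensions and the interface to the language model are our assumptions, not
the paper's code.

```python
import torch
import torch.nn as nn

class HintedPrompt(nn.Module):
    """Sketch of 'hinting': concatenate embeddings of hard prompt tokens
    (specific words) with learnable soft prompt vectors, then feed the
    result to a language model as its input embeddings (assumed interface)."""

    def __init__(self, embed: nn.Embedding, n_soft: int = 8):
        super().__init__()
        d = embed.embedding_dim
        self.embed = embed
        self.soft = nn.Parameter(torch.randn(n_soft, d) * 0.02)  # soft prompts

    def forward(self, hard_ids: torch.Tensor, input_ids: torch.Tensor):
        # Layout: [soft prompts | hard hint words | story/sentence tokens]
        hard = self.embed(hard_ids)
        inputs = self.embed(input_ids)
        soft = self.soft.unsqueeze(0).expand(inputs.size(0), -1, -1)
        return torch.cat([soft, hard, inputs], dim=1)
```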
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 18:21:13 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Colon-Hernandez",
"Pedro",
""
],
[
"Lieberman",
"Henry",
""
],
[
"Xin",
"Yida",
""
],
[
"Yin",
"Claire",
""
],
[
"Breazeal",
"Cynthia",
""
],
[
"Chin",
"Peter",
""
]
] |
new_dataset
| 0.96923 |
2302.05442
|
Mostafa Dehghani
|
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski,
Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert
Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael
Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme, Matthias Minderer, Joan
Puigcerver, Utku Evci, Manoj Kumar, Sjoerd van Steenkiste, Gamaleldin F.
Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn
Bastings, Mark Patrick Collier, Alexey Gritsenko, Vighnesh Birodkar, Cristina
Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Paveti\'c,
Dustin Tran, Thomas Kipf, Mario Lu\v{c}i\'c, Xiaohua Zhai, Daniel Keysers,
Jeremiah Harmsen, Neil Houlsby
|
Scaling Vision Transformers to 22 Billion Parameters
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The scaling of Transformers has driven breakthrough capabilities for language
models. At present, the largest large language models (LLMs) contain upwards of
100B parameters. Vision Transformers (ViT) have introduced the same
architecture to image and video modelling, but these have not yet been
successfully scaled to nearly the same degree; the largest dense ViT contains
4B parameters (Chen et al., 2022). We present a recipe for highly efficient and
stable training of a 22B-parameter ViT (ViT-22B) and perform a wide variety of
experiments on the resulting model. When evaluated on downstream tasks (often
with a lightweight linear model on frozen features), ViT-22B demonstrates
increasing performance with scale. We further observe other interesting
benefits of scale, including an improved tradeoff between fairness and
performance, state-of-the-art alignment to human visual perception in terms of
shape/texture bias, and improved robustness. ViT-22B demonstrates the potential
for "LLM-like" scaling in vision, and provides key steps towards getting there.
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 18:58:21 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Dehghani",
"Mostafa",
""
],
[
"Djolonga",
"Josip",
""
],
[
"Mustafa",
"Basil",
""
],
[
"Padlewski",
"Piotr",
""
],
[
"Heek",
"Jonathan",
""
],
[
"Gilmer",
"Justin",
""
],
[
"Steiner",
"Andreas",
""
],
[
"Caron",
"Mathilde",
""
],
[
"Geirhos",
"Robert",
""
],
[
"Alabdulmohsin",
"Ibrahim",
""
],
[
"Jenatton",
"Rodolphe",
""
],
[
"Beyer",
"Lucas",
""
],
[
"Tschannen",
"Michael",
""
],
[
"Arnab",
"Anurag",
""
],
[
"Wang",
"Xiao",
""
],
[
"Riquelme",
"Carlos",
""
],
[
"Minderer",
"Matthias",
""
],
[
"Puigcerver",
"Joan",
""
],
[
"Evci",
"Utku",
""
],
[
"Kumar",
"Manoj",
""
],
[
"van Steenkiste",
"Sjoerd",
""
],
[
"Elsayed",
"Gamaleldin F.",
""
],
[
"Mahendran",
"Aravindh",
""
],
[
"Yu",
"Fisher",
""
],
[
"Oliver",
"Avital",
""
],
[
"Huot",
"Fantine",
""
],
[
"Bastings",
"Jasmijn",
""
],
[
"Collier",
"Mark Patrick",
""
],
[
"Gritsenko",
"Alexey",
""
],
[
"Birodkar",
"Vighnesh",
""
],
[
"Vasconcelos",
"Cristina",
""
],
[
"Tay",
"Yi",
""
],
[
"Mensink",
"Thomas",
""
],
[
"Kolesnikov",
"Alexander",
""
],
[
"Pavetić",
"Filip",
""
],
[
"Tran",
"Dustin",
""
],
[
"Kipf",
"Thomas",
""
],
[
"Lučić",
"Mario",
""
],
[
"Zhai",
"Xiaohua",
""
],
[
"Keysers",
"Daniel",
""
],
[
"Harmsen",
"Jeremiah",
""
],
[
"Houlsby",
"Neil",
""
]
] |
new_dataset
| 0.97029 |
2207.06590
|
Jun Yang
|
Jun Yang, Yuehan Wang, Yiling Lou, Ming Wen and Lingming Zhang
|
Attention: Not Just Another Dataset for Patch-Correctness Checking
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated Program Repair (APR) techniques have drawn wide attention from both
academia and industry. Meanwhile, one main limitation with the current
state-of-the-art APR tools is that patches passing all the original tests are
not necessarily the correct ones wanted by developers, i.e., the plausible
patch problem. To date, various Patch-Correctness Checking (PCC) techniques
have been proposed to address this important issue. However, they are only
evaluated on very limited datasets as the APR tools used for generating such
patches can only explore a small subset of the search space of possible
patches, posing serious threats to the external validity of existing PCC studies.
In this paper, we construct an extensive PCC dataset (the largest manually
labeled PCC dataset to our knowledge) to revisit all state-of-the-art PCC
techniques. More specifically, our PCC dataset includes 1,988 patches generated
from the recent PraPR APR tool, which leverages highly-optimized bytecode-level
patch executions and can exhaustively explore all possible plausible patches
within its large predefined search space (including well-known fixing patterns
from various prior APR tools). Our extensive study of representative PCC
techniques on the new dataset has revealed various surprising findings and
provided guidelines for future PCC research.
|
[
{
"version": "v1",
"created": "Thu, 14 Jul 2022 01:07:17 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 23:10:09 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Yang",
"Jun",
""
],
[
"Wang",
"Yuehan",
""
],
[
"Lou",
"Yiling",
""
],
[
"Wen",
"Ming",
""
],
[
"Zhang",
"Lingming",
""
]
] |
new_dataset
| 0.999723 |
2210.01963
|
Kanishka Misra
|
Kanishka Misra, Julia Taylor Rayz, Allyson Ettinger
|
COMPS: Conceptual Minimal Pair Sentences for testing Robust Property
Knowledge and its Inheritance in Pre-trained Language Models
|
EACL 2023 Camera Ready version. Code can be found at
https://github.com/kanishkamisra/comps
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A characteristic feature of human semantic cognition is its ability to not
only store and retrieve the properties of concepts observed through experience,
but to also facilitate the inheritance of properties (can breathe) from
superordinate concepts (animal) to their subordinates (dog) -- i.e. demonstrate
property inheritance. In this paper, we present COMPS, a collection of minimal
pair sentences that jointly tests pre-trained language models (PLMs) on their
ability to attribute properties to concepts and their ability to demonstrate
property inheritance behavior. Analyses of 22 different PLMs on COMPS reveal
that they can easily distinguish between concepts on the basis of a property
when they are trivially different, but find it relatively difficult when
concepts are related on the basis of nuanced knowledge representations.
Furthermore, we find that PLMs can demonstrate behavior consistent with
property inheritance to a great extent, but fail in the presence of distracting
information, which decreases the performance of many models, sometimes even
below chance. This lack of robustness in demonstrating simple reasoning raises
important questions about PLMs' capacity to make correct inferences even when
they appear to possess the prerequisite knowledge.
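
A minimal sketch of how such a minimal-pair evaluation can be run, using GPT-2
from Hugging Face transformers as a stand-in for the 22 PLMs the paper tests;
the sentences are illustrative.

```python
# A PLM "passes" a minimal-pair item if it assigns higher log-probability
# to the acceptable sentence than to the unacceptable one.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss = mean token cross-entropy
    return -out.loss.item() * (ids.size(1) - 1)  # total log-probability

good = "A dog can breathe."
bad = "A rock can breathe."
print(sentence_logprob(good) > sentence_logprob(bad))
```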
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 00:04:18 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 14:10:29 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Oct 2022 01:57:57 GMT"
},
{
"version": "v4",
"created": "Thu, 9 Feb 2023 02:31:06 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Misra",
"Kanishka",
""
],
[
"Rayz",
"Julia Taylor",
""
],
[
"Ettinger",
"Allyson",
""
]
] |
new_dataset
| 0.985695 |
2210.03170
|
Rafael Ferreira Da Silva
|
Tain\~a Coleman, Henri Casanova, Ketan Maheshwari, Lo\"ic Pottier,
Sean R. Wilkinson, Justin Wozniak, Fr\'ed\'eric Suter, Mallikarjun Shankar,
Rafael Ferreira da Silva
|
WfBench: Automated Generation of Scientific Workflow Benchmarks
| null | null |
10.1109/PMBS56514.2022.00014
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The prevalence of scientific workflows with high computational demands calls
for their execution on various distributed computing platforms, including
large-scale leadership-class high-performance computing (HPC) clusters. To
handle the deployment, monitoring, and optimization of workflow executions,
many workflow systems have been developed over the past decade. There is a need
for workflow benchmarks that can be used to evaluate the performance of
workflow systems on current and future software stacks and hardware platforms.
We present a generator of realistic workflow benchmark specifications that
can be translated into benchmark code to be executed with current workflow
systems. Our approach generates workflow tasks with arbitrary performance
characteristics (CPU, memory, and I/O usage) and with realistic task dependency
structures based on those seen in production workflows. We present experimental
results that show that our approach generates benchmarks that are
representative of production workflows, and conduct a case study to demonstrate
the use and usefulness of our generated benchmarks to evaluate the performance
of workflow systems under different configuration scenarios.
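
A toy sketch of the generation idea: emitting a DAG of tasks, each with
arbitrary CPU, memory, and I/O demands. The JSON schema here is illustrative,
not WfBench's actual specification format.

```python
# Generate a synthetic workflow benchmark specification: a DAG of tasks
# with randomized performance characteristics (CPU work, memory, I/O).
import json
import random

def make_workflow(n_tasks: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    tasks = []
    for i in range(n_tasks):
        # Each task depends on up to two earlier tasks, forming a DAG.
        parents = rng.sample(range(i), k=min(i, rng.randint(0, 2)))
        tasks.append({
            "name": f"task_{i}",
            "parents": [f"task_{p}" for p in parents],
            "cpu_work": rng.uniform(1e9, 1e11),   # flop count
            "memory_mb": rng.choice([512, 1024, 4096]),
            "io_read_mb": rng.uniform(1, 100),
            "io_write_mb": rng.uniform(1, 100),
        })
    return {"name": "synthetic-benchmark", "tasks": tasks}

print(json.dumps(make_workflow(5), indent=2))
```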
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 19:22:06 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Coleman",
"Tainã",
""
],
[
"Casanova",
"Henri",
""
],
[
"Maheshwari",
"Ketan",
""
],
[
"Pottier",
"Loïc",
""
],
[
"Wilkinson",
"Sean R.",
""
],
[
"Wozniak",
"Justin",
""
],
[
"Suter",
"Frédéric",
""
],
[
"Shankar",
"Mallikarjun",
""
],
[
"da Silva",
"Rafael Ferreira",
""
]
] |
new_dataset
| 0.975555 |
2210.15016
|
Pengchao Hu
|
Pengchao Hu, Man Lu, Lei Wang, Guoyue Jiang
|
TPU-MLIR: A Compiler For TPU Using MLIR
|
A way to design AI Compiler for ASIC chips by MLIR
| null | null | null |
cs.PL cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-level intermediate representations (MLIR) show great promise for
reducing the cost of building domain-specific compilers by providing a reusable
and extensible compiler infrastructure. This work presents TPU-MLIR, an
end-to-end compiler based on MLIR that deploys pre-trained neural network (NN)
models to a custom ASIC called a Tensor Processing Unit (TPU). TPU-MLIR defines
two new dialects to implement its functionality: 1. a Tensor Operation (TOP)
dialect that encodes the deep learning graph semantics and is independent of
the deep learning framework, and 2. a TPU kernel dialect that provides standard
kernel computations on the TPU. An NN model is translated to the TOP dialect and then
lowered to the TPU dialect for different TPUs according to the chip's
configuration. We demonstrate how to use the MLIR pass pipeline to organize and
perform optimization on TPU to generate machine code. The paper also presents a
verification procedure to ensure the correctness of each transform stage.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 10:45:54 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2023 08:01:21 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Hu",
"Pengchao",
""
],
[
"Lu",
"Man",
""
],
[
"Wang",
"Lei",
""
],
[
"Jiang",
"Guoyue",
""
]
] |
new_dataset
| 0.978871 |
2301.02065
|
Patrick Ebel
|
Patrick Ebel, Christoph Lingenfelder, Andreas Vogelsang
|
On the Forces of Driver Distraction: Explainable Predictions for the
Visual Demand of In-Vehicle Touchscreen Interactions
|
Accepted for publication in Accident Analysis and Prevention
|
Accident Analysis & Prevention Volume 183, April 2023
|
10.1016/j.aap.2023.106956
| null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With modern infotainment systems, drivers are increasingly tempted to engage
in secondary tasks while driving. Since distracted driving is already one of
the main causes of fatal accidents, in-vehicle touchscreen Human-Machine
Interfaces (HMIs) must be as little distracting as possible. To ensure that
these systems are safe to use, they undergo elaborate and expensive empirical
testing, requiring fully functional prototypes. Thus, early-stage methods
informing designers about the implications their design may have on driver
distraction are of great value. This paper presents a machine learning method
that, based on anticipated usage scenarios, predicts the visual demand of
in-vehicle touchscreen interactions and provides local and global explanations
of the factors influencing drivers' visual attention allocation. The approach
is based on large-scale natural driving data continuously collected from
production line vehicles and employs the SHapley Additive exPlanation (SHAP)
method to provide explanations leveraging informed design decisions. Our
approach is more accurate than related work: it identifies interactions during
which long glances occur with 68% accuracy and predicts the total glance
duration with a mean error of 2.4 s. Our explanations replicate the results of
various recent studies and provide fast and easily accessible insights into the
effect of UI elements, driving automation, and vehicle speed on driver
distraction. The system can not only help designers to evaluate current designs
but also help them to better anticipate and understand the implications their
design decisions might have on future designs.
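
A minimal sketch of the explanation step with the SHAP library, assuming a
hypothetical gradient-boosted model and synthetic features; the paper's actual
model and feature set differ.

```python
# SHAP values for a model predicting total glance duration from
# interaction features; feature names and data are made up for illustration.
import numpy as np
import pandas as pd
import shap
import xgboost

X = pd.DataFrame({
    "ui_element": np.random.randint(0, 5, 500),
    "vehicle_speed": np.random.uniform(0, 130, 500),
    "automation_on": np.random.randint(0, 2, 500),
})
y = np.random.uniform(0, 10, 500)  # total glance duration (s)

model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # local explanations per sample
shap.summary_plot(shap_values, X)        # global explanation
```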
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 13:50:26 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Ebel",
"Patrick",
""
],
[
"Lingenfelder",
"Christoph",
""
],
[
"Vogelsang",
"Andreas",
""
]
] |
new_dataset
| 0.978436 |
2302.01707
|
Thomas Durieux
|
Thomas Durieux
|
Parfum: Detection and Automatic Repair of Dockerfile Smells
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Docker is a popular tool for developers and organizations to package, deploy,
and run applications in a lightweight, portable container. One key component of
Docker is the Dockerfile, a simple text file that specifies the steps needed to
build a Docker image. While Dockerfiles are easy to create and use, creating an
optimal image is complex, in particular because it is easy to violate best
practices; when this happens, we call it a Docker smell. To improve the quality
of Dockerfiles, previous works have focused on detecting Docker smells, but
they do not offer suggestions or repair the smells. In this paper, we propose
Parfum, a tool that detects and automatically repairs Docker smells while
producing minimal patches. Parfum is based on a new Dockerfile AST parser
called Dinghy. We evaluate the effectiveness of Parfum by analyzing and
repairing a large set of Dockerfiles and comparing it against existing tools.
We also measure the impact of the repair on the Docker image in terms of build
failure and image size. Finally, we opened 35 pull requests to collect
developers' feedback and ensure that the repairs and the smells are meaningful.
Our results show that Parfum is able to repair 806,245 Docker smells and has a
significant impact on the Docker image size; finally, developers welcomed the
patches generated by Parfum, merging 20 pull requests.
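
A toy sketch of rule-based Dockerfile smell detection and minimal repair in
the spirit of Parfum; the real tool parses Dockerfiles into an AST with
Dinghy, whereas this single regex rule is only an illustration.

```python
# Detect and repair one classic Dockerfile smell: `apt-get install`
# without `--no-install-recommends`, which bloats the image.
import re

SMELL = re.compile(r"^(RUN\s+apt-get\s+install\b)(?!.*--no-install-recommends)(.*)$")

def repair(dockerfile: str) -> str:
    fixed = []
    for line in dockerfile.splitlines():
        m = SMELL.match(line)
        if m:  # smell found: insert the missing flag, keep the rest intact
            line = f"{m.group(1)} --no-install-recommends{m.group(2)}"
        fixed.append(line)
    return "\n".join(fixed)

print(repair("FROM ubuntu:22.04\nRUN apt-get install -y curl"))
# RUN apt-get install --no-install-recommends -y curl
```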
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 13:04:47 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2023 15:05:28 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Durieux",
"Thomas",
""
]
] |
new_dataset
| 0.999487 |
2302.03130
|
Hyunjik Kim
|
Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan
Richard Schwarz, Hyunjik Kim
|
Spatial Functa: Scaling Functa to ImageNet Classification and Generation
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural fields, also known as implicit neural representations, have emerged as
a powerful means to represent complex signals of various modalities. Based on
this, Dupont et al. (2022) introduce a framework that views neural fields as
data, termed *functa*, and propose to do deep learning directly on this
dataset of neural fields. In this work, we show that the proposed framework
faces limitations when scaling up to even moderately complex datasets such as
CIFAR-10. We then propose *spatial functa*, which overcome these limitations by
using spatially arranged latent representations of neural fields, thereby
allowing us to scale up the approach to ImageNet-1k at 256x256 resolution. We
demonstrate competitive performance to Vision Transformers (Steiner et al.,
2022) on classification and Latent Diffusion (Rombach et al., 2022) on image
generation respectively.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 21:35:44 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2023 12:43:24 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Bauer",
"Matthias",
""
],
[
"Dupont",
"Emilien",
""
],
[
"Brock",
"Andy",
""
],
[
"Rosenbaum",
"Dan",
""
],
[
"Schwarz",
"Jonathan Richard",
""
],
[
"Kim",
"Hyunjik",
""
]
] |
new_dataset
| 0.988651 |
2302.03731
|
Yifan Sun
|
Yifan Sun, Jingyan Shen, Yunfan Jiang, Zhaohui Huang, Minsheng Hao,
Xuegong Zhang
|
MMA-RNN: A Multi-level Multi-task Attention-based Recurrent Neural
Network for Discrimination and Localization of Atrial Fibrillation
|
9 pages, 5 figures
| null | null | null |
cs.LG q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The automatic detection of atrial fibrillation (AF) based on electrocardiogram
(ECG) signals has received wide attention both clinically and practically. It
is challenging to process ECG signals with cyclical patterns, varying lengths,
and unstable quality due to noise and distortion. Besides, there has been
insufficient research on separating persistent atrial fibrillation from
paroxysmal atrial fibrillation, and little discussion on locating the onsets
and end points of AF episodes. It is even more arduous to perform well on these
two distinct but interrelated tasks while avoiding the mistakes inherent in
stage-by-stage approaches. This paper proposes the Multi-level Multi-task
Attention-based Recurrent Neural Network (MMA-RNN) for three-class
discrimination on patients and localization of the exact timing of AF
episodes. Our model captures three-level sequential features based on a
hierarchical architecture utilizing a Bidirectional Long Short-Term Memory
(Bi-LSTM) network and attention layers, and accomplishes the two tasks
simultaneously with a multi-head classifier. The model is designed as an
end-to-end framework to enhance information interaction and reduce error
accumulation. Finally, we conduct experiments on the CPSC 2021 dataset and the
results demonstrate the superior performance of our method, indicating the
potential application of MMA-RNN to wearable mobile devices for routine AF
monitoring and early diagnosis.
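
A schematic PyTorch sketch of the model family described above: a Bi-LSTM
with attention pooling feeding a per-record discrimination head and a
per-step localization head. Layer sizes and the attention form are our
assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiTaskBiLSTM(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.record_head = nn.Linear(2 * hidden, 3)  # normal/paroxysmal/persistent
        self.step_head = nn.Linear(2 * hidden, 2)    # per-step AF / non-AF

    def forward(self, x):                       # x: (batch, time, in_dim)
        h, _ = self.lstm(x)                     # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        pooled = (w * h).sum(dim=1)             # attention-pooled summary
        return self.record_head(pooled), self.step_head(h)

model = MultiTaskBiLSTM()
rec_logits, step_logits = model(torch.randn(8, 500, 1))
```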
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 19:59:55 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2023 01:29:04 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Sun",
"Yifan",
""
],
[
"Shen",
"Jingyan",
""
],
[
"Jiang",
"Yunfan",
""
],
[
"Huang",
"Zhaohui",
""
],
[
"Hao",
"Minsheng",
""
],
[
"Zhang",
"Xuegong",
""
]
] |
new_dataset
| 0.9938 |
2302.04343
|
Amir Namavar Jahromi
|
Amir Namavar Jahromi and Ebrahim Pourjafari and Hadis Karimipour and
Amit Satpathy and Lovell Hodge
|
CRL+: A Novel Semi-Supervised Deep Active Contrastive Representation
Learning-Based Text Classification Model for Insurance Data
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The financial sector, and especially the insurance industry, collects vast
volumes of text on a daily basis and through multiple channels (its agents,
customer care centers, emails, social networks, and the web in general). The
information collected includes policies, expert and health reports, claims and
complaints, results of surveys, and relevant social media posts. It is
difficult to effectively extract, label, classify, and interpret the essential
information from such varied and unstructured material. Therefore, the
insurance industry is among the ones that can benefit from applying
technologies for the intelligent analysis of free text through Natural
Language Processing (NLP).
  In this paper, CRL+, a novel text classification model combining Contrastive
Representation Learning (CRL) and Active Learning, is proposed to handle the
challenge of using semi-supervised learning for text classification. In this
method, supervised CRL is used to train a RoBERTa transformer model to encode
the textual data into a contrastive representation space and then classify it
using a classification layer. This CRL-based transformer model is used as the
base model in the proposed Active Learning mechanism to classify all the data
in an iterative manner. The proposed model is evaluated on unstructured
obituary data with the objective of determining the cause of death from the
data. The model is compared with the CRL model and an Active Learning model
with the RoBERTa base model. The experiments show that the proposed method can
outperform both methods for this specific task.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 21:23:52 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Jahromi",
"Amir Namavar",
""
],
[
"Pourjafari",
"Ebrahim",
""
],
[
"Karimipour",
"Hadis",
""
],
[
"Satpathy",
"Amit",
""
],
[
"Hodge",
"Lovell",
""
]
] |
new_dataset
| 0.999288 |
2302.04384
|
Zhiqiang Zhao
|
Ying Zhang, Zhiqiang Zhao, Zhuo Feng
|
SF-SGL: Solver-Free Spectral Graph Learning from Linear Measurements
|
arXiv admin note: text overlap with arXiv:2104.07867
|
IEEE Transactions on Computer-Aided Design of Integrated Circuits
and Systems. 2022 Aug 15
|
10.1109/TCAD.2022.3198513
| null |
cs.LG cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
This work introduces a highly-scalable spectral graph densification framework
(SGL) for learning resistor networks with linear measurements, such as node
voltages and currents. We show that the proposed graph learning approach is
equivalent to solving the classical graphical Lasso problems with
Laplacian-like precision matrices. We prove that given $O(\log N)$ pairs of
voltage and current measurements, it is possible to recover sparse $N$-node
resistor networks that can well preserve the effective resistance distances on
the original graph. In addition, the learned graphs also preserve the
structural (spectral) properties of the original graph, which can potentially
be leveraged in many circuit design and optimization tasks.
To achieve more scalable performance, we also introduce a solver-free method
(SF-SGL) that exploits multilevel spectral approximation of the graphs and
allows for a scalable and flexible decomposition of the entire graph spectrum
(to be learned) into multiple different eigenvalue clusters (frequency bands).
Such a solver-free approach allows us to more efficiently identify the most
spectrally-critical edges for reducing various ranges of spectral embedding
distortions. Through extensive experiments for a variety of real-world test
cases, we show that the proposed approach is highly scalable for learning
sparse resistor networks without sacrificing solution quality. We also
introduce a data-driven EDA algorithm for vectorless power/thermal integrity
verifications to allow estimating worst-case voltage/temperature (gradient)
distributions across the entire chip by leveraging a few voltage/temperature
measurements.
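
A short sketch of the effective-resistance distances the learned graphs are
meant to preserve, computed from the Moore-Penrose pseudoinverse of the graph
Laplacian: R_eff(i, j) = (e_i - e_j)^T L^+ (e_i - e_j).

```python
import numpy as np

def effective_resistance(L: np.ndarray) -> np.ndarray:
    """All-pairs effective resistance from a graph Laplacian L."""
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# 3-node path graph with unit-weight edges: resistances add in series.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
print(effective_resistance(L))  # R(0,1) = R(1,2) = 1, R(0,2) = 2
```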
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 00:33:19 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Zhang",
"Ying",
""
],
[
"Zhao",
"Zhiqiang",
""
],
[
"Feng",
"Zhuo",
""
]
] |
new_dataset
| 0.991426 |
2302.04398
|
Deokhwan Han
|
Deokhwan Han, Jeonghun Park and Namyoon Lee
|
FDD Massive MIMO Without CSI Feedback
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transmitter channel state information (CSIT) is indispensable for the
spectral efficiency gains offered by massive multiple-input multiple-output
(MIMO) systems. In a frequency-division-duplexing (FDD) massive MIMO system,
CSIT is typically acquired through downlink channel estimation and user
feedback, but as the number of antennas increases, the overhead for CSI
training and feedback per user grows, leading to a decrease in spectral
efficiency. In this paper, we show that, using uplink pilots in FDD, the
downlink sum spectral efficiency gain with perfect downlink CSIT is achievable
when the number of antennas at a base station is infinite under some mild
channel conditions. The key idea showing our result is the mean squared
error-optimal downlink channel reconstruction method using uplink pilots, which
exploits the geometry reciprocity of uplink and downlink channels. We also
present a robust downlink precoding method harnessing the reconstructed channel
with the error covariance matrix. Our system-level simulations show that our
proposed precoding method can attain comparable sum spectral efficiency to
zero-forcing precoding with perfect downlink CSIT, without CSI training and
feedback.
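
A generic numerical sketch of the reconstruction idea: a linear MMSE estimate
of the downlink channel from a correlated uplink observation, together with
the error covariance used by the robust precoder. The covariances here are
synthetic; the paper derives them from the geometry shared by the uplink and
downlink channels.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Joint Gaussian model: downlink channel h_dl and uplink observation y_ul
# are correlated through shared propagation geometry (synthetic covariance).
C = rng.standard_normal((2 * n, 2 * n))
C = C @ C.T + np.eye(2 * n)                  # joint covariance (PSD)
C_dd, C_dy, C_yy = C[:n, :n], C[:n, n:], C[n:, n:]

y_ul = rng.standard_normal(n)                 # observed uplink measurement
h_hat = C_dy @ np.linalg.solve(C_yy, y_ul)    # LMMSE: C_dy C_yy^{-1} y
err_cov = C_dd - C_dy @ np.linalg.solve(C_yy, C_dy.T)  # error covariance
```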
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 01:44:49 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Han",
"Deokhwan",
""
],
[
"Park",
"Jeonghun",
""
],
[
"Lee",
"Namyoon",
""
]
] |
new_dataset
| 0.996289 |
2302.04434
|
Anjana Arunkumar
|
Anjana Arunkumar, Swaroop Mishra, Bhavdeep Sachdeva, Chitta Baral,
Chris Bryan
|
Real-Time Visual Feedback to Guide Benchmark Creation: A
Human-and-Metric-in-the-Loop Workflow
|
EACL 2023
| null | null | null |
cs.CL cs.AI cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research has shown that language models exploit `artifacts' in
benchmarks to solve tasks, rather than truly learning them, leading to inflated
model performance. In pursuit of creating better benchmarks, we propose VAIDA,
a novel benchmark creation paradigm for NLP, that focuses on guiding
crowdworkers, an under-explored facet of addressing benchmark idiosyncrasies.
VAIDA facilitates sample correction by providing realtime visual feedback and
recommendations to improve sample quality. Our approach is domain, model, task,
and metric agnostic, and constitutes a paradigm shift for robust, validated,
and dynamic benchmark creation via human-and-metric-in-the-loop workflows. We
evaluate via expert review and a user study with NASA TLX. We find that VAIDA
decreases effort, frustration, mental, and temporal demands of crowdworkers and
analysts, simultaneously increasing the performance of both user groups with a
45.8% decrease in the level of artifacts in created samples. As a by product of
our user study, we observe that created samples are adversarial across models,
leading to decreases of 31.3% (BERT), 22.5% (RoBERTa), 14.98% (GPT-3 fewshot)
in performance.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 04:43:10 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Arunkumar",
"Anjana",
""
],
[
"Mishra",
"Swaroop",
""
],
[
"Sachdeva",
"Bhavdeep",
""
],
[
"Baral",
"Chitta",
""
],
[
"Bryan",
"Chris",
""
]
] |
new_dataset
| 0.986741 |
2302.04461
|
Satoshi Takabe
|
Satoshi Takabe and Takashi Abe
|
Hubbard-Stratonovich Detector for Simple Trainable MIMO Signal Detection
|
6 pages, 5 figures
| null | null | null |
cs.IT cs.LG eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive multiple-input multiple-output (MIMO) is a key technology used in
fifth-generation wireless communication networks and beyond. Recently, various
MIMO signal detectors based on deep learning have been proposed. Especially,
deep unfolding (DU), which involves unrolling of an existing iterative
algorithm and embedding of trainable parameters, has been applied with
remarkable detection performance. Although DU has fewer trainable
parameters than conventional deep neural networks, the computational
complexities related to training and execution have been problematic because
DU-based MIMO detectors usually utilize matrix inversion to improve their
detection performance. In this study, we attempted to construct a DU-based
trainable MIMO detector with the simplest structure. The proposed detector
based on the Hubbard--Stratonovich (HS) transformation and DU is called the
trainable HS (THS) detector. It requires only $O(1)$ trainable parameters and
its training and execution cost is $O(n^2)$ per iteration, where $n$ is the
number of transmitting antennas. Numerical results show that the detection
performance of the THS detector is better than that of existing algorithms of
the same complexity and close to that of a DU-based detector, which has higher
training and execution costs than the THS detector.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 06:51:25 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Takabe",
"Satoshi",
""
],
[
"Abe",
"Takashi",
""
]
] |
new_dataset
| 0.990388 |
2302.04486
|
Can Pu
|
Can Pu, Chuanyu Yang, Jinnian Pu and Robert B. Fisher
|
A General Mobile Manipulator Automation Framework for Flexible
Manufacturing in Hostile Industrial Environments
|
25 pages
| null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enabling a mobile manipulator to perform human tasks from a single teaching
demonstration is vital to flexible manufacturing. We call our proposed method
MMPA (Mobile Manipulator Process Automation with One-shot Teaching). Currently,
there is no effective and robust MMPA framework that is not influenced by
harsh industrial environments and the mobile base's parking precision. The
proposed MMPA framework consists of two stages: collecting data (the mobile
base's location, environment information, and the end-effector's path) in the
teaching stage for robot learning; and letting the end-effector repeat nearly
the same path as the reference path in the world frame to reproduce the work
in the automation stage. More specifically, in the automation stage, the robot
navigates to the specified location without the need for precise parking.
Then, based on colored point cloud registration, the proposed IPE (Iterative
Pose Estimation by Eye & Hand) algorithm estimates the accurate 6D relative
parking pose of the robot arm base without the need for any marker. Finally,
the robot learns the error compensation from the parking pose's bias to modify
the end-effector's path so that it repeats nearly the same path in the world
coordinate system as recorded in the teaching stage. Hundreds of trials have
been conducted with a real mobile manipulator to show the superior robustness
of the system and the accuracy of the process automation regardless of the
harsh industrial conditions and parking precision. For the released code,
please contact marketing@amigaga.com
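
A sketch of the registration step underlying IPE, using Open3D's
colored-point-cloud ICP; file names, parameters, and the single-pass setup
are illustrative stand-ins for the paper's iterative eye-and-hand procedure.

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("teach_view.pcd")    # hypothetical files
target = o3d.io.read_point_cloud("current_view.pcd")
for pc in (source, target):
    # Colored ICP needs normals in addition to point colors.
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.04, max_nn=30))

result = o3d.pipelines.registration.registration_colored_icp(
    source, target, 0.02, np.identity(4))
print(result.transformation)  # estimated relative pose as a 4x4 matrix
```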
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 08:17:38 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Pu",
"Can",
""
],
[
"Yang",
"Chuanyu",
""
],
[
"Pu",
"Jinnian",
""
],
[
"Fisher",
"Robert B.",
""
]
] |
new_dataset
| 0.996723 |
2302.04521
|
Shuang Gao
|
Xiaoibin Wang, Shuang Gao, Yuntao Zou, Jianlan Guo and Chu Wang
|
IH-ViT: Vision Transformer-based Integrated Circuit Appearance Defect
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the problems of low recognition rate and slow recognition speed of
traditional detection methods in IC appearance defect detection, we propose an
IC appearance defect detection algorithm, IH-ViT. Our proposed model takes
advantage of the respective strengths of CNN and ViT to acquire image features
from both local and global aspects, and finally fuses the two features for
decision making to determine the class of defects, thus obtaining better
accuracy of IC defect recognition. To address the problem that IC appearance
defects are mainly reflected in differences in details, which are difficult to
identify with traditional algorithms, we improved the traditional ViT by
performing an additional convolution operation inside the batch. To address
the problem of sample information imbalance due to the diverse sources of the
datasets, we adopt a dual-channel image segmentation technique to further
improve the accuracy on IC appearance defects. Finally, after testing, our
proposed hybrid IH-ViT model achieved 72.51% accuracy, which is 2.8% and 6.06%
higher than the ResNet50 and ViT models alone. The proposed algorithm can
quickly and accurately detect the defect status of IC appearance and
effectively improve the productivity of IC packaging and testing companies.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 09:27:40 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Wang",
"Xiaoibin",
""
],
[
"Gao",
"Shuang",
""
],
[
"Zou",
"Yuntao",
""
],
[
"Guo",
"Jianlan",
""
],
[
"Wang",
"Chu",
""
]
] |
new_dataset
| 0.983759 |
2302.04541
|
Georgios Karanztas
|
George Karantzas
|
Forensic Log Based Detection For Keystroke Injection "BadUsb" Attacks
|
15 pages, 3 figures, code examples included
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
This document describes an experiment whose main purpose is to detect BadUSB
attacks that utilize external Human Interface Device (HID) hardware gadgets to
inject keystrokes and acquire remote code execution. One of the main goals is
to detect such activity based on behavioral factors and to allow anyone with a
basic set of cognitive capabilities, regardless of whether the user is a human
or a computer, to identify anomalous speed-related indicators, and also to
correlate such speed changes with other elements, such as commonly abused
processes like PowerShell being invoked in close timing proximity, or PnP
device events occurring in correlation with loaded driver images.
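
A minimal sketch of the speed heuristic described above: flag keystroke
bursts whose inter-key intervals are consistently below a human-plausible
floor. The threshold and log format are assumptions for illustration.

```python
from statistics import median

def is_injected(timestamps_ms, floor_ms=35, min_keys=12):
    """True if a burst of keystrokes is suspiciously fast and regular."""
    if len(timestamps_ms) < min_keys:
        return False
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return median(gaps) < floor_ms  # median gap below human-plausible floor

burst = [1000 + 8 * i for i in range(20)]  # 8 ms between keys: superhuman
print(is_injected(burst))                  # True
```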
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 10:12:54 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Karantzas",
"George",
""
]
] |
new_dataset
| 0.994862 |
2302.04603
|
Kars Alfrink
|
Kars Alfrink, Ianus Keller, Neelke Doorn, Gerd Kortuem
|
Contestable Camera Cars: A Speculative Design Exploration of Public AI
That Is Open and Responsive to Dispute
|
Conditionally accepted to CHI 2023
| null |
10.1145/3544548.3580984
| null |
cs.HC cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Local governments increasingly use artificial intelligence (AI) for automated
decision-making. Contestability, making systems responsive to dispute, is a way
to ensure they respect human rights to autonomy and dignity. We investigate the
design of public urban AI systems for contestability through the example of
camera cars: human-driven vehicles equipped with image sensors. Applying a
provisional framework for contestable AI, we use speculative design to create a
concept video of a contestable camera car. Using this concept video, we then
conduct semi-structured interviews with 17 civil servants who work with AI
employed by a large northwestern European city. The resulting data is analyzed
using reflexive thematic analysis to identify the main challenges facing the
implementation of contestability in public AI. We describe how civic
participation faces issues of representation, public AI systems should
integrate with existing democratic practices, and cities must expand capacities
for responsible AI development and operation.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 12:38:51 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Alfrink",
"Kars",
""
],
[
"Keller",
"Ianus",
""
],
[
"Doorn",
"Neelke",
""
],
[
"Kortuem",
"Gerd",
""
]
] |
new_dataset
| 0.969046 |
2302.04611
|
Shengchao Liu
|
Shengchao Liu, Yutao Zhu, Jiarui Lu, Zhao Xu, Weili Nie, Anthony
Gitter, Chaowei Xiao, Jian Tang, Hongyu Guo, Anima Anandkumar
|
A Text-guided Protein Design Framework
| null | null | null | null |
cs.LG cs.AI q-bio.QM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current AI-assisted protein design mainly utilizes protein sequential and
structural information. Meanwhile, there exists tremendous knowledge curated by
humans in text format describing proteins' high-level properties. Yet,
whether the incorporation of such text data can help protein design tasks has
not been explored. To bridge this gap, we propose ProteinDT, a multi-modal
framework that leverages textual descriptions for protein design. ProteinDT
consists of three subsequent steps: ProteinCLAP that aligns the representation
of two modalities, a facilitator that generates the protein representation from
the text modality, and a decoder that generates the protein sequences from the
representation. To train ProteinDT, we construct a large dataset,
SwissProtCLAP, with 441K text and protein pairs. We empirically verify the
effectiveness of ProteinDT from three aspects: (1) consistently superior
performance on four out of six protein property prediction benchmarks; (2) over
90% accuracy for text-guided protein generation; and (3) promising results for
zero-shot text-guided protein editing.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 12:59:16 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Liu",
"Shengchao",
""
],
[
"Zhu",
"Yutao",
""
],
[
"Lu",
"Jiarui",
""
],
[
"Xu",
"Zhao",
""
],
[
"Nie",
"Weili",
""
],
[
"Gitter",
"Anthony",
""
],
[
"Xiao",
"Chaowei",
""
],
[
"Tang",
"Jian",
""
],
[
"Guo",
"Hongyu",
""
],
[
"Anandkumar",
"Anima",
""
]
] |
new_dataset
| 0.995308 |
2302.04659
|
Jiayuan Gu
|
Jiayuan Gu, Fanbo Xiang, Xuanlin Li, Zhan Ling, Xiqiang Liu, Tongzhou
Mu, Yihe Tang, Stone Tao, Xinyue Wei, Yunchao Yao, Xiaodi Yuan, Pengwei Xie,
Zhiao Huang, Rui Chen, Hao Su
|
ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills
|
Published as a conference paper at ICLR 2023. Project website:
https://maniskill2.github.io/
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Generalizable manipulation skills, which can be composed to tackle
long-horizon and complex daily chores, are one of the cornerstones of Embodied
AI. However, existing benchmarks, mostly composed of a suite of simulatable
environments, are insufficient to push cutting-edge research works because they
lack object-level topological and geometric variations, are not based on fully
dynamic simulation, or are short of native support for multiple types of
manipulation tasks. To this end, we present ManiSkill2, the next generation of
the SAPIEN ManiSkill benchmark, to address critical pain points often
encountered by researchers when using benchmarks for generalizable manipulation
skills. ManiSkill2 includes 20 manipulation task families with 2000+ object
models and 4M+ demonstration frames, which cover stationary/mobile-base,
single/dual-arm, and rigid/soft-body manipulation tasks with 2D/3D-input data
simulated by fully dynamic engines. It defines a unified interface and
evaluation protocol to support a wide range of algorithms (e.g., classic
sense-plan-act, RL, IL), visual observations (point cloud, RGBD), and
controllers (e.g., action type and parameterization). Moreover, it empowers
fast visual input learning algorithms so that a CNN-based policy can collect
samples at about 2000 FPS with 1 GPU and 16 processes on a regular workstation.
It implements a render server infrastructure to allow sharing rendering
resources across all environments, thereby significantly reducing memory usage.
We open-source all codes of our benchmark (simulator, environments, and
baselines) and host an online challenge open to interdisciplinary researchers.
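
A usage sketch assuming the gym-style interface shown on the ManiSkill2
project page; the environment id, observation mode, and control mode are
taken from its documentation and may change across versions.

```python
import gym
import mani_skill2.envs  # registers the ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd",
               control_mode="pd_ee_delta_pose")
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()   # random policy as a placeholder
    obs, reward, done, info = env.step(action)
env.close()
```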
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 14:24:01 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Gu",
"Jiayuan",
""
],
[
"Xiang",
"Fanbo",
""
],
[
"Li",
"Xuanlin",
""
],
[
"Ling",
"Zhan",
""
],
[
"Liu",
"Xiqiang",
""
],
[
"Mu",
"Tongzhou",
""
],
[
"Tang",
"Yihe",
""
],
[
"Tao",
"Stone",
""
],
[
"Wei",
"Xinyue",
""
],
[
"Yao",
"Yunchao",
""
],
[
"Yuan",
"Xiaodi",
""
],
[
"Xie",
"Pengwei",
""
],
[
"Huang",
"Zhiao",
""
],
[
"Chen",
"Rui",
""
],
[
"Su",
"Hao",
""
]
] |
new_dataset
| 0.99956 |
2302.04691
|
Giuseppe Silano
|
Giuseppe Silano, Tomas Baca, Robert Penicka, Davide Liuzza, and Martin
Saska
|
Power Line Inspection Tasks with Multi-Aerial Robot Systems via Signal
Temporal Logic Specifications
|
8 pages, 12 figures, journal paper
|
IEEE Robotics and Automation Letters, vol. 6, no. 2, pp.
4169-4176, April, 2021
|
10.1109/LRA.2021.3068114
| null |
cs.RO cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A framework for computing feasible and constrained trajectories for a fleet
of quad-rotors, leveraging Signal Temporal Logic (STL) specifications for
power line inspection tasks, is proposed in this paper. The planner allows the
formulation of complex missions that avoid obstacles and maintain a safe
distance between drones while performing the planned mission. An optimization
problem is formulated to generate optimal strategies that satisfy these
specifications and also take vehicle constraints into account. Further, an
event-triggered replanner is proposed to react to unforeseen events and
external disturbances. An energy minimization term is also considered to
implicitly save the quad-rotors' battery life while carrying out the mission.
Numerical simulations in MATLAB
and experimental results show the validity and the effectiveness of the
proposed approach, and demonstrate its applicability in real-world scenarios.
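
A toy sketch of the kind of STL robustness measure such planners optimize:
for "always keep at least d_safe separation between two drones", robustness
is the worst-case margin along the trajectories (positive iff satisfied).

```python
import numpy as np

def robustness_always_separated(traj_a, traj_b, d_safe):
    """Robustness of 'always(||a - b|| >= d_safe)' over two trajectories."""
    dists = np.linalg.norm(traj_a - traj_b, axis=1)
    return np.min(dists - d_safe)  # > 0 iff the spec holds at every step

t = np.linspace(0, 1, 50)[:, None]
drone1 = np.hstack([t, np.zeros_like(t), np.ones_like(t)])
drone2 = np.hstack([t, 2 + 0 * t, np.ones_like(t)])
print(robustness_always_separated(drone1, drone2, d_safe=1.5))  # 0.5
```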
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 15:18:14 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Silano",
"Giuseppe",
""
],
[
"Baca",
"Tomas",
""
],
[
"Penicka",
"Robert",
""
],
[
"Liuzza",
"Davide",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.980772 |
2302.04708
|
Giuseppe Silano
|
Andriy Dmytruk, Giuseppe Silano, Davide Bicego, Daniel Bonilla Licea,
and Martin Saska
|
A Perception-Aware NMPC for Vision-Based Target Tracking and Collision
Avoidance with a Multi-Rotor UAV
|
6 pages, 6 figures, conference
|
2022 International Conference on Unmanned Aircraft Systems
(ICUAS), pp. 1668-1673, June, 2022, Dubrovnik, Croatia
|
10.1109/ICUAS54217.2022.9836071
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A perception-aware Nonlinear Model Predictive Control (NMPC) strategy aimed
at performing vision-based target tracking and collision avoidance with a
multi-rotor aerial vehicle is presented in this paper. The proposed control
strategy considers both realistic actuation limits at the torque level and
visual perception constraints to enforce the visibility coverage of a target
while complying with the mission objectives. Furthermore, the approach allows
safe navigation in a workspace area populated by dynamic obstacles with
ballistic motion. The formulation is meant to be generic and applies to a large
class of multi-rotor vehicles that covers both coplanar designs like quadrotors
as well as fully-actuated platforms with tilted propellers. The feasibility and
effectiveness of the control strategy are demonstrated via closed-loop
simulations achieved in MATLAB.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 15:46:29 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Dmytruk",
"Andriy",
""
],
[
"Silano",
"Giuseppe",
""
],
[
"Bicego",
"Davide",
""
],
[
"Licea",
"Daniel Bonilla",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.979924 |
2302.04778
|
Giuseppe Silano
|
Daniel Hert, Tomas Baca, Pavel Petracek, Vit Kratky, Vojtech Spurny,
Matej Petrlik, Matous Vrba, David Zaitlik, Pavel Stoudek, Viktor Walter, Petr
Stepan, Jiri Horyna, Vaclav Pritzl, Giuseppe Silano, Daniel Bonilla Licea,
Petr Stibinger, Robert Penicka, Tiago Nascimento, Martin Saska
|
MRS Modular UAV Hardware Platforms for Supporting Research in Real-World
Outdoor and Indoor Environments
|
10 pages, 17 figures, conference
|
2022 International Conference on Unmanned Aircraft Systems
(ICUAS), pp. 1264-1273, June, 2022, Dubrovnik, Croatia
|
10.1109/ICUAS54217.2022.9836083
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a family of autonomous Unmanned Aerial Vehicle (UAV)
platforms designed for a diverse range of indoor and outdoor applications. The
proposed UAV design is highly modular in terms of the actuators used, sensor
configurations, and even UAV frames. This allows a proper experimental setup
for single- as well as multi-robot scenarios to be achieved with minimal
effort. The presented platforms are intended to facilitate the transition from
simulations and simplified laboratory experiments to the deployment of aerial
robots in uncertain and hard-to-model real-world conditions. We present
mechanical designs, electric configurations, and dynamic models of the UAVs,
followed by numerous recommendations and technical details required for
building such a fully autonomous UAV system for experimental verification of
scientific achievements. To show the strength and high variability of the
proposed system, we present the results of tens of completely different
real-robot experiments in various environments using distinct actuator and
sensory configurations.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 17:11:43 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Hert",
"Daniel",
""
],
[
"Baca",
"Tomas",
""
],
[
"Petracek",
"Pavel",
""
],
[
"Kratky",
"Vit",
""
],
[
"Spurny",
"Vojtech",
""
],
[
"Petrlik",
"Matej",
""
],
[
"Vrba",
"Matous",
""
],
[
"Zaitlik",
"David",
""
],
[
"Stoudek",
"Pavel",
""
],
[
"Walter",
"Viktor",
""
],
[
"Stepan",
"Petr",
""
],
[
"Horyna",
"Jiri",
""
],
[
"Pritzl",
"Vaclav",
""
],
[
"Silano",
"Giuseppe",
""
],
[
"Licea",
"Daniel Bonilla",
""
],
[
"Stibinger",
"Petr",
""
],
[
"Penicka",
"Robert",
""
],
[
"Nascimento",
"Tiago",
""
],
[
"Saska",
"Martin",
""
]
] |
new_dataset
| 0.998762 |
2302.04800
|
Salwa Al Khatib
|
Salwa Al Khatib, Mohamed El Amine Boudjoghra, Jameel Hassan
|
Drawing Attention to Detail: Pose Alignment through Self-Attention for
Fine-Grained Object Classification
|
Course Assignment
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intra-class variations in the open world lead to various challenges in
classification tasks. To overcome these challenges, fine-grained classification
was introduced, and many approaches were proposed. Some rely on locating and
using distinguishable local parts within images to achieve invariance to
viewpoint changes, intra-class differences, and local part deformations. Our
approach, which is inspired by P2P-Net, offers an end-to-end trainable
attention-based part alignment module in which we replace the graph-matching
component of P2P-Net with a self-attention mechanism. The attention module
learns the optimal arrangement of parts, which attend to each other before
contributing to the global loss.
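
A minimal PyTorch sketch of the part-alignment idea: part features attending
to each other through multi-head self-attention. The feature sizes are
illustrative.

```python
import torch
import torch.nn as nn

parts = torch.randn(8, 4, 256)  # (batch, num_parts, feature_dim)
attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
aligned, weights = attn(parts, parts, parts)  # parts attend to each other
print(aligned.shape, weights.shape)           # (8, 4, 256) (8, 4, 4)
```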
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 17:47:47 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Khatib",
"Salwa Al",
""
],
[
"Boudjoghra",
"Mohamed El Amine",
""
],
[
"Hassan",
"Jameel",
""
]
] |
new_dataset
| 0.958482 |
2302.04815
|
Abdul Samadh Jameel Hassan
|
Jameel Hassan Abdul Samadh, Salwa K. Al Khatib
|
To Perceive or Not to Perceive: Lightweight Stacked Hourglass Network
|
Course project
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human pose estimation (HPE) is a classical task in computer vision that
focuses on representing the orientation of a person by identifying the
positions of their joints. We design a lighter version of the stacked
hourglass network with minimal loss in model performance. The lightweight
2-stacked hourglass has a reduced number of channels with depthwise separable
convolutions, residual connections with concatenation, and residual
connections between the necks of the hourglasses. The final model shows a
marginal drop in performance with a 79% reduction in the number of parameters
and a similar reduction in MAdds.
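
A short PyTorch sketch of the depthwise separable convolution used to slim
the network: a per-channel spatial convolution followed by a 1x1 pointwise
convolution; the norm/activation choices are illustrative.

```python
import torch.nn as nn

def sep_conv(in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    """Depthwise separable convolution block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1),                               # pointwise
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```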
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 18:04:43 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Samadh",
"Jameel Hassan Abdul",
""
],
[
"Khatib",
"Salwa K. Al",
""
]
] |
new_dataset
| 0.959737 |
2302.04824
|
Daniela Ushizima
|
Jerome Quenum, David Perlmutter, Ying Huang, Iryna Zenyuk, and Daniela
Ushizima
|
Lithium Metal Battery Quality Control via Transformer-CNN Segmentation
|
17 pages, 5 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Lithium metal batteries (LMBs) have the potential to be the next-generation
battery system because of their high theoretical energy density. However,
defects known as dendrites, formed by heterogeneous lithium (Li) plating,
hinder the development and utilization of LMBs. Non-destructive
techniques to observe the dendrite morphology often use computerized X-ray
tomography (XCT) imaging to provide cross-sectional views. To retrieve
three-dimensional structures inside a battery, image segmentation becomes
essential to quantitatively analyze XCT images. This work proposes a new binary
semantic segmentation approach using a transformer-based neural network (T-Net)
model capable of segmenting out dendrites from XCT data. In addition, we
compare the performance of the proposed T-Net with three other algorithms, such
as U-Net, Y-Net, and E-Net, consisting of an Ensemble Network model for XCT
analysis. Our results show the advantages of using T-Net in terms of object
metrics, such as mean Intersection over Union (mIoU) and mean Dice Similarity
Coefficient (mDSC) as well as qualitatively through several comparative
visualizations.
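
A small NumPy sketch of the reported object metrics, IoU and Dice, for binary
dendrite masks.

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Intersection over Union and Dice coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + gt.sum() + eps)
    return iou, dice
```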
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 18:25:24 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Quenum",
"Jerome",
""
],
[
"Perlmutter",
"David",
""
],
[
"Huang",
"Ying",
""
],
[
"Zenyuk",
"Iryna",
""
],
[
"Ushizima",
"Daniela",
""
]
] |
new_dataset
| 0.995592 |
2302.04868
|
Junxuan Li
|
Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong
Li, Jason Saragih
|
MEGANE: Morphable Eyeglass and Avatar Network
|
Project page: https://junxuan-li.github.io/megane/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Eyeglasses play an important role in the perception of identity. Authentic
virtual representations of faces can benefit greatly from their inclusion.
However, modeling the geometric and appearance interactions of glasses and the
face of virtual representations of humans is challenging. Glasses and faces
affect each other's geometry at their contact points, and also induce
appearance changes due to light transport. Most existing approaches do not
capture these physical interactions since they model eyeglasses and faces
independently. Others attempt to resolve interactions as a 2D image synthesis
problem and suffer from view and temporal inconsistencies. In this work, we
propose a 3D compositional morphable model of eyeglasses that accurately
incorporates high-fidelity geometric and photometric interaction effects. To
support the large variation in eyeglass topology efficiently, we employ a
hybrid representation that combines surface geometry and a volumetric
representation. Unlike volumetric approaches, our model naturally retains
correspondences across glasses, and hence explicit modification of geometry,
such as lens insertion and frame deformation, is greatly simplified. In
addition, our model is relightable under point lights and natural illumination,
supporting high-fidelity rendering of various frame materials, including
translucent plastic and metal within a single morphable model. Importantly, our
approach models global light transport effects, such as casting shadows between
faces and glasses. Our morphable model for eyeglasses can also be fit to novel
glasses via inverse rendering. We compare our approach to state-of-the-art
methods and demonstrate significant quality improvements.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 18:59:49 GMT"
}
] | 2023-02-10T00:00:00 |
[
[
"Li",
"Junxuan",
""
],
[
"Saito",
"Shunsuke",
""
],
[
"Simon",
"Tomas",
""
],
[
"Lombardi",
"Stephen",
""
],
[
"Li",
"Hongdong",
""
],
[
"Saragih",
"Jason",
""
]
] |
new_dataset
| 0.998967 |
2112.05555
|
Mathieu Bernard
|
Mathieu Bernard and Maxime Poli and Julien Karadayi and Emmanuel
Dupoux
|
Shennong: a Python toolbox for audio speech features extraction
| null |
Behavior Research Methods, 2023
|
10.3758/s13428-022-02029-6
| null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce Shennong, a Python toolbox and command-line utility for speech
features extraction. It implements a wide range of well-established
state-of-the-art algorithms, including spectro-temporal filters such as
Mel-Frequency Cepstral Filterbanks or Predictive Linear Filters, pre-trained
neural networks, pitch estimators, as well as speaker normalization methods and
post-processing algorithms. Shennong is an open-source, easy-to-use, reliable
and extensible framework. The use of Python makes integration with other speech
modeling and machine learning tools easy. It aims to replace or complement
several heterogeneous software tools, such as Kaldi or Praat. After describing
the Shennong software architecture, its core components and implemented
algorithms, this paper illustrates its use on three applications: a comparison
of speech features' performance on a phone discrimination task, an analysis of
a Vocal Tract Length Normalization model as a function of the speech duration
used for training, and a comparison of pitch estimation algorithms under
various noise conditions.
|
[
{
"version": "v1",
"created": "Fri, 10 Dec 2021 14:08:52 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Bernard",
"Mathieu",
""
],
[
"Poli",
"Maxime",
""
],
[
"Karadayi",
"Julien",
""
],
[
"Dupoux",
"Emmanuel",
""
]
] |
new_dataset
| 0.9882 |
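To make the kind of spectro-temporal features Shennong computes concrete, here is a hedged NumPy sketch of log-mel filterbank extraction. It deliberately avoids guessing Shennong's actual API, and every constant (16 kHz rate, 512-point FFT, 40 mel bands) is an illustrative assumption:

```python
import numpy as np

def log_mel_filterbank(signal, sr=16000, n_fft=512, hop=160, n_mels=40):
    """Minimal log-mel spectro-temporal features from a 1-D waveform."""
    # frame the signal, window it, and take the power spectrum
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    # triangular mel filterbank between 0 Hz and the Nyquist frequency
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    return np.log(power @ fbank.T + 1e-10)

feats = log_mel_filterbank(np.random.randn(16000))  # one second of noise
```

A further DCT over the mel bands would give MFCCs, one of the feature families the toolbox implements.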
2201.04806
|
Shaoxiong Zhang
|
Shaoxiong Zhang, Yunhong Wang, Tianrui Chai, Annan Li, Anil K. Jain
|
RealGait: Gait Recognition for Person Re-Identification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human gait is considered a unique biometric identifier which can be acquired
in a covert manner at a distance. However, models trained on existing public
domain gait datasets which are captured in controlled scenarios lead to drastic
performance decline when applied to real-world unconstrained gait data. On the
other hand, video person re-identification techniques have achieved promising
performance on large-scale publicly available datasets. Given the diversity of
clothing characteristics, clothing cues are not reliable for person recognition
in general. So it is actually not clear why the state-of-the-art person
re-identification methods work as well as they do. In this paper, we construct
a new gait dataset by extracting silhouettes from an existing video person
re-identification challenge which consists of 1,404 persons walking in an
unconstrained manner. Based on this dataset, a consistent and comparative study
between gait recognition and person re-identification can be carried out. Given
that our experimental results show that current gait recognition approaches
designed under data collected in controlled scenarios are inappropriate for
real surveillance scenarios, we propose a novel gait recognition method, called
RealGait. Our results suggest that recognizing people by their gait in real
surveillance scenarios is feasible and the underlying gait pattern is probably
the true reason why video person re-identification works in practice.
|
[
{
"version": "v1",
"created": "Thu, 13 Jan 2022 06:30:56 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 08:47:52 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Zhang",
"Shaoxiong",
""
],
[
"Wang",
"Yunhong",
""
],
[
"Chai",
"Tianrui",
""
],
[
"Li",
"Annan",
""
],
[
"Jain",
"Anil K.",
""
]
] |
new_dataset
| 0.98867 |
2203.16597
|
Israel Leyva-Mayorga
|
Israel Leyva-Mayorga, Beatriz Soret, Bho Matthiesen, Maik R\"oper,
Dirk W\"ubben, Armin Dekorsy, and Petar Popovski
|
NGSO Constellation Design for Global Connectivity
|
Book chapter submitted to IET Non-Geostationary Satellite
Communications Systems
| null |
10.1049/PBTE105E
| null |
cs.NI eess.SP
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Non-geostationary orbit (NGSO) satellite constellations represent a
cornerstone in the NewSpace paradigm and thus have become one of the hottest
topics not only for industry and academia, but also for national space agencies
and regulators. For instance, numerous companies worldwide, including Starlink,
OneWeb, Kepler, SPUTNIX, and Amazon, have started or will soon start to deploy
their own NGSO constellations, which aim to provide either broadband or IoT
services. One of the major drivers for such high interest in NGSO
constellations is that, with an appropriate design, they are capable of
providing global coverage and connectivity.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 18:26:43 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Apr 2022 15:34:13 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Leyva-Mayorga",
"Israel",
""
],
[
"Soret",
"Beatriz",
""
],
[
"Matthiesen",
"Bho",
""
],
[
"Röper",
"Maik",
""
],
[
"Wübben",
"Dirk",
""
],
[
"Dekorsy",
"Armin",
""
],
[
"Popovski",
"Petar",
""
]
] |
new_dataset
| 0.999613 |
2205.00738
|
Corentin Dumery
|
Corentin Dumery, Fran\c{c}ois Protais, S\'ebastien Mestrallet,
Christophe Bourcier, Franck Ledoux
|
Evocube: a Genetic Labeling Framework for Polycube-Maps
| null |
Computer Graphics Forum, Volume 41, Issue 6, September 2022, Pages
467-479
|
10.1111/cgf.14649
| null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Polycube-maps are used as base-complexes in various fields of computational
geometry, including the generation of regular all-hexahedral meshes free of
internal singularities. However, the strict alignment constraints behind
polycube-based methods make their computation challenging for CAD models used
in numerical simulation via Finite Element Method (FEM). We propose a novel
approach based on an evolutionary algorithm to robustly compute polycube-maps
in this context. We address the labeling problem, which aims to precompute
polycube alignment by assigning one of the base axes to each boundary face on
the input. Previous research has described ways to initialize and improve a
labeling via greedy local fixes. However, such algorithms lack robustness and
often converge to inaccurate solutions for complex geometries. Our proposed
framework alleviates this issue by embedding labeling operations in an
evolutionary heuristic, defining fitness, crossover, and mutations in the
context of labeling optimization. We evaluate our method on a thousand smooth
and CAD meshes, showing Evocube converges to valid labelings on a wide range of
shapes. The limitations of our method are also discussed thoroughly.
|
[
{
"version": "v1",
"created": "Mon, 2 May 2022 08:43:27 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Nov 2022 10:02:59 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Feb 2023 10:30:55 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Dumery",
"Corentin",
""
],
[
"Protais",
"François",
""
],
[
"Mestrallet",
"Sébastien",
""
],
[
"Bourcier",
"Christophe",
""
],
[
"Ledoux",
"Franck",
""
]
] |
new_dataset
| 0.994502 |
2207.01206
|
Shunyu Yao
|
Shunyu Yao, Howard Chen, John Yang, Karthik Narasimhan
|
WebShop: Towards Scalable Real-World Web Interaction with Grounded
Language Agents
|
Project page with code, data, demos: https://webshop-pnlp.github.io.
v3 is NeurIPS camera ready version. v4 fixes the choice oracle result as per
https://github.com/princeton-nlp/WebShop/issues/15
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Existing benchmarks for grounding language in interactive environments either
lack real-world linguistic elements, or prove difficult to scale up due to
substantial human involvement in the collection of data or feedback signals. To
bridge this gap, we develop WebShop -- a simulated e-commerce website
environment with $1.18$ million real-world products and $12,087$ crowd-sourced
text instructions. Given a text instruction specifying a product requirement,
an agent needs to navigate multiple types of webpages and issue diverse actions
to find, customize, and purchase an item. WebShop provides several challenges
for language grounding including understanding compositional instructions,
query (re-)formulation, comprehending and acting on noisy text in webpages, and
performing strategic exploration. We collect over $1,600$ human demonstrations
for the task, and train and evaluate a diverse range of agents using
reinforcement learning, imitation learning, and pre-trained image and language
models. Our best model achieves a task success rate of $29\%$, which
outperforms rule-based heuristics ($9.6\%$) but is far lower than human expert
performance ($59\%$). We also analyze agent and human trajectories and ablate
various model components to provide insights for developing future agents with
stronger language understanding and decision making abilities. Finally, we show
that agents trained on WebShop exhibit non-trivial sim-to-real transfer when
evaluated on amazon.com and ebay.com, indicating the potential value of WebShop
in developing practical web-based agents that can operate in the wild.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 05:30:22 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2022 01:47:06 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Nov 2022 16:44:26 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Feb 2023 01:39:30 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Yao",
"Shunyu",
""
],
[
"Chen",
"Howard",
""
],
[
"Yang",
"John",
""
],
[
"Narasimhan",
"Karthik",
""
]
] |
new_dataset
| 0.980952 |
2207.04236
|
Min H. Kim
|
Inseung Hwang, Daniel S. Jeon, Adolfo Mu\~noz, Diego Gutierrez, Xin
Tong, Min H. Kim
|
Sparse Ellipsometry: Portable Acquisition of Polarimetric SVBRDF and
Shape with Unstructured Flash Photography
| null |
ACM Transactions on Graphics 41, 4, Article 133 (July 2022)
|
10.1145/3528223.3530075
| null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ellipsometry techniques allow measuring the polarization information of
materials, requiring precise rotations of optical components with different
configurations of lights and sensors. This results in cumbersome capture
devices, carefully calibrated in lab conditions, and in very long acquisition
times, usually on the order of a few days per object. Recent techniques allow
capturing polarimetric spatially-varying reflectance information, but are
limited to a single view, or cover all view directions but are limited to
spherical objects made of a single homogeneous material. We present sparse ellipsometry,
a portable polarimetric acquisition method that captures both polarimetric
SVBRDF and 3D shape simultaneously. Our handheld device consists of
off-the-shelf, fixed optical components. Instead of days, the total acquisition
time varies between twenty and thirty minutes per object. We develop a complete
polarimetric SVBRDF model that includes diffuse and specular components, as
well as single scattering, and devise a novel polarimetric inverse rendering
algorithm with data augmentation of specular reflection samples via generative
modeling. Our results show a strong agreement with a recent ground-truth
dataset of captured polarimetric BRDFs of real-world objects.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2022 09:42:59 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 12:27:13 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Hwang",
"Inseung",
""
],
[
"Jeon",
"Daniel S.",
""
],
[
"Muñoz",
"Adolfo",
""
],
[
"Gutierrez",
"Diego",
""
],
[
"Tong",
"Xin",
""
],
[
"Kim",
"Min H.",
""
]
] |
new_dataset
| 0.981512 |
2207.10985
|
Yunlong Ran
|
Yunlong Ran, Jing Zeng, Shibo He, Lincheng Li, Yingfeng Chen, Gimhee
Lee, Jiming Chen, Qi Ye
|
NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction with
Implicit Neural Representations
|
8 pages, 6 figures, 3 tables
|
IEEE Robotics and Automation Letters(RAL) Volume: 8, Issue: 2,
February 2023 Page(s): 1125 - 1132
|
10.1109/LRA.2023.3235686
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Implicit neural representations have shown compelling results in offline 3D
reconstruction and also recently demonstrated the potential for online SLAM
systems. However, applying them to autonomous 3D reconstruction, where a robot
is required to explore a scene and plan a view path for the reconstruction, has
not been studied. In this paper, we explore for the first time the possibility
of using implicit neural representations for autonomous 3D scene reconstruction
by addressing two key challenges: 1) seeking a criterion to measure the quality
of the candidate viewpoints for the view planning based on the new
representations, and 2) learning the criterion from data that can generalize to
different scenes instead of hand-crafting one. To solve these challenges,
firstly, a proxy of Peak Signal-to-Noise Ratio (PSNR) is proposed to quantify a
viewpoint quality; secondly, the proxy is optimized jointly with the parameters
of an implicit neural network for the scene. With the proposed view quality
criterion from neural networks (termed Neural Uncertainty), we can then
apply implicit representations to autonomous 3D reconstruction. Our method
demonstrates significant improvements on various metrics for the rendered image
quality and the geometry quality of the reconstructed 3D models when compared
with variants using TSDF or reconstruction without view planning. Project
webpage https://kingteeloki-ran.github.io/NeurAR/
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 10:05:36 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 06:24:39 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Ran",
"Yunlong",
""
],
[
"Zeng",
"Jing",
""
],
[
"He",
"Shibo",
""
],
[
"Li",
"Lincheng",
""
],
[
"Chen",
"Yingfeng",
""
],
[
"Lee",
"Gimhee",
""
],
[
"Chen",
"Jiming",
""
],
[
"Ye",
"Qi",
""
]
] |
new_dataset
| 0.987024 |
2209.04355
|
Hanlei Zhang
|
Hanlei Zhang, Hua Xu, Xin Wang, Qianrui Zhou, Shaojie Zhao, Jiayan
Teng
|
MIntRec: A New Dataset for Multimodal Intent Recognition
|
Accepted by ACM MM 2022 (Main Track, Long Paper)
| null |
10.1145/3503161.3547906
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal intent recognition is a significant task for understanding human
language in real-world multimodal scenes. Most existing intent recognition
methods have limitations in leveraging multimodal information due to the
restrictions of benchmark datasets containing only text information. This paper
introduces a novel dataset for multimodal intent recognition (MIntRec) to
address this issue. It formulates coarse-grained and fine-grained intent
taxonomies based on the data collected from the TV series Superstore. The
dataset consists of 2,224 high-quality samples with text, video, and audio
modalities and has multimodal annotations among twenty intent categories.
Furthermore, we provide annotated bounding boxes of speakers in each video
segment and achieve an automatic process for speaker annotation. MIntRec is
helpful for researchers to mine relationships between different modalities to
enhance the capability of intent recognition. We extract features from each
modality and model cross-modal interactions by adapting three powerful
multimodal fusion methods to build baselines. Extensive experiments show that
employing the non-verbal modalities achieves substantial improvements compared
with the text-only modality, demonstrating the effectiveness of using
multimodal information for intent recognition. The gap between the
best-performing methods and humans indicates the challenge and importance of
this task for the community. The full dataset and codes are available for use
at https://github.com/thuiar/MIntRec.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 15:37:39 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Zhang",
"Hanlei",
""
],
[
"Xu",
"Hua",
""
],
[
"Wang",
"Xin",
""
],
[
"Zhou",
"Qianrui",
""
],
[
"Zhao",
"Shaojie",
""
],
[
"Teng",
"Jiayan",
""
]
] |
new_dataset
| 0.999877 |
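The MIntRec entry above adapts multimodal fusion methods as baselines. The simplest such baseline, late fusion by concatenation, can be sketched in PyTorch as below; the 20-way intent head mirrors the abstract, while the per-modality embedding dimensions are assumptions:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate per-modality embeddings, then classify the intent."""
    def __init__(self, dims=(768, 512, 256), n_classes=20):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(sum(dims), 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, text_emb, video_emb, audio_emb):
        fused = torch.cat([text_emb, video_emb, audio_emb], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 256))
```

Stronger baselines of the kind the paper adapts replace the concatenation with learned cross-modal interactions.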
2209.05136
|
Jo\~ao Mota
|
Jo\~ao Mota, Marco Giunti, Ant\'onio Ravara
|
On using VeriFast, VerCors, Plural, and KeY to check object usage
| null | null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Typestates are a notion of behavioral types that describe protocols for
stateful objects, specifying the available methods for each state, in terms of
a state machine. Usually, objects with a protocol are either forced to be used in
a linear way, which restricts what a programmer can do, or deductive
verification is required to verify programs where these objects may be aliased.
To evaluate the strengths and limitations of static verification tools for
object-oriented languages in checking the correct use of shared objects with
protocol, we present a survey on four tools for Java: VeriFast, VerCors,
Plural, and KeY. We describe the implementation of a file reader and of a
linked-list, check for each tool its ability to statically guarantee protocol
compliance as well as protocol completion, even when objects are shared in
collections, and evaluate the programmer's effort in making the code acceptable
to these tools.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 10:44:23 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 13:43:41 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Mota",
"João",
""
],
[
"Giunti",
"Marco",
""
],
[
"Ravara",
"António",
""
]
] |
new_dataset
| 0.993494 |
2209.08499
|
Alexander Badri-Spr\"owitz
|
Abhishek Chatterjee, An Mo, Bernadett Kiss, Emre Cemal G\"onen,
Alexander Badri-Spr\"owitz
|
Multi-segmented Adaptive Feet for Versatile Legged Locomotion in Natural
Terrain
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most legged robots are built with leg structures from serially mounted links
and actuators and are controlled through complex controllers and sensor
feedback. In comparison, animals developed multi-segment legs, mechanical
coupling between joints, and multi-segmented feet. They run agilely over all
terrains, arguably with simpler locomotion control. Here we focus on developing
foot mechanisms that resist slipping and sinking also in natural terrain. We
present first results of multi-segment feet mounted to a bird-inspired robot
leg with multi-joint mechanical tendon coupling. Our one- and two-segment,
mechanically adaptive feet show increased viable horizontal forces on multiple
soft and hard substrates before starting to slip. We also observe that
segmented feet reduce sinking on soft substrates compared to ball-feet and
cylinder-feet. We report how multi-segmented feet provide a large range of
viable centre of pressure points well suited for bipedal robots, but also for
quadruped robots on slopes and natural terrain. Our results also offer a
functional understanding of segmented feet in animals like ratite birds.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 08:00:02 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 15:38:16 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Chatterjee",
"Abhishek",
""
],
[
"Mo",
"An",
""
],
[
"Kiss",
"Bernadett",
""
],
[
"Gönen",
"Emre Cemal",
""
],
[
"Badri-Spröwitz",
"Alexander",
""
]
] |
new_dataset
| 0.986356 |
2212.08021
|
Francesco Pierri
|
Francesco Pierri
|
Political advertisement on Facebook and Instagram in the run up to 2022
Italian general election
|
10 pages, 12 figures. To be published in the proceedings of the ACM
Web Science Conference (2023)
| null |
10.1145/3578503.3583598
| null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Targeted advertising on online social platforms has become increasingly
relevant in the political marketing toolkit. Monitoring political advertising
is crucial to ensure accountability and transparency of democratic processes.
Leveraging Meta's public library of sponsored content, we study the extent to
which political ads were delivered on Facebook and Instagram in the run-up to
the 2022 Italian general election. Analyzing over 23k unique ads paid for by
2.7k unique sponsors, with an associated spend of 4M EUR and over 1 billion
views generated, we investigate temporal, geographical, and demographic
patterns of the political campaigning activity of main coalitions. We find
results that are in accordance with their political agenda and the electoral
outcome, highlighting how the most active coalitions also obtained most of the
votes and showing regional differences that are coherent with the (targeted)
political base of each group. Our work raises attention to the need for further
studies of digital advertising and its implications for individuals' opinions
and choices.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 13:37:18 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 12:06:31 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Pierri",
"Francesco",
""
]
] |
new_dataset
| 0.999167 |
2301.05206
|
Lin Jiarong
|
Jiarong Lin, Chongjiang Yuan, Yixi Cai, Haotian Li, Yuying Zou,
Xiaoping Hong and Fu Zhang
|
ImMesh: An Immediate LiDAR Localization and Meshing Framework
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a novel LiDAR(-inertial) odometry and mapping
framework to achieve the goal of simultaneous localization and meshing in
real-time. This proposed framework termed ImMesh comprises four tightly-coupled
modules: receiver, localization, meshing, and broadcaster. The localization
module utilizes the preprocessed sensor data from the receiver, estimates the
sensor pose online by registering LiDAR scans to maps, and dynamically grows
the map. Then, our meshing module takes the registered LiDAR scan for
incrementally reconstructing the triangle mesh on the fly. Finally, the
real-time odometry, map, and mesh are published via our broadcaster. The key
contribution of this work is the meshing module, which represents a scene by an
efficient hierarchical voxels structure, performs fast finding of voxels
observed by new scans, and reconstructs triangle facets in each voxel in an
incremental manner. This voxel-wise meshing operation is delicately designed
for the purpose of efficiency; it first performs a dimension reduction by
projecting 3D points to a 2D local plane contained in the voxel, and then
executes the meshing operation with pull, commit and push steps for incremental
reconstruction of triangle facets. To the best of our knowledge, this is the
first work in the literature that can reconstruct the triangle mesh of
large-scale scenes online, relying only on a standard CPU without GPU acceleration. To
share our findings and make contributions to the community, we make our code
publicly available on our GitHub: https://github.com/hku-mars/ImMesh.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 18:43:16 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 02:26:59 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Lin",
"Jiarong",
""
],
[
"Yuan",
"Chongjiang",
""
],
[
"Cai",
"Yixi",
""
],
[
"Li",
"Haotian",
""
],
[
"Zou",
"Yuying",
""
],
[
"Hong",
"Xiaoping",
""
],
[
"Zhang",
"Fu",
""
]
] |
new_dataset
| 0.99223 |
2302.01791
|
Jiayu Jiao Jr
|
Jiayu Jiao, Yu-Ming Tang, Kun-Yu Lin, Yipeng Gao, Jinhua Ma, Yaowei
Wang and Wei-Shi Zheng
|
DilateFormer: Multi-Scale Dilated Transformer for Visual Recognition
|
Accepted to IEEE Transaction on Multimedia, 2023 (Submission date:
22-Sep-2022)
|
IEEE Transaction on Multimedia, 2023
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a de facto solution, the vanilla Vision Transformers (ViTs) are encouraged
to model long-range dependencies between arbitrary image patches while the
global attended receptive field leads to quadratic computational cost. Another
branch of Vision Transformers exploits local attention inspired by CNNs, which
only models the interactions between patches in small neighborhoods. Although
such a solution reduces the computational cost, it naturally suffers from small
attended receptive fields, which may limit the performance. In this work, we
explore effective Vision Transformers to pursue a preferable trade-off between
the computational complexity and size of the attended receptive field. By
analyzing the patch interaction of global attention in ViTs, we observe two key
properties in the shallow layers, namely locality and sparsity, indicating the
redundancy of global dependency modeling in shallow layers of ViTs.
Accordingly, we propose Multi-Scale Dilated Attention (MSDA) to model local and
sparse patch interaction within the sliding window. With a pyramid
architecture, we construct a Multi-Scale Dilated Transformer (DilateFormer) by
stacking MSDA blocks at low-level stages and global multi-head self-attention
blocks at high-level stages. Our experimental results show that our DilateFormer
achieves state-of-the-art performance on various vision tasks. On ImageNet-1K
classification task, DilateFormer achieves comparable performance with 70%
fewer FLOPs compared with existing state-of-the-art models. Our
DilateFormer-Base achieves 85.6% top-1 accuracy on ImageNet-1K classification
task, 53.5% box mAP/46.1% mask mAP on COCO object detection/instance
segmentation task and 51.1% MS mIoU on ADE20K semantic segmentation task.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 14:59:31 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Jiao",
"Jiayu",
""
],
[
"Tang",
"Yu-Ming",
""
],
[
"Lin",
"Kun-Yu",
""
],
[
"Gao",
"Yipeng",
""
],
[
"Ma",
"Jinhua",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] |
new_dataset
| 0.997254 |
2302.02232
|
Mustafa Jarrar
|
Sana Ghanem, Mustafa Jarrar, Radi Jarrar, Ibrahim Bounhas
|
A Benchmark and Scoring Algorithm for Enriching Arabic Synonyms
| null |
The 12th International Global Wordnet Conference (GWC2023), Global
Wordnet Association. (pp. ). San Sebastian, Spain, 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the task of extending a given synset with additional
synonyms taking into account synonymy strength as a fuzzy value. Given a
mono/multilingual synset and a threshold (a fuzzy value [0-1]), our goal is to
extract new synonyms above this threshold from existing lexicons. We present
twofold contributions: an algorithm and a benchmark dataset. The dataset
consists of 3K candidate synonyms for 500 synsets. Each candidate synonym is
annotated with a fuzzy value by four linguists. The dataset is important for
(i) understanding how much linguists (dis/)agree on synonymy, in addition to
(ii) using the dataset as a baseline to evaluate our algorithm. Our proposed
algorithm extracts synonyms from existing lexicons and computes a fuzzy value
for each candidate. Our evaluations show that the algorithm behaves like a
linguist and its fuzzy values are close to those proposed by linguists (using
RMSE and MAE). The dataset and a demo page are publicly available at
https://portal.sina.birzeit.edu/synonyms.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 20:30:32 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Ghanem",
"Sana",
""
],
[
"Jarrar",
"Mustafa",
""
],
[
"Jarrar",
"Radi",
""
],
[
"Bounhas",
"Ibrahim",
""
]
] |
new_dataset
| 0.999696 |
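Since the benchmark above scores candidate synonyms with fuzzy values and evaluates agreement with linguists via RMSE and MAE, a minimal NumPy sketch of that comparison (illustrative only, not the paper's evaluation script):

```python
import numpy as np

def agreement(pred, gold):
    """RMSE and MAE between algorithmic fuzzy values and linguist annotations,
    both expected in [0, 1]."""
    pred = np.asarray(pred, dtype=float)
    gold = np.asarray(gold, dtype=float)
    rmse = float(np.sqrt(np.mean((pred - gold) ** 2)))
    mae = float(np.mean(np.abs(pred - gold)))
    return rmse, mae

rmse, mae = agreement([0.8, 0.3, 0.6], [0.9, 0.2, 0.6])
```

Low values on both measures are what the abstract means by the algorithm "behaving like a linguist".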
2302.02956
|
Dmytro Pavlichenko
|
Dmytro Pavlichenko, Grzegorz Ficht, Arash Amini, Mojtaba Hosseini,
Raphael Memmesheimer, Angel Villar-Corrales, Stefan M. Schulz, Marcell
Missura, Maren Bennewitz, Sven Behnke
|
RoboCup 2022 AdultSize Winner NimbRo: Upgraded Perception, Capture Steps
Gait and Phase-based In-walk Kicks
| null |
In: RoboCup 2022: Robot World Cup XXV. LNCS 13561, Springer, May
2023
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Beating the human world champions by 2050 is an ambitious goal of the
Humanoid League that provides a strong incentive for RoboCup teams to further
improve and develop their systems. In this paper, we present upgrades of our
system which enabled our team NimbRo to win the Soccer Tournament, the Drop-in
Games, and the Technical Challenges in the Humanoid AdultSize League of RoboCup
2022. Strong performance in these competitions resulted in the Best Humanoid
award in the Humanoid League. The mentioned upgrades include: hardware upgrade
of the vision module, balanced walking with Capture Steps, and the introduction
of phase-based in-walk kicks.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 17:38:46 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 23:22:20 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Pavlichenko",
"Dmytro",
""
],
[
"Ficht",
"Grzegorz",
""
],
[
"Amini",
"Arash",
""
],
[
"Hosseini",
"Mojtaba",
""
],
[
"Memmesheimer",
"Raphael",
""
],
[
"Villar-Corrales",
"Angel",
""
],
[
"Schulz",
"Stefan M.",
""
],
[
"Missura",
"Marcell",
""
],
[
"Bennewitz",
"Maren",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.997856 |
2302.03126
|
Sanad Malaysha
|
Sanad Malaysha, Mustafa Jarrar, Mohammed Khalilia
|
Context-Gloss Augmentation for Improving Arabic Target Sense
Verification
| null |
The 12th International Global Wordnet Conference (GWC2023), Global
Wordnet Association. (pp. ). San Sebastian, Spain, 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Arabic language lacks semantic datasets and sense inventories. The most
common semantically-labeled dataset for Arabic is the ArabGlossBERT, a
relatively small dataset that consists of 167K context-gloss pairs (about 60K
positive and 107K negative pairs), collected from Arabic dictionaries. This
paper presents an enrichment to the ArabGlossBERT dataset, by augmenting it
using (Arabic-English-Arabic) machine back-translation. Augmentation increased
the dataset size to 352K pairs (149K positive and 203K negative pairs). We
measure the impact of augmentation using different data configurations to
fine-tune BERT on target sense verification (TSV) task. Overall, the accuracy
ranges between 78% and 84% for different data configurations. Although our
approach performed on par with the baseline, we did observe some improvements
for some POS tags in some experiments. Furthermore, our fine-tuned models are
trained on a larger dataset covering larger vocabulary and contexts. We provide
an in-depth analysis of the accuracy for each part-of-speech (POS).
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 21:24:02 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Malaysha",
"Sanad",
""
],
[
"Jarrar",
"Mustafa",
""
],
[
"Khalilia",
"Mohammed",
""
]
] |
new_dataset
| 0.999684 |
2302.03778
|
Rachel Horne Ms
|
Rachel Horne, Tom Putland, Mark Brady
|
Regulating trusted autonomous systems in Australia
|
12 pages
| null | null | null |
cs.CY cs.ET
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Australia is a leader in autonomous systems technology, particularly in the
mining industry, borne from necessity in a geographically dispersed and complex
natural environment. Increasingly advanced autonomous systems are becoming more
prevalent in Australia, particularly as the safety, environmental and
efficiency benefits become better understood, and the increasing sophistication
of technology improves capability and availability. Increasing use of these
systems, including in the maritime domain and air domain, is placing pressure
on the national safety regulators, who must either continue to apply their
traditional regulatory approach requiring exemptions to enable operation of
emerging technology, or seize the opportunity to put in place an agile and
adaptive approach better suited to the rapid developments of the twenty-first
century. In Australia, the key national safety regulators have demonstrated an
appetite for working with industry to facilitate innovation, but their limited
resources mean progress is slow. There is a critical role to be played by third
parties from industry, government, and academia who can work together to
develop, test and publish new assurance and accreditation frameworks for
trusted autonomous systems, and assist in the transition to an adaptive and
agile regulatory philosophy. This is necessary to ensure the benefits of
autonomous systems can be realised, without compromising safety. This paper
will identify the growing use cases for autonomous systems in Australia, in the
maritime, air and land domains, assess the current regulatory framework, argue
that Australia's regulatory approach needs to become more agile and
anticipatory, and investigate how third party projects could positively impact
the assurance and accreditation process for autonomous systems in the future.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 22:26:17 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Horne",
"Rachel",
""
],
[
"Putland",
"Tom",
""
],
[
"Brady",
"Mark",
""
]
] |
new_dataset
| 0.992978 |
2302.03815
|
Shuaiqi Liu
|
Shuaiqi Liu, Jiannong Cao, Ruosong Yang, Zhiyuan Wen
|
Long Text and Multi-Table Summarization: Dataset and Method
|
EMNLP 2022 Findings
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic document summarization aims to produce a concise summary covering
the input document's salient information. Within a report document, the salient
information can be scattered in the textual and non-textual content. However,
existing document summarization datasets and methods usually focus on the text
and filter out the non-textual content. Missing tabular data can limit produced
summaries' informativeness, especially when summaries require covering
quantitative descriptions of critical metrics in tables. Existing datasets and
methods cannot meet the requirements of summarizing long text and multiple
tables in each report. To deal with the scarcity of available data, we propose
FINDSum, the first large-scale dataset for long text and multi-table
summarization. Built on 21,125 annual reports from 3,794 companies, it has two
subsets for summarizing each company's results of operations and liquidity. To
summarize the long text and dozens of tables in each report, we present three
types of summarization methods. Besides, we propose a set of evaluation metrics
to assess the usage of numerical information in produced summaries. Dataset
analyses and experimental results indicate the importance of jointly
considering input textual and tabular data when summarizing report documents.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 00:46:55 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Liu",
"Shuaiqi",
""
],
[
"Cao",
"Jiannong",
""
],
[
"Yang",
"Ruosong",
""
],
[
"Wen",
"Zhiyuan",
""
]
] |
new_dataset
| 0.999249 |
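FINDSum's evaluation emphasizes the usage of numerical information in produced summaries. A crude stand-in for such a metric, the fraction of numbers in a summary that are traceable to the input tables, might look like the sketch below; it is not the paper's proposed metric:

```python
import re

def numeric_coverage(summary, tables):
    """Fraction of numbers in `summary` that also appear in `tables`
    (a list of table strings). Returns 1.0 for number-free summaries."""
    nums = lambda s: set(re.findall(r"\d+(?:\.\d+)?", s))
    table_nums = set().union(*(nums(t) for t in tables)) if tables else set()
    summary_nums = nums(summary)
    if not summary_nums:
        return 1.0
    return len(summary_nums & table_nums) / len(summary_nums)

score = numeric_coverage("Revenue grew 12.5% to 3794 million.",
                         ["metric,value\nrevenue growth,12.5\nrevenue,3794"])
```

A real metric would additionally normalize units and match numbers approximately, but the intent, rewarding summaries whose figures are grounded in the tables, is the same.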
2302.03820
|
Fan Yang
|
Fan Yang, Shigeyuki Odashima, Sosuke Yamao, Hiroaki Fujimoto, Shoichi
Masui, and Shan Jiang
|
A Unified Multi-view Multi-person Tracking Framework
|
Accepted to Computational Visual Media
|
Computational Visual Media, 2023
| null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Although there has been significant development in 3D Multi-view Multi-person
Tracking (3D MM-Tracking), current 3D MM-Tracking frameworks are designed
separately for footprint and pose tracking. Specifically, frameworks designed
for footprint tracking cannot be utilized in 3D pose tracking, because they
directly obtain 3D positions on the ground plane with a homography projection,
which is inapplicable to 3D poses above the ground. In contrast, frameworks
designed for pose tracking generally isolate multi-view and multi-frame
associations and may not be robust to footprint tracking, since footprint
tracking utilizes fewer key points than pose tracking, which weakens multi-view
association cues in a single frame. This study presents a Unified Multi-view
Multi-person Tracking framework to bridge the gap between footprint tracking
and pose tracking. Without additional modifications, the framework can adopt
monocular 2D bounding boxes and 2D poses as the input to produce robust 3D
trajectories for multiple persons. Importantly, multi-frame and multi-view
information are jointly employed to improve the performance of association and
triangulation. The effectiveness of our framework is verified by accomplishing
state-of-the-art performance on the Campus and Shelf datasets for 3D pose
tracking, and by comparable results on the WILDTRACK and MMPTRACK datasets for
3D footprint tracking.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 01:08:02 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Yang",
"Fan",
""
],
[
"Odashima",
"Shigeyuki",
""
],
[
"Yamao",
"Sosuke",
""
],
[
"Fujimoto",
"Hiroaki",
""
],
[
"Masui",
"Shoichi",
""
],
[
"Jiang",
"Shan",
""
]
] |
new_dataset
| 0.993802 |
2302.03877
|
Mijanur Rahman
|
Md. Mijanur Rahman, Md Tanzinul Kabir Tonmoy, Saifur Rahman Shihab,
Riya Farhana
|
Blockchain-based certificate authentication system with enabling
correction
|
Under peer-review
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchain has proven to be an emerging technology in the digital world,
changing the way everyone thinks about data security and bringing efficiency to
several industries. It has already been applied to a wide range of
applications, from financial services and supply chain management to voting
systems and identity verification. An organization must verify its candidates
before selecting them. Choosing an unqualified candidate can ruin an
organization's reputation. In this digital era, fraudulent schemes are rampant
in many companies, and one of the most significant is certificate fraud. It is
possible to validate a candidate's qualifications using traditional methods,
but there are drawbacks such as security issues and time consumption. In this
paper, a blockchain-based academic certificate authentication system is
proposed to ensure authenticity and make the assertions of the decentralized
system secure. The system can generate, authenticate, and correct academic
certificates. Although some blockchain-based authentication systems already
exist, they cannot correct errors that occur during generation. The proposed
system will help in many ways, such as enabling a user-friendly university
admission process and a smooth job-hiring process. In conclusion, our proposed
system can permanently eradicate certificate forgery and create and
promote trust in society.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 04:42:48 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Rahman",
"Md. Mijanur",
""
],
[
"Tonmoy",
"Md Tanzinul Kabir",
""
],
[
"Shihab",
"Saifur Rahman",
""
],
[
"Farhana",
"Riya",
""
]
] |
new_dataset
| 0.99713 |
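To ground the correction-enabled design sketched in the abstract above, here is a minimal hash-chained ledger in Python where a "correction" is a new block that supersedes an earlier one rather than mutating it. This is a conceptual sketch under that assumption, not the proposed system's implementation:

```python
import hashlib
import json
import time

def _hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class CertificateChain:
    """Append-only chain of certificate blocks; history is never rewritten."""
    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "cert": None,
                        "supersedes": None, "ts": 0}]  # genesis block

    def add(self, cert, supersedes=None):
        block = {"index": len(self.blocks), "prev": _hash(self.blocks[-1]),
                 "cert": cert, "supersedes": supersedes, "ts": time.time()}
        self.blocks.append(block)
        return block

    def verify(self):
        """Recompute every link; any tampering breaks the chain."""
        return all(self.blocks[i]["prev"] == _hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

chain = CertificateChain()
orig = chain.add({"student": "A. Student", "degree": "BSc", "year": 2022})
chain.add({"student": "A. Student", "degree": "BSc", "year": 2023},
          supersedes=orig["index"])  # a correction as a superseding block
assert chain.verify()
```

Keeping corrections as superseding blocks preserves the tamper-evidence that makes the ledger trustworthy in the first place.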
2302.03893
|
Shiyuan Sun
|
Shiyuan Sun, Weidong Mei, Fang Yang, Nan An, Jian Song, and Rui Zhang
|
Optical Intelligent Reflecting Surface Assisted MIMO VLC: Channel
Modeling and Capacity Characterization
|
30 pages, 7 figures, 3 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although the multi-antenna or so-called multiple-input multiple-output (MIMO)
transmission has been the enabling technology for the past generations of
radio-frequency (RF)-based wireless communication systems, its application to
visible light communication (VLC) still faces a critical challenge, as the
MIMO spatial multiplexing gain can be hardly attained in VLC channels due to
their strong spatial correlation. In this paper, we tackle this problem by
deploying the optical intelligent reflecting surface (OIRS) in the environment
to boost the capacity of MIMO VLC. Firstly, based on the extremely near-field
channel condition in VLC, we propose a new channel model for OIRS-assisted MIMO
VLC and reveal its peculiar ``no crosstalk'' property, where the OIRS
reflecting elements can be respectively configured to align with one pair of
transmitter and receiver antennas without causing crosstalk to each other.
Next, we characterize the OIRS-assisted MIMO VLC capacities under different
practical power constraints and then proceed to maximize them by jointly
optimizing the OIRS element alignment and transmitter emission power. In
particular, for optimizing the OIRS element alignment, we propose two
algorithms, namely, location-aided interior-point algorithm and log-det-based
alternating optimization algorithm, to balance the performance versus
complexity trade-off; while the optimal transmitter emission power is derived
in closed form. Numerical results are provided, which validate the capacity
improvement of OIRS-assisted MIMO VLC against the VLC without OIRS and
demonstrate the superior performance of the proposed algorithms compared to
baseline schemes.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 06:00:58 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Sun",
"Shiyuan",
""
],
[
"Mei",
"Weidong",
""
],
[
"Yang",
"Fang",
""
],
[
"An",
"Nan",
""
],
[
"Song",
"Jian",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.992415 |
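The "no crosstalk" property described above means each OIRS reflecting element can be configured independently for its best transmitter-receiver antenna pair. A NumPy sketch of that independent alignment over a given gain tensor (an illustration only; the paper's algorithms additionally handle power constraints and complexity trade-offs):

```python
import numpy as np

def align_elements(gains):
    """gains[m, i, j]: channel gain when OIRS element m reflects from Tx
    antenna i to Rx antenna j. With no crosstalk, each element can be
    aligned independently to its best (i, j) pair."""
    M, Nt, Nr = gains.shape
    flat_best = gains.reshape(M, -1).argmax(axis=1)
    return np.stack(np.unravel_index(flat_best, (Nt, Nr)), axis=1)  # (M, 2)

pairs = align_elements(np.random.rand(64, 4, 4))  # 64 elements, 4x4 MIMO
```

The per-element independence is exactly what makes the alignment problem separable instead of a joint combinatorial search.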
2302.03905
|
Chengyue Jiang
|
Chengyue Jiang, Yong Jiang, Weiqi Wu, Yuting Zheng, Pengjun Xie, Kewei
Tu
|
COMBO: A Complete Benchmark for Open KG Canonicalization
|
18 pages
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
An open knowledge graph (KG) consists of (subject, relation, object) triples
extracted from millions of raw texts. The subject and object noun phrases and
the relation in open KG have severe redundancy and ambiguity and need to be
canonicalized. Existing datasets for open KG canonicalization only provide gold
entity-level canonicalization for noun phrases. In this paper, we present
COMBO, a Complete Benchmark for Open KG canonicalization. Compared with
existing datasets, we additionally provide gold canonicalization for relation
phrases, gold ontology-level canonicalization for noun phrases, as well as
source sentences from which triples are extracted. We also propose metrics for
evaluating each type of canonicalization. On the COMBO dataset, we empirically
compare previously proposed canonicalization methods as well as a few simple
baseline methods based on pretrained language models. We find that properly
encoding the phrases in a triple using pretrained language models results in
better relation canonicalization and ontology-level canonicalization of the
noun phrase. We release our dataset, baselines, and evaluation scripts at
https://github.com/jeffchy/COMBO/tree/main.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 06:46:01 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Jiang",
"Chengyue",
""
],
[
"Jiang",
"Yong",
""
],
[
"Wu",
"Weiqi",
""
],
[
"Zheng",
"Yuting",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Tu",
"Kewei",
""
]
] |
new_dataset
| 0.999353 |
2302.03914
|
Jiawei Liu
|
Jiawei Liu and Xingping Dong and Sanyuan Zhao and Jianbing Shen
|
Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for
Autonomous Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have witnessed huge successes in 3D object detection to
recognize common objects for autonomous driving (e.g., vehicles and
pedestrians). However, most methods rely heavily on a large amount of
well-labeled training data. This limits their capability of detecting rare
fine-grained objects (e.g., police cars and ambulances), which is important for
special cases, such as emergency rescue, and so on. To achieve simultaneous
detection for both common and rare objects, we propose a novel task, called
generalized few-shot 3D object detection, where we have a large amount of
training data for common (base) objects, but only a few data for rare (novel)
classes. Specifically, we analyze in-depth differences between images and point
clouds, and then present a practical principle for the few-shot setting in the
3D LiDAR dataset. To solve this task, we propose a simple and effective
detection framework, including (1) an incremental fine-tuning method to extend
existing 3D detection models to recognize both common and rare objects, and (2)
a sample adaptive balance loss to alleviate the issue of long-tailed data
distribution in autonomous driving scenarios. On the nuScenes dataset, we
conduct sufficient experiments to demonstrate that our approach can
successfully detect the rare (novel) classes that contain only a few training
data, while also maintaining the detection accuracy of common objects.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 07:11:36 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Liu",
"Jiawei",
""
],
[
"Dong",
"Xingping",
""
],
[
"Zhao",
"Sanyuan",
""
],
[
"Shen",
"Jianbing",
""
]
] |
new_dataset
| 0.994821 |
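The sample-adaptive balance loss mentioned above is the paper's own remedy for long-tailed data. As a generic point of comparison only, a class-balanced cross-entropy with effective-number weights (in the style of Cui et al., 2019, not the paper's exact loss) can be sketched in PyTorch:

```python
import torch
import torch.nn.functional as F

def class_balanced_ce(logits, targets, class_counts, beta=0.999):
    """Cross-entropy reweighted by the 'effective number' of samples per
    class, so rare (novel) classes are not drowned out by common ones."""
    counts = torch.as_tensor(class_counts, dtype=torch.float,
                             device=logits.device)
    weights = (1.0 - beta) / (1.0 - beta ** counts)
    weights = weights / weights.sum() * len(counts)  # normalize to mean 1
    return F.cross_entropy(logits, targets, weight=weights)

logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
loss = class_balanced_ce(logits, targets, class_counts=[10000, 500, 20])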
2302.03924
|
Zhongxin Liu
|
Zhongxin Liu, Zhijie Tang, Xin Xia, Xiaohu Yang
|
CCRep: Learning Code Change Representations via Pre-Trained Code Model
and Query Back
|
Accepted by ICSE 2023
| null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Representing code changes as numeric feature vectors, i.e., code change
representations, is usually an essential step to automate many software
engineering tasks related to code changes, e.g., commit message generation and
just-in-time defect prediction. Intuitively, the quality of code change
representations is crucial for the effectiveness of automated approaches. Prior
work on code changes usually designs and evaluates code change representation
approaches for a specific task, and little work has investigated code change
encoders that can be used and jointly trained on various tasks. To fill this
gap, this work proposes a novel Code Change Representation learning approach
named CCRep, which can learn to encode code changes as feature vectors for
diverse downstream tasks. Specifically, CCRep regards a code change as the
combination of its before-change and after-change code, leverages a pre-trained
code model to obtain high-quality contextual embeddings of code, and uses a
novel mechanism named query back to extract and encode the changed code
fragments and make them explicitly interact with the whole code change. To
evaluate CCRep and demonstrate its applicability to diverse code-change-related
tasks, we apply it to three tasks: commit message generation, patch correctness
assessment, and just-in-time defect prediction. Experimental results show that
CCRep outperforms the state-of-the-art techniques on each task.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 07:43:55 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Liu",
"Zhongxin",
""
],
[
"Tang",
"Zhijie",
""
],
[
"Xia",
"Xin",
""
],
[
"Yang",
"Xiaohu",
""
]
] |
new_dataset
| 0.972443 |
2302.03941
|
Sankarshan Damle
|
Sankarshan Damle, Vlasis Koutsos, Dimitrios Papadopoulos, Dimitris
Chatzopoulos, Sujit Gujar
|
AVeCQ: Anonymous Verifiable Crowdsourcing with Worker Qualities
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In crowdsourcing systems, requesters publish tasks, and interested workers
provide answers to get rewards. Worker anonymity motivates participation since
it protects their privacy. Anonymity with unlinkability is an enhanced version
of anonymity because it makes it impossible to ``link'' workers across the
tasks they participate in. Another core feature of crowdsourcing systems is
worker quality which expresses a worker's trustworthiness and quantifies their
historical performance. Notably, worker quality depends on the participation
history, revealing information about it, while unlinkability aims to
disassociate the workers' identities from their past activity. In this work, we
present AVeCQ, the first crowdsourcing system that reconciles these properties,
achieving enhanced anonymity and verifiable worker quality updates. AVeCQ
relies on a suite of cryptographic tools, such as zero-knowledge proofs, to (i)
guarantee workers' privacy, (ii) prove the correctness of worker quality scores
and task answers, and (iii) commensurate payments. AVeCQ is developed
modularly, where the requesters and workers communicate over a platform that
supports pseudonymity, information logging, and payments. In order to compare
AVeCQ with the state-of-the-art, we prototype it over Ethereum. AVeCQ
outperforms the state-of-the-art in three popular crowdsourcing tasks (image
annotation, average review, and Gallup polls). For instance, for an Average
Review task with $5$ choices and $128$ participating workers, AVeCQ is 40\%
faster (including overhead to compute and verify the necessary proofs and
blockchain transaction processing time) with the task's requester consuming
87\% fewer gas units.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 08:34:28 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Damle",
"Sankarshan",
""
],
[
"Koutsos",
"Vlasis",
""
],
[
"Papadopoulos",
"Dimitrios",
""
],
[
"Chatzopoulos",
"Dimitris",
""
],
[
"Gujar",
"Sujit",
""
]
] |
new_dataset
| 0.95819 |
2302.03984
|
Andrew Adamatzky
|
Andrew Adamatzky, Giuseppe Tarabella, Neil Phillips, Alessandro
Chiolerio, Passquale D'Angelo, Anna Nicolaidou, Georgios Ch. Sirakoulis
|
Kombucha electronics
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Kombucha is a drink of tea and sugar fermented by over sixty kinds of yeasts
and bacteria. This symbiotic community produces kombucha mats, which are
cellulose-based hydrogels. The kombucha mats can be used as an alternative to
animal leather in industry and fashion once they have been dried and cured.
Prior to this study, we demonstrated that living kombucha mats display dynamic
electrical activity and distinct responses to stimulation. For use in organic
textiles, cured mats of kombucha are inert. To make kombucha wearables
functional, it is necessary to incorporate electrical circuits. We demonstrate
that creating electrical conductors on kombucha mats is possible. After
repeated bending and stretching, the circuits maintain their functionality. In
addition, the electronic properties of the proposed kombucha circuits, which
are lighter, less expensive, and more flexible than conventional electronic
systems, pave the way for their use in a diverse range of applications.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 10:48:42 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Adamatzky",
"Andrew",
""
],
[
"Tarabella",
"Giuseppe",
""
],
[
"Phillips",
"Neil",
""
],
[
"Chiolerio",
"Alessandro",
""
],
[
"D'Angelo",
"Passquale",
""
],
[
"Nicolaidou",
"Anna",
""
],
[
"Sirakoulis",
"Georgios Ch.",
""
]
] |
new_dataset
| 0.999564 |
2302.03997
|
Yuan Cao
|
Yuan Cao, Xudong Zhang, Fan Zhang, Feifei Kou, Josiah Poon, Xiongnan
Jin, Yongheng Wang and Jinpeng Chen
|
SimCGNN: Simple Contrastive Graph Neural Network for Session-based
Recommendation
| null | null | null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The session-based recommendation (SBR) problem, which focuses on next-item
prediction for anonymous users, has received increasingly more attention from
researchers. Existing graph-based SBR methods all lack the ability to
differentiate between sessions with the same last item, and suffer from severe
popularity bias. Inspired by emerging contrastive learning methods,
this paper presents a Simple Contrastive Graph Neural Network for Session-based
Recommendation (SimCGNN). In SimCGNN, we first obtain normalized session
embeddings on constructed session graphs. We next construct positive and
negative samples of the sessions via two forward propagations and a novel
negative-sample selection strategy, and then calculate the contrastive loss.
Finally, session embeddings are used to give predictions. Extensive experiments
conducted on two real-world datasets show our SimCGNN achieves a significant
improvement over state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 11:13:22 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Cao",
"Yuan",
""
],
[
"Zhang",
"Xudong",
""
],
[
"Zhang",
"Fan",
""
],
[
"Kou",
"Feifei",
""
],
[
"Poon",
"Josiah",
""
],
[
"Jin",
"Xiongnan",
""
],
[
"Wang",
"Yongheng",
""
],
[
"Chen",
"Jinpeng",
""
]
] |
new_dataset
| 0.996042 |
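The contrastive objective mentioned in the SimCGNN abstract can be illustrated with a standard InfoNCE formulation over session embeddings. This is a generic sketch; the paper's exact loss and negative-selection strategy may differ:

```python
import torch
import torch.nn.functional as F

def session_contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull two views of a session together, push the
    selected negatives away. Shapes: (B, d), (B, d), (B, K, d)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(-1, keepdim=True) / tau          # (B, 1)
    neg = torch.einsum("bd,bkd->bk", anchor, negatives) / tau      # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(len(anchor), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)  # positive sits at index 0

loss = session_contrastive_loss(torch.randn(16, 64), torch.randn(16, 64),
                                torch.randn(16, 8, 64))
```

The two "views" here would come from the two forward propagations the abstract describes, with the negatives drawn by the proposed selection strategy.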
2302.04038
|
Andr\'e Panisson
|
Arthur Capozzi, Gianmarco De Francisci Morales, Yelena Mejova, Corrado
Monti, Andr\'e Panisson
|
The Thin Ideology of Populist Advertising on Facebook during the 2019 EU
Elections
| null |
In Proceedings of the ACM Web Conference 2023 (WWW '23), May 1-5,
2023, Austin, TX, USA. ACM, New York, NY, USA, 11 pages
|
10.1145/3543507.3583267
| null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media has been an important tool in the expansion of the populist
message, and it is thought to have contributed to the electoral success of
populist parties in the past decade. This study compares how populist parties
advertised on Facebook during the 2019 European Parliamentary election. In
particular, we examine commonalities and differences in which audiences they
reach and on which issues they focus. By using data from Meta (previously
Facebook) Ad Library, we analyze 45k ad campaigns by 39 parties, both populist
and mainstream, in Germany, United Kingdom, Italy, Spain, and Poland. While
populist parties represent just over 20% of the total expenditure on political
ads, they account for 40% of the total impressions – most of
which come from Eurosceptic and far-right parties – thus hinting at a
competitive advantage for populist parties on Facebook. We further find that
ads posted by populist parties are more likely to reach male audiences, and
sometimes much older ones. In terms of issues, populist politicians focus on
monetary policy, state bureaucracy and reforms, and security, while the focus
on EU and Brexit is on par with non-populist, mainstream parties. However,
issue preferences are largely country-specific, thus supporting the view in
political science that populism is a "thin ideology", that does not have a
universal, coherent policy agenda. This study illustrates the usefulness of
publicly available advertising data for monitoring the populist outreach to,
and engagement with, millions of potential voters, while outlining the
limitations of currently available data.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 13:22:11 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Capozzi",
"Arthur",
""
],
[
"Morales",
"Gianmarco De Francisci",
""
],
[
"Mejova",
"Yelena",
""
],
[
"Monti",
"Corrado",
""
],
[
"Panisson",
"André",
""
]
] |
new_dataset
| 0.99659 |
2302.04156
|
Rui Cao
|
Rui Cao, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang
|
Prompting for Multimodal Hateful Meme Classification
|
Accepted in EMNLP, 2022
| null | null | null |
cs.CL cs.IR cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Hateful meme classification is a challenging multimodal task that requires
complex reasoning and contextual background knowledge. Ideally, we could
leverage an explicit external knowledge base to supplement contextual and
cultural information in hateful memes. However, there is no known explicit
external knowledge base that could provide such hate speech contextual
information. To address this gap, we propose PromptHate, a simple yet effective
prompt-based model that prompts pre-trained language models (PLMs) for hateful
meme classification. Specifically, we construct simple prompts and provide a
few in-context examples to exploit the implicit knowledge in the pre-trained
RoBERTa language model for hateful meme classification. We conduct extensive
experiments on two publicly available hateful and offensive meme datasets. Our
experimental results show that PromptHate is able to achieve a high AUC of
90.96, outperforming state-of-the-art baselines on the hateful meme
classification task. We also perform fine-grained analyses and case studies on
various prompt settings and demonstrate the effectiveness of the prompts on
hateful meme classification.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 16:04:08 GMT"
}
] | 2023-02-09T00:00:00 |
[
[
"Cao",
"Rui",
""
],
[
"Lee",
"Roy Ka-Wei",
""
],
[
"Chong",
"Wen-Haw",
""
],
[
"Jiang",
"Jing",
""
]
] |
new_dataset
| 0.995637 |
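Prompt-based classification as described above boils down to assembling a masked template with a few in-context demonstrations. A hedged sketch follows; the template wording, separators, and label words here are assumptions for illustration, not PromptHate's exact prompt:

```python
def build_prompt(demos, query_text, query_caption):
    """Assemble a cloze-style prompt: demonstrations with filled label
    words, then the query with a [MASK] slot for the PLM to complete."""
    parts = []
    for text, caption, label in demos:
        parts.append(f"{caption} {text} It was {label}.")
    parts.append(f"{query_caption} {query_text} It was [MASK].")
    return " </s> ".join(parts)

prompt = build_prompt(
    demos=[("meme text A", "image caption A", "good"),
           ("meme text B", "image caption B", "hateful")],
    query_text="meme text C",
    query_caption="image caption C")
print(prompt)
```

The classification decision then reduces to comparing the PLM's probabilities for the candidate label words at the masked position.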
2011.08340
|
Manish Motwani
|
Manish Motwani and Yuriy Brun
|
Better Automatic Program Repair by Using Bug Reports and Tests Together
|
accepted in ICSE'23 technical track
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Automated program repair is already deployed in industry, but concerns remain
about repair quality. Recent research has shown that one of the main reasons
repair tools produce incorrect (but seemingly correct) patches is imperfect
fault localization (FL). This paper demonstrates that combining information
from natural-language bug reports and test executions when localizing faults
can have a significant positive impact on repair quality. For example, existing
repair tools with such FL are able to correctly repair 7 defects in the
Defects4J benchmark that no prior tools have repaired correctly. We develop
Blues, the first information-retrieval-based, statement-level FL technique that
requires no training data. We further develop RAFL, the first unsupervised
method for combining multiple FL techniques, which outperforms a supervised
method. Using RAFL, we create SBIR by combining Blues with a spectrum-based
(SBFL) technique. Evaluated on 815 real-world defects, SBIR consistently ranks
buggy statements higher than its underlying techniques. We then modify three
state-of-the-art repair tools, Arja, SequenceR, and SimFix, to use SBIR, SBFL,
and Blues as their internal FL. We evaluate the quality of the produced patches
on 689 real-world defects. Arja and SequenceR significantly benefit from SBIR:
Arja using SBIR correctly repairs 28 defects, but only 21 using SBFL, and only
15 using Blues; SequenceR using SBIR correctly repairs 12 defects, but only 10
using SBFL, and only 4 using Blues. SimFix, (which has internal mechanisms to
overcome poor FL), correctly repairs 30 defects using SBIR and SBFL, but only
13 using Blues. Our work is the first investigation of simultaneously using
multiple software artifacts for automated program repair, and our promising
findings suggest future research in this direction is likely to be fruitful.
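As an illustration of the unsupervised-combination idea, the sketch below merges two suspiciousness score maps by min-max normalizing each and summing per statement; RAFL's actual aggregation may differ, and the scores are invented.

    # Combine fault-localization techniques without training data.
    def combine_fl(score_maps):
        combined = {}
        for scores in score_maps:
            lo, hi = min(scores.values()), max(scores.values())
            span = (hi - lo) or 1.0  # guard against constant score maps
            for stmt, s in scores.items():
                combined[stmt] = combined.get(stmt, 0.0) + (s - lo) / span
        return sorted(combined, key=combined.get, reverse=True)

    sbfl = {"Foo.java:12": 0.9, "Foo.java:40": 0.4, "Bar.java:7": 0.1}
    blues = {"Foo.java:12": 2.1, "Foo.java:40": 3.5, "Bar.java:7": 0.2}
    print(combine_fl([sbfl, blues]))  # statements ranked most suspicious first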
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2020 23:51:42 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Nov 2020 02:19:30 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Mar 2022 14:58:47 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Feb 2023 20:42:43 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Motwani",
"Manish",
""
],
[
"Brun",
"Yuriy",
""
]
] |
new_dataset
| 0.952139 |
2111.12924
|
Shichao Li
|
Shichao Li and Kwang-Ting Cheng
|
Joint stereo 3D object detection and implicit surface reconstruction
| null | null | null | null |
cs.CV cs.GR cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new learning-based framework S-3D-RCNN that can recover accurate
object orientation in SO(3) and simultaneously predict implicit shapes for
outdoor rigid objects from stereo RGB images. In contrast to previous studies
that map local appearance to observation angles, we explore a progressive
approach by extracting meaningful Intermediate Geometrical Representations
(IGRs) to estimate egocentric object orientation. This approach features a deep
model that transforms perceived intensities to object part coordinates, which
are mapped to a 3D representation encoding object orientation in the camera
coordinate system. To enable implicit shape estimation, the IGRs are further extended to model the visible object surface with a point-based representation and to explicitly address the unseen surface hallucination problem. Extensive
experiments validate the effectiveness of the proposed IGRs and S-3D-RCNN
achieves superior 3D scene understanding performance using existing and
proposed new metrics on the KITTI benchmark. Code and pre-trained models will
be available at this https URL.
|
[
{
"version": "v1",
"created": "Thu, 25 Nov 2021 05:52:30 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2022 11:58:27 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Feb 2023 05:53:41 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Li",
"Shichao",
""
],
[
"Cheng",
"Kwang-Ting",
""
]
] |
new_dataset
| 0.951757 |
2201.09463
|
Zhengwei Bai
|
Zhengwei Bai, Guoyuan Wu, Xuewei Qi, Yongkang Liu, Kentaro Oguchi,
Matthew J. Barth
|
Cyber Mobility Mirror for Enabling Cooperative Driving Automation in
Mixed Traffic: A Co-Simulation Platform
|
Accepted by the IEEE Intelligent Transportation Systems Magazine
|
IEEE Intelligent Transportation Systems Magazine 2022
|
10.1109/MITS.2022.3203662
| null |
cs.SE cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Endowed with automation and connectivity, Connected and Automated Vehicles (CAVs) are meant to be a revolutionary promoter of Cooperative Driving Automation (CDA). Nevertheless, CAVs need high-fidelity perception information on their
surroundings, which is available but costly to collect from various onboard
sensors as well as vehicle-to-everything (V2X) communications. Therefore,
authentic perception information based on high-fidelity sensors via a
cost-effective platform is crucial for enabling CDA-related research, e.g.,
cooperative decision-making or control. Most state-of-the-art traffic
simulation studies for CAVs rely on situation-awareness information by directly
calling on intrinsic attributes of the objects, which impedes the reliability
and fidelity of the assessment of CDA algorithms. In this study, a
\textit{Cyber Mobility Mirror (CMM)} Co-Simulation Platform is designed for
enabling CDA by providing authentic perception information. The \textit{CMM}
Co-Simulation Platform can emulate the real world with a high-fidelity sensor
perception system and a cyber world with a real-time rebuilding system acting
as a "\textit{Mirror}" of the real-world environment. Concretely, the
real-world simulator is mainly in charge of simulating the traffic environment,
sensors, as well as the authentic perception process. The mirror-world
simulator is responsible for rebuilding objects and providing their information
as intrinsic attributes of the simulator to support the development and
evaluation of CDA algorithms. To illustrate the functionality of the proposed
co-simulation platform, a roadside LiDAR-based vehicle perception system for
enabling CDA is prototyped as a study case. Specific traffic environments and
CDA tasks are designed for experiments whose results are demonstrated and
analyzed to show the performance of the platform.
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 05:27:20 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Aug 2022 22:52:25 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Bai",
"Zhengwei",
""
],
[
"Wu",
"Guoyuan",
""
],
[
"Qi",
"Xuewei",
""
],
[
"Liu",
"Yongkang",
""
],
[
"Oguchi",
"Kentaro",
""
],
[
"Barth",
"Matthew J.",
""
]
] |
new_dataset
| 0.980608 |
2201.10881
|
Pablo Ortiz
|
Per Erik Solberg and Pablo Ortiz
|
The Norwegian Parliamentary Speech Corpus
|
6 pages, submitted to LREC 2022
|
LREC 2022
| null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The Norwegian Parliamentary Speech Corpus (NPSC) is a speech dataset with
recordings of meetings from Stortinget, the Norwegian parliament. It is the
first publicly available dataset containing unscripted Norwegian speech designed for the training of automatic speech recognition (ASR) systems. The
recordings are manually transcribed and annotated with language codes and
speakers, and there are detailed metadata about the speakers. The
transcriptions exist in both normalized and non-normalized form, and
non-standardized words are explicitly marked and annotated with standardized
equivalents. To test the usefulness of this dataset, we have compared an ASR
system trained on the NPSC with a baseline system trained on only
manuscript-read speech. These systems were tested on an independent dataset
containing spontaneous, dialectal speech. The NPSC-trained system performed
significantly better, with a 22.9% relative improvement in word error rate
(WER). Moreover, training on the NPSC is shown to have a "democratizing" effect
in terms of dialects, as improvements are generally larger for dialects with
higher WER from the baseline system.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 11:41:55 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Solberg",
"Per Erik",
""
],
[
"Ortiz",
"Pablo",
""
]
] |
new_dataset
| 0.999766 |
2203.06319
|
Zhengwei Bai
|
Zhengwei Bai, Guoyuan Wu, Matthew J. Barth, Yongkang Liu, Emrah Akin
Sisbot, Kentaro Oguchi
|
PillarGrid: Deep Learning-based Cooperative Perception for 3D Object
Detection from Onboard-Roadside LiDAR
|
Submitted to The 25th IEEE International Conference on Intelligent
Transportation Systems (IEEE ITSC 2022)
| null |
10.1109/ITSC55140.2022.9921947
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
3D object detection plays a fundamental role in enabling autonomous driving,
which is regarded as the significant key to unlocking the bottleneck of
contemporary transportation systems from the perspectives of safety, mobility,
and sustainability. Most of the state-of-the-art (SOTA) object detection
methods from point clouds are developed based on a single onboard LiDAR, whose
performance will be inevitably limited by the range and occlusion, especially
in dense traffic scenarios. In this paper, we propose \textit{PillarGrid}, a
novel cooperative perception method fusing information from multiple 3D LiDARs
(both on-board and roadside), to enhance the situation awareness for connected
and automated vehicles (CAVs). PillarGrid consists of four main phases: 1)
cooperative preprocessing of point clouds, 2) pillar-wise voxelization and
feature extraction, 3) grid-wise deep fusion of features from multiple sensors,
and 4) convolutional neural network (CNN)-based augmented 3D object detection.
A novel cooperative perception platform is developed for model training and
testing. Extensive experimentation shows that PillarGrid outperforms the SOTA
single-LiDAR-based 3D object detection methods with respect to both accuracy
and range by a large margin.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 02:28:41 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 03:30:02 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Mar 2022 22:58:55 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Bai",
"Zhengwei",
""
],
[
"Wu",
"Guoyuan",
""
],
[
"Barth",
"Matthew J.",
""
],
[
"Liu",
"Yongkang",
""
],
[
"Sisbot",
"Emrah Akin",
""
],
[
"Oguchi",
"Kentaro",
""
]
] |
new_dataset
| 0.996481 |
2203.08388
|
Zhiruo Wang
|
Zhiruo Wang, Grace Cuenca, Shuyan Zhou, Frank F. Xu, Graham Neubig
|
MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
While there has been a recent burgeoning of applications at the intersection
of natural and programming languages, such as code generation and code
summarization, these applications are usually English-centric. This creates a
barrier for program developers who are not proficient in English. To mitigate
this gap in technology development across languages, we propose a multilingual
dataset, MCoNaLa, to benchmark code generation from natural language commands
extending beyond English. Modeled on the methodology of the English
Code/Natural Language Challenge (CoNaLa) dataset, we annotated a total of 896
NL-code pairs in three languages: Spanish, Japanese, and Russian. We present a
quantitative evaluation of performance on the MCoNaLa dataset by testing with
state-of-the-art code generation systems. While the difficulties vary across
these three languages, all systems lag significantly behind their English
counterparts, revealing the challenges in adapting code generation to new
languages.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 04:21:50 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 03:12:50 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Wang",
"Zhiruo",
""
],
[
"Cuenca",
"Grace",
""
],
[
"Zhou",
"Shuyan",
""
],
[
"Xu",
"Frank F.",
""
],
[
"Neubig",
"Graham",
""
]
] |
new_dataset
| 0.999364 |
2205.07030
|
Koray Kavakl{\i}
|
Koray Kavakl{\i}, Yuta Itoh, Hakan Urey, Kaan Ak\c{s}it
|
Realistic Defocus Blur for Multiplane Computer-Generated Holography
|
16 pages in total, first 9 pages are for the manuscript, remaining
pages are for supplementary. For more visit:
https://complightlab.com/publications/realistic_defocus_cgh For our codebase
visit https://github.com/complight/realistic_defocus
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces a new multiplane CGH computation method to reconstruct
artefact-free high-quality holograms with natural-looking defocus blur. Our
method introduces a new targeting scheme and a new loss function. While the
targeting scheme accounts for defocused parts of the scene at each depth plane,
the new loss function analyzes focused and defocused parts separately in
reconstructed images. Our method supports phase-only CGH calculations using
various iterative (e.g., Gerchberg-Saxton, Gradient Descent) and non-iterative
(e.g., Double Phase) CGH techniques. We achieve our best image quality using a
modified gradient descent-based optimization recipe where we introduce a
constraint inspired by the double phase method. We validate our method
experimentally using our proof-of-concept holographic display, comparing
various algorithms, including multi-depth scenes with sparse and dense
contents.
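The gradient-descent recipe can be illustrated with a toy loop that optimizes a phase-only hologram so that its far-field (Fourier) intensity matches a target; this is a schematic stand-in for the paper's multiplane pipeline, which additionally uses per-plane defocus-aware targets and a double-phase-inspired constraint.

    # Toy gradient-descent CGH: fit a phase-only hologram to a target intensity.
    import torch

    target = torch.rand(64, 64)                      # stand-in target intensity
    phase = torch.zeros(64, 64, requires_grad=True)  # phase-only hologram
    opt = torch.optim.Adam([phase], lr=0.1)

    for step in range(200):
        field = torch.exp(1j * phase)                # unit-amplitude hologram
        recon = torch.fft.fftshift(torch.fft.fft2(field))
        intensity = recon.abs() ** 2
        loss = torch.nn.functional.mse_loss(intensity / intensity.mean(),
                                            target / target.mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final loss:", loss.item())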
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 10:17:34 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 21:06:15 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Kavaklı",
"Koray",
""
],
[
"Itoh",
"Yuta",
""
],
[
"Urey",
"Hakan",
""
],
[
"Akşit",
"Kaan",
""
]
] |
new_dataset
| 0.966711 |
2207.07253
|
Jingjing Wu
|
Jingjing Wu, Pengyuan Lyu, Guangming Lu, Chengquan Zhang, and Wenjie
Pei
|
Single Shot Self-Reliant Scene Text Spotter by Decoupled yet
Collaborative Detection and Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Typical text spotters follow the two-stage spotting paradigm which detects
the boundary for a text instance first and then performs text recognition
within the detected regions. Despite the remarkable progress of this spotting paradigm, an important limitation is that the performance of text recognition depends heavily on the precision of text detection, resulting in potential error propagation from detection to recognition. In this work, we propose the
single shot Self-Reliant Scene Text Spotter v2 (SRSTS v2), which circumvents
this limitation by decoupling recognition from detection while optimizing two
tasks collaboratively. Specifically, our SRSTS v2 samples representative
feature points around each potential text instance, and conducts both text
detection and recognition in parallel guided by these sampled points. Thus, the
text recognition is no longer dependent on detection, thereby alleviating the
error propagation from detection to recognition. Moreover, the sampling module
is learned under the supervision from both detection and recognition, which
allows for the collaborative optimization and mutual enhancement between two
tasks. Benefiting from such sampling-driven concurrent spotting framework, our
approach is able to recognize the text instances correctly even if the precise
text boundaries are challenging to detect. Extensive experiments on four
benchmarks demonstrate that our method compares favorably to state-of-the-art
spotters.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2022 01:59:14 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2022 11:38:34 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Feb 2023 04:06:44 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Feb 2023 08:41:17 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Wu",
"Jingjing",
""
],
[
"Lyu",
"Pengyuan",
""
],
[
"Lu",
"Guangming",
""
],
[
"Zhang",
"Chengquan",
""
],
[
"Pei",
"Wenjie",
""
]
] |
new_dataset
| 0.965557 |
2209.08803
|
Jeongeun Park
|
Jeongeun Park, Taerim Yoon, Jejoon Hong, Youngjae Yu, Matthew Pan, and
Sungjoon Choi
|
Zero-shot Active Visual Search (ZAVIS): Intelligent Object Search for
Robotic Assistants
|
To appear at ICRA 2023
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we focus on the problem of efficiently locating a target
object described with free-form language using a mobile robot equipped with
vision sensors (e.g., an RGBD camera). Conventional active visual search
predefines a set of objects to search for, rendering these techniques
restrictive in practice. To provide added flexibility in active visual
searching, we propose a system where a user can enter target commands using
free-form language; we call this system Active Visual Search in the Wild
(AVSW). AVSW detects and plans to search for a target object inputted by a user
through a semantic grid map represented by static landmarks (e.g., desk or
bed). For efficient planning of object search patterns, AVSW considers
commonsense knowledge-based co-occurrence and predictive uncertainty while
deciding which landmarks to visit first. We validate the proposed method with
respect to SR (success rate) and SPL (success weighted by path length) in both
simulated and real-world environments. The proposed method outperforms previous
methods in terms of SPL in simulated scenarios with an average gap of 0.283. We
further demonstrate AVSW with a Pioneer-3AT robot in real-world studies.
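A minimal sketch of the landmark-ranking step follows; the linear score and its weights are assumptions for illustration, not AVSW's exact decision rule.

    # Rank landmarks for a target (e.g., "mug") by co-occurrence, uncertainty,
    # and travel cost; higher-scoring landmarks are visited first.
    cooccur = {"desk": 0.7, "bed": 0.2, "sink": 0.05}     # P(target | landmark)
    uncertainty = {"desk": 0.3, "bed": 0.6, "sink": 0.1}  # predictive uncertainty
    distance = {"desk": 2.0, "bed": 5.0, "sink": 8.0}     # meters from robot

    def rank_landmarks(alpha=1.0, beta=0.5, gamma=0.1):
        score = {l: alpha * cooccur[l] - beta * uncertainty[l]
                    - gamma * distance[l] for l in cooccur}
        return sorted(score, key=score.get, reverse=True)

    print(rank_landmarks())  # order in which to visit landmarks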
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 07:18:46 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 09:55:31 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Feb 2023 15:46:11 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Park",
"Jeongeun",
""
],
[
"Yoon",
"Taerim",
""
],
[
"Hong",
"Jejoon",
""
],
[
"Yu",
"Youngjae",
""
],
[
"Pan",
"Matthew",
""
],
[
"Choi",
"Sungjoon",
""
]
] |
new_dataset
| 0.996214 |
2209.13756
|
Tianhao Wu
|
Tianhao Wu, Boyang Li, Yihang Luo, Yingqian Wang, Chao Xiao, Ting Liu,
Jungang Yang, Wei An, Yulan Guo
|
MTU-Net: Multi-level TransUNet for Space-based Infrared Tiny Ship
Detection
| null | null |
10.1109/TGRS.2023.3235002
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Space-based infrared tiny ship detection aims at separating tiny ships from
the images captured by earth orbiting satellites. Due to the extremely large
image coverage area (e.g., thousands square kilometers), candidate targets in
these images are much smaller, dimmer, and more changeable than targets observed by aerial-based and land-based imaging devices. Existing infrared datasets and target detection methods based on short imaging distances cannot be readily adapted to the space-based surveillance task. To address these problems, we
develop a space-based infrared tiny ship detection dataset (namely,
NUDT-SIRST-Sea) with 48 space-based infrared images and 17598 pixel-level tiny
ship annotations. Each image covers about 10000 square kilometers of area with
10000 x 10000 pixels. Considering the extreme characteristics (e.g., small, dim,
changeable) of those tiny ships in such challenging scenes, we propose a
multi-level TransUNet (MTU-Net) in this paper. Specifically, we design a Vision
Transformer (ViT) Convolutional Neural Network (CNN) hybrid encoder to extract
multi-level features. Local feature maps are first extracted by several
convolution layers and then fed into the multi-level feature extraction module
(MVTM) to capture long-distance dependency. We further propose a
copy-rotate-resize-paste (CRRP) data augmentation approach to accelerate the
training phase, which effectively alleviates the issue of sample imbalance
between targets and background. Besides, we design a FocalIoU loss to achieve
both target localization and shape description. Experimental results on the
NUDT-SIRST-Sea dataset show that our MTU-Net outperforms traditional and
existing deep learning based SIRST methods in terms of probability of
detection, false alarm rate and intersection over union.
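The CRRP augmentation can be sketched in a few lines of NumPy/SciPy: cut a target chip, rotate and rescale it, and paste it at a random location; the parameter ranges here are illustrative rather than the paper's exact recipe.

    # Copy-rotate-resize-paste: rebalance tiny targets against background.
    import numpy as np
    from scipy.ndimage import rotate, zoom

    rng = np.random.default_rng(0)

    def crrp(image, chip, angle_range=(0, 360), scale_range=(0.5, 1.5)):
        chip = rotate(chip, rng.uniform(*angle_range), reshape=True, order=1)
        chip = zoom(chip, rng.uniform(*scale_range), order=1)
        h, w = chip.shape
        y = rng.integers(0, image.shape[0] - h)
        x = rng.integers(0, image.shape[1] - w)
        out = image.copy()
        out[y:y + h, x:x + w] = np.maximum(out[y:y + h, x:x + w], chip)
        return out

    img = np.zeros((256, 256), dtype=np.float32)
    ship = np.ones((4, 7), dtype=np.float32)  # a tiny stand-in target
    print(crrp(img, ship).sum() > 0)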
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 00:48:14 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Wu",
"Tianhao",
""
],
[
"Li",
"Boyang",
""
],
[
"Luo",
"Yihang",
""
],
[
"Wang",
"Yingqian",
""
],
[
"Xiao",
"Chao",
""
],
[
"Liu",
"Ting",
""
],
[
"Yang",
"Jungang",
""
],
[
"An",
"Wei",
""
],
[
"Guo",
"Yulan",
""
]
] |
new_dataset
| 0.999759 |
2210.02517
|
Zulfiqar Zaidi
|
Zulfiqar Zaidi, Daniel Martin, Nathaniel Belles, Viacheslav Zakharov,
Arjun Krishna, Kin Man Lee, Peter Wagstaff, Sumedh Naik, Matthew Sklar, Sugju
Choi, Yoshiki Kakehi, Ruturaj Patil, Divya Mallemadugula, Florian Pesce,
Peter Wilson, Wendell Hom, Matan Diamond, Bryan Zhao, Nina Moorman, Rohan
Paleja, Letian Chen, Esmaeil Seraj, Matthew Gombolay
|
Athletic Mobile Manipulator System for Robotic Wheelchair Tennis
|
8 pages, accepted at RA-L, will also be presented at IROS 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Athletics are a quintessential and universal expression of humanity. From
French monks who in the 12th century invented jeu de paume, the precursor to
modern lawn tennis, back to the K'iche' people who played the Maya Ballgame as
a form of religious expression over three thousand years ago, humans have
sought to train their minds and bodies to excel in sporting contests. Advances
in robotics are opening up the possibility of robots in sports. Yet, key
challenges remain, as most prior works in robotics for sports are limited to
pristine sensing environments, do not require significant force generation, or
are on miniaturized scales unsuited for joint human-robot play. In this paper,
we propose the first open-source, autonomous robot for playing regulation
wheelchair tennis. We demonstrate the performance of our full-stack system in
executing ground strokes and evaluate each of the system's hardware and
software components. The goal of this paper is to (1) inspire more research in
human-scale robot athletics and (2) establish the first baseline for a
reproducible wheelchair tennis robot for regulation singles play. Our paper
contributes to the science of systems design and poses a set of key challenges
for the robotics community to address in striving towards robots that can match
human capabilities in sports.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 19:25:41 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 17:41:53 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Zaidi",
"Zulfiqar",
""
],
[
"Martin",
"Daniel",
""
],
[
"Belles",
"Nathaniel",
""
],
[
"Zakharov",
"Viacheslav",
""
],
[
"Krishna",
"Arjun",
""
],
[
"Lee",
"Kin Man",
""
],
[
"Wagstaff",
"Peter",
""
],
[
"Naik",
"Sumedh",
""
],
[
"Sklar",
"Matthew",
""
],
[
"Choi",
"Sugju",
""
],
[
"Kakehi",
"Yoshiki",
""
],
[
"Patil",
"Ruturaj",
""
],
[
"Mallemadugula",
"Divya",
""
],
[
"Pesce",
"Florian",
""
],
[
"Wilson",
"Peter",
""
],
[
"Hom",
"Wendell",
""
],
[
"Diamond",
"Matan",
""
],
[
"Zhao",
"Bryan",
""
],
[
"Moorman",
"Nina",
""
],
[
"Paleja",
"Rohan",
""
],
[
"Chen",
"Letian",
""
],
[
"Seraj",
"Esmaeil",
""
],
[
"Gombolay",
"Matthew",
""
]
] |
new_dataset
| 0.998876 |
2210.05406
|
Sean Moran
|
Lili Tao, Alexandru-Petre Cazan, Senad Ibraimoski, Sean Moran
|
Code Librarian: A Software Package Recommendation System
| null | null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of packaged libraries can significantly shorten the software
development cycle by improving the quality and readability of code. In this
paper, we present a recommendation engine called Librarian for open source
libraries. A candidate library package is recommended for a given context if:
1) it has been frequently used with the imported libraries in the program; 2)
it has similar functionality to the imported libraries in the program; 3) it
has similar functionality to the developer's implementation, and 4) it can be
used efficiently in the context of the provided code. We apply the
state-of-the-art CodeBERT-based model for analysing the context of the source
code to deliver relevant library recommendations to users.
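A hedged sketch of the context-aware ranking step follows: embed the developer's code with CodeBERT and rank candidate packages by cosine similarity to short package descriptions. The candidate list is invented, and Librarian's co-usage signal (criterion 1) is omitted here.

    # Rank candidate packages by CodeBERT similarity to the code context.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModel.from_pretrained("microsoft/codebert-base").eval()

    def embed(text: str) -> torch.Tensor:
        inputs = tok(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            out = model(**inputs).last_hidden_state.mean(dim=1).squeeze(0)
        return out / out.norm()

    code = "df = read_rows(path)\nfor row in df: total += row['price']"
    candidates = {"pandas": "data frames, CSV reading, tabular analysis",
                  "requests": "HTTP client for sending web requests",
                  "numpy": "n-dimensional arrays and numerical routines"}
    code_vec = embed(code)
    ranked = sorted(candidates,
                    key=lambda p: -float(code_vec @ embed(candidates[p])))
    print(ranked)  # most relevant package first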
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 12:30:05 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 17:51:12 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Tao",
"Lili",
""
],
[
"Cazan",
"Alexandru-Petre",
""
],
[
"Ibraimoski",
"Senad",
""
],
[
"Moran",
"Sean",
""
]
] |
new_dataset
| 0.999563 |
2210.07598
|
Hanlin Wu
|
Hanlin Wu, Ning Ni, Libao Zhang
|
Lightweight Stepless Super-Resolution of Remote Sensing Images via
Saliency-Aware Dynamic Routing Strategy
| null | null |
10.1109/TGRS.2023.3236624
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning-based algorithms have greatly improved the performance of
remote sensing image (RSI) super-resolution (SR). However, increasing network
depth and parameters causes a heavy burden on computing and storage. Directly
reducing the depth or width of existing models results in a large performance
drop. We observe that the SR difficulty of different regions in an RSI varies
greatly, and existing methods use the same deep network to process all regions
in an image, resulting in a waste of computing resources. In addition, existing
SR methods generally predefine integer scale factors and cannot perform
stepless SR, i.e., a single model can deal with any potential scale factor.
Retraining the model on each scale factor wastes considerable computing
resources and model storage space. To address the above problems, we propose a
saliency-aware dynamic routing network (SalDRN) for lightweight and stepless SR
of RSIs. First, we introduce visual saliency as an indicator of region-level SR
difficulty and integrate a lightweight saliency detector into the SalDRN to
capture pixel-level visual characteristics. Then, we devise a saliency-aware
dynamic routing strategy that employs path selection switches to adaptively
select feature extraction paths of appropriate depth according to the SR
difficulty of sub-image patches. Finally, we propose a novel lightweight
stepless upsampling module whose core is an implicit feature function for
realizing the mapping from low-resolution feature space to high-resolution feature
space. Comprehensive experiments verify that the SalDRN can achieve a good
trade-off between performance and complexity. The code is available at
\url{https://github.com/hanlinwu/SalDRN}.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 07:49:03 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Wu",
"Hanlin",
""
],
[
"Ni",
"Ning",
""
],
[
"Zhang",
"Libao",
""
]
] |
new_dataset
| 0.975437 |
2210.08616
|
Marco Di Renzo
|
Marco Di Renzo, Davide Dardari, and Nicolo' Decarli
|
LoS MIMO-Arrays vs. LoS MIMO-Surfaces
|
IEEE EuCAP 2023
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The wireless research community has expressed major interest in the
sub-terahertz band for enabling mobile communications in future wireless
networks. The sub-terahertz band offers a large amount of available bandwidth
and, therefore, the promise to realize wireless communications at optical
speeds. At such high frequency bands, the transceivers need to have larger
apertures and need to be deployed more densely than at lower frequency bands.
These factors proportionally increase the far-field limit and the spherical
curvature of the electromagnetic waves cannot be ignored anymore. This offers
the opportunity to realize spatial multiplexing even in line-of-sight channels.
In this paper, we overview and compare existing design options to realize
high-rank transmissions in line-of-sight channels.
|
[
{
"version": "v1",
"created": "Sun, 16 Oct 2022 19:19:57 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 18:44:09 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Di Renzo",
"Marco",
""
],
[
"Dardari",
"Davide",
""
],
[
"Decarli",
"Nicolo'",
""
]
] |
new_dataset
| 0.999058 |
2210.16765
|
Jiawei Lian
|
Jiawei Lian, Shaohui Mei, Shun Zhang and Mingyang Ma
|
Benchmarking Adversarial Patch Against Aerial Detection
|
14 pages, 14 figures
| null |
10.1109/TGRS.2022.3225306
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
DNNs are vulnerable to adversarial examples, which poses great security
concerns for security-critical systems. In this paper, a novel
adaptive-patch-based physical attack (AP-PA) framework is proposed, which aims
to generate adversarial patches that are adaptive in both physical dynamics and
varying scales, and by which the particular targets can be hidden from being
detected. Furthermore, the adversarial patch is also endowed with attack effectiveness against all targets of the same class even when placed outside the target (no need to smear the targeted objects), and it is robust enough for the physical world. In addition, a new loss is devised to consider more available
information of detected objects to optimize the adversarial patch, which can
significantly improve the patch's attack efficacy (Average precision drop up to
87.86% and 85.48% in white-box and black-box settings, respectively) and
optimizing efficiency. We also establish one of the first comprehensive,
coherent, and rigorous benchmarks to evaluate the attack efficacy of
adversarial patches on aerial detection tasks. Finally, several proportionally
scaled experiments are performed physically to demonstrate that the elaborated
adversarial patches can successfully deceive aerial detection algorithms in
dynamic physical circumstances. The code is available at
https://github.com/JiaweiLian/AP-PA.
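A schematic patch-optimization loop is sketched below; the detector is a placeholder function, and AP-PA additionally randomizes physical dynamics and patch scale during training, which is omitted here.

    # Train an adversarial patch by minimizing detector confidence.
    import torch

    patch = torch.rand(3, 50, 50, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.01)

    def apply_patch(images, patch, y=10, x=10):
        images = images.clone()
        images[:, :, y:y + 50, x:x + 50] = patch.clamp(0, 1)
        return images

    def detector_confidence(images):   # placeholder for a real detector's
        return images.mean()           # aggregated target-class confidence

    for step in range(100):
        batch = torch.rand(4, 3, 416, 416)       # stand-in aerial images
        loss = detector_confidence(apply_patch(batch, patch))
        opt.zero_grad()
        loss.backward()
        opt.step()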
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 07:55:59 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Lian",
"Jiawei",
""
],
[
"Mei",
"Shaohui",
""
],
[
"Zhang",
"Shun",
""
],
[
"Ma",
"Mingyang",
""
]
] |
new_dataset
| 0.998947 |
2211.12109
|
Anastasia Antsiferova
|
Anastasia Antsiferova, Sergey Lavrushkin, Maksim Smirnov, Alexander
Gushchin, Dmitriy Vatolin, Dmitriy Kulikov
|
Video compression dataset and benchmark of learning-based video-quality
metrics
|
10 pages, 4 figures, 6 tables, 1 supplementary material
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-quality measurement is a critical task in video processing. Nowadays,
many implementations of new encoding standards - such as AV1, VVC, and LCEVC -
use deep-learning-based decoding algorithms with perceptual metrics that serve
as optimization objectives. But investigations of the performance of modern
video- and image-quality metrics commonly employ videos compressed using older
standards, such as AVC. In this paper, we present a new benchmark for
video-quality metrics that evaluates video compression. It is based on a new
dataset consisting of about 2,500 streams encoded using different standards,
including AVC, HEVC, AV1, VP9, and VVC. Subjective scores were collected using
crowdsourced pairwise comparisons. The list of evaluated metrics includes
recent ones based on machine learning and neural networks. The results
demonstrate that new no-reference metrics exhibit a high correlation with
subjective quality and approach the capability of top full-reference metrics.
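The way such a benchmark scores a metric can be sketched as a rank correlation between the metric's outputs and the subjective scores; the numbers below are invented.

    # Score a quality metric by Spearman rank correlation with subjective data.
    from scipy.stats import spearmanr

    subjective = [0.81, 0.42, 0.95, 0.30, 0.67]   # crowdsourced quality scores
    metric_out = [0.78, 0.50, 0.90, 0.25, 0.60]   # a metric's predictions
    srocc, _ = spearmanr(subjective, metric_out)
    print(f"SROCC = {srocc:.3f}")  # higher means a better quality metric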
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 09:22:28 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 09:28:48 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Antsiferova",
"Anastasia",
""
],
[
"Lavrushkin",
"Sergey",
""
],
[
"Smirnov",
"Maksim",
""
],
[
"Gushchin",
"Alexander",
""
],
[
"Vatolin",
"Dmitriy",
""
],
[
"Kulikov",
"Dmitriy",
""
]
] |
new_dataset
| 0.999612 |
2212.02277
|
Yuanxin Ye
|
Bai Zhu, Chao Yang, Jinkun Dai, Jianwei Fan, Yuanxin Ye
|
R2FD2: Fast and Robust Matching of Multimodal Remote Sensing Image via
Repeatable Feature Detector and Rotation-invariant Feature Descriptor
|
33 pages, 15 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Automatically identifying feature correspondences between multimodal images faces enormous challenges because of the significant differences in both radiation and geometry. To address these problems, we propose a novel feature matching method (named R2FD2) that is robust to radiation and rotation differences. Our R2FD2 rests on two critical contributions: a repeatable feature detector and a rotation-invariant feature descriptor.
In the first stage, a repeatable feature detector called the Multi-channel
Auto-correlation of the Log-Gabor (MALG) is presented for feature detection,
which combines the multi-channel auto-correlation strategy with the Log-Gabor
wavelets to detect interest points (IPs) with high repeatability and uniform
distribution. In the second stage, a rotation-invariant feature descriptor is
constructed, named the Rotation-invariant Maximum index map of the Log-Gabor
(RMLG), which consists of two components: fast assignment of dominant
orientation and construction of feature representation. In the process of fast
assignment of dominant orientation, a Rotation-invariant Maximum Index Map
(RMIM) is built to address rotation deformations. Then, the proposed RMLG
incorporates the rotation-invariant RMIM with the spatial configuration of
DAISY to depict a more discriminative feature representation, which improves
RMLG's resistance to radiation and rotation variances. Experimental results show
that the proposed R2FD2 outperforms five state-of-the-art feature matching
methods, and has clear advantages in adaptability and universality. Moreover, our R2FD2 achieves matching accuracy within two pixels and has
a great advantage in matching efficiency over other state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2022 13:55:02 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2022 07:05:04 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Feb 2023 06:46:04 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Feb 2023 08:03:40 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Zhu",
"Bai",
""
],
[
"Yang",
"Chao",
""
],
[
"Dai",
"Jinkun",
""
],
[
"Fan",
"Jianwei",
""
],
[
"Ye",
"Yuanxin",
""
]
] |
new_dataset
| 0.984351 |
2301.12028
|
Melissa Antonelli
|
Melissa Antonelli and Ugo Dal Lago and Davide Davoli and Isabel
Oitavem and Paolo Pistone
|
An Arithmetic Theory for the Poly-Time Random Functions
|
37 pages, pre-print
| null | null | null |
cs.CC cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new bounded theory $RS^1_2$ and show that the functions which are $\Sigma^b_1$-representable in it are precisely the random functions which can be computed in polynomial time. Concretely, we pass through a class of oracle functions over strings, called POR, together with the theory of arithmetic $RS^1_2$. Then, we show that the functions computed by poly-time probabilistic Turing machines (PTMs) are arithmetically characterized by a class of probabilistic bounded formulas.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 23:45:18 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 20:40:21 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Antonelli",
"Melissa",
""
],
[
"Lago",
"Ugo Dal",
""
],
[
"Davoli",
"Davide",
""
],
[
"Oitavem",
"Isabel",
""
],
[
"Pistone",
"Paolo",
""
]
] |
new_dataset
| 0.987625 |
2301.12344
|
Xuchen Liu
|
Xuchen Liu (1 and 2), Minghao Dou (1 and 2), Dongyue Huang (1 and 2),
Biao Wang (3 and 4), Jinqiang Cui (4), Qinyuan Ren (5 and 4), Lihua Dou (6),
Zhi Gao (7), Jie Chen (1) and Ben M. Chen (2) ((1) Shanghai Research
Institute for Intelligent Autonomous Systems, Tongji University, Shanghai,
China, (2) Department of Mechanical and Automation Engineering, the Chinese
University of Hong Kong, Hong Kong, China, (3) College of Automation
Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing,
China, (4) Peng Cheng Laboratory, Shenzhen, China, (5) College of Control
Science and Engineering, Zhejiang University, Hangzhou, China, (6) School of
Automation, Beijing Institute of Technology, Beijing, China, (7) School of
Remote Sensing and Information Engineering, Wuhan University, Wuhan, China)
|
TJ-FlyingFish: Design and Implementation of an Aerial-Aquatic Quadrotor
with Tiltable Propulsion Units
|
6 pages, 9 figures, accepted to 2023 IEEE International Conference on
Robotics and Automation (ICRA)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aerial-aquatic vehicles are capable of moving in the two most dominant fluids, making them promising for a wide range of applications. We propose a
prototype with special designs for propulsion and thruster configuration to
cope with the vast differences in the fluid properties of water and air. For
propulsion, the operating range is switched for the different mediums by the
dual-speed propulsion unit, providing sufficient thrust and also ensuring
output efficiency. For thruster configuration, thrust vectoring is realized by
the rotation of the propulsion unit around the mount arm, thus enhancing the
underwater maneuverability. This paper presents a quadrotor prototype of this
concept and the design details and realization in practice.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 03:54:05 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 02:49:27 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Liu",
"Xuchen",
"",
"1 and 2"
],
[
"Dou",
"Minghao",
"",
"1 and 2"
],
[
"Huang",
"Dongyue",
"",
"1 and 2"
],
[
"Wang",
"Biao",
"",
"3 and 4"
],
[
"Cui",
"Jinqiang",
"",
"5 and 4"
],
[
"Ren",
"Qinyuan",
"",
"5 and 4"
],
[
"Dou",
"Lihua",
""
],
[
"Gao",
"Zhi",
""
],
[
"Chen",
"Jie",
""
],
[
"Chen",
"Ben M.",
""
]
] |
new_dataset
| 0.999861 |
2301.12458
|
Xiang Li
|
Xiang Li, Tiandi Ye, Caihua Shan, Dongsheng Li, Ming Gao
|
SeeGera: Self-supervised Semi-implicit Graph Variational Auto-encoders
with Masking
|
Accepted by WebConf 2023
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Generative graph self-supervised learning (SSL) aims to learn node
representations by reconstructing the input graph data. However, most existing methods focus only on unsupervised learning tasks, and very little work has shown their superiority over state-of-the-art graph contrastive learning (GCL) models, especially on the classification task. While a very recent model has
been proposed to bridge the gap, its performance on unsupervised learning tasks
is still unknown. In this paper, to comprehensively enhance the performance of
generative graph SSL against other GCL models on both unsupervised and
supervised learning tasks, we propose the SeeGera model, which is based on the
family of self-supervised variational graph auto-encoder (VGAE). Specifically,
SeeGera adopts the semi-implicit variational inference framework, a
hierarchical variational framework, and mainly focuses on feature
reconstruction and structure/feature masking. On the one hand, SeeGera
co-embeds both nodes and features in the encoder and reconstructs both links
and features in the decoder. Since feature embeddings contain rich semantic
information on features, they can be combined with node embeddings to provide
fine-grained knowledge for feature reconstruction. On the other hand, SeeGera
adds an additional layer for structure/feature masking to the hierarchical
variational framework, which boosts the model generalizability. We conduct
extensive experiments comparing SeeGera with 9 other state-of-the-art
competitors. Our results show that SeeGera can compare favorably against other
state-of-the-art GCL methods in a variety of unsupervised and supervised
learning tasks.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 15:00:43 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 08:49:21 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Feb 2023 08:45:35 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Li",
"Xiang",
""
],
[
"Ye",
"Tiandi",
""
],
[
"Shan",
"Caihua",
""
],
[
"Li",
"Dongsheng",
""
],
[
"Gao",
"Ming",
""
]
] |
new_dataset
| 0.986039 |
2302.00627
|
Juan Ignacio Iba\~nez
|
Juan Ignacio Iba\~nez, Francisco Rua
|
The energy consumption of Proof-of-Stake systems: Replication and
expansion
|
27 pages, 3 figures, working paper
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain technology and, more generally, distributed ledger technology
(DLT) systems, face public scrutiny for their energy consumption levels.
However, many point out that high energy consumption is a feature of (small
block size) proof-of-work (PoW) DLTs, but not of proof-of-stake (PoS) DLTs.
With the energy consumption of PoS systems being an under-researched area, we
replicate, expand, and update embryonic work modelling it and comparing
different PoS-based DLTs with each other and with other non-PoS systems. In
doing so, we suggest and implement a number of improvements to an existing PoS
energy consumption model. We find that there may be significant differences in
the energy consumption of PoS systems analysed and confirm that, regardless of
these differences, their energy consumption is several orders of magnitude
below that of Bitcoin Core.
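The kind of energy model being replicated can be sketched as validator count times per-node power draw, optionally normalized per transaction; the figures below are placeholders, not the paper's estimates.

    # Minimal PoS energy-consumption model (placeholder inputs).
    def pos_energy_kwh_per_year(n_validators, watts_per_node):
        # Total network consumption: nodes x power draw x hours per year.
        return n_validators * watts_per_node * 24 * 365 / 1000.0

    def kwh_per_tx(n_validators, watts_per_node, tx_per_year):
        return pos_energy_kwh_per_year(n_validators, watts_per_node) / tx_per_year

    print(pos_energy_kwh_per_year(10_000, 60))    # network kWh per year
    print(kwh_per_tx(10_000, 60, 500_000_000))    # kWh per transaction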
|
[
{
"version": "v1",
"created": "Sat, 14 Jan 2023 00:10:23 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 19:39:15 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Feb 2023 02:08:43 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Ibañez",
"Juan Ignacio",
""
],
[
"Rua",
"Francisco",
""
]
] |
new_dataset
| 0.996657 |
2302.02088
|
Susan Liang
|
Susan Liang, Chao Huang, Yapeng Tian, Anurag Kumar, Chenliang Xu
|
AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene
Synthesis
| null | null | null | null |
cs.CV cs.GR cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human perception of the complex world relies on a comprehensive analysis of
multi-modal signals, and the co-occurrences of audio and video signals provide
humans with rich cues. This paper focuses on novel audio-visual scene synthesis
in the real world. Given a video recording of an audio-visual scene, the task
is to synthesize new videos with spatial audio along arbitrary novel camera
trajectories in that audio-visual scene. Directly using a NeRF-based model for
audio synthesis is insufficient due to its lack of prior knowledge and acoustic
supervision. To tackle the challenges, we first propose an acoustic-aware audio
generation module that integrates our prior knowledge of audio propagation into
NeRF, in which we associate audio generation with the 3D geometry of the visual
environment. In addition, we propose a coordinate transformation module that
expresses a viewing direction relative to the sound source. Such a direction
transformation helps the model learn sound source-centric acoustic fields.
Moreover, we utilize a head-related impulse response function to synthesize
pseudo binaural audio for data augmentation that strengthens training. We
qualitatively and quantitatively demonstrate the advantage of our model on
real-world audio-visual scenes. We refer interested readers to view our video
results for convincing comparisons.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 04:17:19 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 17:38:18 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Liang",
"Susan",
""
],
[
"Huang",
"Chao",
""
],
[
"Tian",
"Yapeng",
""
],
[
"Kumar",
"Anurag",
""
],
[
"Xu",
"Chenliang",
""
]
] |
new_dataset
| 0.977254 |
2302.02150
|
Dimitris Iakovidis
|
Dimitrios E. Diamantis, Panagiota Gatoula, Anastasios Koulaouzidis,
and Dimitris K. Iakovidis
|
This Intestine Does Not Exist: Multiscale Residual Variational
Autoencoder for Realistic Wireless Capsule Endoscopy Image Generation
|
10 pages
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Medical image synthesis has emerged as a promising solution to address the
limited availability of annotated medical data needed for training machine
learning algorithms in the context of image-based Clinical Decision Support
(CDS) systems. To this end, Generative Adversarial Networks (GANs) have been
mainly applied to support the algorithm training process by generating
synthetic images for data augmentation. However, in the field of Wireless
Capsule Endoscopy (WCE), the limited content diversity and size of existing
publicly available annotated datasets, adversely affect both the training
stability and synthesis performance of GANs. Aiming at a viable solution for
WCE image synthesis, a novel Variational Autoencoder architecture is proposed,
namely "This Intestine Does not Exist" (TIDE). The proposed architecture
comprises multiscale feature extraction convolutional blocks and residual
connections, which enable the generation of high-quality and diverse datasets
even with a limited number of training images. Contrary to the current
approaches, which are oriented towards the augmentation of the available
datasets, this study demonstrates that using TIDE, real WCE datasets can be
fully substituted by artificially generated ones, without compromising
classification performance. Furthermore, qualitative and user evaluation
studies by experienced WCE specialists, validate from a medical viewpoint that
both the normal and abnormal WCE images synthesized by TIDE are sufficiently
realistic.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 11:49:38 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 03:50:25 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Diamantis",
"Dimitrios E.",
""
],
[
"Gatoula",
"Panagiota",
""
],
[
"Koulaouzidis",
"Anastasios",
""
],
[
"Iakovidis",
"Dimitris K.",
""
]
] |
new_dataset
| 0.999269 |
2302.02693
|
Xue Yang
|
Qinrou Wen, Jirui Yang, Xue Yang, Kewei Liang
|
PatchDCT: Patch Refinement for High Quality Instance Segmentation
|
15 pages, 7 figures, 13 tables, accepted by ICLR 2023, the source
code is available at https://github.com/olivia-w12/PatchDCT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-quality instance segmentation has shown emerging importance in computer
vision. Without any refinement, DCT-Mask directly generates high-resolution
masks by compressed vectors. To further refine masks obtained by compressed
vectors, we propose for the first time a compressed vector based multi-stage
refinement framework. However, the vanilla combination does not bring
significant gains, because changes in some elements of the DCT vector will
affect the prediction of the entire mask. Thus, we propose a simple and novel
method named PatchDCT, which separates the mask decoded from a DCT vector into
several patches and refines each patch by the designed classifier and
regressor. Specifically, the classifier is used to distinguish mixed patches
from all patches, and to correct previously mispredicted foreground and
background patches. In contrast, the regressor is used for DCT vector
prediction of mixed patches, further refining the segmentation quality at
boundary locations. Experiments on COCO show that our method achieves 2.0%,
3.2%, 4.5% AP and 3.4%, 5.3%, 7.0% Boundary AP improvements over Mask-RCNN on
COCO, LVIS, and Cityscapes, respectively. It also surpasses DCT-Mask by 0.7%,
1.1%, 1.3% AP and 0.9%, 1.7%, 4.2% Boundary AP on COCO, LVIS and Cityscapes.
Besides, the performance of PatchDCT is also competitive with other
state-of-the-art methods.
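The compressed-vector idea underlying DCT-Mask and PatchDCT can be sketched as follows; a top-left coefficient square stands in for the usual zigzag ordering, and the sizes are illustrative.

    # Encode a binary mask as truncated 2D-DCT coefficients; decode by inverse DCT.
    import numpy as np
    from scipy.fft import dctn, idctn

    def encode(mask, k=16):
        coeffs = dctn(mask.astype(np.float32), norm="ortho")
        return coeffs[:k, :k]                  # truncated DCT vector

    def decode(vec, shape):
        full = np.zeros(shape, dtype=np.float32)
        k = vec.shape[0]
        full[:k, :k] = vec
        return (idctn(full, norm="ortho") > 0.5).astype(np.uint8)

    mask = np.zeros((28, 28), dtype=np.uint8)
    mask[8:20, 10:22] = 1
    rec = decode(encode(mask), mask.shape)
    print("pixel agreement:", (rec == mask).mean())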
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 10:51:21 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 13:14:04 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Wen",
"Qinrou",
""
],
[
"Yang",
"Jirui",
""
],
[
"Yang",
"Xue",
""
],
[
"Liang",
"Kewei",
""
]
] |
new_dataset
| 0.997626 |
2302.03036
|
Joe Toplyn
|
Joe Toplyn
|
Witscript 2: A System for Generating Improvised Jokes Without Wordplay
|
5 pages. Published in the Proceedings of the 13th International
Conference on Computational Creativity (ICCC 2022), pages 54-58. arXiv admin
note: text overlap with arXiv:2301.02695. substantial text overlap with
arXiv:2302.02008
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A previous paper presented Witscript, a system for generating conversational
jokes that rely on wordplay. This paper extends that work by presenting
Witscript 2, which uses a large language model to generate conversational jokes
that rely on common sense instead of wordplay. Like Witscript, Witscript 2 is
based on joke-writing algorithms created by an expert comedy writer. Human
evaluators judged Witscript 2's responses to input sentences to be jokes 46% of
the time, compared to 70% of the time for human-written responses. This is
evidence that Witscript 2 represents another step toward giving a chatbot a
humanlike sense of humor.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 21:51:55 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Toplyn",
"Joe",
""
]
] |
new_dataset
| 0.999153 |
2302.03037
|
Khaza Anuarul Hoque
|
Ripan Kumar Kundu, Rifatul Islam, John Quarles, Khaza Anuarul Hoque
|
LiteVR: Interpretable and Lightweight Cybersickness Detection using
Explainable AI
|
Accepted for publication in IEEE VR 2023 conference. arXiv admin
note: substantial text overlap with arXiv:2302.01985
| null | null | null |
cs.HC cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cybersickness is a common ailment associated with virtual reality (VR) user
experiences. Several automated methods exist based on machine learning (ML) and
deep learning (DL) to detect cybersickness. However, most of these
cybersickness detection methods are perceived as computationally intensive and
black-box methods. Thus, those techniques are neither trustworthy nor practical
for deploying on standalone energy-constrained VR head-mounted devices (HMDs).
In this work, we present an explainable artificial intelligence (XAI)-based
framework, LiteVR, for cybersickness detection, explaining the model's outcome
and reducing the feature dimensions and overall computational costs. First, we
develop three cybersickness DL models based on long short-term memory (LSTM), gated recurrent unit (GRU), and multilayer perceptron (MLP) networks. Then, we employ a post-hoc explanation method, SHapley Additive Explanations (SHAP), to explain the results and extract the most dominant features of cybersickness.
Finally, we retrain the DL models with the reduced number of features. Our
results show that eye-tracking features are the most dominant for cybersickness
detection. Furthermore, based on the XAI-based feature ranking and
dimensionality reduction, we significantly reduce the model's size by up to
4.3x, training time by up to 5.6x, and its inference time by up to 3.8x, with
higher cybersickness detection accuracy and low regression error (i.e., on Fast
Motion Scale (FMS)). Our proposed lite LSTM model obtained an accuracy of 94%
in classifying cybersickness and regressing (i.e., FMS 1-10) with a Root Mean
Square Error (RMSE) of 0.30, which outperforms the state-of-the-art. Our
proposed LiteVR framework can help researchers and practitioners analyze,
detect, and deploy their DL-based cybersickness detection models in standalone
VR HMDs.
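A schematic version of the XAI-driven reduction pipeline is sketched below; permutation importance stands in for SHAP to keep dependencies minimal, and a random forest on synthetic data stands in for the paper's LSTM/GRU/MLP models.

    # Rank features by importance, keep the top-k, and retrain a smaller model.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))                 # stand-in sensor features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic "cybersickness"

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    top_k = np.argsort(imp.importances_mean)[::-1][:3]
    lite = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:, top_k], y)
    print("kept features:", top_k, "accuracy:", lite.score(X[:, top_k], y))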
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 21:51:12 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Kundu",
"Ripan Kumar",
""
],
[
"Islam",
"Rifatul",
""
],
[
"Quarles",
"John",
""
],
[
"Hoque",
"Khaza Anuarul",
""
]
] |
new_dataset
| 0.990389 |
2302.03074
|
Siddharth Bhaskar
|
Siddharth Bhaskar, Jane Chandlee, Adam Jardine
|
Rational functions via recursive schemes
| null | null | null | null |
cs.LO cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We give a new characterization of the class of rational string functions from
formal language theory using order-preserving interpretations with respect to a
very weak monadic programming language. This refines the known characterization
of rational functions by order-preserving MSO interpretations.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 19:22:18 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Bhaskar",
"Siddharth",
""
],
[
"Chandlee",
"Jane",
""
],
[
"Jardine",
"Adam",
""
]
] |
new_dataset
| 0.992429 |
2302.03088
|
Laura Stegner
|
David Porfirio, Laura Stegner, Maya Cakmak, Allison Saupp\'e, Aws
Albarghouthi, Bilge Mutlu
|
Sketching Robot Programs On the Fly
|
Accepted at HRI '23, March 13-16, 2023, Stockholm, Sweden
| null |
10.1145/3568162.3576991
| null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Service robots for personal use in the home and the workplace require
end-user development solutions for swiftly scripting robot tasks as the need
arises. Many existing solutions preserve ease, efficiency, and convenience
through simple programming interfaces or by restricting task complexity. Others
facilitate meticulous task design but often do so at the expense of simplicity
and efficiency. There is a need for robot programming solutions that reconcile
the complexity of robotics with the on-the-fly goals of end-user development.
In response to this need, we present a novel, multimodal, and on-the-fly
development system, Tabula. Inspired by a formative design study with a
prototype, Tabula leverages a combination of spoken language for specifying the
core of a robot task and sketching for contextualizing the core. The result is
that developers can script partial, sloppy versions of robot programs to be
completed and refined by a program synthesizer. Lastly, we demonstrate our
anticipated use cases of Tabula via a set of application scenarios.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 19:44:05 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Porfirio",
"David",
""
],
[
"Stegner",
"Laura",
""
],
[
"Cakmak",
"Maya",
""
],
[
"Sauppé",
"Allison",
""
],
[
"Albarghouthi",
"Aws",
""
],
[
"Mutlu",
"Bilge",
""
]
] |
new_dataset
| 0.998605 |
2302.03099
|
Alex Qiu
|
Alex Qiu, Claire Young, Anthony Gunderman, Milad Azizkhani, Yue Chen,
Ai-Ping Hu
|
Tendon-Driven Soft Robotic Gripper with Integrated Ripeness Sensing for
Blackberry Harvesting
|
7 Pages, 9 figures, submitted to and accepted by ICRA 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Growing global demand for food, coupled with continuing labor shortages,
motivates the need for automated agricultural harvesting. While some specialty
crops (e.g., apples, peaches, blueberries) can be harvested via existing
harvesting modalities, fruits such as blackberries and raspberries require
delicate handling to mitigate fruit damage that could significantly impact
marketability. This motivates the development of soft robotic solutions that
enable efficient, delicate harvesting. This paper presents the design,
fabrication, and feasibility testing of a tendon-driven soft gripping system
focused on blackberries, which are a fragile fruit susceptible to post-harvest
damage. The gripper is both low-cost and small form factor, allowing for the
integration of a micro-servo for tendon retraction, a near-infrared (NIR) based
blackberry ripeness sensor utilizing the reflectance modality for identifying
fully ripe blackberries, and an endoscopic camera for visual servoing with a
UR5. The gripper was used to harvest 139 berries with manual positioning in
two separate field tests. Field testing found an average retention force of
2.06 N and 6.08 N for ripe and unripe blackberries, respectively. Sensor tests
identified an average reflectance of 16.78 and 21.70 for ripe and unripe
blackberries, respectively, indicating a clear distinction between the two
ripeness levels. Finally, the soft robotic gripper was integrated onto a UR5
robot arm and successfully harvested fifteen artificial blackberries in a lab
setting using visual servoing.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 19:59:33 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Qiu",
"Alex",
""
],
[
"Young",
"Claire",
""
],
[
"Gunderman",
"Anthony",
""
],
[
"Azizkhani",
"Milad",
""
],
[
"Chen",
"Yue",
""
],
[
"Hu",
"Ai-Ping",
""
]
] |
new_dataset
| 0.998175 |
2302.03128
|
Zhengwei Bai
|
Zhengwei Bai, Guoyuan Wu, Matthew J. Barth, Yongkang Liu, Emrah Akin
Sisbot, Kentaro Oguchi
|
Cooperverse: A Mobile-Edge-Cloud Framework for Universal Cooperative
Perception with Mixed Connectivity and Automation
|
6 pages, 7 figures
| null | null | null |
cs.CV cs.MA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cooperative perception (CP) is attracting increasing attention and is
regarded as the core foundation to support cooperative driving automation, a
potential key solution to addressing the safety, mobility, and sustainability
issues of contemporary transportation systems. However, current research on CP is still at an early stage: a systematic problem formulation of CP, which would serve as the essential guideline for designing CP systems under real-world conditions, is still missing. In this paper, we formulate a universal
CP system into an optimization problem and a mobile-edge-cloud framework called
Cooperverse. This system addresses CP in a mixed connectivity and automation
environment. A Dynamic Feature Sharing (DFS) methodology is introduced to
support this CP system under certain constraints and a Random Priority
Filtering (RPF) method is proposed to conduct DFS with high performance.
Experiments have been conducted based on a high-fidelity CP platform, and the
results show that the Cooperverse framework is effective for dynamic node
engagement, that the proposed DFS methodology can improve system CP performance
by 14.5%, and that the RPF method can reduce the communication cost for mobile
nodes by 90% with only a 1.7% drop in average precision.
|
[
{
"version": "v1",
"created": "Mon, 6 Feb 2023 21:30:08 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Bai",
"Zhengwei",
""
],
[
"Wu",
"Guoyuan",
""
],
[
"Barth",
"Matthew J.",
""
],
[
"Liu",
"Yongkang",
""
],
[
"Sisbot",
"Emrah Akin",
""
],
[
"Oguchi",
"Kentaro",
""
]
] |
new_dataset
| 0.999021 |
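The abstract above names Random Priority Filtering (RPF) without detailing it. The following is a hedged sketch that assumes RPF amounts to keeping features in a random priority order until a communication budget is exhausted; the function, its arguments, and the budget logic are all assumptions, not the paper's algorithm:

```python
# Hedged sketch of a Random-Priority-Filtering-style feature selector.
import random

def random_priority_filter(features, budget_bytes, size_of):
    """Keep features in a random priority order until the byte budget is spent."""
    order = list(features)
    random.shuffle(order)  # random priority assignment
    kept, used = [], 0
    for f in order:
        s = size_of(f)
        if used + s <= budget_bytes:
            kept.append(f)
            used += s
    return kept

# Example with hypothetical (id, size) features and a 1 kB budget.
feats = [("lidar_roi_1", 400), ("cam_patch_2", 700), ("lidar_roi_3", 300)]
kept = random_priority_filter(feats, budget_bytes=1000, size_of=lambda f: f[1])
```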
2302.03288
|
Toon Van de Maele
|
Toon Van de Maele, Tim Verbelen, Pietro Mazzaglia, Stefano Ferraro,
Bart Dhoedt
|
Object-Centric Scene Representations using Active Inference
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Representing a scene and its constituent objects from raw sensory data is a
core ability for enabling robots to interact with their environment. In this
paper, we propose a novel approach for scene understanding, leveraging a
hierarchical object-centric generative model that enables an agent to infer
object category and pose in an allocentric reference frame using active
inference, a neuro-inspired framework for action and perception. For evaluating
the behavior of an active vision agent, we also propose a new benchmark where,
given a target viewpoint of a particular object, the agent needs to find the
best matching viewpoint in a workspace with randomly positioned objects in
3D. We demonstrate that our active inference agent is able to balance epistemic
foraging and goal-driven behavior, and outperforms both supervised and
reinforcement learning baselines by a large margin.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 06:45:19 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Van de Maele",
"Toon",
""
],
[
"Verbelen",
"Tim",
""
],
[
"Mazzaglia",
"Pietro",
""
],
[
"Ferraro",
"Stefano",
""
],
[
"Dhoedt",
"Bart",
""
]
] |
new_dataset
| 0.99431 |
2302.03385
|
Haoran Li
|
Li Haoran and Liu Shasha and Ma Mingjun and Hu Guangzheng and Chen
Yaran and Zhao Dongbin
|
NeuronsGym: A Hybrid Framework and Benchmark for Robot Tasks with
Sim2Real Policy Learning
|
16 pages, 10 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The rise of embodied AI has greatly improved the possibility of general
mobile agent systems. At present, many evaluation platforms with rich scenes,
high visual fidelity and various application scenarios have been developed. In
this paper, we present a hybrid framework named NeuronsGym that can be used for
policy learning of robot tasks, comprising a simulation platform for policy
training and a physical system for studying sim2real problems. Unlike most
current single-task, slow-moving robotic platforms, our framework provides
agile physical robots with a wider range of speeds, and can be employed to
train robotic navigation and confrontation policies. To evaluate the safety of
robot navigation, we propose a safety-weighted path length (SFPL) metric that
improves safety evaluation for mobile robot navigation. On this platform, we
build a new benchmark for navigation and confrontation tasks by comparing
current mainstream sim2real methods, and we hosted the 2022 IEEE Conference on
Games (CoG) RoboMaster sim2real challenge. We release the code of this
framework\footnote{\url{https://github.com/DRL-CASIA/NeuronsGym}} and hope that
this platform can promote the development of more flexible and agile general
mobile agent algorithms.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 10:45:20 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Haoran",
"Li",
""
],
[
"Shasha",
"Liu",
""
],
[
"Mingjun",
"Ma",
""
],
[
"Guangzheng",
"Hu",
""
],
[
"Yaran",
"Chen",
""
],
[
"Dongbin",
"Zhao",
""
]
] |
new_dataset
| 0.999162 |
2302.03474
|
Mathias Bos
|
Mathias Bos, Bastiaan Vandewal, Wilm Decr\'e, Jan Swevers
|
MPC-based Motion Planning for Autonomous Truck-Trailer Maneuvering
|
This work has been submitted to IFAC for possible publication. (IFAC
World Congress 2023)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Time-optimal motion planning of autonomous vehicles in complex environments
is a highly researched topic. This paper describes a novel approach to optimize
and execute locally feasible trajectories for the maneuvering of a
truck-trailer Autonomous Mobile Robot (AMR), by dividing the environment into a
sequence or route of freely accessible overlapping corridors. Multi-stage
optimal control generates local trajectories through advancing subsets of this
route. To cope with the advancing subsets and changing environments, the
optimal control problem is solved online with a receding horizon in a Model
Predictive Control (MPC) fashion with an improved update strategy. This
strategy seamlessly integrates the computationally expensive MPC updates with a
low-cost feedback controller for trajectory tracking, for disturbance
rejection, and for stabilization of the unstable kinematics of the reversing
truck-trailer AMR. This methodology is implemented in a flexible software
framework for an effortless transition from offline simulations to deployment
of experiments. An experimental setup showcasing the truck-trailer AMR
performing two reverse parking maneuvers validates the presented method.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 14:00:22 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Bos",
"Mathias",
""
],
[
"Vandewal",
"Bastiaan",
""
],
[
"Decré",
"Wilm",
""
],
[
"Swevers",
"Jan",
""
]
] |
new_dataset
| 0.989936 |
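The receding-horizon scheme described in the abstract above can be sketched as a generic MPC loop; `solve_ocp`, `track`, and `measure` are hypothetical callables standing in for the paper's multi-stage optimal control update, low-cost feedback controller, and state estimator:

```python
# Minimal receding-horizon (MPC) skeleton; a generic sketch, not the
# authors' corridor-based multi-stage formulation.
def receding_horizon_loop(x0, route, horizon, n_steps, solve_ocp, track, measure):
    """Re-plan over an advancing route subset, track the latest local
    trajectory with cheap feedback, then re-measure the state."""
    x = x0
    for _ in range(n_steps):
        local_traj = solve_ocp(x, route, horizon)  # expensive OCP update
        track(local_traj)                          # low-cost feedback tracking
        x = measure()                              # new estimate, disturbances included
    return x
```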
2302.03540
|
Eugene Kharitonov
|
Eugene Kharitonov and Damien Vincent and Zal\'an Borsos and Rapha\"el
Marinier and Sertan Girgin and Olivier Pietquin and Matt Sharifi and Marco
Tagliasacchi and Neil Zeghidour
|
Speak, Read and Prompt: High-Fidelity Text-to-Speech with Minimal
Supervision
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce SPEAR-TTS, a multi-speaker text-to-speech (TTS) system that can
be trained with minimal supervision. By combining two types of discrete speech
representations, we cast TTS as a composition of two sequence-to-sequence
tasks: from text to high-level semantic tokens (akin to "reading") and from
semantic tokens to low-level acoustic tokens ("speaking"). Decoupling these two
tasks enables training of the "speaking" module using abundant audio-only data,
and unlocks the highly efficient combination of pretraining and backtranslation
to reduce the need for parallel data when training the "reading" component. To
control the speaker identity, we adopt example prompting, which allows
SPEAR-TTS to generalize to unseen speakers using only a short sample of 3
seconds, without any explicit speaker representation or speaker-id labels. Our
experiments demonstrate that SPEAR-TTS achieves a character error rate that is
competitive with state-of-the-art methods using only 15 minutes of parallel
data, while matching ground-truth speech in terms of naturalness and acoustic
quality, as measured in subjective tests.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 15:48:31 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Kharitonov",
"Eugene",
""
],
[
"Vincent",
"Damien",
""
],
[
"Borsos",
"Zalán",
""
],
[
"Marinier",
"Raphaël",
""
],
[
"Girgin",
"Sertan",
""
],
[
"Pietquin",
"Olivier",
""
],
[
"Sharifi",
"Matt",
""
],
[
"Tagliasacchi",
"Marco",
""
],
[
"Zeghidour",
"Neil",
""
]
] |
new_dataset
| 0.98093 |
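The two-stage "reading"/"speaking" decomposition described above can be made explicit with a short composition sketch; `reader`, `speaker`, and `vocoder` are hypothetical components, not the released models:

```python
# Hedged sketch of the two-stage pipeline: text -> semantic tokens
# ("reading"), semantic -> acoustic tokens ("speaking"), tokens -> audio.
def synthesize(text, reader, speaker, vocoder, speaker_prompt=None):
    semantic_tokens = reader(text)                    # "reading" stage
    acoustic_tokens = speaker(semantic_tokens,        # "speaking" stage;
                              prompt=speaker_prompt)  # a short prompt controls voice
    return vocoder(acoustic_tokens)                   # decode tokens to a waveform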
2302.03580
|
T. Konstantin Rusch
|
L\'eonard Equer, T. Konstantin Rusch, Siddhartha Mishra
|
Multi-Scale Message Passing Neural PDE Solvers
| null | null | null | null |
cs.LG cs.NA math.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel multi-scale message passing neural network algorithm for
learning the solutions of time-dependent PDEs. Our algorithm possesses both
temporal and spatial multi-scale resolution features by incorporating
multi-scale sequence models and graph gating modules in the encoder and
processor, respectively. Benchmark numerical experiments are presented to
demonstrate that the proposed algorithm outperforms baselines, particularly on
a PDE with a range of spatial and temporal scales.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 16:45:52 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Equer",
"Léonard",
""
],
[
"Rusch",
"T. Konstantin",
""
],
[
"Mishra",
"Siddhartha",
""
]
] |
new_dataset
| 0.998246 |
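As a point of reference for the message-passing machinery mentioned above, here is one plain message-passing step on a graph (mean aggregation, NumPy); the paper's multi-scale sequence models and graph gating modules are not reproduced here:

```python
# Illustrative single message-passing step, a generic building block only.
import numpy as np

def message_passing_step(h, edges, W_msg, W_upd):
    """h: (N, d) node features; edges: list of (src, dst) index pairs;
    W_msg, W_upd: (d, d) weight matrices."""
    N, d = h.shape
    agg = np.zeros_like(h)
    deg = np.zeros(N)
    for s, t in edges:
        agg[t] += h[s] @ W_msg        # message from source to target
        deg[t] += 1
    agg /= np.maximum(deg, 1)[:, None]  # mean aggregation per node
    return np.tanh(h @ W_upd + agg)     # updated node states
```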
2302.03589
|
Ludovica Pannitto
|
Ludovica Pannitto and Aur\'elie Herbelot
|
CALaMo: a Constructionist Assessment of Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper presents a novel framework for evaluating Neural Language Models'
linguistic abilities using a constructionist approach. Not only is the
usage-based model in line with the underlying stochastic philosophy of neural
architectures, but it also allows the linguist to keep meaning as a determinant
factor in the analysis. We outline the framework and present two possible
scenarios for its application.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 16:56:48 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Pannitto",
"Ludovica",
""
],
[
"Herbelot",
"Aurélie",
""
]
] |
new_dataset
| 0.999126 |
2302.03594
|
Zihan Zhu
|
Zihan Zhu, Songyou Peng, Viktor Larsson, Zhaopeng Cui, Martin R.
Oswald, Andreas Geiger, Marc Pollefeys
|
NICER-SLAM: Neural Implicit Scene Encoding for RGB SLAM
|
Video: https://youtu.be/tUXzqEZWg2w
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural implicit representations have recently become popular in simultaneous
localization and mapping (SLAM), especially in dense visual SLAM. However,
previous works in this direction either rely on RGB-D sensors or require a
separate monocular SLAM approach for camera tracking, and they do not produce
high-fidelity dense 3D scene reconstruction. In this paper, we present
NICER-SLAM, a dense RGB SLAM system that simultaneously optimizes for camera
poses and a hierarchical neural implicit map representation, which also allows
for high-quality novel view synthesis. To facilitate the optimization process
for mapping, we integrate additional supervision signals including
easy-to-obtain monocular geometric cues and optical flow, and also introduce a
simple warping loss to further enforce geometry consistency. Moreover, to
further boost performance in complicated indoor scenes, we also propose a
locally adaptive transformation from signed distance functions (SDFs) to
density in the
volume rendering equation. On both synthetic and real-world datasets we
demonstrate strong performance in dense mapping, tracking, and novel view
synthesis, even competitive with recent RGB-D SLAM systems.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 17:06:34 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Zhu",
"Zihan",
""
],
[
"Peng",
"Songyou",
""
],
[
"Larsson",
"Viktor",
""
],
[
"Cui",
"Zhaopeng",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Geiger",
"Andreas",
""
],
[
"Pollefeys",
"Marc",
""
]
] |
new_dataset
| 0.987494 |
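For context on the SDF-to-density transformation mentioned above, the sketch below implements one standard, non-adaptive Laplace-CDF mapping in the style of VolSDF; NICER-SLAM's locally adaptive variant is not shown:

```python
# One common SDF-to-density transform for volume rendering (Laplace CDF,
# as used in VolSDF); shown for reference, not the paper's variant.
import numpy as np

def sdf_to_density(sdf, alpha=1.0, beta=0.1):
    """Map signed distance values (negative inside surfaces) to density."""
    s = -np.asarray(sdf)
    cdf = np.where(s <= 0,
                   0.5 * np.exp(s / beta),
                   1.0 - 0.5 * np.exp(-s / beta))
    return alpha * cdf  # high density inside, near zero far outside
```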
2302.03644
|
Henning Zwirnmann
|
Henning Zwirnmann, Dennis Knobbe, Utku Culha and Sami Haddadin
|
Dual-Material 3D-Printed PaCoMe-Like Fingers for Flexible Biolaboratory
Automation
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Robotic automation in life science research is a paradigm that has gained
increasing relevance in recent years. Current solutions in this area often have
limited scope, such as pick-and-place tasks for a specific object. Thus, each
new process requires a separate toolset, which prevents the realization of more
complex workflows and reduces the acceptance of robotic automation tools. Here,
we present a novel finger system for a parallel gripper for biolaboratory
automation that can handle a wide range of liquid containers. This flexibility
is enabled by developing the fingers as a dual-extrusion 3D print. The coating
with a soft material from the second extruder in one seamless print and the
fingertip design are key features to enhance grasping capabilities. By
employing a passive compliant mechanism that was previously presented in a
finger called ``PaCoMe'', a simple actuation system and a low weight are
maintained. The ability to resist chemicals and high temperatures, together
with the integration into a tool exchange system, makes the fingers suitable
for daily laboratory use and complex workflows. We demonstrate their task
suitability in several experiments, showing the wide range of vessels that can
be handled as well as their tolerance to displacements and their grasp
stability.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 17:52:19 GMT"
}
] | 2023-02-08T00:00:00 |
[
[
"Zwirnmann",
"Henning",
""
],
[
"Knobbe",
"Dennis",
""
],
[
"Culha",
"Utku",
""
],
[
"Haddadin",
"Sami",
""
]
] |
new_dataset
| 0.998731 |
2005.12806
|
Nikolaos Misirlis
|
Nikolaos Misirlis, Miriam H. Zwaan, David Weber
|
International students' loneliness, depression and stress levels in
COVID-19 crisis. The role of social media and the host university
|
14 pages
|
Journal of Contemporary Education Theory & Research 4 (2), 20-25
(2020)
|
10.5281/zenodo.4256624
| null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The move to university life is characterized by strong emotions, some of them
negative, such as loneliness, anxiety, and depression. These negative emotions
were intensified by the obligatory lockdown during the COVID-19 pandemic.
Previous research indicates associations among the use of social media,
university satisfaction, and the aforementioned emotions. We report findings
from 248 international undergraduates in The Netherlands, all students at the
International School of Business. Our results indicate strong correlations of
anxiety, loneliness, and COVID-19-related stress with university satisfaction
and social capital. Keywords: COVID-19; Pandemic;
lockdown; loneliness; depression; anxiety; international students
|
[
{
"version": "v1",
"created": "Tue, 26 May 2020 15:39:27 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Misirlis",
"Nikolaos",
""
],
[
"Zwaan",
"Miriam H.",
""
],
[
"Weber",
"David",
""
]
] |
new_dataset
| 0.99826 |
2101.10463
|
An Zou
|
An Zou, Jing Li, Christopher D. Gill, and Xuan Zhang
|
RTGPU: Real-Time GPU Scheduling of Hard Deadline Parallel Tasks with
Fine-Grain Utilization
| null | null | null | null |
cs.DC cs.AI cs.AR cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many emerging cyber-physical systems, such as autonomous vehicles and robots,
rely heavily on artificial intelligence and machine learning algorithms to
perform important system operations. Since these highly parallel applications
are computationally intensive, they need to be accelerated by graphics
processing units (GPUs) to meet stringent timing constraints. However, despite
the wide adoption of GPUs, efficiently scheduling multiple GPU applications
while providing rigorous real-time guarantees remains a challenge. In this
paper, we propose RTGPU, which can schedule the execution of multiple GPU
applications in real-time to meet hard deadlines. Each GPU application can have
multiple CPU execution and memory copy segments, as well as GPU kernels. We
start with a model to explicitly account for the CPU and memory copy segments
of these applications. We then consider the GPU architecture in the development
of a precise timing model for the GPU kernels and leverage a technique known as
persistent threads to implement fine-grained kernel scheduling with improved
performance through interleaved execution. Next, we propose a general method
for scheduling parallel GPU applications in real time. Finally, to schedule
multiple parallel GPU applications, we propose a practical real-time scheduling
algorithm based on federated scheduling and grid search (for GPU kernel
segments) with uniprocessor fixed priority scheduling (for multiple CPU and
memory copy segments). Our approach provides superior schedulability compared
with previous work, and gives real-time guarantees to meet hard deadlines for
multiple GPU applications, as demonstrated by comprehensive validation and
evaluation on a real NVIDIA GTX 1080 Ti GPU system.
|
[
{
"version": "v1",
"created": "Mon, 25 Jan 2021 22:34:06 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jan 2021 02:22:33 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Feb 2023 11:39:58 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Zou",
"An",
""
],
[
"Li",
"Jing",
""
],
[
"Gill",
"Christopher D.",
""
],
[
"Zhang",
"Xuan",
""
]
] |
new_dataset
| 0.998231 |
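The uniprocessor fixed-priority scheduling mentioned above for CPU and memory-copy segments builds on classic response-time analysis, a textbook schedulability test sketched below; this is not the paper's full federated/grid-search algorithm:

```python
# Classic uniprocessor fixed-priority response-time analysis:
# R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j,
# iterated to a fixed point; schedulable if R_i <= D_i for all tasks.
import math

def response_times(tasks):
    """tasks: list of (C, T, D) tuples sorted by decreasing priority.
    Returns worst-case response times, or None if a deadline is missed."""
    R = []
    for i, (C, T, D) in enumerate(tasks):
        r = C
        while True:
            r_new = C + sum(math.ceil(r / Tj) * Cj for (Cj, Tj, _) in tasks[:i])
            if r_new == r:
                break
            r = r_new
            if r > D:
                return None  # unschedulable under this priority order
        R.append(r)
    return R
```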
2109.04243
|
Flavio Bertini
|
Taron Davtian, Flavio Bertini and Rajesh Sharma
|
Understanding Cycling Mobility: Bologna Case Study
| null | null |
10.1007/s43762-022-00073-8
| null |
cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding human mobility in urban environments is of the utmost
importance to manage traffic and for deploying new resources and services. In
recent years, the problem is exacerbated due to rapid urbanization and climate
changes. In an urban context, human mobility has many facets, and cycling
represents one of the most eco-friendly and efficient/effective ways to move in
touristic and historical cities. The main objective of this work is to study
the cycling mobility within the city of Bologna, Italy. We used a six-month
dataset consisting of 320,118 self-reported bike trips. In particular, we
performed several descriptive analyses of the spatial and temporal patterns of
bike users to identify popular roads and favorite points within the city. This
analysis involved several other public datasets in order to explore variables
that may affect cycling activity, such as weather, pollution, and events. The
main results of this study indicate that bike usage correlates with temperature
and precipitation, and shows no correlation with wind speed or pollution. In
addition, we exploited various machine learning and deep learning approaches
for predicting short-term trips in the near future (i.e., for the following 30
and 60 minutes), which could help local governmental agencies with urban
planning. Our best model achieved an R squared of 0.91, a Mean Absolute Error
of 5.38, and a Root Mean Squared Error of 8.12 for the 30-minute time interval.
|
[
{
"version": "v1",
"created": "Thu, 9 Sep 2021 13:11:35 GMT"
}
] | 2023-02-07T00:00:00 |
[
[
"Davtian",
"Taron",
""
],
[
"Bertini",
"Flavio",
""
],
[
"Sharma",
"Rajesh",
""
]
] |
new_dataset
| 0.999571 |
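The evaluation metrics reported above (R squared, MAE, RMSE) can be reproduced for any predictor with scikit-learn; the arrays below are placeholders, not the study's data:

```python
# Computing R^2, MAE, and RMSE with scikit-learn on hypothetical values.
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

y_true = np.array([30, 42, 55, 61, 48])  # hypothetical 30-min trip counts
y_pred = np.array([28, 45, 50, 64, 46])  # hypothetical model predictions

print("R^2 :", r2_score(y_true, y_pred))
print("MAE :", mean_absolute_error(y_true, y_pred))
print("RMSE:", np.sqrt(mean_squared_error(y_true, y_pred)))
```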