id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.13222
|
Hardik Dharmesh Ruparel
|
Kavit Gangar, Hardik Ruparel, Shreyas Lele
|
Hindi to English: Transformer-Based Neural Machine Translation
|
10 pages, 2 figures
|
Springer International Conference on Communication, Computing and
Electronics Systems. 2020 337-347
|
10.1007/978-981-33-4909-4_25
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine Translation (MT) is one of the most prominent tasks in Natural
Language Processing (NLP), involving the automatic conversion of text from
one natural language to another while preserving its meaning and fluency.
Although research in machine translation has been ongoing for decades, the
newer approach of integrating deep learning techniques into natural language
processing has led to significant improvements in translation quality. In this
paper, we develop a Neural Machine Translation (NMT) system by training the
Transformer model to translate text from the Indian language Hindi to English.
Because Hindi is a low-resource language, it has been difficult for neural
networks to model it, which has slowed the development of neural machine
translators. To address this gap, we implement back-translation to augment the
training data, and for vocabulary creation we experiment with both word- and
subword-level tokenization using Byte Pair Encoding (BPE), ultimately training
the Transformer in 10 different configurations. In one of these configurations,
we achieve a state-of-the-art BLEU score of 24.53 on the test set of the IIT
Bombay English-Hindi Corpus.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 00:00:09 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Gangar",
"Kavit",
""
],
[
"Ruparel",
"Hardik",
""
],
[
"Lele",
"Shreyas",
""
]
] |
new_dataset
| 0.981156 |
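The abstract above mentions building the vocabulary with word- and subword-level Byte Pair Encoding; as an illustration only (not the authors' code, and with placeholder corpus file names `train.hi` and `train.en`), a joint BPE vocabulary can be trained with the HuggingFace `tokenizers` library:

```python
# Illustrative sketch: train a joint Hindi-English BPE subword vocabulary.
# Assumes the HuggingFace `tokenizers` package and placeholder corpus files.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=32000,
                     special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"])
tokenizer.train(files=["train.hi", "train.en"], trainer=trainer)  # hypothetical files

print(tokenizer.encode("machine translation from Hindi to English").tokens)
```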
2309.13225
|
Christopher Ye
|
Barna Saha and Christopher Ye
|
Faster Approximate All Pairs Shortest Paths
|
81 pages
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The all pairs shortest path problem (APSP) is one of the foundational
problems in computer science. For weighted dense graphs on $n$ vertices, no
truly sub-cubic algorithms exist to compute APSP exactly even for undirected
graphs. This is popularly known as the APSP conjecture and has played a
prominent role in developing the field of fine-grained complexity. The seminal
result of Seidel uses fast matrix multiplication (FMM) to compute APSP on
unweighted undirected graphs exactly in $\tilde{O}(n^{\omega})$ time, where
$\omega=2.372$. Even for unweighted undirected graphs, it is not possible to
obtain a $(2-\epsilon)$-approximation of APSP in $o(n^\omega)$ time.
In this paper, we provide a multitude of new results for multiplicative and
additive approximations of APSP in undirected graphs for both unweighted and
weighted cases. We provide new algorithms for multiplicative 2-approximation of
unweighted graphs: a deterministic one that runs in $\tilde{O}(n^{2.072})$ time
and a randomized one that runs in $\tilde{O}(n^{2.032})$ time in expectation,
improving upon the best known bound of $\tilde{O}(n^{2.25})$ by Roditty (STOC,
2023). For $2$-approximating paths of length $\geq k$, $k \geq 4$, we provide
the first improvement after Dor, Halperin, Zwick (2000) for dense graphs, first
using only combinatorial methods and then improving further using FMM. We
next consider additive approximations and provide improved bounds for all
additive $\beta$-approximations, $\beta \geq 4$. For weighted graphs, we show
that by allowing small additive errors along with a
$(1+\epsilon)$-multiplicative approximation, it is possible to improve upon
Zwick's $\tilde{O}(n^\omega)$ algorithm. Our results point out the crucial role
that FMM can play even in approximating APSP on unweighted undirected graphs,
and reveal new bottlenecks toward achieving a quadratic running time for
approximating APSP.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 00:27:31 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Saha",
"Barna",
""
],
[
"Ye",
"Christopher",
""
]
] |
new_dataset
| 0.990714 |
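For context, the exact APSP computation referenced above (for which no truly sub-cubic algorithm is known on weighted dense graphs) is classically done in $O(n^3)$ time by Floyd-Warshall; a minimal reference implementation, not part of the paper, is:

```python
# Illustrative sketch: exact APSP via Floyd-Warshall in O(n^3) time.
# `dist` is an n x n matrix of edge weights, with math.inf for missing edges
# and 0 on the diagonal; the paper's algorithms aim to beat this for approximations.
import math

def floyd_warshall(dist):
    n = len(dist)
    d = [row[:] for row in dist]          # copy so the input is not modified
    for k in range(n):
        dk = d[k]
        for i in range(n):
            dik = d[i][k]
            if dik == math.inf:
                continue
            row = d[i]
            for j in range(n):
                nd = dik + dk[j]
                if nd < row[j]:
                    row[j] = nd
    return d
```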
2309.13230
|
Xiang Geng
|
Xiang Geng, Zhejian Lai, Yu Zhang, Shimin Tao, Hao Yang, Jiajun Chen,
Shujian Huang
|
NJUNLP's Participation for the WMT2023 Quality Estimation Shared Task
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the submissions of the NJUNLP team to the WMT 2023 Quality
Estimation (QE) shared task. Our team submitted predictions for the
English-German language pair on both sub-tasks: (i) sentence- and word-level
quality prediction; and (ii) fine-grained error span detection. This year, we
further explore pseudo-data methods for QE based on the NJUQE framework
(https://github.com/NJUNLP/njuqe). We generate pseudo MQM data using parallel
data from the WMT translation task. We pre-train the XLMR large model on pseudo
QE data, then fine-tune it on real QE data. At both stages, we jointly learn
sentence-level scores and word-level tags. Empirically, we conduct experiments
to find the key hyper-parameters that improve the performance. Technically, we
propose a simple method that converts the word-level outputs into fine-grained
error span results. Overall, our models achieved the best results in
English-German for both the word-level and fine-grained error span detection
sub-tasks by a considerable margin.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 01:52:14 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Geng",
"Xiang",
""
],
[
"Lai",
"Zhejian",
""
],
[
"Zhang",
"Yu",
""
],
[
"Tao",
"Shimin",
""
],
[
"Yang",
"Hao",
""
],
[
"Chen",
"Jiajun",
""
],
[
"Huang",
"Shujian",
""
]
] |
new_dataset
| 0.970556 |
2309.13242
|
Hantao Zhou
|
Hantao Zhou, Rui Yang, Yachao Zhang, Haoran Duan, Yawen Huang, Runze
Hu, Xiu Li, Yefeng Zheng
|
UniHead: Unifying Multi-Perception for Detection Heads
|
10 pages, 5 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The detection head constitutes a pivotal component within object detectors,
tasked with executing both classification and localization functions.
Regrettably, the commonly used parallel head often lacks omni perceptual
capabilities, such as deformation perception, global perception and cross-task
perception. Although numerous methods attempt to enhance these abilities from a
single aspect, achieving a comprehensive and unified solution remains a
significant challenge. In response to this challenge, we have developed an
innovative detection head, termed UniHead, to unify three perceptual abilities
simultaneously. More precisely, our approach (1) introduces deformation
perception, enabling the model to adaptively sample object features; (2)
proposes a Dual-axial Aggregation Transformer (DAT) to adeptly model long-range
dependencies, thereby achieving global perception; and (3) devises a Cross-task
Interaction Transformer (CIT) that facilitates interaction between the
classification and localization branches, thus aligning the two tasks. As a
plug-and-play method, the proposed UniHead can be conveniently integrated with
existing detectors. Extensive experiments on the COCO dataset demonstrate that
our UniHead can bring significant improvements to many detectors. For instance,
the UniHead can obtain +2.7 AP gains in RetinaNet, +2.9 AP gains in FreeAnchor,
and +2.1 AP gains in GFL. The code will be publicly available. Code Url:
https://github.com/zht8506/UniHead.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 03:22:48 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Zhou",
"Hantao",
""
],
[
"Yang",
"Rui",
""
],
[
"Zhang",
"Yachao",
""
],
[
"Duan",
"Haoran",
""
],
[
"Huang",
"Yawen",
""
],
[
"Hu",
"Runze",
""
],
[
"Li",
"Xiu",
""
],
[
"Zheng",
"Yefeng",
""
]
] |
new_dataset
| 0.992524 |
2309.13243
|
Jieun Han
|
Jieun Han, Haneul Yoo, Junho Myung, Minsun Kim, Tak Yeon Lee, So-Yeon
Ahn, Alice Oh
|
ChEDDAR: Student-ChatGPT Dialogue in EFL Writing Education
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The integration of generative AI in education is expanding, yet empirical
analyses of large-scale, real-world interactions between students and AI
systems still remain limited. In this study, we present ChEDDAR, ChatGPT & EFL
Learner's Dialogue Dataset As Revising an essay, which is collected from a
semester-long longitudinal experiment involving 212 college students enrolled
in English as a Foreign Language (EFL) writing courses. The students were asked
to revise their essays through dialogues with ChatGPT. ChEDDAR includes a
conversation log, utterance-level essay edit history, self-rated satisfaction,
and students' intent, in addition to session-level pre- and post-surveys
documenting their objectives and overall experiences. We analyze students'
usage patterns and perceptions regarding generative AI with respect to their
intent and satisfaction. As a foundational step, we establish baseline results
for two pivotal tasks in task-oriented dialogue systems within educational
contexts: intent detection and satisfaction estimation. We finally suggest
further research to refine the integration of generative AI into educational
settings, outlining potential scenarios utilizing ChEDDAR. ChEDDAR is publicly
available at https://github.com/zeunie/ChEDDAR.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 03:28:25 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Han",
"Jieun",
""
],
[
"Yoo",
"Haneul",
""
],
[
"Myung",
"Junho",
""
],
[
"Kim",
"Minsun",
""
],
[
"Lee",
"Tak Yeon",
""
],
[
"Ahn",
"So-Yeon",
""
],
[
"Oh",
"Alice",
""
]
] |
new_dataset
| 0.982109 |
2309.13274
|
Mingzhen Sun
|
Mingzhen Sun, Weining Wang, Zihan Qin, Jiahui Sun, Sihan Chen, Jing
Liu
|
GLOBER: Coherent Non-autoregressive Video Generation via GLOBal Guided
Video DecodER
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video generation necessitates both global coherence and local realism. This
work presents a novel non-autoregressive method GLOBER, which first generates
global features to obtain comprehensive global guidance and then synthesizes
video frames based on the global features to generate coherent videos.
Specifically, we propose a video auto-encoder, where a video encoder encodes
videos into global features, and a video decoder, built on a diffusion model,
decodes the global features and synthesizes video frames in a
non-autoregressive manner. To achieve maximum flexibility, our video decoder
perceives temporal information through normalized frame indexes, which enables
it to synthesize arbitrary sub-video clips with predetermined starting and
ending frame indexes. Moreover, a novel adversarial loss is introduced to
improve the global coherence and local realism between the synthesized video
frames. Finally, we employ a diffusion-based video generator to fit the global
features outputted by the video encoder for video generation. Extensive
experimental results demonstrate the effectiveness and efficiency of our
proposed method, and new state-of-the-art results have been achieved on
multiple benchmarks.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 06:04:57 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Sun",
"Mingzhen",
""
],
[
"Wang",
"Weining",
""
],
[
"Qin",
"Zihan",
""
],
[
"Sun",
"Jiahui",
""
],
[
"Chen",
"Sihan",
""
],
[
"Liu",
"Jing",
""
]
] |
new_dataset
| 0.959547 |
2309.13297
|
Siva Uday Sampreeth Chebolu
|
Siva Uday Sampreeth Chebolu and Franck Dernoncourt and Nedim Lipka and
Thamar Solorio
|
OATS: Opinion Aspect Target Sentiment Quadruple Extraction Dataset for
Aspect-Based Sentiment Analysis
|
Initial submission
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Aspect-Based Sentiment Analysis (ABSA) delves into understanding sentiments
specific to distinct elements within textual content. It aims to analyze
user-generated reviews to determine a) the target entity being reviewed, b) the
high-level aspect to which it belongs, c) the sentiment words used to express
the opinion, and d) the sentiment expressed toward the targets and the aspects.
While various benchmark datasets have fostered advancements in ABSA, they often
come with domain limitations and data granularity challenges. Addressing these,
we introduce the OATS dataset, which encompasses three fresh domains and
consists of 20,000 sentence-level quadruples and 13,000 review-level tuples.
Our initiative seeks to bridge specific observed gaps: the recurrent focus on
familiar domains like restaurants and laptops, limited data for intricate
quadruple extraction tasks, and an occasional oversight of the synergy between
sentence and review-level sentiments. Moreover, to elucidate OATS's potential
and shed light on various ABSA subtasks that OATS can solve, we conducted
in-domain and cross-domain experiments, establishing initial baselines. We hope
the OATS dataset augments current resources, paving the way for an encompassing
exploration of ABSA.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 07:39:16 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Chebolu",
"Siva Uday Sampreeth",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Lipka",
"Nedim",
""
],
[
"Solorio",
"Thamar",
""
]
] |
new_dataset
| 0.999541 |
2309.13318
|
Olga Zamaraeva
|
Olga Zamaraeva, Carlos G\'omez-Rodr\'iguez
|
Spanish Resource Grammar version 2023
|
10 pages, 4 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present the latest version of the Spanish Resource Grammar (SRG). The new
SRG uses the recent version of the Freeling morphological analyzer and tagger and
is accompanied by a manually verified treebank and a list of documented issues.
We also present the grammar's coverage and overgeneration on a small portion of
a learner corpus, an entirely new research line with respect to the SRG. The
grammar can be used for linguistic research, such as for empirically driven
development of syntactic theory, and in natural language processing
applications such as computer-assisted language learning. Finally, as the
treebanks grow, they can be used for training high-quality semantic parsers and
other systems which may benefit from precise and detailed semantics.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 09:24:05 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Zamaraeva",
"Olga",
""
],
[
"Gómez-Rodríguez",
"Carlos",
""
]
] |
new_dataset
| 0.998796 |
2309.13320
|
Amir Hossein Kargaran
|
Amir Hossein Kargaran, Fran\c{c}ois Yvon, Hinrich Sch\"utze
|
GlotScript: A Resource and Tool for Low Resource Writing System
Identification
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present GlotScript, an open resource and tool for low resource writing
system identification. GlotScript-R is a resource that provides the attested
writing systems for more than 7,000 languages. It is compiled by aggregating
information from existing writing system resources. GlotScript-T is a writing
system identification tool that covers all 161 Unicode 15.0 scripts. For an
input text, it returns its script distribution where scripts are identified by
ISO 15924 codes. We also present two use cases for GlotScript. First, we
demonstrate that GlotScript supports cleaning multilingual corpora such as mC4
and OSCAR. Second, we analyze the tokenization of a number of language models
such as GPT-4 using GlotScript and provide insights on the coverage of low
resource scripts and languages by each language model. We hope that GlotScript
will become a useful resource for work on low resource languages in the NLP
community. GlotScript-R and GlotScript-T are available at
https://github.com/cisnlp/GlotScript.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 09:35:55 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Kargaran",
"Amir Hossein",
""
],
[
"Yvon",
"François",
""
],
[
"Schütze",
"Hinrich",
""
]
] |
new_dataset
| 0.998496 |
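As a rough illustration of the kind of writing-system identification GlotScript-T performs (this sketch is not the GlotScript code; it uses Unicode character names from the standard library as a crude script proxy, whereas GlotScript reports ISO 15924 codes):

```python
# Illustrative sketch: approximate the script distribution of a text using the
# first word of each character's Unicode name (e.g. "LATIN", "DEVANAGARI").
import unicodedata
from collections import Counter

def rough_script_distribution(text):
    counts = Counter()
    for ch in text:
        if not ch.isalpha():
            continue
        name = unicodedata.name(ch, "")
        if name:
            counts[name.split()[0]] += 1
    total = sum(counts.values()) or 1
    return {script: n / total for script, n in counts.items()}

print(rough_script_distribution("hello नमस्ते"))  # mostly LATIN and DEVANAGARI
```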
2309.13345
|
Zican Dong
|
Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen
|
BAMBOO: A Comprehensive Benchmark for Evaluating Long Text Modeling
Capacities of Large Language Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have achieved dramatic proficiency on NLP
tasks of normal length. Recently, multiple studies have committed to
extending the context length and enhancing the long text modeling capabilities
of LLMs. To comprehensively evaluate the long context ability of LLMs, we
propose BAMBOO, a multi-task long context benchmark. BAMBOO has been designed
with four principles: comprehensive capacity evaluation, avoidance of data
contamination, accurate automatic evaluation, and different length levels. It
consists of 10 datasets from 5 different long text understanding tasks, i.e.,
question answering, hallucination detection, text sorting, language modeling,
and code completion, to cover core capacities and various domains of LLMs. We
conduct experiments with five long context models on BAMBOO and further discuss
four key research questions of long text. We also qualitatively analyze current
long context models and point out future directions for enhancing long text
modeling capacities. We release our data, prompts, and code at
https://github.com/RUCAIBox/BAMBOO.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 11:36:15 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Dong",
"Zican",
""
],
[
"Tang",
"Tianyi",
""
],
[
"Li",
"Junyi",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
new_dataset
| 0.991008 |
2309.13347
|
Sameer Pradhan
|
Sameer S. Pradhan and Ronald A. Cole and Wayne H. Ward
|
My Science Tutor (MyST) -- A Large Corpus of Children's Conversational
Speech
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This article describes the MyST corpus developed as part of the My Science
Tutor project -- one of the largest collections of children's conversational
speech comprising approximately 400 hours, spanning some 230K utterances across
about 10.5K virtual tutor sessions by around 1.3K third, fourth and fifth grade
students. 100K of all utterances have been transcribed thus far. The corpus is
freely available (https://myst.cemantix.org) for non-commercial use under a
Creative Commons license. It is also available for commercial use
(https://boulderlearning.com/resources/myst-corpus/). To date, ten
organizations have licensed the corpus for commercial use, and approximately 40
university and other not-for-profit research groups have downloaded the corpus.
It is our hope that the corpus can be used to improve automatic speech
recognition algorithms, build and evaluate conversational AI agents for
education, and together help accelerate development of multimodal applications
to improve children's excitement and learning about science, and help them
learn remotely.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 11:52:36 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Pradhan",
"Sameer S.",
""
],
[
"Cole",
"Ronald A.",
""
],
[
"Ward",
"Wayne H.",
""
]
] |
new_dataset
| 0.972714 |
2309.13354
|
Mohammad Zohair
|
Mohammad Kashif, Mohammad Zohair, Saquib Ali
|
Lexical Squad@Multimodal Hate Speech Event Detection 2023: Multimodal
Hate Speech Detection using Fused Ensemble Approach
|
8 pages, 5 figures, 4 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With a surge in the usage of social media postings to express opinions,
emotions, and ideologies, there has been a significant shift towards the
calibration of social media as a rapid medium of conveying viewpoints and
outlooks over the globe. Concurrently, the emergence of a multitude of
conflicts between two entities has given rise to a stream of social media
content containing propaganda, hate speech, and inconsiderate views. Thus, the
issue of monitoring social media postings is rising swiftly, attracting major
attention from those willing to solve such problems. One such problem is Hate
Speech detection. To mitigate this problem, we present our novel ensemble
learning approach for detecting hate speech, by classifying text-embedded
images into two labels, namely "Hate Speech" and "No Hate Speech". We have
incorporated state-of-the-art models including InceptionV3, BERT, and XLNet. Our
proposed ensemble model yielded promising results, with an accuracy of 75.21 and
an F1-score of 74.96. We also present an empirical evaluation
of the text-embedded images to elaborate on how well the model was able to
predict and classify. We release our codebase here
(https://github.com/M0hammad-Kashif/MultiModalHateSpeech).
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 12:06:05 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Kashif",
"Mohammad",
""
],
[
"Zohair",
"Mohammad",
""
],
[
"Ali",
"Saquib",
""
]
] |
new_dataset
| 0.994486 |
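The abstract describes a fused ensemble over image and text models; a minimal late-fusion sketch in PyTorch (purely illustrative, with hypothetical `image_model`/`text_model` modules and a fusion weight chosen by the user, not the authors' architecture) could average class probabilities:

```python
# Illustrative sketch: late fusion of an image branch and a text branch for
# binary hate-speech classification. `image_model` and `text_model` are
# hypothetical modules that each return logits of shape (batch, 2).
import torch

@torch.no_grad()
def fused_predict(image_model, text_model, image_batch, text_batch, w_img=0.5):
    img_probs = torch.softmax(image_model(image_batch), dim=-1)
    txt_probs = torch.softmax(text_model(text_batch), dim=-1)
    fused = w_img * img_probs + (1.0 - w_img) * txt_probs  # weighted average
    return fused.argmax(dim=-1)  # 0 = "No Hate Speech", 1 = "Hate Speech"
```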
2309.13362
|
Ramy Taki Eldin F.
|
Ramy Taki Eldin
|
Matrix product and quasi-twisted codes in one class
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many classical constructions, such as Plotkin's and Turyn's, were generalized
by matrix product (MP) codes. Quasi-twisted (QT) codes, on the other hand, form
an algebraically rich structure class that contains many codes with best-known
parameters. We significantly extend the definition of MP codes to establish a
broader class of generalized matrix product (GMP) codes that contains QT codes
as well. We propose a generator matrix formula for any linear GMP code and
provide a condition for determining the code size. We prove that any QT code
has a GMP structure. Then we show how to build a generator polynomial matrix
for a QT code from its GMP structure, and vice versa. Although the class of
QT codes contains many codes with best-known parameters, we present different
examples of GMP codes with best-known parameters that are neither MP nor QT.
Two different lower bounds on the minimum distance of GMP codes are presented;
they generalize their counterparts in the MP codes literature. The second
proposed lower bound replaces the non-singular by columns matrix with a less
restrictive condition. Some examples are provided for comparing the two
proposed bounds, as well as showing that these bounds are tight.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 12:56:53 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Eldin",
"Ramy Taki",
""
]
] |
new_dataset
| 0.998936 |
2309.13387
|
Vipin Gautam
|
Vipin Gautam, Shitala Prasad and Sharad Sinha
|
YOLORe-IDNet: An Efficient Multi-Camera System for Person-Tracking
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The growing need for video surveillance in public spaces has created a demand
for systems that can track individuals across multiple camera feeds in
real-time. While existing tracking systems have achieved impressive performance
using deep learning models, they often rely on pre-existing images of suspects
or historical data. However, this is not always feasible in cases where
suspicious individuals are identified in real-time and without prior knowledge.
We propose a person-tracking system that combines correlation filters and
Intersection Over Union (IOU) constraints for robust tracking, along with a
deep learning model for cross-camera person re-identification (Re-ID) on top of
YOLOv5. The proposed system quickly identifies and tracks a suspect in real time
across multiple cameras and recovers well after full or partial occlusion,
making it suitable for security and surveillance applications. It is
computationally efficient and achieves a high F1-Score of 79% and an IOU of 59%
comparable to existing state-of-the-art algorithms, as demonstrated in our
evaluation on a publicly available OTB-100 dataset. The proposed system offers
a robust and efficient solution for the real-time tracking of individuals
across multiple camera feeds. Its ability to track targets without prior
knowledge or historical data is a significant improvement over existing
systems, making it well-suited for public safety and surveillance applications.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 14:11:13 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Gautam",
"Vipin",
""
],
[
"Prasad",
"Shitala",
""
],
[
"Sinha",
"Sharad",
""
]
] |
new_dataset
| 0.994652 |
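The tracker above combines correlation filters with Intersection over Union (IOU) constraints; a minimal IoU computation for axis-aligned boxes (illustrative only, assuming an (x1, y1, x2, y2) box convention) is:

```python
# Illustrative sketch: IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```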
2309.13425
|
Shu Zhong
|
Han Cui, Shu Zhong, Jiacheng Wu, Zichao Shen, Naim Dahnoun, Yiren Zhao
|
MiliPoint: A Point Cloud Dataset for mmWave Radar
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Millimetre-wave (mmWave) radar has emerged as an attractive and
cost-effective alternative for human activity sensing compared to traditional
camera-based systems. mmWave radars are also non-intrusive, providing better
protection for user privacy. However, as a Radio Frequency (RF) based
technology, mmWave radars rely on capturing reflected signals from objects,
making them more prone to noise compared to cameras. This raises an intriguing
question for the deep learning community: Can we develop more effective point
set-based deep learning methods for such attractive sensors?
To answer this question, our work, termed MiliPoint, delves into this idea by
providing a large-scale, open dataset for the community to explore how mmWave
radars can be utilised for human activity recognition. Moreover, MiliPoint
stands out as it is larger in size than existing datasets, has more diverse
human actions represented, and encompasses all three key tasks in human
activity recognition. We have also established a range of point-based deep
neural networks such as DGCNN, PointNet++ and PointTransformer, on MiliPoint,
which can serve to set the ground baseline for further development.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 16:32:36 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Cui",
"Han",
""
],
[
"Zhong",
"Shu",
""
],
[
"Wu",
"Jiacheng",
""
],
[
"Shen",
"Zichao",
""
],
[
"Dahnoun",
"Naim",
""
],
[
"Zhao",
"Yiren",
""
]
] |
new_dataset
| 0.999723 |
2309.13446
|
Meng Liu
|
Meng Liu, Mingda Zhang, Jialu Liu, Hanjun Dai, Ming-Hsuan Yang,
Shuiwang Ji, Zheyun Feng, Boqing Gong
|
Video Timeline Modeling For News Story Understanding
|
Accepted as a spotlight by NeurIPS 2023, Track on Datasets and
Benchmarks
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a novel problem, namely video timeline modeling.
Our objective is to create a video-associated timeline from a set of videos
related to a specific topic, thereby facilitating the content and structure
understanding of the story being told. This problem has significant potential
in various real-world applications, such as news story summarization. To
bootstrap research in this area, we curate a realistic benchmark dataset,
YouTube-News-Timeline, consisting of over $12$k timelines and $300$k YouTube
news videos. Additionally, we propose a set of quantitative metrics as the
protocol to comprehensively evaluate and compare methodologies. With such a
testbed, we further develop and benchmark exploratory deep learning approaches
to tackle this problem. We anticipate that this exploratory work will pave the
way for further research in video timeline modeling. The assets are available
via
https://github.com/google-research/google-research/tree/master/video_timeline_modeling.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 18:24:15 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Liu",
"Meng",
""
],
[
"Zhang",
"Mingda",
""
],
[
"Liu",
"Jialu",
""
],
[
"Dai",
"Hanjun",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Ji",
"Shuiwang",
""
],
[
"Feng",
"Zheyun",
""
],
[
"Gong",
"Boqing",
""
]
] |
new_dataset
| 0.999432 |
2309.13473
|
Mohak Chadha
|
Mohak Chadha, Eishi Arima, Amir Raoofy, Michael Gerndt, Martin Schulz
|
Sustainability in HPC: Vision and Opportunities
|
Accepted at the ACM Sustainable Supercomputing Workshop in
conjunction with SC'23
| null | null | null |
cs.DC cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tackling climate change by reducing and eventually eliminating carbon
emissions is a significant milestone on the path toward establishing an
environmentally sustainable society. As we transition into the exascale era,
marked by an increasing demand and scale of HPC resources, the HPC community
must embrace the challenge of reducing carbon emissions from designing and
operating modern HPC systems. In this position paper, we describe challenges
and highlight different opportunities that can aid HPC sites in reducing the
carbon footprint of modern HPC systems.
|
[
{
"version": "v1",
"created": "Sat, 23 Sep 2023 20:13:40 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Chadha",
"Mohak",
""
],
[
"Arima",
"Eishi",
""
],
[
"Raoofy",
"Amir",
""
],
[
"Gerndt",
"Michael",
""
],
[
"Schulz",
"Martin",
""
]
] |
new_dataset
| 0.997557 |
2309.13509
|
Aya Watanabe
|
Aya Watanabe, Shinnosuke Takamichi, Yuki Saito, Wataru Nakata, Detai
Xin, Hiroshi Saruwatari
|
Coco-Nut: Corpus of Japanese Utterance and Voice Characteristics
Description for Prompt-based Control
|
Submitted to ASRU2023
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In text-to-speech, controlling voice characteristics is important in
achieving various-purpose speech synthesis. Considering the success of
text-conditioned generation, such as text-to-image, free-form text instruction
should be useful for intuitive and complicated control of voice
characteristics. A sufficiently large corpus of high-quality and diverse voice
samples with corresponding free-form descriptions can advance such control
research. However, neither an open corpus nor a scalable method is currently
available. To this end, we develop Coco-Nut, a new corpus including diverse
Japanese utterances, along with text transcriptions and free-form voice
characteristics descriptions. Our methodology to construct this corpus consists
of 1) automatic collection of voice-related audio data from the Internet, 2)
quality assurance, and 3) manual annotation using crowdsourcing. Additionally,
we benchmark our corpus on the prompt embedding model trained by contrastive
speech-text learning.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 00:15:31 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Watanabe",
"Aya",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Saito",
"Yuki",
""
],
[
"Nakata",
"Wataru",
""
],
[
"Xin",
"Detai",
""
],
[
"Saruwatari",
"Hiroshi",
""
]
] |
new_dataset
| 0.999526 |
2309.13523
|
JoonHo Lee
|
Amirreza Shaban, JoonHo Lee, Sanghun Jung, Xiangyun Meng, Byron Boots
|
LiDAR-UDA: Self-ensembling Through Time for Unsupervised LiDAR Domain
Adaptation
|
Accepted ICCV 2023 (Oral)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce LiDAR-UDA, a novel two-stage self-training-based Unsupervised
Domain Adaptation (UDA) method for LiDAR segmentation. Existing self-training
methods use a model trained on labeled source data to generate pseudo labels
for target data and refine the predictions via fine-tuning the network on the
pseudo labels. These methods suffer from domain shifts caused by different
LiDAR sensor configurations in the source and target domains. We propose two
techniques to reduce sensor discrepancy and improve pseudo label quality: 1)
LiDAR beam subsampling, which simulates different LiDAR scanning patterns by
randomly dropping beams; 2) cross-frame ensembling, which exploits temporal
consistency of consecutive frames to generate more reliable pseudo labels. Our
method is simple, generalizable, and does not incur any extra inference cost.
We evaluate our method on several public LiDAR datasets and show that it
outperforms the state-of-the-art methods by more than $3.9\%$ mIoU on average
for all scenarios. Code will be available at
https://github.com/JHLee0513/LiDARUDA.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 02:02:00 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Shaban",
"Amirreza",
""
],
[
"Lee",
"JoonHo",
""
],
[
"Jung",
"Sanghun",
""
],
[
"Meng",
"Xiangyun",
""
],
[
"Boots",
"Byron",
""
]
] |
new_dataset
| 0.995849 |
2309.13559
|
Nan Chen
|
Nan Chen, Fanze Kong, Haotian Li, Jiayuan Liu, Ziwei Ye, Wei Xu,
Fangcheng Zhu, Ximin Lyu, and Fu Zhang
|
Swashplateless-elevon Actuation for a Dual-rotor Tail-sitter VTOL UAV
|
8 pages, 13 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a novel swashplateless-elevon actuation (SEA) for
dual-rotor tail-sitter vertical takeoff and landing (VTOL) unmanned aerial
vehicles (UAVs). In contrast to the conventional elevon actuation (CEA) which
controls both pitch and yaw using elevons, the SEA adopts swashplateless
mechanisms to generate an extra moment through motor speed modulation to
control pitch and uses elevons solely for controlling yaw, without requiring
additional actuators. This decoupled control strategy mitigates the saturation
of elevons' deflection needed for large pitch and yaw control actions, thus
improving the UAV's trajectory tracking and disturbance rejection performance
in the presence of large external disturbances.
Furthermore, the SEA overcomes the actuation degradation issues experienced by
the CEA when the UAV is in close proximity to the ground, leading to a smoother
and more stable take-off process. We validate and compare the performances of
the SEA and the CEA in various real-world flight conditions, including
take-off, trajectory tracking, and hover flight and position steps under
external disturbance. Experimental results demonstrate that the SEA has better
performances than the CEA. Moreover, we verify the SEA's feasibility in the
attitude transition process and fixed-wing-mode flight of the VTOL UAV. The
results indicate that the SEA can accurately control pitch in the presence of
high-speed incoming airflow and maintain a stable attitude during fixed-wing
mode flight. Video of all experiments can be found at
youtube.com/watch?v=Sx9Rk4Zf7sQ
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 06:16:23 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Chen",
"Nan",
""
],
[
"Kong",
"Fanze",
""
],
[
"Li",
"Haotian",
""
],
[
"Liu",
"Jiayuan",
""
],
[
"Ye",
"Ziwei",
""
],
[
"Xu",
"Wei",
""
],
[
"Zhu",
"Fangcheng",
""
],
[
"Lyu",
"Ximin",
""
],
[
"Zhang",
"Fu",
""
]
] |
new_dataset
| 0.998048 |
2309.13561
|
Dean Ninalga
|
Dean Ninalga
|
Cordyceps@LT-EDI: Patching Language-Specific Homophobia/Transphobia
Classifiers with a Multilingual Understanding
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting transphobia, homophobia, and various other forms of hate speech is
difficult. Signals can vary depending on factors such as language, culture,
geographical region, and the particular online platform. Here, we present a
joint multilingual (M-L) and language-specific (L-S) approach to homophobia and
transphobic hate speech detection (HSD). M-L models are needed to catch words,
phrases, and concepts that are less common or missing in a particular language
and subsequently overlooked by L-S models. Nonetheless, L-S models are better
situated to understand the cultural and linguistic context of the users who
typically write in a particular language. Here we construct a simple and
successful way to merge the M-L and L-S approaches through simple weight
interpolation in a way that is interpretable and data-driven. We
demonstrate our system on task A of the 'Shared Task on Homophobia/Transphobia
Detection in social media comments' dataset for homophobia and transphobic HSD.
Our system achieves the best results in three of five languages and achieves a
0.997 macro average F1-score on Malayalam texts.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 06:37:54 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Ninalga",
"Dean",
""
]
] |
new_dataset
| 0.975171 |
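The merging step described above is simple weight interpolation between a multilingual (M-L) and a language-specific (L-S) model; a minimal sketch over PyTorch state dicts (illustrative only, assuming two checkpoints of the same architecture and a mixing coefficient `alpha` chosen on validation data) is:

```python
# Illustrative sketch: linearly interpolate the weights of a multilingual (M-L)
# and a language-specific (L-S) checkpoint of the same architecture.
import torch

def interpolate_state_dicts(sd_multilingual, sd_language_specific, alpha=0.5):
    merged = {}
    for key, w_ml in sd_multilingual.items():
        w_ls = sd_language_specific[key]
        merged[key] = alpha * w_ml + (1.0 - alpha) * w_ls
    return merged

# usage (hypothetical checkpoint file names):
# sd_ml = torch.load("multilingual.pt", map_location="cpu")
# sd_ls = torch.load("language_specific.pt", map_location="cpu")
# model.load_state_dict(interpolate_state_dicts(sd_ml, sd_ls, alpha=0.3))
```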
2309.13578
|
Giang Cao
|
Khoa Dang Nguyen, Thanh-Hai Phung, Hoang-Giang Cao
|
A SAM-based Solution for Hierarchical Panoptic Segmentation of Crops and
Weeds Competition
|
Technical report of NYCU-WEED team for the challenge of hierarchical
panoptic segmentation of crops and weeds using the PhenoBench dataset at the
8th Workshop on Computer Vision in Plant Phenotyping and Agriculture (CVPPA)
- International Conference on Computer Vision (ICCV) 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Panoptic segmentation in agriculture is an advanced computer vision technique
that provides a comprehensive understanding of field composition. It
facilitates various tasks such as crop and weed segmentation, plant panoptic
segmentation, and leaf instance segmentation, all aimed at addressing
challenges in agriculture. Exploring the application of panoptic segmentation
in agriculture, the 8th Workshop on Computer Vision in Plant Phenotyping and
Agriculture (CVPPA) hosted the challenge of hierarchical panoptic segmentation
of crops and weeds using the PhenoBench dataset. To tackle the tasks presented
in this competition, we propose an approach that combines the effectiveness of
the Segment Anything Model (SAM) for instance segmentation with prompt input
from object detection models. Specifically, we integrated two notable
approaches in object detection, namely DINO and YOLO-v8. Our best-performing
model achieved a PQ+ score of 81.33 based on the evaluation metrics of the
competition.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 08:34:12 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Nguyen",
"Khoa Dang",
""
],
[
"Phung",
"Thanh-Hai",
""
],
[
"Cao",
"Hoang-Giang",
""
]
] |
new_dataset
| 0.964674 |
2309.13596
|
Runkai Zhao
|
Runkai Zhao, Yuwen Heng, Yuanda Gao, Shilei Liu, Heng Wang, Changhao
Yao, Jiawen Chen, Weidong Cai
|
Advancements in 3D Lane Detection Using LiDAR Point Clouds: From Data
Collection to Model Development
|
7 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advanced Driver-Assistance Systems (ADAS) have successfully integrated
learning-based techniques into vehicle perception and decision-making. However,
their application in 3D lane detection for effective driving environment
perception is hindered by the lack of comprehensive LiDAR datasets. The sparse
nature of LiDAR point cloud data prevents an efficient manual annotation
process. To solve this problem, we present LiSV-3DLane, a large-scale 3D lane
dataset that comprises 20k frames of surround-view LiDAR point clouds with
enriched semantic annotation. Unlike existing datasets confined to a frontal
perspective, LiSV-3DLane provides a full 360-degree spatial panorama around the
ego vehicle, capturing complex lane patterns in both urban and highway
environments. We leverage the geometric traits of lane lines and the intrinsic
spatial attributes of LiDAR data to design a simple yet effective automatic
annotation pipeline for generating finer lane labels. To propel future
research, we propose a novel LiDAR-based 3D lane detection model, LiLaDet,
incorporating the spatial geometry learning of the LiDAR point cloud into
Bird's Eye View (BEV) based lane identification. Experimental results indicate
that LiLaDet outperforms existing camera- and LiDAR-based approaches in the 3D
lane detection task on the K-Lane dataset and our LiSV-3DLane.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 09:58:49 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Zhao",
"Runkai",
""
],
[
"Heng",
"Yuwen",
""
],
[
"Gao",
"Yuanda",
""
],
[
"Liu",
"Shilei",
""
],
[
"Wang",
"Heng",
""
],
[
"Yao",
"Changhao",
""
],
[
"Chen",
"Jiawen",
""
],
[
"Cai",
"Weidong",
""
]
] |
new_dataset
| 0.999489 |
2309.13600
|
Itamar Zimerman
|
Itamar Zimerman and Lior Wolf
|
Multi-Dimensional Hyena for Spatial Inductive Bias
|
10 pages, 3 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, Vision Transformers have attracted increasing interest from
computer vision researchers. However, the advantage of these transformers over
CNNs is only fully manifested when trained over a large dataset, mainly due to
the reduced inductive bias towards spatial locality within the transformer's
self-attention mechanism. In this work, we present a data-efficient vision
transformer that does not rely on self-attention. Instead, it employs a novel
generalization to multiple axes of the very recent Hyena layer. We propose
several alternative approaches for obtaining this generalization and delve into
their unique distinctions and considerations from both empirical and
theoretical perspectives.
Our empirical findings indicate that the proposed Hyena N-D layer boosts the
performance of various Vision Transformer architectures, such as ViT, Swin, and
DeiT across multiple datasets. Furthermore, in the small dataset regime, our
Hyena-based ViT compares favorably to ViT variants from the recent literature that
are specifically designed for solving the same challenge, i.e., working with
small datasets or incorporating image-specific inductive bias into the
self-attention mechanism. Finally, we show that a hybrid approach that is based
on Hyena N-D for the first layers in ViT, followed by layers that incorporate
conventional attention, consistently boosts the performance of various vision
transformer architectures.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 10:22:35 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Zimerman",
"Itamar",
""
],
[
"Wolf",
"Lior",
""
]
] |
new_dataset
| 0.995925 |
2309.13631
|
Deng Boyuan
|
Jingwei Li, Boyuan Deng, Xinyu Zhang, Kangyao Huang
|
6-DOF All-Terrain Cyclocopter
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the design of a 6-DOF all-terrain micro aerial vehicle
and two control strategies for multimodal flight, which are experimentally
validated. The micro aerial vehicle is propelled by four motors and controlled
by a single servo for the control of the cycloidal rotors(cyclorotors) speed
and lift direction. Despite the addition of the servo, the system remains
underactuated. To address the traditional underactuation problem of cycloidal
rotor aircraft, we increase the number of control variables. We propose a PID
and a nonlinear model predictive control (NMPC) framework to tackle the model's
nonlinearities and achieve control of attitude, position, and their
derivatives. Experimental results demonstrate the effectiveness of the proposed
multimodal control strategy for 6-DOF all-terrain micro aerial vehicles. The
vehicle can operate in aerial, terrestrial, and aquatic modes and can adapt to
different terrains and environmental conditions. Our approach enhances the
vehicle's performance in each mode of operation, and the results show the
advantages of the proposed strategy compared to other control strategies.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 13:13:13 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Li",
"Jingwei",
""
],
[
"Deng",
"Boyuan",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Huang",
"Kangyao",
""
]
] |
new_dataset
| 0.999676 |
2309.13646
|
Haoqing Li
|
Haoqing Li, Jinfu Yang, Runshi Wang, Yifei Xu
|
ILNet: Low-level Matters for Salient Infrared Small Target Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infrared small target detection is a technique for finding small targets in
cluttered infrared backgrounds. Due to the dearth of high-level semantic
information, small infrared target features are weakened in the deep layers of
the CNN, which limits the CNN's representation ability. To address the
above problem, in this paper, we propose an infrared low-level network (ILNet)
that considers infrared small targets as salient areas with little semantic
information. Unlike other SOTA methods, ILNet pays greater attention to
low-level information instead of treating them equally. A new lightweight
feature fusion module, named Interactive Polarized Orthogonal Fusion module
(IPOF), is proposed, which integrates more important low-level features from
the shallow layers into the deep layers. Dynamic One-Dimensional Aggregation
(DODA) layers are inserted into the IPOF to dynamically adjust the aggregation
of low dimensional information according to the number of input channels. In
addition, the idea of ensemble learning is used to design a Representative
Block (RB) to dynamically allocate weights for shallow and deep layers.
Experimental results on the challenging NUAA-SIRST (78.22% nIoU and 1.33e-6 Fa)
and IRSTD-1K (68.91% nIoU and 3.23e-6 Fa) dataset demonstrate that the proposed
ILNet achieves better performance than other SOTA methods. Moreover, ILNet
obtains a greater improvement as the data volume increases. Training
code is available at https://github.com/Li-Haoqing/ILNet.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 14:09:37 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Li",
"Haoqing",
""
],
[
"Yang",
"Jinfu",
""
],
[
"Wang",
"Runshi",
""
],
[
"Xu",
"Yifei",
""
]
] |
new_dataset
| 0.999047 |
2309.13676
|
Naimul Haque
|
Naimul Haque, Meraj Serker and Tariq Bin Bashar
|
BdSpell: A YOLO-based Real-time Finger Spelling System for Bangla Sign
Language
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In the domain of Bangla Sign Language (BdSL) interpretation, prior approaches
often imposed a burden on users, requiring them to spell words without hidden
characters, which were subsequently corrected using Bangla grammar rules due to
the missing classes in the BdSL36 dataset. However, this method posed a challenge
in accurately guessing the incorrect spelling of words. To address this
limitation, we propose a novel real-time finger spelling system based on the
YOLOv5 architecture. Our system employs specified rules and numerical classes
as triggers to efficiently generate hidden and compound characters, eliminating
the necessity for additional classes and significantly enhancing user
convenience. Notably, our approach achieves character spelling in an impressive
1.32 seconds with a remarkable accuracy rate of 98\%. Furthermore, our YOLOv5
model, trained on 9147 images, demonstrates an exceptional mean Average
Precision (mAP) of 96.4\%. These advancements represent a substantial
progression in augmenting BdSL interpretation, promising increased inclusivity
and accessibility for the linguistic minority. This innovative framework,
characterized by compatibility with existing YOLO versions, stands as a
transformative milestone in enhancing communication modalities and linguistic
equity within the Bangla Sign Language community.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 15:51:39 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Haque",
"Naimul",
""
],
[
"Serker",
"Meraj",
""
],
[
"Bashar",
"Tariq Bin",
""
]
] |
new_dataset
| 0.999791 |
2309.13679
|
Li-Fan Wu
|
Li-Fan Wu, Zihan Wang, Mo Rastgaar, Nina Mahmoudian
|
Neural Network-PSO-based Velocity Control Algorithm for Landing UAVs on
a Boat
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Precise landing of Unmanned Aerial Vehicles (UAVs) onto moving platforms like
Autonomous Surface Vehicles (ASVs) is both important and challenging,
especially in GPS-denied environments, for collaborative navigation of
heterogeneous vehicles. UAVs need to land within a confined space onboard ASV
to get energy replenishment, while ASV is subject to translational and
rotational disturbances due to wind and water flow. Current solutions either
rely on high-level waypoint navigation, which struggles to robustly land on
varied-speed targets, or necessitate laborious manual tuning of controller
parameters, and expensive sensors for target localization. Therefore, we
propose an adaptive velocity control algorithm that leverages Particle Swarm
Optimization (PSO) and Neural Network (NN) to optimize PID parameters across
varying flight altitudes and distinct speeds of a moving boat. The cost
function of PSO includes the status change rates of UAV and proximity to the
target. The NN further interpolates the PSO-founded PID parameters. The
proposed method, implemented on a water-strider hexacopter design, not only
ensures accuracy but also increases robustness. Moreover, this NN-PSO can be
readily adapted to suit various mission requirements. Its ability to achieve
precise landings extends its applicability to scenarios including but not
limited to rescue missions, package deliveries, and workspace inspections.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 16:05:31 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Wu",
"Li-Fan",
""
],
[
"Wang",
"Zihan",
""
],
[
"Rastgaar",
"Mo",
""
],
[
"Mahmoudian",
"Nina",
""
]
] |
new_dataset
| 0.997403 |
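The velocity controller above adapts PID gains via PSO and a neural network; a minimal discrete PID step (illustrative only, with placeholder gains that in the paper's setting would come from the NN-interpolated PSO search) looks like:

```python
# Illustrative sketch: one axis of a discrete PID velocity controller.
# In the paper's setting, kp/ki/kd would be supplied per flight altitude and
# boat speed by the NN interpolating PSO-found gains; here they are placeholders.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.2, ki=0.1, kd=0.05)                       # placeholder gains
cmd = pid.step(setpoint=0.0, measurement=0.3, dt=0.02)   # velocity command
```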
2309.13700
|
Yijun Yang
|
Yijun Yang, Angelica I. Aviles-Rivero, Huazhu Fu, Ye Liu, Weiming
Wang, Lei Zhu
|
Video Adverse-Weather-Component Suppression Network via Weather
Messenger and Adversarial Backpropagation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Although convolutional neural networks (CNNs) have been proposed to remove
adverse weather conditions in single images using a single set of pre-trained
weights, they fail to restore weather videos due to the absence of temporal
information. Furthermore, existing methods for removing adverse weather
conditions (e.g., rain, fog, and snow) from videos can only handle one type of
adverse weather. In this work, we propose the first framework for restoring
videos from all adverse weather conditions by developing a video
adverse-weather-component suppression network (ViWS-Net). To achieve this, we
first devise a weather-agnostic video transformer encoder with multiple
transformer stages. Moreover, we design a long short-term temporal modeling
mechanism for weather messenger to early fuse input adjacent video frames and
learn weather-specific information. We further introduce a weather
discriminator with gradient reversion, to maintain the weather-invariant common
information and suppress the weather-specific information in pixel features, by
adversarially predicting weather types. Finally, we develop a messenger-driven
video transformer decoder to retrieve the residual weather-specific feature,
which is spatiotemporally aggregated with hierarchical pixel features and
refined to predict the clean target frame of input videos. Experimental
results, on benchmark datasets and real-world weather videos, demonstrate that
our ViWS-Net outperforms current state-of-the-art methods in terms of restoring
videos degraded by any weather condition.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 17:13:55 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Yang",
"Yijun",
""
],
[
"Aviles-Rivero",
"Angelica I.",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Liu",
"Ye",
""
],
[
"Wang",
"Weiming",
""
],
[
"Zhu",
"Lei",
""
]
] |
new_dataset
| 0.982595 |
2309.13707
|
Kai Gao
|
Kai Gao, Yan Ding, Shiqi Zhang, Jingjin Yu
|
ORLA*: Mobile Manipulator-Based Object Rearrangement with Lazy A*
|
Submitted to ICRA 2024
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effectively performing object rearrangement is an essential skill for mobile
manipulators, e.g., setting up a dinner table or organizing a desk. A key
challenge in such problems is deciding an appropriate manipulation order for
objects to effectively untangle dependencies between objects while considering
the necessary motions for realizing the manipulations (e.g., pick and place).
To our knowledge, computing time-optimal multi-object rearrangement solutions
for mobile manipulators remains a largely untapped research direction. In this
research, we propose ORLA*, which leverages delayed (lazy) evaluation in
searching for a high-quality object pick and place sequence that considers both
end-effector and mobile robot base travel. ORLA* also supports multi-layered
rearrangement tasks considering pile stability using machine learning.
Employing an optimal solver for finding temporary locations for displacing
objects, ORLA* can achieve global optimality. Through extensive simulation and
ablation study, we confirm the effectiveness of ORLA* delivering quality
solutions for challenging rearrangement instances. Supplementary materials are
available at: https://gaokai15.github.io/ORLA-Star/
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 17:40:19 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Gao",
"Kai",
""
],
[
"Ding",
"Yan",
""
],
[
"Zhang",
"Shiqi",
""
],
[
"Yu",
"Jingjin",
""
]
] |
new_dataset
| 0.99468 |
2309.13745
|
Hao Wang
|
Hao Wang, Omkar Salunkhe, Walter Quadrini, Dan L\"amkull, Fredrik Ore,
Bj\"orn Johansson, Johan Stahre
|
Computer Vision Technology for Robotized Wire Harness Assembly
|
This paper has been accepted by CIRP CMS 2023. The information of the
published version will be updated later
| null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Wire harnesses are essential hardware for electronic systems in modern
automotive vehicles. With a shift in the automotive industry towards
electrification and autonomous driving, more and more automotive electronics
are responsible for energy transmission and safety-critical functions such as
maneuvering, driver assistance, and safety systems. This paradigm shift places
more demand on automotive wiring harnesses from the safety perspective and
stresses the greater importance of high-quality wire harness assembly in
vehicles. However, most of the current operations of wire harness assembly are
still performed manually by skilled workers, and some of the manual processes
are problematic from different perspectives, such as quality control and
ergonomics. There is also a persistent demand in the industry to increase
competitiveness and gain market share. Hence, assuring assembly quality while
improving ergonomics and optimizing labor costs is desired. Robotized assembly,
accomplished by robots or in human-robot collaboration, is a key enabler for
fulfilling the increasingly demanding quality and safety as it enables more
replicable, transparent, and comprehensible processes than completely manual
operations. However, robotized assembly of wire harnesses is challenging in
real environments due to the flexibility of the deformable objects, though many
preliminary automation solutions have been proposed under simplified industrial
configurations. Previous research efforts have proposed the use of computer
vision technology to facilitate robotized automation of wire harness assembly,
enabling the robots to better perceive and manipulate the flexible wire
harness. This article presents an overview of computer vision technology
proposed for robotized wire harness assembly and derives research gaps that
require further study to facilitate a more practical robotized assembly of wire
harnesses.
|
[
{
"version": "v1",
"created": "Sun, 24 Sep 2023 20:28:19 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Wang",
"Hao",
""
],
[
"Salunkhe",
"Omkar",
""
],
[
"Quadrini",
"Walter",
""
],
[
"Lämkull",
"Dan",
""
],
[
"Ore",
"Fredrik",
""
],
[
"Johansson",
"Björn",
""
],
[
"Stahre",
"Johan",
""
]
] |
new_dataset
| 0.991802 |
2309.13842
|
Xin Zheng
|
Xin Zheng, Jianke Zhu
|
Traj-LO: In Defense of LiDAR-Only Odometry Using an Effective
Continuous-Time Trajectory
|
Video https://youtu.be/hbtKzElYKkQ?si=3KEVy0hlHBsKV8j0 and Project
site https://github.com/kevin2431/Traj-LO
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR Odometry is an essential component in many robotic applications. Unlike
mainstream approaches that focus on improving accuracy with additional
inertial sensors, this letter explores the capability of LiDAR-only
odometry through a continuous-time perspective. Firstly, the measurements of
LiDAR are regarded as streaming points continuously captured at high frequency.
Secondly, the LiDAR movement is parameterized by a simple yet effective
continuous-time trajectory. Therefore, our proposed Traj-LO approach tries to
recover the spatial-temporal consistent movement of LiDAR by tightly coupling
the geometric information from LiDAR points and kinematic constraints from
trajectory smoothness. This framework is generalized for different kinds of
LiDAR as well as multi-LiDAR systems. Extensive experiments on the public
datasets demonstrate the robustness and effectiveness of our proposed
LiDAR-only approach, even in scenarios where the kinematic state exceeds the
IMU's measuring range. Our implementation is open-sourced on GitHub.
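As a rough illustration of the continuous-time idea only (not the paper's actual trajectory parameterization), the sketch below interpolates a pose between two control poses and uses it to express each streaming point at its own capture timestamp; the timestamps, control poses, and the linear/Slerp interpolation scheme are assumptions made for the example.

```python
# Minimal sketch of a piecewise continuous-time trajectory used for per-point
# "deskewing" of streaming LiDAR points (assumed values throughout).
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Two control poses bracketing a short scan segment.
t0, t1 = 0.00, 0.10                              # seconds
p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])
rots = Rotation.from_euler("z", [0.0, 5.0], degrees=True)
slerp = Slerp([t0, t1], rots)

def pose_at(t):
    """Rotation via Slerp, translation via linear interpolation at time t."""
    alpha = (t - t0) / (t1 - t0)
    return slerp([t])[0], (1.0 - alpha) * p0 + alpha * p1

def deskew(points, stamps):
    """Express each streaming point in the world frame at its own timestamp."""
    out = []
    for pt, ts in zip(points, stamps):
        R_t, p_t = pose_at(ts)
        out.append(R_t.apply(pt) + p_t)
    return np.asarray(out)

pts = np.random.rand(5, 3)                       # fake streaming LiDAR points
stamps = np.linspace(t0, t1, 5)                  # per-point capture times
print(deskew(pts, stamps))
```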
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 03:05:06 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Zheng",
"Xin",
""
],
[
"Zhu",
"Jianke",
""
]
] |
new_dataset
| 0.994879 |
2309.13853
|
Xunzhao Yin
|
Xunzhao Yin, Yu Qian, Alptekin Vardar, Marcel Gunther, Franz Muller,
Nellie Laleni, Zijian Zhao, Zhouhang Jiang, Zhiguo Shi, Yiyu Shi, Xiao Gong,
Cheng Zhuo, Thomas Kampfe, Kai Ni
|
A Ferroelectric Compute-in-Memory Annealer for Combinatorial
Optimization Problems
|
39 pages, 12 figures
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computationally hard combinatorial optimization problems (COPs) are
ubiquitous in many applications, including logistical planning, resource
allocation, chip design, drug explorations, and more. Due to their critical
significance and the inability of conventional hardware to handle scaled COPs
efficiently, there is a growing interest in developing computing hardware
tailored specifically for COPs, including digital annealers, dynamical Ising
machines, and quantum/photonic systems. However, significant hurdles still
remain, such as the memory access issue, the system scalability and restricted
applicability to certain types of COPs, and VLSI-incompatibility, respectively.
Here, a ferroelectric field effect transistor (FeFET) based compute-in-memory
(CiM) annealer is proposed. After converting COPs into quadratic unconstrained
binary optimization (QUBO) formulations, a hardware-algorithm co-design is
conducted, yielding an energy-efficient, versatile, and scalable hardware for
COPs. To accelerate the core vector-matrix-vector (VMV) multiplication of QUBO
formulations, a FeFET based CiM array is exploited, which can accelerate the
intended operation in-situ due to its unique three-terminal structure. In
particular, a lossless compression technique is proposed to prune the typically
sparse QUBO matrix and reduce hardware cost. Furthermore, a multi-epoch
simulated annealing (MESA) algorithm is proposed to replace conventional
simulated annealing for its faster convergence and better solution quality. The
effectiveness of the proposed techniques is validated through the utilization
of developed chip prototypes for successfully solving the graph coloring
problem, indicating the great promise of the FeFET CiM annealer in solving
general COPs.
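For orientation, the software-only sketch below evaluates the QUBO energy x^T Q x (the vector-matrix-vector product the CiM array accelerates in-memory) and runs plain simulated annealing on a random instance; the problem size, cooling schedule, and random matrix are assumptions, and the paper's MESA algorithm and pruning scheme are not reproduced.

```python
# Plain simulated annealing on a random QUBO instance (software sketch only).
import numpy as np

rng = np.random.default_rng(0)
n = 16
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                                   # symmetric QUBO matrix

def energy(x, Q):
    return x @ Q @ x                                # vector-matrix-vector product

x = rng.integers(0, 2, size=n).astype(float)        # binary state
T, cooling = 2.0, 0.95
best, best_e = x.copy(), energy(x, Q)
for step in range(2000):
    i = rng.integers(n)
    x_new = x.copy()
    x_new[i] = 1.0 - x_new[i]                       # single-bit flip
    dE = energy(x_new, Q) - energy(x, Q)
    if dE < 0 or rng.random() < np.exp(-dE / T):    # Metropolis acceptance
        x = x_new
        if energy(x, Q) < best_e:
            best, best_e = x.copy(), energy(x, Q)
    T = max(T * cooling, 1e-3)                      # geometric cooling with a floor
print(best_e, best)
```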
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 03:46:19 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Yin",
"Xunzhao",
""
],
[
"Qian",
"Yu",
""
],
[
"Vardar",
"Alptekin",
""
],
[
"Gunther",
"Marcel",
""
],
[
"Muller",
"Franz",
""
],
[
"Laleni",
"Nellie",
""
],
[
"Zhao",
"Zijian",
""
],
[
"Jiang",
"Zhouhang",
""
],
[
"Shi",
"Zhiguo",
""
],
[
"Shi",
"Yiyu",
""
],
[
"Gong",
"Xiao",
""
],
[
"Zhuo",
"Cheng",
""
],
[
"Kampfe",
"Thomas",
""
],
[
"Ni",
"Kai",
""
]
] |
new_dataset
| 0.998009 |
2309.13909
|
Zhenglin Chen
|
Qianyun Zhu, Yifeng Xie, Fangyang Ye, Zhenyuan Gao, Binjie Che,
Zhenglin Chen, Dongmei Yu
|
Chinese herb medicine in augmented reality
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Augmented reality is gradually becoming popular in education, as it provides a
contextual and adaptive learning experience. Here, we develop a Chinese herb
medicine AR platform based on 3ds Max and Unity that allows users to visualize
and interact with the herb model and learn the related information. Users scan
the 2D herb picture with their mobile camera to trigger the presentation of the
3D AR model and the corresponding text information on the screen in real time.
The system shows good performance and has high accuracy for the identification
of herbal medicine after interference and occlusion tests. Users can interact
with the herb AR model through rotation, scaling, and view transformations,
which effectively enhances learners' interest in Chinese herb medicine.
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 07:12:58 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Zhu",
"Qianyun",
""
],
[
"Xie",
"Yifeng",
""
],
[
"Ye",
"Fangyang",
""
],
[
"Gao",
"Zhenyuan",
""
],
[
"Che",
"Binjie",
""
],
[
"Chen",
"Zhenglin",
""
],
[
"Yu",
"Dongmei",
""
]
] |
new_dataset
| 0.998718 |
2309.13920
|
Alberto Pacheco-Gonzalez
|
Alberto Pacheco-Gonzalez, Raymundo Torres, Raul Chacon, Isidro Robledo
|
Real-Time Emergency Vehicle Detection using Mel Spectrograms and Regular
Expressions
|
in Spanish language
| null | null | null |
cs.SD cs.SC eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In emergency situations, the movement of vehicles through city streets can be
problematic due to vehicular traffic. This paper presents a method for
detecting emergency vehicle sirens in real time. To derive a siren Hi-Lo audio
fingerprint, it was necessary to apply digital signal processing techniques and
signal symbolization, contrasting the result against a deep neural network
audio classifier trained on 280 environmental sounds and 38 Hi-Lo sirens. The
precision of both methods was evaluated using a confusion matrix and various
metrics. The developed DSP algorithm showed a greater ability to discriminate
between signal and noise than the CNN model.
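A hypothetical sketch of the symbolization-plus-regular-expression idea follows: mel-spectrogram frames are mapped to an H/L alphabet and a Hi-Lo alternation is matched with a regex. The synthetic test tone, the librosa dependency, the band decision rule, and the pattern lengths are assumptions for illustration, not the fingerprint used in the paper.

```python
import re
import numpy as np
import librosa

# Synthesize a 2-second two-tone "Hi-Lo" signal: 0.5 s at 440 Hz, 0.5 s at 660 Hz.
sr = 16000
t = np.arange(2 * sr) / sr
freq = np.where((t % 1.0) < 0.5, 440.0, 660.0)
y = np.sin(2 * np.pi * np.cumsum(freq) / sr)

S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64, hop_length=512)

# Symbolize each frame by its dominant mel bin relative to the clip median.
dominant = S.argmax(axis=0)
threshold = np.median(dominant)
symbols = "".join("H" if b > threshold else "L" for b in dominant)

# A Hi-Lo siren alternates sustained high and low tones; require long runs.
siren_pattern = re.compile(r"H{8,}L{8,}|L{8,}H{8,}")
print(symbols)
print("siren-like alternation found:", bool(siren_pattern.search(symbols)))
```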
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 07:40:19 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Pacheco-Gonzalez",
"Alberto",
""
],
[
"Torres",
"Raymundo",
""
],
[
"Chacon",
"Raul",
""
],
[
"Robledo",
"Isidro",
""
]
] |
new_dataset
| 0.957336 |
2309.13925
|
Tongtong Yuan
|
Tongtong Yuan, Xuange Zhang, Kun Liu, Bo Liu, Jian Jin, Zhenzhen Jiao
|
UCF-Crime Annotation: A Benchmark for Surveillance Video-and-Language
Understanding
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Surveillance videos are an essential component of daily life with various
critical applications, particularly in public security. However, current
surveillance video tasks mainly focus on classifying and localizing anomalous
events. Although existing methods have obtained considerable performance, they
are limited to detecting and classifying predefined events, with unsatisfactory
generalization ability and semantic understanding. To address
this issue, we propose constructing the first multimodal surveillance video
dataset by manually annotating the real-world surveillance dataset UCF-Crime
with fine-grained event content and timing. Our newly annotated dataset, UCA
(UCF-Crime Annotation), provides a novel benchmark for multimodal surveillance
video analysis. It not only describes events in detailed descriptions but also
provides precise temporal grounding of the events in 0.1-second intervals. UCA
contains 20,822 sentences, with an average length of 23 words, and its
annotated videos are as long as 102 hours. Furthermore, we benchmark the
state-of-the-art models of multiple multimodal tasks on this newly created
dataset, including temporal sentence grounding in videos, video captioning, and
dense video captioning. Through our experiments, we found that mainstream
models used in previously publicly available datasets perform poorly on
multimodal surveillance video scenarios, which highlights the necessity of
constructing this dataset. The link to our dataset and code is provided at:
https://github.com/Xuange923/UCA-dataset.
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 07:46:56 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Yuan",
"Tongtong",
""
],
[
"Zhang",
"Xuange",
""
],
[
"Liu",
"Kun",
""
],
[
"Liu",
"Bo",
""
],
[
"Jin",
"Jian",
""
],
[
"Jiao",
"Zhenzhen",
""
]
] |
new_dataset
| 0.997346 |
2309.13952
|
Antoine Yang
|
Antoine Yang, Arsha Nagrani, Ivan Laptev, Josef Sivic, Cordelia Schmid
|
VidChapters-7M: Video Chapters at Scale
|
Accepted at NeurIPS 2023 Track on Datasets and Benchmarks; Project
Webpage: https://antoyang.github.io/vidchapters.html ; 31 pages; 8 figures
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segmenting long videos into chapters enables users to quickly navigate to the
information of their interest. This important topic has been understudied due
to the lack of publicly released datasets. To address this issue, we present
VidChapters-7M, a dataset of 817K user-chaptered videos including 7M chapters
in total. VidChapters-7M is automatically created from videos online in a
scalable manner by scraping user-annotated chapters and hence without any
additional manual annotation. We introduce the following three tasks based on
this data. First, the video chapter generation task consists of temporally
segmenting the video and generating a chapter title for each segment. To
further dissect the problem, we also define two variants of this task: video
chapter generation given ground-truth boundaries, which requires generating a
chapter title given an annotated video segment, and video chapter grounding,
which requires temporally localizing a chapter given its annotated title. We
benchmark both simple baselines and state-of-the-art video-language models for
these three tasks. We also show that pretraining on VidChapters-7M transfers
well to dense video captioning tasks in both zero-shot and finetuning settings,
largely improving the state of the art on the YouCook2 and ViTT benchmarks.
Finally, our experiments reveal that downstream performance scales well with
the size of the pretraining dataset. Our dataset, code, and models are publicly
available at https://antoyang.github.io/vidchapters.html.
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 08:38:11 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Yang",
"Antoine",
""
],
[
"Nagrani",
"Arsha",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Sivic",
"Josef",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.999872 |
2309.13962
|
Jyoti Kini
|
Jyoti Kini, Sarah Fleischer, Ishan Dave, Mubarak Shah
|
Egocentric RGB+Depth Action Recognition in Industry-Like Settings
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Action recognition from an egocentric viewpoint is a crucial perception task
in robotics and enables a wide range of human-robot interactions. While most
computer vision approaches prioritize the RGB camera, the Depth modality -
which can further amplify the subtleties of actions from an egocentric
perspective - remains underexplored. Our work focuses on recognizing actions
from egocentric RGB and Depth modalities in an industry-like environment. To
study this problem, we consider the recent MECCANO dataset, which provides a
wide range of assembling actions. Our framework is based on the 3D Video SWIN
Transformer to encode both RGB and Depth modalities effectively. To address the
inherent skewness in real-world multimodal action occurrences, we propose a
training strategy using an exponentially decaying variant of the focal loss
modulating factor. Additionally, to leverage the information in both RGB and
Depth modalities, we opt for late fusion to combine the predictions from each
modality. We thoroughly evaluate our method on the action recognition task of
the MECCANO dataset, and it significantly outperforms the prior work. Notably,
our method also secured first place at the multimodal action recognition
challenge at ICIAP 2023.
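The snippet below contrasts the standard focal loss with one plausible exponentially decaying variant of its modulating factor; the exact decay form, the hyperparameters, and the 61-class MECCANO label space are assumptions rather than the paper's reported formulation.

```python
# Standard focal loss vs. an exponentially decaying modulating factor (sketch).
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    log_p = F.log_softmax(logits, dim=-1)
    p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()  # prob. of true class
    factor = (1.0 - p_t) ** gamma                                  # polynomial modulation
    return (-factor * torch.log(p_t + 1e-8)).mean()

def exp_focal_loss(logits, targets, k=3.0):
    log_p = F.log_softmax(logits, dim=-1)
    p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    factor = torch.exp(-k * p_t)                                   # exponential decay in p_t
    return (-factor * torch.log(p_t + 1e-8)).mean()

logits = torch.randn(8, 61)             # 8 clips, 61 action classes (assumed)
targets = torch.randint(0, 61, (8,))
print(focal_loss(logits, targets).item(), exp_focal_loss(logits, targets).item())
```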
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 08:56:22 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Kini",
"Jyoti",
""
],
[
"Fleischer",
"Sarah",
""
],
[
"Dave",
"Ishan",
""
],
[
"Shah",
"Mubarak",
""
]
] |
new_dataset
| 0.995397 |
2309.13979
|
Gordana Dodig-Crnkovic
|
Gordana Dodig-Crnkovic
|
Morphological Computing as Logic Underlying Cognition in Human, Animal,
and Intelligent Machine
|
20 pages, no figures
| null | null | null |
cs.OH cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work examines the interconnections between logic, epistemology, and
sciences within the Naturalist tradition. It presents a scheme that connects
logic, mathematics, physics, chemistry, biology, and cognition, emphasizing
scale-invariant, self-organizing dynamics across organizational tiers of
nature. The inherent logic of agency exists in natural processes at various
levels, under information exchanges. It applies to humans, animals, and
artifactual agents. The common human-centric, natural language-based logic is
an example of complex logic evolved by living organisms that already appears in
the simplest form at the level of basal cognition of unicellular organisms.
Thus, cognitive logic stems from the evolution of physical, chemical, and
biological logic. In a computing nature framework with a self-organizing
agency, innovative computational frameworks grounded in
morphological/physical/natural computation can be used to explain the genesis
of human-centered logic through the steps of naturalized logical processes at
lower levels of organization. The Extended Evolutionary Synthesis of living
agents is essential for understanding the emergence of human-level logic and
the relationship between logic and information processing/computational
epistemology. We conclude that more research is needed to elucidate the details
of the mechanisms linking natural phenomena with the logic of agency in nature.
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 09:31:25 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Dodig-Crnkovic",
"Gordana",
""
]
] |
new_dataset
| 0.991477 |
2309.14030
|
Yiqun Duan
|
Yiqun Duan, Jinzhao Zhou, Zhen Wang, Yu-Kai Wang, Chin-Teng Lin
|
DeWave: Discrete EEG Waves Encoding for Brain Dynamics to Text
Translation
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The translation of brain dynamics into natural language is pivotal for
brain-computer interfaces (BCIs), a field that has seen substantial growth in
recent years. With the swift advancement of large language models, such as
ChatGPT, the need to bridge the gap between the brain and languages becomes
increasingly pressing. Current methods, however, require eye-tracking fixations
or event markers to segment brain dynamics into word-level features, which can
restrict the practical application of these systems. These event markers may
not be readily available or could be challenging to acquire during real-time
inference, and the sequence of eye fixations may not align with the order of
spoken words. To tackle these issues, we introduce a novel framework, DeWave,
that integrates discrete encoding sequences into open-vocabulary EEG-to-text
translation tasks. DeWave uses a quantized variational encoder to derive
discrete codex encoding and align it with pre-trained language models. This
discrete codex representation brings forth two advantages: 1) it alleviates the
order mismatch between eye fixations and spoken words by introducing text-EEG
contrastive alignment training, and 2) it minimizes the interference caused by
individual differences in EEG waves through an invariant discrete codex. Our
model surpasses the previous baseline (40.1 and 31.7) by 3.06% and 6.34%,
respectively, achieving 41.35 BLEU-1 and 33.71 Rouge-F on the ZuCo Dataset.
Furthermore, this work is the first to facilitate the translation of entire EEG
signal periods without needing word-level order markers (e.g., eye fixations),
scoring 20.5 BLEU-1 and 29.5 Rouge-1 on the ZuCo Dataset, respectively. Code
and the final paper will be made public soon.
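As a minimal sketch of the discrete codex idea (a full quantized variational encoder is more involved), the code below snaps continuous EEG features to their nearest codebook entries with a straight-through gradient; the codebook size and feature dimension are assumptions, not DeWave's actual settings.

```python
# Nearest-neighbor codebook lookup with a straight-through estimator (sketch).
import torch
import torch.nn as nn

class DiscreteCodex(nn.Module):
    def __init__(self, num_codes=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                            # z: (batch, seq, dim) EEG features
        flat = z.reshape(-1, z.shape[-1])
        d = torch.cdist(flat, self.codebook.weight)  # distances to all codes
        idx = d.argmin(dim=-1)                       # nearest code index
        quantized = self.codebook(idx).reshape_as(z)
        # Straight-through: forward uses codes, backward passes gradients to z.
        return z + (quantized - z).detach(), idx.reshape(z.shape[:-1])

codex = DiscreteCodex()
z = torch.randn(2, 10, 256)                          # fake encoder output
z_q, codes = codex(z)
print(z_q.shape, codes.shape)
```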
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 10:52:28 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Duan",
"Yiqun",
""
],
[
"Zhou",
"Jinzhao",
""
],
[
"Wang",
"Zhen",
""
],
[
"Wang",
"Yu-Kai",
""
],
[
"Lin",
"Chin-Teng",
""
]
] |
new_dataset
| 0.987617 |
2309.14092
|
Alexandre Goossens
|
Alexandre Goossens, Adrian Rebmann, Johannes De Smedt, Jan Vanthienen,
Han van der Aa
|
From OCEL to DOCEL -- Datasets and Automated Transformation
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Object-centric event data represent processes from the point of view of all
the involved object types. This perspective has gained interest in recent years
as it supports the analysis of processes that previously could not be
adequately captured, due to the lack of a clear case notion as well as an
increasing amount of output data that needs to be stored. Although publicly
available event logs are crucial artifacts for researchers to develop and
evaluate novel process mining techniques, the currently available
object-centric event logs have limitations in this regard. Specifically, they
mainly focus on control-flow and rarely contain objects with attributes that
change over time, which is not realistic, as the attribute values of objects
can be altered during their lifecycle. This paper addresses this gap by
providing two means of establishing object-centric datasets with dynamically
evolving attributes. First, we provide event log generators, which allow
researchers to generate customized, artificial logs with dynamic attributes in
the recently proposed DOCEL format. Second, we propose and evaluate an
algorithm to convert OCEL logs into DOCEL logs, which involves the detection of
event attributes that capture evolving object information and the creation of
dynamic attributes from these. Through these contributions, this paper supports
the advancement of object-centric process analysis by providing researchers
with new means to obtain relevant data to use during the development of new
techniques.
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 12:31:50 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Goossens",
"Alexandre",
""
],
[
"Rebmann",
"Adrian",
""
],
[
"De Smedt",
"Johannes",
""
],
[
"Vanthienen",
"Jan",
""
],
[
"van der Aa",
"Han",
""
]
] |
new_dataset
| 0.987295 |
2309.14118
|
Vinitra Swamy
|
Vinitra Swamy, Malika Satayeva, Jibril Frej, Thierry Bossy, Thijs
Vogels, Martin Jaggi, Tanja K\"aser, Mary-Anne Hartley
|
MultiModN- Multimodal, Multi-Task, Interpretable Modular Networks
|
Accepted as a full paper at NeurIPS 2023 in New Orleans, USA
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Predicting multiple real-world tasks in a single model often requires a
particularly diverse feature space. Multimodal (MM) models aim to extract the
synergistic predictive potential of multiple data types to create a shared
feature space with aligned semantic meaning across inputs of drastically
varying sizes (i.e. images, text, sound). Most current MM architectures fuse
these representations in parallel, which not only limits their interpretability
but also creates a dependency on modality availability. We present MultiModN, a
multimodal, modular network that fuses latent representations in a sequence of
any number, combination, or type of modality while providing granular real-time
predictive feedback on any number or combination of predictive tasks.
MultiModN's composable pipeline is interpretable-by-design, as well as innately
multi-task and robust to the fundamental issue of biased missingness. We
perform four experiments on several benchmark MM datasets across 10 real-world
tasks (predicting medical diagnoses, academic performance, and weather), and
show that MultiModN's sequential MM fusion does not compromise performance
compared with a baseline of parallel fusion. By simulating the challenging bias
of missing not-at-random (MNAR), this work shows that, contrary to MultiModN,
parallel fusion baselines erroneously learn MNAR and suffer catastrophic
failure when faced with different patterns of MNAR at inference. To the best of
our knowledge, this is the first inherently MNAR-resistant approach to MM
modeling. In conclusion, MultiModN provides granular insights, robustness, and
flexibility without compromising performance.
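A toy sketch of sequential modular fusion is given below: a shared state is updated one modality at a time, and a task head can read out a prediction after any prefix, so missing modalities are simply skipped. Module types, dimensions, and the single task head are assumptions, not MultiModN's actual architecture.

```python
# Sequential, modality-by-modality fusion over a shared state (toy sketch).
import torch
import torch.nn as nn

state_dim = 32
encoders = nn.ModuleDict({
    "image":   nn.Linear(128 + state_dim, state_dim),
    "text":    nn.Linear(64 + state_dim, state_dim),
    "tabular": nn.Linear(16 + state_dim, state_dim),
})
task_head = nn.Linear(state_dim, 2)              # one of possibly many task heads

def fuse(inputs):                                # inputs: dict of available modalities
    state = torch.zeros(1, state_dim)
    for name, x in inputs.items():               # sequential update of the state
        state = torch.tanh(encoders[name](torch.cat([x, state], dim=-1)))
        print(name, "->", task_head(state).softmax(-1))   # intermediate feedback
    return task_head(state)

# Works with any subset of modalities, e.g. text is missing below.
fuse({"image": torch.randn(1, 128), "tabular": torch.randn(1, 16)})
```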
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 13:16:57 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Swamy",
"Vinitra",
""
],
[
"Satayeva",
"Malika",
""
],
[
"Frej",
"Jibril",
""
],
[
"Bossy",
"Thierry",
""
],
[
"Vogels",
"Thijs",
""
],
[
"Jaggi",
"Martin",
""
],
[
"Käser",
"Tanja",
""
],
[
"Hartley",
"Mary-Anne",
""
]
] |
new_dataset
| 0.974662 |
2309.14185
|
Denis Pankratov
|
Hovhannes A. Harutyunyan, Kamran Koupayi, Denis Pankratov
|
Temporal Separators with Deadlines
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study temporal analogues of the Unrestricted Vertex Separator problem from
the static world. An $(s,z)$-temporal separator is a set of vertices whose
removal disconnects vertex $s$ from vertex $z$ for every time step in a
temporal graph. The $(s,z)$-Temporal Separator problem asks to find the minimum
size of an $(s,z)$-temporal separator for the given temporal graph. We
introduce a generalization of this problem called the $(s,z,t)$-Temporal
Separator problem, where the goal is to find a smallest subset of vertices
whose removal eliminates all temporal paths from $s$ to $z$ which take less
than $t$ time steps. Let $\tau$ denote the number of time steps over which the
temporal graph is defined (we consider discrete time steps). We characterize
the set of parameters $\tau$ and $t$ when the problem is $\mathcal{NP}$-hard
and when it is polynomial time solvable. Then we present a $\tau$-approximation
algorithm for the $(s,z)$-Temporal Separator problem and convert it to a
$\tau^2$-approximation algorithm for the $(s,z,t)$-Temporal Separator problem.
We also present an inapproximability lower bound of $\Omega(\ln(n) +
\ln(\tau))$ for the $(s,z,t)$-Temporal Separator problem assuming that
$\mathcal{NP}\not\subset\mbox{\sc Dtime}(n^{\log\log n})$. Then we consider
three special families of graphs: (1) graphs of branchwidth at most $2$, (2)
graphs $G$ such that the removal of $s$ and $z$ leaves a tree, and (3) graphs
of bounded pathwidth. We present polynomial-time algorithms to find a minimum
$(s,z,t)$-temporal separator for (1) and (2). As for (3), we show a
polynomial-time reduction from the Discrete Segment Covering problem with
bounded-length segments to the $(s,z,t)$-Temporal Separator problem where the
temporal graph has bounded pathwidth.
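For intuition, the sketch below checks whether removing a candidate vertex set eliminates every temporal s-z path of duration below t, by searching over (vertex, start time, current time) states; the strict-increase semantics and duration convention are assumptions chosen for illustration, and none of the paper's algorithms are reproduced.

```python
# Illustrative check of the separator property under assumed temporal-path
# semantics: edge labels strictly increase along a path, and its duration is
# (last label - first label + 1).
from collections import deque

def has_fast_temporal_path(edges, s, z, t, removed):
    """edges: iterable of (u, v, time) undirected temporal edges. Returns True
    if a temporal s-z path of duration < t survives removal of `removed`."""
    removed = set(removed)
    if s in removed or z in removed:
        return False
    by_time = {}
    for u, v, tau in edges:
        by_time.setdefault(tau, []).append((u, v))
    times = sorted(by_time)
    queue = deque((s, tau, tau - 1) for tau in times)   # (vertex, start, now)
    seen = set()
    while queue:
        v, start, now = queue.popleft()
        if v == z and now >= start and now - start + 1 < t:
            return True
        for tau in times:
            if tau <= now or tau - start + 1 >= t:
                continue
            for a, b in by_time[tau]:
                for x, y in ((a, b), (b, a)):
                    if x == v and y not in removed and (y, start, tau) not in seen:
                        seen.add((y, start, tau))
                        queue.append((y, start, tau))
    return False

edges = [(1, 2, 1), (2, 3, 2), (1, 4, 1), (4, 3, 5)]
print(has_fast_temporal_path(edges, 1, 3, 3, removed={2}))  # False: only the slow 1-4-3 path remains
```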
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 14:46:54 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Harutyunyan",
"Hovhannes A.",
""
],
[
"Koupayi",
"Kamran",
""
],
[
"Pankratov",
"Denis",
""
]
] |
new_dataset
| 0.991416 |
2309.14217
|
Edgar Martinez-Moro
|
Maryam Bajalan, Javier de la Cruz, Alexandre Fotue-Tabue, Edgar
Mart\'inez-Moro
|
On LCP codes over a mixed ring alphabet
|
Submitted to Discrete Mathematics
| null | null | null |
cs.IT math.IT math.RA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we introduce a standard generator matrix for mixed-alphabet
linear codes over finite chain rings. Furthermore, we show that, when one has a
linear complementary pair (LCP) of mixed-alphabet linear codes, both codes are
weakly-free. Additionally, we establish that any mixed-alphabet product group
code is separable. Thus, if one has a pair $\{C, D\}$ of mixed-alphabet product
group codes over a finite chain ring that forms a LCP, it follows that $C$ and
the Euclidean dual of $D$ are permutation equivalent.
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 15:22:58 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Bajalan",
"Maryam",
""
],
[
"de la Cruz",
"Javier",
""
],
[
"Fotue-Tabue",
"Alexandre",
""
],
[
"Martínez-Moro",
"Edgar",
""
]
] |
new_dataset
| 0.999549 |
2309.14293
|
Saeejith Nair
|
Saeejith Nair, Yuhao Chen, Mohammad Javad Shafiee, Alexander Wong
|
NAS-NeRF: Generative Neural Architecture Search for Neural Radiance
Fields
|
9 pages
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural radiance fields (NeRFs) enable high-quality novel view synthesis, but
their prohibitively high computational complexity limits deployability,
especially on resource-constrained platforms. To enable practical usage of
NeRFs, quality tuning is essential to reduce computational complexity, akin to
adjustable graphics settings in video games. However, while existing solutions
strive for efficiency, they use one-size-fits-all architectures regardless of
scene complexity, even though the same architecture may be unnecessarily large
for simple scenes yet insufficient for complex ones. Thus, as NeRFs become more
widely used for 3D visualization, there is a need to dynamically optimize the
neural network component of NeRFs to achieve a balance between computational
complexity and specific targets for synthesis quality. Addressing this gap, we
introduce NAS-NeRF: a generative neural architecture search strategy uniquely
tailored to generate NeRF architectures on a per-scene basis by optimizing the
trade-off between complexity and performance, while adhering to constraints on
computational budget and minimum synthesis quality. Our experiments on the
Blender synthetic dataset show the proposed NAS-NeRF can generate architectures
up to 5.74$\times$ smaller, with 4.19$\times$ fewer FLOPs, and 1.93$\times$
faster on a GPU than baseline NeRFs, without suffering a drop in SSIM.
Furthermore, we illustrate that NAS-NeRF can also achieve architectures up to
23$\times$ smaller, 22$\times$ fewer FLOPs, and 4.7$\times$ faster than
baseline NeRFs with only a 5.3\% average SSIM drop. The source code for our
work is also made publicly available at
https://saeejithnair.github.io/NAS-NeRF.
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 17:04:30 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Nair",
"Saeejith",
""
],
[
"Chen",
"Yuhao",
""
],
[
"Shafiee",
"Mohammad Javad",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.984243 |
2309.14320
|
Rutav Shah
|
Rutav Shah, Roberto Mart\'in-Mart\'in, Yuke Zhu
|
MUTEX: Learning Unified Policies from Multimodal Task Specifications
|
Accepted at 7th Conference on Robot Learning (CoRL 2023), Atlanta,
USA
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Humans use different modalities, such as speech, text, images, videos, etc.,
to communicate their intent and goals with teammates. For robots to become
better assistants, we aim to endow them with the ability to follow instructions
and understand tasks specified by their human partners. Most robotic policy
learning methods have focused on one single modality of task specification
while ignoring the rich cross-modal information. We present MUTEX, a unified
approach to policy learning from multimodal task specifications. It trains a
transformer-based architecture to facilitate cross-modal reasoning, combining
masked modeling and cross-modal matching objectives in a two-stage training
procedure. After training, MUTEX can follow a task specification in any of the
six learned modalities (video demonstrations, goal images, text goal
descriptions, text instructions, speech goal descriptions, and speech
instructions) or a combination of them. We systematically evaluate the benefits
of MUTEX in a newly designed dataset with 100 tasks in simulation and 50 tasks
in the real world, annotated with multiple instances of task specifications in
different modalities, and observe improved performance over methods trained
specifically for any single modality. More information at
https://ut-austin-rpl.github.io/MUTEX/
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 17:45:31 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Shah",
"Rutav",
""
],
[
"Martín-Martín",
"Roberto",
""
],
[
"Zhu",
"Yuke",
""
]
] |
new_dataset
| 0.997723 |
2309.14341
|
Deepak Pathak
|
Xuxin Cheng, Kexin Shi, Ananye Agarwal, Deepak Pathak
|
Extreme Parkour with Legged Robots
|
Website and videos at https://extreme-parkour.github.io/
| null | null | null |
cs.RO cs.AI cs.CV cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans can perform parkour by traversing obstacles in a highly dynamic
fashion requiring precise eye-muscle coordination and movement. Getting robots
to do the same task requires overcoming similar challenges. Classically, this
is done by independently engineering perception, actuation, and control systems
to very low tolerances. This restricts them to tightly controlled settings such
as a predetermined obstacle course in labs. In contrast, humans are able to
learn parkour through practice without significantly changing their underlying
biology. In this paper, we take a similar approach to developing robot parkour
on a small low-cost robot with imprecise actuation and a single front-facing
depth camera for perception which is low-frequency, jittery, and prone to
artifacts. We show how a single neural net policy operating directly from a
camera image, trained in simulation with large-scale RL, can overcome imprecise
sensing and actuation to output highly precise control behavior end-to-end. We
show our robot can perform a high jump on obstacles 2x its height, long jump
across gaps 2x its length, do a handstand and run across tilted ramps, and
generalize to novel obstacle courses with different physical properties.
Parkour videos at https://extreme-parkour.github.io/
|
[
{
"version": "v1",
"created": "Mon, 25 Sep 2023 17:59:55 GMT"
}
] | 2023-09-26T00:00:00 |
[
[
"Cheng",
"Xuxin",
""
],
[
"Shi",
"Kexin",
""
],
[
"Agarwal",
"Ananye",
""
],
[
"Pathak",
"Deepak",
""
]
] |
new_dataset
| 0.986398 |
2012.02420
|
Junyu Luo
|
Junyu Luo, Zifei Zheng, Hanzhong Ye, Muchao Ye, Yaqing Wang, Quanzeng
You, Cao Xiao and Fenglong Ma
|
Benchmarking Automated Clinical Language Simplification: Dataset,
Algorithm, and Evaluation
|
COLING 2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Patients with low health literacy usually have difficulty understanding
medical jargon and the complex structure of professional medical language.
Although some studies have proposed to automatically translate expert language
into layperson-understandable language, only a few of them focus on both
accuracy and readability simultaneously in the clinical domain. Thus,
simplification of clinical language remains a challenging task that,
unfortunately, has not been fully addressed in previous work. To benchmark
this task, we construct a new dataset named MedLane to support the development
and evaluation of automated clinical language simplification approaches.
Besides, we propose a new model called DECLARE that follows the human
annotation procedure and achieves state-of-the-art performance compared with
eight strong baselines. To fairly evaluate the performance, we also propose
three specific evaluation metrics. Experimental results demonstrate the utility
of the annotated MedLane dataset and the effectiveness of the proposed model
DECLARE.
|
[
{
"version": "v1",
"created": "Fri, 4 Dec 2020 06:09:02 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 20:53:33 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Luo",
"Junyu",
""
],
[
"Zheng",
"Zifei",
""
],
[
"Ye",
"Hanzhong",
""
],
[
"Ye",
"Muchao",
""
],
[
"Wang",
"Yaqing",
""
],
[
"You",
"Quanzeng",
""
],
[
"Xiao",
"Cao",
""
],
[
"Ma",
"Fenglong",
""
]
] |
new_dataset
| 0.992532 |
2109.10981
|
Jack Lutz
|
Jack H. Lutz, Renrui Qi, Liang Yu
|
The Point-to-Set Principle and the Dimensions of Hamel Bases
| null | null | null | null |
cs.LO math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove that every real number in [0,1] is the Hausdorff dimension of a
Hamel basis of the vector space of reals over the field of rationals.
The logic of our proof is of particular interest. The statement of our
theorem is classical; it does not involve the theory of computing. However, our
proof makes essential use of algorithmic fractal dimension--a
computability-theoretic construct--and the point-to-set principle of J. Lutz
and N. Lutz (2018).
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 18:51:45 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Sep 2021 12:06:15 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Sep 2023 03:07:06 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Sep 2023 21:04:12 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Lutz",
"Jack H.",
""
],
[
"Qi",
"Renrui",
""
],
[
"Yu",
"Liang",
""
]
] |
new_dataset
| 0.955514 |
2202.13370
|
Mima Stanojkovski
|
Mima Stanojkovski
|
Submodule codes as spherical codes in buildings
|
21 pages, revision including the referees' suggestions, to appear in
Designs, Codes and Cryptography
|
Des. Codes Cryptogr. 91, 2449-2472 (2023)
|
10.1007/s10623-023-01207-7
| null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give a generalization of subspace codes by means of codes of modules over
finite commutative chain rings. We define a new class of Sperner codes and use
results from extremal combinatorics to prove the optimality of such codes in
different cases. Moreover, we explain the connection with Bruhat-Tits buildings
and show how our codes are the buildings' analogue of spherical codes in the
Euclidean sense.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 14:27:36 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 15:53:53 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Feb 2023 13:09:17 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Stanojkovski",
"Mima",
""
]
] |
new_dataset
| 0.999234 |
2205.07667
|
Tobias Boege
|
Tobias Boege
|
Selfadhesivity in Gaussian conditional independence structures
|
13 pages; v3: minor revision
| null |
10.1016/j.ijar.2023.109027
| null |
cs.IT math.CO math.IT math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Selfadhesivity is a property of entropic polymatroids which guarantees that
the polymatroid can be glued to an identical copy of itself along arbitrary
restrictions such that the two pieces are independent given the common
restriction. We show that positive definite matrices satisfy this condition as
well and examine consequences for Gaussian conditional independence structures.
New axioms of Gaussian CI are obtained by applying selfadhesivity to the
previously known axioms of structural semigraphoids and orientable gaussoids.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 13:33:01 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jan 2023 16:58:11 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Sep 2023 15:11:40 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Boege",
"Tobias",
""
]
] |
new_dataset
| 0.99466 |
2206.12447
|
Harshit Kumar
|
Harshit Kumar, Biswadeep Chakraborty, Sudarshan Sharma, Saibal
Mukhopadhyay
|
XMD: An Expansive Hardware-telemetry based Mobile Malware Detector to
enhance Endpoint Detection
|
Revised version based on peer review feedback. Manuscript to appear
in IEEE Transactions on Information Forensics and Security
| null |
10.1109/TIFS.2023.3318969
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Hardware-based Malware Detectors (HMDs) have shown promise in detecting
malicious workloads. However, the current HMDs focus solely on the CPU core of
a System-on-Chip (SoC) and, therefore, do not exploit the full potential of the
hardware telemetry. In this paper, we propose XMD, an HMD that uses an
expansive set of telemetry channels extracted from the different subsystems of
SoC. XMD exploits the thread-level profiling power of the CPU-core telemetry,
and the global profiling power of non-core telemetry channels, to achieve
significantly better detection performance than currently used Hardware
Performance Counter (HPC) based detectors. We leverage the concept of manifold
hypothesis to analytically prove that adding non-core telemetry channels
improves the separability of the benign and malware classes, resulting in
performance gains. We train and evaluate XMD using hardware telemetries
collected from 723 benign applications and 1033 malware samples on a commodity
Android Operating System (OS)-based mobile device. XMD improves over currently
used HPC-based detectors by 32.91% for the in-distribution test data. XMD
achieves the best detection performance of 86.54% with a false positive rate of
2.9%, compared to the detection rate of 80%, offered by the best performing
signature-based Anti-Virus(AV) on VirusTotal, on the same set of malware
samples.
|
[
{
"version": "v1",
"created": "Fri, 24 Jun 2022 18:17:02 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 21:34:26 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Sep 2023 19:10:41 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Kumar",
"Harshit",
""
],
[
"Chakraborty",
"Biswadeep",
""
],
[
"Sharma",
"Sudarshan",
""
],
[
"Mukhopadhyay",
"Saibal",
""
]
] |
new_dataset
| 0.997934 |
2210.13124
|
Jan Wichelmann
|
Jan Wichelmann, Anna P\"atschke, Luca Wilke, Thomas Eisenbarth
|
Cipherfix: Mitigating Ciphertext Side-Channel Attacks in Software
|
Jan Wichelmann and Anna P\"atschke contributed equally to this work
|
32nd USENIX Security Symposium, USENIX Security 2023
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trusted execution environments (TEEs) provide an environment for running
workloads in the cloud without having to trust cloud service providers, by
offering additional hardware-assisted security guarantees. However, main memory
encryption as a key mechanism to protect against system-level attackers trying
to read the TEE's content and physical, off-chip attackers, is insufficient.
The recent Cipherleaks attacks infer secret data from TEE-protected
implementations by analyzing ciphertext patterns exhibited due to deterministic
memory encryption. The underlying vulnerability, dubbed the ciphertext
side-channel, is neither protected by state-of-the-art countermeasures like
constant-time code nor by hardware fixes.
Thus, in this paper, we present a software-based, drop-in solution that can
harden existing binaries such that they can be safely executed under TEEs
vulnerable to ciphertext side-channels, without requiring recompilation. We
combine taint tracking with both static and dynamic binary instrumentation to
find sensitive memory locations, and mitigate the leakage by masking secret
data before it gets written to memory. This way, although the memory encryption
remains deterministic, we destroy any secret-dependent patterns in encrypted
memory. We show that our proof-of-concept implementation protects various
constant-time implementations against ciphertext side-channels with reasonable
overhead.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 11:18:16 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Mar 2023 15:29:10 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Wichelmann",
"Jan",
""
],
[
"Pätschke",
"Anna",
""
],
[
"Wilke",
"Luca",
""
],
[
"Eisenbarth",
"Thomas",
""
]
] |
new_dataset
| 0.999335 |
2211.09445
|
Tam\'as Matuszka PhD
|
Tam\'as Matuszka, Iv\'an Barton, \'Ad\'am Butykai, P\'eter Hajas,
D\'avid Kiss, Domonkos Kov\'acs, S\'andor Kuns\'agi-M\'at\'e, P\'eter
Lengyel, G\'abor N\'emeth, Levente Pet\H{o}, Dezs\H{o} Ribli, D\'avid Szeghy,
Szabolcs Vajna, B\'alint Varga
|
aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving
with Long-Range Perception
|
The paper was accepted to ICLR 2023 Workshop Scene Representations
for Autonomous Driving
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Autonomous driving is a popular research area within the computer vision
research community. Since autonomous vehicles are highly safety-critical,
ensuring robustness is essential for real-world deployment. While several
public multimodal datasets are accessible, they mainly comprise two sensor
modalities (camera, LiDAR) which are not well suited for adverse weather. In
addition, they lack far-range annotations, making it harder to train neural
networks that are the base of a highway assistant function of an autonomous
vehicle. Therefore, we introduce a multimodal dataset for robust autonomous
driving with long-range perception. The dataset consists of 176 scenes with
synchronized and calibrated LiDAR, camera, and radar sensors covering a
360-degree field of view. The collected data was captured in highway, urban,
and suburban areas during daytime, night, and rain and is annotated with 3D
bounding boxes with consistent identifiers across frames. Furthermore, we
trained unimodal and multimodal baseline models for 3D object detection. Data
are available at \url{https://github.com/aimotive/aimotive_dataset}.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 10:19:59 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2023 15:06:19 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Sep 2023 09:57:03 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Matuszka",
"Tamás",
""
],
[
"Barton",
"Iván",
""
],
[
"Butykai",
"Ádám",
""
],
[
"Hajas",
"Péter",
""
],
[
"Kiss",
"Dávid",
""
],
[
"Kovács",
"Domonkos",
""
],
[
"Kunsági-Máté",
"Sándor",
""
],
[
"Lengyel",
"Péter",
""
],
[
"Németh",
"Gábor",
""
],
[
"Pető",
"Levente",
""
],
[
"Ribli",
"Dezső",
""
],
[
"Szeghy",
"Dávid",
""
],
[
"Vajna",
"Szabolcs",
""
],
[
"Varga",
"Bálint",
""
]
] |
new_dataset
| 0.999817 |
2303.06614
|
Cong Lu
|
Cong Lu, Philip J. Ball, Yee Whye Teh, Jack Parker-Holder
|
Synthetic Experience Replay
|
Published at NeurIPS, 2023
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key theme in the past decade has been that when large neural networks and
large datasets combine they can produce remarkable results. In deep
reinforcement learning (RL), this paradigm is commonly made possible through
experience replay, whereby a dataset of past experiences is used to train a
policy or value function. However, unlike in supervised or self-supervised
learning, an RL agent has to collect its own data, which is often limited.
Thus, it is challenging to reap the benefits of deep learning, and even small
neural networks can overfit at the start of training. In this work, we leverage
the tremendous recent progress in generative modeling and propose Synthetic
Experience Replay (SynthER), a diffusion-based approach to flexibly upsample an
agent's collected experience. We show that SynthER is an effective method for
training RL agents across offline and online settings, in both proprioceptive
and pixel-based environments. In offline settings, we observe drastic
improvements when upsampling small offline datasets and see that additional
synthetic data also allows us to effectively train larger networks.
Furthermore, SynthER enables online agents to train with a much higher
update-to-data ratio than before, leading to a significant increase in sample
efficiency, without any algorithmic changes. We believe that synthetic training
data could open the door to realizing the full potential of deep learning for
replay-based RL algorithms from limited data. Finally, we open-source our code
at https://github.com/conglu1997/SynthER.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 09:10:45 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 22:39:46 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Sep 2023 12:41:39 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Lu",
"Cong",
""
],
[
"Ball",
"Philip J.",
""
],
[
"Teh",
"Yee Whye",
""
],
[
"Parker-Holder",
"Jack",
""
]
] |
new_dataset
| 0.993236 |
2303.07035
|
Shuchang Shen
|
Shuchang Shen, Sachith Seneviratne, Xinye Wanyan, Michael Kirley
|
FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with
Benchmarks Using Supervised and Self-supervised Learning
|
10 pages, 6 figures, 1 table, 1 equation
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent decades, wildfires, as widespread and extremely destructive natural
disasters, have caused tremendous property losses and fatalities, as well as
extensive damage to forest ecosystems. Many fire risk assessment projects have
been proposed to prevent wildfires, but GIS-based methods are inherently
challenging to scale to different geographic areas due to variations in data
collection and local conditions. Inspired by the abundance of publicly
available remote sensing projects and the burgeoning development of deep
learning in computer vision, our research focuses on assessing fire risk using
remote sensing imagery.
In this work, we propose a novel remote sensing dataset, FireRisk, consisting
of 7 fire risk classes with a total of 91872 labelled images for fire risk
assessment. This remote sensing dataset is labelled with the fire risk classes
supplied by the Wildfire Hazard Potential (WHP) raster dataset, and remote
sensing images are collected using the National Agriculture Imagery Program
(NAIP), a high-resolution remote sensing imagery program. On FireRisk, we
present benchmark performance for supervised and self-supervised
representations, with Masked Autoencoders (MAE) pre-trained on ImageNet1k
achieving the highest classification accuracy, 65.29%.
This remote sensing dataset, FireRisk, provides a new direction for fire risk
assessment, and we make it publicly available on
https://github.com/CharmonyShen/FireRisk.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 11:54:16 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 19:06:47 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Shen",
"Shuchang",
""
],
[
"Seneviratne",
"Sachith",
""
],
[
"Wanyan",
"Xinye",
""
],
[
"Kirley",
"Michael",
""
]
] |
new_dataset
| 0.999899 |
2305.00969
|
Arsenii Gorin
|
David Budaghyan, Charles C. Onu, Arsenii Gorin, Cem Subakan, Doina
Precup
|
CryCeleb: A Speaker Verification Dataset Based on Infant Cry Sounds
|
Submitted to ICASSP 2024
| null | null | null |
cs.SD cs.AI cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the Ubenwa CryCeleb dataset - a labeled collection of
infant cries - and the accompanying CryCeleb 2023 task, which is a public
speaker verification challenge based on cry sounds. We released more than 6
hours of manually segmented cry sounds from 786 newborns for academic use,
aiming to encourage research in infant cry analysis. The inaugural public
competition attracted 59 participants, 11 of whom improved the baseline
performance. The top-performing system achieved a significant improvement
scoring 25.8% equal error rate, which is still far from the performance of
state-of-the-art adult speaker verification systems. Therefore, we believe
there is room for further research on this dataset, potentially extending
beyond the verification task.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 17:56:32 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2023 19:42:44 GMT"
},
{
"version": "v3",
"created": "Mon, 15 May 2023 17:48:54 GMT"
},
{
"version": "v4",
"created": "Fri, 25 Aug 2023 12:54:35 GMT"
},
{
"version": "v5",
"created": "Thu, 21 Sep 2023 20:02:37 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Budaghyan",
"David",
""
],
[
"Onu",
"Charles C.",
""
],
[
"Gorin",
"Arsenii",
""
],
[
"Subakan",
"Cem",
""
],
[
"Precup",
"Doina",
""
]
] |
new_dataset
| 0.999897 |
2305.08138
|
Subhashis Banerjee
|
Prashant Agrawal, Abhinav Nakarmi, Mahavir Prasad Jhawar, Subodh
Sharma, and Subhashis Banerjee
|
Traceable mixnets
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the notion of traceable mixnets. In a traditional mixnet,
multiple mix-servers jointly permute and decrypt a list of ciphertexts to
produce a list of plaintexts, along with a proof of correctness, such that the
association between individual ciphertexts and plaintexts remains completely
hidden. However, in many applications, the privacy-utility tradeoff requires
answering some specific queries about this association, without revealing any
information beyond the query result. We consider queries of the following type:
a) given a ciphertext in the mixnet input list, whether it encrypts one of a
given subset of plaintexts in the output list, and b) given a plaintext in the
mixnet output list, whether it is a decryption of one of a given subset of
ciphertexts in the input list. Traceable mixnets allow the mix-servers to
jointly prove answers to the above queries to a querier such that neither the
querier nor a threshold number of mix-servers learn any information beyond the
query result. Further, if the querier is not corrupted, the corrupted
mix-servers do not even learn the query result. We first comprehensively
formalise these security properties of traceable mixnets and then propose a
construction of traceable mixnets using novel distributed zero-knowledge proofs
(ZKPs) of set membership and of a statement we call reverse set membership.
Although set membership has been studied in the single-prover setting, the main
challenge in our distributed setting lies in making sure that none of the
mix-servers learn the association between ciphertexts and plaintexts during the
proof. We implement our distributed ZKPs and show that they are faster than
state-of-the-art by at least one order of magnitude.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 12:18:59 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 05:14:06 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Agrawal",
"Prashant",
""
],
[
"Nakarmi",
"Abhinav",
""
],
[
"Jhawar",
"Mahavir Prasad",
""
],
[
"Sharma",
"Subodh",
""
],
[
"Banerjee",
"Subhashis",
""
]
] |
new_dataset
| 0.994595 |
2306.01966
|
Tatsuya Aoyama
|
Tatsuya Aoyama, Shabnam Behzad, Luke Gessler, Lauren Levine, Jessica
Lin, Yang Janet Liu, Siyao Peng, Yilun Zhu, Amir Zeldes
|
GENTLE: A Genre-Diverse Multilayer Challenge Set for English NLP and
Linguistic Evaluation
|
Camera-ready for LAW-XVII collocated with ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present GENTLE, a new mixed-genre English challenge corpus totaling 17K
tokens and consisting of 8 unusual text types for out-of-domain evaluation:
dictionary entries, esports commentaries, legal documents, medical notes,
poetry, mathematical proofs, syllabuses, and threat letters. GENTLE is manually
annotated for a variety of popular NLP tasks, including syntactic dependency
parsing, entity recognition, coreference resolution, and discourse parsing. We
evaluate state-of-the-art NLP systems on GENTLE and find severe degradation for
at least some genres in their performance on all tasks, which indicates
GENTLE's utility as an evaluation dataset for NLP systems.
|
[
{
"version": "v1",
"created": "Sat, 3 Jun 2023 00:20:15 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 03:31:17 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Aoyama",
"Tatsuya",
""
],
[
"Behzad",
"Shabnam",
""
],
[
"Gessler",
"Luke",
""
],
[
"Levine",
"Lauren",
""
],
[
"Lin",
"Jessica",
""
],
[
"Liu",
"Yang Janet",
""
],
[
"Peng",
"Siyao",
""
],
[
"Zhu",
"Yilun",
""
],
[
"Zeldes",
"Amir",
""
]
] |
new_dataset
| 0.999057 |
2306.06446
|
Haoran You
|
Haoran You, Huihong Shi, Yipin Guo, Yingyan (Celine) Lin
|
ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient
Vision Transformer
|
Accepted by NeurIPS 2023
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vision Transformers (ViTs) have shown impressive performance and have become
a unified backbone for multiple vision tasks. But both attention and
multi-layer perceptrons (MLPs) in ViTs are not efficient enough due to dense
multiplications, resulting in costly training and inference. To this end, we
propose to reparameterize the pre-trained ViT with a mixture of multiplication
primitives, e.g., bitwise shifts and additions, towards a new type of
multiplication-reduced model, dubbed $\textbf{ShiftAddViT}$, which aims for
end-to-end inference speedups on GPUs without the need of training from
scratch. Specifically, all $\texttt{MatMuls}$ among queries, keys, and values
are reparameterized by additive kernels, after mapping queries and keys to
binary codes in Hamming space. The remaining MLPs or linear layers are then
reparameterized by shift kernels. We utilize TVM to implement and optimize
those customized kernels for practical hardware deployment on GPUs. We find
that such a reparameterization on (quadratic or linear) attention maintains
model accuracy, while inevitably leading to accuracy drops when being applied
to MLPs. To marry the best of both worlds, we further propose a new mixture of
experts (MoE) framework to reparameterize MLPs by taking multiplication or its
primitives as experts, e.g., multiplication and shift, and designing a new
latency-aware load-balancing loss. Such a loss helps to train a generic router
for assigning a dynamic amount of input tokens to different experts according
to their latency. In principle, the faster an expert runs, the more input
tokens it is assigned. Extensive experiments consistently validate the
effectiveness of our proposed ShiftAddViT, achieving up to
$\textbf{5.18$\times$}$ latency reductions on GPUs and $\textbf{42.9%}$ energy
savings, while maintaining comparable accuracy to original or efficient ViTs.
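To illustrate the shift primitive only (the TVM/GPU kernels, Hamming-space attention, and MoE router are not shown), the sketch below rounds pretrained weights to signed powers of two so that each multiplication could be realized as a sign flip plus a bit shift; layer sizes and the rounding rule are assumptions.

```python
# Reparameterize a linear layer's weights as signed powers of two (sketch).
import numpy as np

def to_shift_params(w, eps=1e-8):
    sign = np.sign(w)
    exponent = np.round(np.log2(np.abs(w) + eps)).astype(int)  # power-of-two exponent
    return sign, exponent

def shift_linear(x, sign, exponent):
    # y = x @ (sign * 2**exponent); 2**exponent is a bit shift in fixed-point hardware.
    return x @ (sign * np.exp2(exponent))

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(64, 32))       # stand-in for a pretrained layer
x = rng.normal(size=(4, 64))
sign, exponent = to_shift_params(w)
exact, approx = x @ w, shift_linear(x, sign, exponent)
print("relative error:", np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```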
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 13:53:41 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 21:43:14 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"You",
"Haoran",
"",
"Celine"
],
[
"Shi",
"Huihong",
"",
"Celine"
],
[
"Guo",
"Yipin",
"",
"Celine"
],
[
"Yingyan",
"",
"",
"Celine"
],
[
"Lin",
"",
""
]
] |
new_dataset
| 0.997259 |
2307.06772
|
Lennart Kauther
|
Katharina Eickhoff and Lennart Kauther and Britta Peis
|
Stackelberg Vertex Cover on a Path
|
22 pages, 2 figures, 4 algorithms, extended abstract published at
SAGT2023
|
In: Deligkas, A., Filos-Ratsikas, A. (eds.) Algorithmic Game
Theory. pp. 22-39. Springer Nature Switzerland, Cham (2023)
|
10.1007/978-3-031-43254-5
| null |
cs.GT cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Stackelberg Vertex Cover game is played on an undirected graph
$\mathcal{G}$ where some of the vertices are under the control of a
\emph{leader}. The remaining vertices are assigned a fixed weight. The game is
played in two stages. First, the leader chooses prices for the vertices under
her control. Afterward, the second player, called \emph{follower}, selects a
min weight vertex cover in the resulting weighted graph. That is, the follower
selects a subset of vertices $C^*$ such that every edge has at least one
endpoint in $C^*$ of minimum weight w.r.t.\ to the fixed weights, and the
prices set by the leader. Stackelberg Vertex Cover (StackVC) describes the
leader's optimization problem to select prices in the first stage of the game
so as to maximize her revenue, which is the cumulative price of all her
(priceable) vertices that are contained in the follower's solution. Previous
research showed that StackVC is \textsf{NP}-hard on bipartite graphs, but
solvable in polynomial time in the special case of bipartite graphs, where all
priceable vertices belong to the same side of the bipartition. In this paper,
we investigate StackVC on paths and present a dynamic program with linear time
and space complexity.
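For reference, the follower's subproblem on a path, a minimum-weight vertex cover, admits a simple linear-time dynamic program, sketched below; the leader's pricing DP from the paper builds on such structure but is not reproduced here.

```python
# Linear-time DP for minimum-weight vertex cover on a path v0 - v1 - ... - v_{n-1}.
def min_weight_vertex_cover_on_path(weights):
    """weights[i] is the (fixed or priced) weight of vertex i.
    Returns (optimal cost, chosen vertex set)."""
    n = len(weights)
    if n == 0:
        return 0, []
    # take[i] = best cost for v0..vi with vi in the cover
    # skip[i] = best cost for v0..vi with vi not in the cover
    take, skip = [0] * n, [0] * n
    choice = [[None, None] for _ in range(n)]
    take[0], skip[0] = weights[0], 0
    for i in range(1, n):
        take[i] = weights[i] + min(take[i - 1], skip[i - 1])
        choice[i][1] = 1 if take[i - 1] <= skip[i - 1] else 0
        skip[i] = take[i - 1]            # edge (i-1, i) forces v_{i-1} into the cover
        choice[i][0] = 1
    # Backtrack the chosen vertices.
    cover, state = [], 1 if take[-1] <= skip[-1] else 0
    for i in range(n - 1, 0, -1):
        if state == 1:
            cover.append(i)
        state = choice[i][state]
    if state == 1:
        cover.append(0)
    return min(take[-1], skip[-1]), sorted(cover)

print(min_weight_vertex_cover_on_path([3, 1, 4, 1, 5]))   # (2, [1, 3])
```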
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 14:25:09 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 13:19:24 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Eickhoff",
"Katharina",
""
],
[
"Kauther",
"Lennart",
""
],
[
"Peis",
"Britta",
""
]
] |
new_dataset
| 0.999737 |
2308.12952
|
Homer Walke
|
Homer Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Max Du, Chongyi
Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Vuong, Andre He, Vivek Myers,
Kuan Fang, Chelsea Finn, Sergey Levine
|
BridgeData V2: A Dataset for Robot Learning at Scale
|
9 pages
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce BridgeData V2, a large and diverse dataset of robotic
manipulation behaviors designed to facilitate research on scalable robot
learning. BridgeData V2 contains 60,096 trajectories collected across 24
environments on a publicly available low-cost robot. BridgeData V2 provides
extensive task and environment variability, leading to skills that can
generalize across environments, domains, and institutions, making the dataset a
useful resource for a broad range of researchers. Additionally, the dataset is
compatible with a wide variety of open-vocabulary, multi-task learning methods
conditioned on goal images or natural language instructions. In our
experiments, we train 6 state-of-the-art imitation learning and offline
reinforcement learning methods on our dataset, and find that they succeed on a
suite of tasks requiring varying amounts of generalization. We also demonstrate
that the performance of these methods improves with more data and higher
capacity models, and that training on a greater variety of skills leads to
improved generalization. By publicly sharing BridgeData V2 and our pre-trained
models, we aim to accelerate research in scalable robot learning methods.
Project page at https://rail-berkeley.github.io/bridgedata
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 17:41:20 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Sep 2023 21:14:07 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Walke",
"Homer",
""
],
[
"Black",
"Kevin",
""
],
[
"Lee",
"Abraham",
""
],
[
"Kim",
"Moo Jin",
""
],
[
"Du",
"Max",
""
],
[
"Zheng",
"Chongyi",
""
],
[
"Zhao",
"Tony",
""
],
[
"Hansen-Estruch",
"Philippe",
""
],
[
"Vuong",
"Quan",
""
],
[
"He",
"Andre",
""
],
[
"Myers",
"Vivek",
""
],
[
"Fang",
"Kuan",
""
],
[
"Finn",
"Chelsea",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.999777 |
2308.15316
|
Urs Waldmann
|
Urs Waldmann, Alex Hoi Hang Chan, Hemal Naik, M\'at\'e Nagy, Iain D.
Couzin, Oliver Deussen, Bastian Goldluecke, Fumihiro Kano
|
3D-MuPPET: 3D Multi-Pigeon Pose Estimation and Tracking
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Markerless methods for animal posture tracking have been developing recently,
but frameworks and benchmarks for tracking large animal groups in 3D are still
lacking. To overcome this gap in the literature, we present 3D-MuPPET, a
framework to estimate and track 3D poses of up to 10 pigeons at interactive
speed using multiple views. We train a pose estimator to infer 2D keypoints and
bounding boxes of multiple pigeons, then triangulate the keypoints to 3D. For
correspondence matching, we first dynamically match 2D detections to global
identities in the first frame, then use a 2D tracker to maintain
correspondences across views in subsequent frames. We achieve comparable
accuracy to a state-of-the-art 3D pose estimator in terms of Root Mean Square Error
(RMSE) and Percentage of Correct Keypoints (PCK). We also showcase a novel use
case where our model trained with data of single pigeons provides comparable
results on data containing multiple pigeons. This can simplify the domain shift
to new species because annotating single animal data is less labour intensive
than multi-animal data. Additionally, we benchmark the inference speed of
3D-MuPPET, with up to 10 fps in 2D and 1.5 fps in 3D, and perform quantitative
tracking evaluation, which yields encouraging results. Finally, we show that
3D-MuPPET also works in natural environments without model fine-tuning on
additional annotations. To the best of our knowledge we are the first to
present a framework for 2D/3D posture and trajectory tracking that works in
both indoor and outdoor environments.
|
[
{
"version": "v1",
"created": "Tue, 29 Aug 2023 14:02:27 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 09:00:03 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Waldmann",
"Urs",
""
],
[
"Chan",
"Alex Hoi Hang",
""
],
[
"Naik",
"Hemal",
""
],
[
"Nagy",
"Máté",
""
],
[
"Couzin",
"Iain D.",
""
],
[
"Deussen",
"Oliver",
""
],
[
"Goldluecke",
"Bastian",
""
],
[
"Kano",
"Fumihiro",
""
]
] |
new_dataset
| 0.998118 |
2309.04709
|
Mahmoud Atashbar
|
Hamed Alizadeh Ghazijahani, Mahmoud Atashbar, Yong Liang Guan, Zhaojie
Yang
|
A Public Information Precoding for MIMO Visible Light Communication
System Based on Manifold Optimization
|
This paper has been submitted to an IEEE Journal
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visible light communication (VLC) is an attractive subset of optical
communication that provides a high data rate in the access layer of the
network. The combination of multiple-input multiple-output (MIMO) with a VLC
system leads to a higher speed of data transmission, termed a MIMO-VLC system.
In multi-user (MU) MIMO-VLC, a LED array transmits signals for users. These
signals are categorized as signals of private information for each user and
signals of public information for all users. The main idea of this paper is to
design an omnidirectional precoding to transmit the signals of public
information in the MU-MIMO-VLC network. To this end, we propose to maximize the
achievable rate which leads to maximizing the received mean power at the
possible location of the users. Besides maximizing the achievable rate, we
consider equal mean transmission power constraint in all LEDs to achieve higher
power efficiency of the power amplifiers used in the LED array. Based on this,
we formulate an optimization problem whose constraint set forms a manifold and
utilize a gradient method projected onto the manifold to solve it. Simulation
results indicate that the proposed omnidirectional precoding achieves superior
received mean power and bit error rate compared with the classical scheme
without precoding.
|
[
{
"version": "v1",
"created": "Sat, 9 Sep 2023 07:32:00 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Ghazijahani",
"Hamed Alizadeh",
""
],
[
"Atashbar",
"Mahmoud",
""
],
[
"Guan",
"Yong Liang",
""
],
[
"Yang",
"Zhaojie",
""
]
] |
new_dataset
| 0.99509 |
2309.06019
|
Muhammad Usman
|
Muhammad Sohail Ibrahim, Muhammad Usman, Malik Zohaib Nisar, Jeong-A
Lee
|
DSLOT-NN: Digit-Serial Left-to-Right Neural Network Accelerator
|
Presented at 2023 26th Euromicro Conference on Digital System Design
(DSD)
| null | null | null |
cs.AR cs.AI cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a Digit-Serial Left-tO-righT (DSLOT) arithmetic-based processing
technique called DSLOT-NN that aims to accelerate inference of the convolution
operation in deep neural networks (DNNs). The proposed work has the ability
to assess and terminate ineffective convolutions, which results in massive
power and energy savings. The processing engine comprises low-latency
most-significant-digit-first (MSDF) (also called online) multipliers and adders
that process data from left to right, allowing the execution of subsequent
operations in a digit-pipelined manner. The use of online operators eliminates the
need to develop a complex mechanism for identifying negative
activations, as the output digit with the highest weight is generated first, and the
sign of the result can be identified as soon as the first non-zero digit is
generated. The precision of the online operators can be tuned at run-time,
making them extremely useful in situations where accuracy can be compromised
for power and energy savings. The proposed design has been implemented on
a Xilinx Virtex-7 FPGA and is compared with the state-of-the-art Stripes
accelerator on various performance metrics. The results show that the proposed
design delivers power savings, a shorter cycle time, and approximately 50% higher
OPS per watt.
|
[
{
"version": "v1",
"created": "Tue, 12 Sep 2023 07:36:23 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 02:44:28 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Ibrahim",
"Muhammad Sohail",
""
],
[
"Usman",
"Muhammad",
""
],
[
"Nisar",
"Malik Zohaib",
""
],
[
"Lee",
"Jeong-A",
""
]
] |
new_dataset
| 0.956802 |
2309.08244
|
Zherui Lu
|
Zherui Lu, Gangyi Wang, Xinguo Wei, and Jian Li
|
A Real-time Faint Space Debris Detector With Learning-based LCM
|
13 pages, 28 figures, normal article
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of aerospace technology, the increasing population of
space debris has posed a great threat to the safety of spacecraft. However, the
low intensity of reflected light and high angular velocity of space debris
impede their extraction. Besides, due to the limitations of ground-based
observation methods, small space debris can hardly be detected, making it
necessary to enhance the spacecraft's capacity for space situational awareness
(SSA). Considering that traditional methods have some defects in low-SNR target
detection, such as low effectiveness and large time consumption, this paper
proposes a method for low-SNR streak extraction based on local contrast and
maximum likelihood estimation (MLE), which can efficiently detect space objects
with an SNR of 2.0. In the proposed algorithm, local contrast will be applied for
crude classifications, which will return connected components as preliminary
results, and then MLE will be performed to reconstruct the connected components
of targets via orientated growth, further improving the precision. The
algorithm has been verified with both simulated streaks and real star tracker
images, and the average centroid error of the proposed algorithm is close to
that of state-of-the-art methods such as ODCC. At the same time, the algorithm in this
paper has significant advantages in efficiency compared with ODCC. In
conclusion, the algorithm in this paper offers high speed and precision, which
guarantees its promising applications in the extraction of high dynamic
targets.
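For readers unfamiliar with local-contrast prescreening, a rough Python sketch is given below. It uses a generic center-minus-background formulation and is only an assumption about the flavor of the operation; the paper's exact local-contrast measure and its MLE-based oriented-growth reconstruction are not reproduced here.

import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_map(image, inner=3, outer=15):
    """Generic local-contrast map: mean of a small window minus the mean of a
    larger surrounding window, highlighting faint streaks against the background.
    """
    img = image.astype(float)
    center = uniform_filter(img, size=inner)      # local mean over a small patch
    background = uniform_filter(img, size=outer)  # local mean over a larger patch
    return center - background

# pixels with high values in the returned map are candidate streak pixels,
# which can then be grouped into connected components for further processing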
|
[
{
"version": "v1",
"created": "Fri, 15 Sep 2023 08:37:28 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Lu",
"Zherui",
""
],
[
"Wang",
"Gangyi",
""
],
[
"Wei",
"Xinguo",
""
],
[
"Li",
"Jian",
""
]
] |
new_dataset
| 0.997811 |
2309.09085
|
Yi Zhong
|
Yongyi Zang, Yi Zhong, Frank Cwitkowitz, Zhiyao Duan
|
SynthTab: Leveraging Synthesized Data for Guitar Tablature Transcription
|
Submitted to ICASSP2024
| null | null | null |
cs.SD cs.IR cs.MM eess.AS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Guitar tablature is a form of music notation widely used among guitarists. It
captures not only the musical content of a piece, but also its implementation
and ornamentation on the instrument. Guitar Tablature Transcription (GTT) is an
important task with broad applications in music education and entertainment.
Existing datasets are limited in size and scope, causing state-of-the-art GTT
models trained on such datasets to suffer from overfitting and to fail in
generalization across datasets. To address this issue, we developed a
methodology for synthesizing SynthTab, a large-scale guitar tablature
transcription dataset using multiple commercial acoustic and electric guitar
plugins. This dataset is built on tablatures from DadaGP, which offers a vast
collection and the degree of specificity we wish to transcribe. The proposed
synthesis pipeline produces audio which faithfully adheres to the original
fingerings, styles, and techniques specified in the tablature with diverse
timbre. Experiments show that pre-training a state-of-the-art GTT model on
SynthTab improves transcription accuracy in same-dataset tests. More
importantly, it significantly mitigates overfitting problems of GTT models in
cross-dataset evaluation.
|
[
{
"version": "v1",
"created": "Sat, 16 Sep 2023 19:40:30 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 02:14:08 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Zang",
"Yongyi",
""
],
[
"Zhong",
"Yi",
""
],
[
"Cwitkowitz",
"Frank",
""
],
[
"Duan",
"Zhiyao",
""
]
] |
new_dataset
| 0.992588 |
2309.09357
|
Xuhai Xu
|
Ziqi Yang, Xuhai Xu, Bingsheng Yao, Shao Zhang, Ethan Rogers, Stephen
Intille, Nawar Shara, Guodong Gordon Gao, Dakuo Wang
|
Talk2Care: Facilitating Asynchronous Patient-Provider Communication with
Large-Language-Model
|
Under submission to CHI2024
| null | null | null |
cs.CL cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the plethora of telehealth applications to assist home-based older
adults and healthcare providers, basic messaging and phone calls are still the
most common communication methods, which suffer from limited availability,
information loss, and process inefficiencies. One promising solution to
facilitate patient-provider communication is to leverage large language models
(LLMs) with their powerful natural conversation and summarization capability.
However, there is limited understanding of LLMs' role in such
communication. We first conducted two interview studies with both older adults
(N=10) and healthcare providers (N=9) to understand their needs and
opportunities for LLMs in patient-provider asynchronous communication. Based on
the insights, we built an LLM-powered communication system, Talk2Care, and
designed interactive components for both groups: (1) For older adults, we
leveraged the convenience and accessibility of voice assistants (VAs) and built
an LLM-powered VA interface for effective information collection. (2) For
health providers, we built an LLM-based dashboard to summarize and present
important health information based on older adults' conversations with the VA.
We further conducted two user studies with older adults and providers to
evaluate the usability of the system. The results showed that Talk2Care could
facilitate the communication process, enrich the health information collected
from older adults, and considerably save providers' efforts and time. We
envision our work as an initial exploration of LLMs' capability in the
intersection of healthcare and interpersonal communication.
|
[
{
"version": "v1",
"created": "Sun, 17 Sep 2023 19:46:03 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 00:45:51 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Yang",
"Ziqi",
""
],
[
"Xu",
"Xuhai",
""
],
[
"Yao",
"Bingsheng",
""
],
[
"Zhang",
"Shao",
""
],
[
"Rogers",
"Ethan",
""
],
[
"Intille",
"Stephen",
""
],
[
"Shara",
"Nawar",
""
],
[
"Gao",
"Guodong Gordon",
""
],
[
"Wang",
"Dakuo",
""
]
] |
new_dataset
| 0.978607 |
2309.10654
|
Dawei Cheng
|
Jiangtong Li, Yuxuan Bian, Guoxuan Wang, Yang Lei, Dawei Cheng, Zhijun
Ding and Changjun Jiang
|
CFGPT: Chinese Financial Assistant with Large Language Model
|
12 pages, 5 figures
| null | null | null |
cs.CL cs.AI cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have demonstrated great potential in natural
language processing tasks within the financial domain. In this work, we present
a Chinese Financial Generative Pre-trained Transformer framework, named CFGPT,
which includes a dataset~(CFData) for pre-training and supervised fine-tuning,
a financial LLM~(CFLLM) to adeptly manage financial texts, and a deployment
framework~(CFAPP) designed to navigate real-world financial applications. The
CFData comprises both a pre-training dataset and a supervised fine-tuning
dataset: the pre-training dataset collates Chinese financial data and
analytics, alongside a smaller subset of general-purpose text, with 584M
documents and 141B tokens in total, while the supervised fine-tuning dataset is
tailored for six distinct financial tasks, embodying various facets of
financial analysis and decision-making, with 1.5M instruction pairs and 1.5B
tokens in total. The CFLLM, which is based on InternLM-7B to balance model
capability and size, is trained on CFData in two stages: continued pre-training
and supervised fine-tuning. The CFAPP is centered on large language models
(LLMs) and augmented with additional modules to ensure multifaceted
functionality in real-world applications. Our code is released at
https://github.com/TongjiFinLab/CFGPT.
|
[
{
"version": "v1",
"created": "Tue, 19 Sep 2023 14:34:01 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 09:52:07 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Li",
"Jiangtong",
""
],
[
"Bian",
"Yuxuan",
""
],
[
"Wang",
"Guoxuan",
""
],
[
"Lei",
"Yang",
""
],
[
"Cheng",
"Dawei",
""
],
[
"Ding",
"Zhijun",
""
],
[
"Jiang",
"Changjun",
""
]
] |
new_dataset
| 0.99974 |
2309.12303
|
Shilin Yan
|
Shilin Yan, Xiaohao Xu, Lingyi Hong, Wenchao Chen, Wenqiang Zhang and
Wei Zhang
|
PanoVOS: Bridging Non-panoramic and Panoramic Views with Transformer for
Video Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Panoramic videos contain richer spatial information and have attracted
tremendous amounts of attention due to their exceptional experience in some
fields such as autonomous driving and virtual reality. However, existing
datasets for video segmentation only focus on conventional planar images. To
address the challenge, in this paper, we present a panoramic video dataset,
PanoVOS. The dataset provides 150 videos with high video resolutions and
diverse motions. To quantify the domain gap between 2D planar videos and
panoramic videos, we evaluate 15 off-the-shelf video object segmentation (VOS)
models on PanoVOS. Through error analysis, we found that all of them fail to
tackle the pixel-level content discontinuities of panoramic videos. Thus, we present a
Panoramic Space Consistency Transformer (PSCFormer), which can effectively
utilize the semantic boundary information of the previous frame for pixel-level
matching with the current frame. Extensive experiments demonstrate that
compared with the previous SOTA models, our PSCFormer network exhibits a great
advantage in terms of segmentation results under the panoramic setting. Our
dataset poses new challenges in panoramic VOS and we hope that our PanoVOS can
advance the development of panoramic segmentation/tracking.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 17:59:02 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 04:39:47 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Yan",
"Shilin",
""
],
[
"Xu",
"Xiaohao",
""
],
[
"Hong",
"Lingyi",
""
],
[
"Chen",
"Wenchao",
""
],
[
"Zhang",
"Wenqiang",
""
],
[
"Zhang",
"Wei",
""
]
] |
new_dataset
| 0.999444 |
2309.12319
|
Andres Forero Osorio
|
Andres Forero Osorio and Carlos Andr\'es Torres Echeverr\'ia
|
Drone Flight Path Architecture
|
Language: Spanish. Web app code available at
https://github.com/dennis-forero/3D-drone-route-planner
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This project was built from a pre-existing architecture that facilitates the
planning and automatic execution of drone routes in a known space through a 3D
virtual reality environment. Our work consisted of extending this architecture
by integrating a new web component, making use of a 3D map API, to give people
who do not have access to virtual reality hardware the possibility of planning
flight routes parameterized by <latitude, longitude> coordinates on the globe
together with a component in meters that represents the height to which the
drone rises at a given point. Additionally, the
configuration possibilities of a route were extended in order to take advantage
of one of the components that gives more value and potential to unmanned
aircrafts: the use of the camera in multiple contexts and scenarios. The
extension of this solution allows the user to assign different camera tasks
along the route, see in real time what the camera is capturing and, after the
flight, retrieve the multimedia content that was created.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 23:50:01 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Osorio",
"Andres Forero",
""
],
[
"Echeverría",
"Carlos Andrés Torres",
""
]
] |
new_dataset
| 0.999167 |
2309.12349
|
Abdelghani MADDI
|
Abdelghani Maddi (GEMASS), David Sapinho
|
On the culture of open access: the Sci-hub paradox
|
Scientometrics, 2023. arXiv admin note: substantial text overlap with
arXiv:2206.06874
| null |
10.1007/s11192-023-04792-5
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shadow libraries, also known as ''pirate libraries'', are online collections
of copyrighted publications that have been made available for free without the
permission of the copyright holders. They have gradually become key players of
scientific knowledge dissemination, despite their illegality in most countries
of the world. Many publishers and scientist-editors decry such libraries for
their copyright infringement and loss of publication usage information, while
some scholars and institutions support them, sometimes in a roundabout way, for
their role in reducing inequalities of access to knowledge, particularly in
low-income countries. Although there is a wealth of literature on shadow
libraries, none of it has focused on their potential role in knowledge
dissemination through the open access movement. Here we analyze how shadow
libraries can affect researchers' citation practices, highlighting some
counter-intuitive findings about their impact on the Open Access Citation
Advantage (OACA). Based on a large randomized sample, this study first shows
that OA publications, including those in fully OA journals, receive more
citations than their subscription-based counterparts do. However, the OACA has
slightly decreased over the last seven years. Introducing a distinction, among
subscription-based publications, between those that are accessible via the
Sci-hub platform and those that are not suggests that the generalization of its
use cancels the positive effect of OA publishing. The results show that publications in fully
OA journals are victims of the success of Sci-hub. Thus, paradoxically,
although Sci-hub may seem to facilitate access to scientific knowledge, it
negatively affects the OA movement as a whole, by reducing the comparative
advantage of OA publications in terms of visibility for researchers. The
democratization of the use of Sci-hub may therefore lead to a vicious cycle,
hindering efforts to develop full OA strategies without proposing a credible
and sustainable alternative model for the dissemination of scientific
knowledge.
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 07:50:56 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Maddi",
"Abdelghani",
"",
"GEMASS"
],
[
"Sapinho",
"David",
""
]
] |
new_dataset
| 0.998768 |
2309.12377
|
Umberto Michelucci
|
Francesca Venturini, Silvan Fluri, Manas Mejari, Michael Baumgartner,
Dario Piga, Umberto Michelucci
|
Shedding Light on the Ageing of Extra Virgin Olive Oil: Probing the
Impact of Temperature with Fluorescence Spectroscopy and Machine Learning
Techniques
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work systematically investigates the oxidation of extra virgin olive oil
(EVOO) under accelerated storage conditions with UV absorption and total
fluorescence spectroscopy. With the large amount of data collected, it proposes
a method to monitor the oil's quality based on machine learning applied to
highly-aggregated data. EVOO is a high-quality vegetable oil that has earned
worldwide reputation for its numerous health benefits and excellent taste.
Despite its outstanding quality, EVOO degrades over time owing to oxidation,
which can affect both its health qualities and flavour. Therefore, it is highly
relevant to quantify the effects of oxidation on EVOO and develop methods to
assess it that can be easily implemented under field conditions, rather than in
specialized laboratories. The following study demonstrates that fluorescence
spectroscopy has the capability to monitor the effect of oxidation and assess
the quality of EVOO, even when the data are highly aggregated. It shows that
complex laboratory equipment is not necessary to exploit fluorescence
spectroscopy using the proposed method and that cost-effective solutions, which
can be used in-field by non-scientists, could provide an easily-accessible
assessment of the quality of EVOO.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 09:46:01 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Venturini",
"Francesca",
""
],
[
"Fluri",
"Silvan",
""
],
[
"Mejari",
"Manas",
""
],
[
"Baumgartner",
"Michael",
""
],
[
"Piga",
"Dario",
""
],
[
"Michelucci",
"Umberto",
""
]
] |
new_dataset
| 0.953493 |
2309.12397
|
Bo-Hsun Chen
|
Bo-Hsun Chen, Peter Negrut, Thomas Liang, Nevindu Batagoda, Harry
Zhang, Dan Negrut
|
POLAR3D: Augmenting NASA's POLAR Dataset for Data-Driven Lunar
Perception and Rover Simulation
|
7 pages, 4 figures; this work has been submitted to the 2024 IEEE
Conference on Robotics and Automation (ICRA) under review
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We report on an effort that led to POLAR3D, a set of digital assets that
enhance the POLAR dataset of stereo images generated by NASA to mimic lunar
lighting conditions. Our contributions are twofold. First, we have annotated
each photo in the POLAR dataset, providing approximately 23 000 labels for
rocks and their shadows. Second, we digitized several lunar terrain scenarios
available in the POLAR dataset. Specifically, by utilizing both the lunar
photos and POLAR's LiDAR point clouds, we constructed detailed obj files
for all identifiable assets. POLAR3D is the set of digital assets comprising
rock/shadow labels and obj files associated with the digital twins of lunar
terrain scenarios. This new dataset can be used for training perception
algorithms for lunar exploration and synthesizing photorealistic images beyond
the original POLAR collection. Likewise, the obj assets can be integrated into
simulation environments to facilitate realistic rover operations in a digital
twin of a POLAR scenario. POLAR3D is publicly available to aid perception
algorithm development, camera simulation efforts, and lunar simulation
exercises. It is available at
https://github.com/uwsbel/POLAR-digital.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 18:00:34 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Chen",
"Bo-Hsun",
""
],
[
"Negrut",
"Peter",
""
],
[
"Liang",
"Thomas",
""
],
[
"Batagoda",
"Nevindu",
""
],
[
"Zhang",
"Harry",
""
],
[
"Negrut",
"Dan",
""
]
] |
new_dataset
| 0.999874 |
2309.12428
|
Davide Cozzolino
|
Davide Cozzolino and Koki Nagano and Lucas Thomaz and Angshul Majumdar
and Luisa Verdoliva
|
Synthetic Image Detection: Highlights from the IEEE Video and Image
Processing Cup 2022 Student Competition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Video and Image Processing (VIP) Cup is a student competition that takes
place each year at the IEEE International Conference on Image Processing. The
2022 IEEE VIP Cup asked undergraduate students to develop a system capable of
distinguishing pristine images from generated ones. The interest in this topic
stems from the incredible advances in the AI-based generation of visual data,
with tools that allow the synthesis of highly realistic images and videos.
While this opens up a large number of new opportunities, it also undermines the
trustworthiness of media content and fosters the spread of disinformation on
the internet. Recently, there has been strong concern about the generation of
extremely realistic images by means of editing software that incorporates
recent diffusion-model technology. In this context, there is a need to
develop robust and automatic tools for synthetic image detection.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 18:50:47 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Cozzolino",
"Davide",
""
],
[
"Nagano",
"Koki",
""
],
[
"Thomaz",
"Lucas",
""
],
[
"Majumdar",
"Angshul",
""
],
[
"Verdoliva",
"Luisa",
""
]
] |
new_dataset
| 0.995405 |
2309.12429
|
Yuyang Chen
|
Yuyang Chen, Praveen Raj Masilamani, Bhavin Jawade, Srirangaraj
Setlur, Karthik Dantu
|
DIOR: Dataset for Indoor-Outdoor Reidentification -- Long Range 3D/2D
Skeleton Gait Collection Pipeline, Semi-Automated Gait Keypoint Labeling and
Baseline Evaluation Methods
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent times, there is an increased interest in the identification and
re-identification of people at long distances, such as from rooftop cameras,
UAV cameras, street cams, and others. Such recognition needs to go beyond face
and use whole-body markers such as gait. However, datasets to train and test
such recognition algorithms are not widely prevalent, and fewer are labeled.
This paper introduces DIOR -- a framework for data collection, semi-automated
annotation, and also provides a dataset with 14 subjects and 1.649 million RGB
frames with 3D/2D skeleton gait labels, including 200 thousand frames from a
long range camera. Our approach leverages advanced 3D computer vision
techniques to attain pixel-level accuracy in indoor settings with motion
capture systems. Additionally, for outdoor long-range settings, we remove the
dependency on motion capture systems and adopt a low-cost, hybrid 3D computer
vision and learning pipeline with only 4 low-cost RGB cameras, successfully
achieving precise skeleton labeling on far-away subjects, even when their
height is limited to a mere 20-25 pixels within an RGB frame. On publication,
we will make our pipeline open for others to use.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 18:51:00 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Chen",
"Yuyang",
""
],
[
"Masilamani",
"Praveen Raj",
""
],
[
"Jawade",
"Bhavin",
""
],
[
"Setlur",
"Srirangaraj",
""
],
[
"Dantu",
"Karthik",
""
]
] |
new_dataset
| 0.999714 |
2309.12479
|
Yao-Hung Tsai
|
Mario Srouji, Yao-Hung Hubert Tsai, Hugues Thomas, Jian Zhang
|
Human Following in Mobile Platforms with Person Re-Identification
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Human following is a crucial feature of human-robot interaction, yet it poses
numerous challenges to mobile agents in real-world scenarios. Some major
hurdles are that the target person may be in a crowd, obstructed by others, or
facing away from the agent. To tackle these challenges, we present a novel
person re-identification module composed of three parts: a 360-degree visual
registration, a neural-based person re-identification using human faces and
torsos, and a motion tracker that records and predicts the target person's
future position. Our human-following system also addresses other challenges,
including identifying fast-moving targets with low latency, searching for
targets that move out of the camera's sight, collision avoidance, and
adaptively choosing different following mechanisms based on the distance
between the target person and the mobile agent. Extensive experiments show that
our proposed person re-identification module significantly enhances the
human-following feature compared to other baseline variants.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 20:50:55 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Srouji",
"Mario",
""
],
[
"Tsai",
"Yao-Hung Hubert",
""
],
[
"Thomas",
"Hugues",
""
],
[
"Zhang",
"Jian",
""
]
] |
new_dataset
| 0.99388 |
2309.12499
|
Ramakrishna Bairi
|
Ramakrishna Bairi, Atharv Sonwane, Aditya Kanade, Vageesh D C, Arun
Iyer, Suresh Parthasarathy, Sriram Rajamani, B. Ashok, Shashank Shet
|
CodePlan: Repository-level Coding using LLMs and Planning
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software engineering activities such as package migration, fixing error
reports from static analysis or testing, and adding type annotations or other
specifications to a codebase, involve pervasively editing the entire repository
of code. We formulate these activities as repository-level coding tasks.
Recent tools like GitHub Copilot, which are powered by Large Language Models
(LLMs), have succeeded in offering high-quality solutions to localized coding
problems. Repository-level coding tasks are more involved and cannot be solved
directly using LLMs, since code within a repository is inter-dependent and the
entire repository may be too large to fit into the prompt. We frame
repository-level coding as a planning problem and present a task-agnostic
framework, called CodePlan to solve it. CodePlan synthesizes a multi-step chain
of edits (plan), where each step results in a call to an LLM on a code location
with context derived from the entire repository, previous code changes and
task-specific instructions. CodePlan is based on a novel combination of an
incremental dependency analysis, a change may-impact analysis and an adaptive
planning algorithm.
We evaluate the effectiveness of CodePlan on two repository-level tasks:
package migration (C#) and temporal code edits (Python). Each task is evaluated
on multiple code repositories, each of which requires inter-dependent changes
to many files (between 2-97 files). Coding tasks of this level of complexity
have not been automated using LLMs before. Our results show that CodePlan has
better match with the ground truth compared to baselines. CodePlan is able to
get 5/6 repositories to pass the validity checks (e.g., to build without errors
and make correct code edits) whereas the baselines (without planning but with
the same type of contextual information as CodePlan) cannot get any of the
repositories to pass them.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 21:45:17 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Bairi",
"Ramakrishna",
""
],
[
"Sonwane",
"Atharv",
""
],
[
"Kanade",
"Aditya",
""
],
[
"C",
"Vageesh D",
""
],
[
"Iyer",
"Arun",
""
],
[
"Parthasarathy",
"Suresh",
""
],
[
"Rajamani",
"Sriram",
""
],
[
"Ashok",
"B.",
""
],
[
"Shet",
"Shashank",
""
]
] |
new_dataset
| 0.999669 |
2309.12506
|
Bilel Benjdira Dr.
|
Sawsan AlHalawani, Bilel Benjdira, Adel Ammar, Anis Koubaa, Anas M.
Ali
|
License Plate Super-Resolution Using Diffusion Models
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In surveillance, accurately recognizing license plates is hindered by their
often low quality and small dimensions, compromising recognition precision.
Despite advancements in AI-based image super-resolution, methods like
Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs)
still fall short in enhancing license plate images. This study leverages the
cutting-edge diffusion model, which has consistently outperformed other deep
learning techniques in image restoration. By training this model using a
curated dataset of Saudi license plates, both in low and high resolutions, we
discovered the diffusion model's superior efficacy. The method achieves a
12.55% and 37.32% improvement in Peak Signal-to-Noise Ratio (PSNR) over SwinIR
and ESRGAN, respectively. Moreover, our method surpasses these techniques in
terms of Structural Similarity Index (SSIM), registering a 4.89% and 17.66%
improvement over SwinIR and ESRGAN, respectively. Furthermore, 92% of human
evaluators preferred our images over those from other algorithms. In essence,
this research presents a pioneering solution for license plate
super-resolution, with tangible potential for surveillance systems.
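For reference, the reported PSNR gains are measured with a metric that can be computed as follows (a minimal Python sketch; function and variable names are illustrative and not the paper's evaluation code):

import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a reference image and a restored one."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)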
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 22:06:23 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"AlHalawani",
"Sawsan",
""
],
[
"Benjdira",
"Bilel",
""
],
[
"Ammar",
"Adel",
""
],
[
"Koubaa",
"Anis",
""
],
[
"Ali",
"Anas M.",
""
]
] |
new_dataset
| 0.997324 |
2309.12538
|
Benjamin Tag
|
Adrian Kristanto, Maxime Cordeil, Benjamin Tag, Nathalie Henry Riche,
Tim Dwyer
|
Hanstreamer: an Open-source Webcam-based Live Data Presentation System
|
3 pages, 5 figures
|
Workshop MERCADO: Multimodal Experiences for Remote Communication
Around Data Online, IEEE Visualization Conference 2023
| null | null |
cs.HC cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present Hanstreamer, a free and open-source system for webcam-based data
presentation. The system performs real-time gesture recognition on the user's
webcam video stream to provide interactive data visuals. Apart from the
standard chart and map visuals, Hanstreamer is the first such video data
presentation system to support network visualisation and interactive
DimpVis-style time-series data exploration. The system is ready for use with
popular online meeting software such as Zoom and Microsoft Teams.
|
[
{
"version": "v1",
"created": "Thu, 21 Sep 2023 23:32:37 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Kristanto",
"Adrian",
""
],
[
"Cordeil",
"Maxime",
""
],
[
"Tag",
"Benjamin",
""
],
[
"Riche",
"Nathalie Henry",
""
],
[
"Dwyer",
"Tim",
""
]
] |
new_dataset
| 0.997865 |
2309.12563
|
Yuwei Huang
|
Yuwei Huang, Lipeng Zhu, and Rui Zhang
|
Passive Reflection Codebook Design for IRS-Integrated Access Point
|
13 pages, 11 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Intelligent reflecting surface (IRS) has emerged as a promising technique to
extend the wireless signal coverage of access point (AP) and improve the
communication performance cost-effectively. In order to reduce the path-loss of
the cascaded user-IRS-AP channels, the IRS-integrated AP architecture has been
proposed to deploy the IRSs and the antenna array of the AP within the same
antenna radome. To reduce the pilot overhead for estimating all IRS-involved
channels, in this paper, we propose a novel codebook-based IRS reflection
design for the IRS-integrated AP to enhance the coverage performance in a given
area. In particular, the codebook consisting of a small number of codewords is
designed offline by employing an efficient sector division strategy based on
the azimuth angle. To ensure the performance of each sector, we optimize its
corresponding codeword for IRS reflection pattern to maximize the
sector-min-average-effective-channel-power (SMAECP) by applying the alternating
optimization (AO) and semidefinite relaxation (SDR) methods. With the designed
codebook, the AP performs the IRS reflection training by sequentially applying
all codewords and selects the one achieving the best communication performance
for data transmission. Numerical results show that our proposed codebook design
can enhance the average channel power of the whole coverage area, as compared
to the system without IRS. Moreover, our proposed codebook-based IRS reflection
design is shown to achieve significant performance gain over other benchmark
schemes in both single-user and multi-user transmissions.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 01:24:21 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Huang",
"Yuwei",
""
],
[
"Zhu",
"Lipeng",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.971987 |
2309.12624
|
Chenxingyu Zhao
|
Chenxingyu Zhao, Yulin Sun, Arvind Krishnamurthy
|
Quark: A High-Performance Secure Container Runtime for Serverless
Computing
|
arXiv admin note: text overlap with arXiv:2305.10621
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Secure container runtimes serve as the foundational layer for creating and
running containers, which is the bedrock of emerging computing paradigms like
microservices and serverless computing. Although existing secure container
runtimes indeed enhance security via running containers over a guest kernel and
a Virtual Machine Monitor (VMM or Hypervisor), they incur performance penalties
in critical areas such as networking, container startup, and I/O system calls.
In our practice of operating microservices and serverless computing, we build
a high-performance secure container runtime named Quark. Unlike existing
solutions that rely on traditional VM technologies by importing Linux for the
guest kernel and QEMU for the VMM, we take a different approach to building
Quark from the ground up, paving the way for extreme customization to unlock
high performance. Our development centers on co-designing a custom guest kernel
and a VMM for secure containers. To this end, we build a lightweight guest OS
kernel named QKernel and a specialized VMM named QVisor. The QKernel-QVisor
codesign allows us to deliver three key advancements: high-performance
RDMA-based container networking, fast container startup mode, and efficient
mechanisms for executing I/O syscalls. In our practice with real-world apps
like Redis, Quark cuts down P95 latency by 79.3% and increases throughput by
2.43x compared to Kata. Moreover, Quark container startup achieves 96.5% lower
latency than the cold-start mode while saving 81.3% memory cost to the
keep-warm mode. Quark is open-source with an industry-standard codebase in
Rust.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 05:11:48 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Zhao",
"Chenxingyu",
""
],
[
"Sun",
"Yulin",
""
],
[
"Krishnamurthy",
"Arvind",
""
]
] |
new_dataset
| 0.999252 |
2309.12639
|
Xiaoheng Jiang
|
Xiaoheng Jiang, Kaiyi Guo, Yang Lu, Feng Yan, Hao Liu, Jiale Cao,
Mingliang Xu, and Dacheng Tao
|
CINFormer: Transformer network with multi-stage CNN feature injection
for surface defect segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surface defect inspection is of great importance for industrial manufacture
and production. Though defect inspection methods based on deep learning have
made significant progress, there are still some challenges for these methods,
such as indistinguishable weak defects and defect-like interference in the
background. To address these issues, we propose a transformer network with
multi-stage CNN (Convolutional Neural Network) feature injection for surface
defect segmentation, which is a UNet-like structure named CINFormer. CINFormer
presents a simple yet effective feature integration mechanism that injects the
multi-level CNN features of the input image into different stages of the
transformer network in the encoder. This can maintain the merit of CNN
capturing detailed features and that of the transformer suppressing noise in the
background, which facilitates accurate defect detection. In addition, CINFormer
presents a Top-K self-attention module to focus on tokens with more important
information about the defects, so as to further reduce the impact of the
redundant background. Extensive experiments conducted on the surface defect
datasets DAGM 2007, Magnetic tile, and NEU show that the proposed CINFormer
achieves state-of-the-art performance in defect detection.
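A rough sketch of how a Top-K self-attention module can be realized is shown below (Python/NumPy). It keeps only the k largest attention scores per query before the softmax; this is a generic formulation assumed for illustration, not the exact CINFormer module, whose design is not specified in the abstract.

import numpy as np

def topk_self_attention(x, wq, wk, wv, k):
    """Single-head self-attention that keeps only the top-k scores per query.

    x: (n_tokens, d_model); wq, wk, wv: (d_model, d_head) projection matrices.
    """
    q, key, v = x @ wq, x @ wk, x @ wv
    scores = (q @ key.T) / np.sqrt(key.shape[-1])        # (n_tokens, n_tokens)
    kth_largest = np.sort(scores, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= kth_largest, scores, -np.inf)   # drop all but top-k
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v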
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 06:12:02 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Jiang",
"Xiaoheng",
""
],
[
"Guo",
"Kaiyi",
""
],
[
"Lu",
"Yang",
""
],
[
"Yan",
"Feng",
""
],
[
"Liu",
"Hao",
""
],
[
"Cao",
"Jiale",
""
],
[
"Xu",
"Mingliang",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.991099 |
2309.12650
|
Yixin Chen
|
Yixin Chen, Ourui Fu, Wenrui Shao, Zhaoheng Xie
|
FP-PET: Large Model, Multiple Loss And Focused Practice
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study presents FP-PET, a comprehensive approach to medical image
segmentation with a focus on CT and PET images. Utilizing a dataset from the
AutoPet2023 Challenge, the research employs a variety of machine learning
models, including STUNet-large, SwinUNETR, and VNet, to achieve
state-of-the-art segmentation performance. The paper introduces an aggregated
score that combines multiple evaluation metrics such as Dice score, false
positive volume (FPV), and false negative volume (FNV) to provide a holistic
measure of model effectiveness. The study also discusses the computational
challenges and solutions related to model training, which was conducted on
high-performance GPUs. Preprocessing and postprocessing techniques, including
gaussian weighting schemes and morphological operations, are explored to
further refine the segmentation output. The research offers valuable insights
into the challenges and solutions for advanced medical image segmentation.
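For context, the three quantities combined in the aggregated score can be computed from binary masks roughly as follows (Python sketch; volumes are counted in voxels, and the challenge's official, component-based definitions of FPV and FNV may differ):

import numpy as np

def segmentation_metrics(pred, gt):
    """Dice score, false positive volume and false negative volume for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)
    fpv = np.logical_and(pred, ~gt).sum()  # predicted voxels outside the ground truth
    fnv = np.logical_and(~pred, gt).sum()  # ground-truth voxels that were missed
    return dice, fpv, fnv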
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 06:44:28 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Chen",
"Yixin",
""
],
[
"Fu",
"Ourui",
""
],
[
"Shao",
"Wenrui",
""
],
[
"Xie",
"Zhaoheng",
""
]
] |
new_dataset
| 0.990971 |
2309.12660
|
Ruidong Xi
|
Rui-Dong Xi, Liang Lu, Xue Zhang, Xiao Xiao, Bingyi Xia, Jiankun Wang,
Max Q.-H. Meng
|
Disturbance Rejection Control for Autonomous Trolley Collection Robots
with Prescribed Performance
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trajectory tracking control of autonomous trolley collection robots (ATCR) is
a challenging task due to the complex environment, severe noise and external
disturbances. This work investigates a control scheme for ATCR subject to
severe environmental interference. A kinematics model based adaptive sliding
mode disturbance observer with fast convergence is first proposed to estimate
the lumped disturbances. On this basis, a robust controller with prescribed
performance is proposed using a backstepping technique, which improves the
transient performance and guarantees fast convergence. Simulation outcomes have
been provided to illustrate the effectiveness of the proposed control scheme.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 07:00:50 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Xi",
"Rui-Dong",
""
],
[
"Lu",
"Liang",
""
],
[
"Zhang",
"Xue",
""
],
[
"Xiao",
"Xiao",
""
],
[
"Xia",
"Bingyi",
""
],
[
"Wang",
"Jiankun",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] |
new_dataset
| 0.985713 |
2309.12676
|
Taiga Someya
|
Taiga Someya, Yushi Sugimoto, Yohei Oseki
|
JCoLA: Japanese Corpus of Linguistic Acceptability
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Neural language models have exhibited outstanding performance in a range of
downstream tasks. However, there is limited understanding regarding the extent
to which these models internalize syntactic knowledge, and various datasets
have recently been constructed to facilitate syntactic evaluation of language
models across languages. In this paper, we introduce JCoLA (Japanese Corpus of
Linguistic Acceptability), which consists of 10,020 sentences annotated with
binary acceptability judgments. Specifically, those sentences are manually
extracted from linguistics textbooks, handbooks and journal articles, and split
into in-domain data (86 %; relatively simple acceptability judgments extracted
from textbooks and handbooks) and out-of-domain data (14 %; theoretically
significant acceptability judgments extracted from journal articles), the
latter of which is categorized by 12 linguistic phenomena. We then evaluate the
syntactic knowledge of 9 different types of Japanese language models on JCoLA.
The results demonstrated that several models could surpass human performance
for the in-domain data, while no models were able to exceed human performance
for the out-of-domain data. Error analyses by linguistic phenomena further
revealed that although neural language models are adept at handling local
syntactic dependencies like argument structure, their performance wanes when
confronted with long-distance syntactic dependencies like verbal agreement and
NPI licensing.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 07:35:45 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Someya",
"Taiga",
""
],
[
"Sugimoto",
"Yushi",
""
],
[
"Oseki",
"Yohei",
""
]
] |
new_dataset
| 0.997762 |
2309.12677
|
Ruyi Feng
|
Ruyi Feng, Zhibin Li, Bowen Liu, Yan Ding and Ou Zheng
|
TrTr: A Versatile Pre-Trained Large Traffic Model based on Transformer
for Capturing Trajectory Diversity in Vehicle Population
|
16 pages, 6 figures, under reviewed by Transportation Research Board
Annual Meeting, work in update
| null | null | null |
cs.AI physics.data-an
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Understanding trajectory diversity is a fundamental aspect of addressing
practical traffic tasks. However, capturing the diversity of trajectories
presents challenges, particularly with traditional machine learning and
recurrent neural networks due to the requirement of large-scale parameters. The
emerging Transformer technology, renowned for its parallel computation
capabilities enabling the utilization of models with hundreds of millions of
parameters, offers a promising solution. In this study, we apply the
Transformer architecture to traffic tasks, aiming to learn the diversity of
trajectories within vehicle populations. We analyze the Transformer's attention
mechanism and its adaptability to the goals of traffic tasks, and subsequently,
design specific pre-training tasks. To achieve this, we create a data structure
tailored to the attention mechanism and introduce a set of noises that
correspond to spatio-temporal demands, which are incorporated into the
structured data during the pre-training process. The designed pre-training
model demonstrates excellent performance in capturing the spatial distribution
of the vehicle population, with no instances of vehicle overlap and an RMSE of
0.6059 when compared to the ground truth values. In the context of time series
prediction, approximately 95% of the predicted trajectories' speeds closely
align with the true speeds, within a deviation of 7.5144 m/s. Furthermore, in
the stability test, the model exhibits robustness by continuously predicting a
time series ten times longer than the input sequence, delivering smooth
trajectories and showcasing diverse driving behaviors. The pre-trained model
also provides a good basis for downstream fine-tuning tasks. The number of
parameters of our model is over 50 million.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 07:36:22 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Feng",
"Ruyi",
""
],
[
"Li",
"Zhibin",
""
],
[
"Liu",
"Bowen",
""
],
[
"Ding",
"Yan",
""
],
[
"Zheng",
"Ou",
""
]
] |
new_dataset
| 0.995137 |
2309.12708
|
Yuxiang Yan
|
Yuxiang Yan, Boda Liu, Jianfei Ai, Qinbu Li, Ru Wan, Jian Pu
|
PointSSC: A Cooperative Vehicle-Infrastructure Point Cloud Benchmark for
Semantic Scene Completion
|
8 pages, 5 figures, submitted to ICRA2024
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic Scene Completion (SSC) aims to jointly generate space occupancies
and semantic labels for complex 3D scenes. Most existing SSC models focus on
volumetric representations, which are memory-inefficient for large outdoor
spaces. Point clouds provide a lightweight alternative but existing benchmarks
lack outdoor point cloud scenes with semantic labels. To address this, we
introduce PointSSC, the first cooperative vehicle-infrastructure point cloud
benchmark for semantic scene completion. These scenes exhibit long-range
perception and minimal occlusion. We develop an automated annotation pipeline
leveraging Segment Anything to efficiently assign semantics. To benchmark
progress, we propose a LiDAR-based model with a Spatial-Aware Transformer for
global and local feature extraction and a Completion and Segmentation
Cooperative Module for joint completion and segmentation. PointSSC provides a
challenging testbed to drive advances in semantic point cloud completion for
real-world navigation.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 08:39:16 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Yan",
"Yuxiang",
""
],
[
"Liu",
"Boda",
""
],
[
"Ai",
"Jianfei",
""
],
[
"Li",
"Qinbu",
""
],
[
"Wan",
"Ru",
""
],
[
"Pu",
"Jian",
""
]
] |
new_dataset
| 0.998933 |
2309.12715
|
Alberto Sonnino
|
Lefteris Kokoris-Kogias, Alberto Sonnino, George Danezis
|
Cuttlefish: Expressive Fast Path Blockchains with FastUnlock
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cuttlefish addresses several limitations of existing consensus-less and
consensus-minimized decentralized ledgers, including restricted programmability
and the risk of deadlocked assets. The key insight of Cuttlefish is that
consensus in blockchains is necessary due to contention, rather than multiple
owners of an asset as suggested by prior work. Previous proposals proactively
use consensus to prevent contention from blocking assets, taking a pessimistic
approach. In contrast, Cuttlefish introduces collective objects and multi-owner
transactions that can offer most of the functionality of classic blockchains
when objects transacted on are not under contention. Additionally, in case of
contention, Cuttlefish proposes a novel `Unlock' protocol that significantly
reduces the latency of unblocking contended objects. By leveraging these
features, Cuttlefish implements consensus-less protocols for a broader range of
transactions, including asset swaps and multi-signature transactions, which
were previously believed to require consensus.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 08:56:32 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Kokoris-Kogias",
"Lefteris",
""
],
[
"Sonnino",
"Alberto",
""
],
[
"Danezis",
"George",
""
]
] |
new_dataset
| 0.992429 |
2309.12716
|
Haoyi Niu
|
Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu,
Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan
|
H2O+: An Improved Framework for Hybrid Offline-and-Online RL with
Dynamics Gaps
| null | null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Solving real-world complex tasks using reinforcement learning (RL) without
high-fidelity simulation environments or large amounts of offline data can be
quite challenging. Online RL agents trained in imperfect simulation
environments can suffer from severe sim-to-real issues. Offline RL approaches,
although they bypass the need for simulators, often pose demanding requirements on
the size and quality of the offline datasets. The recently emerged hybrid
offline-and-online RL provides an attractive framework that enables joint use
of limited offline data and imperfect simulator for transferable policy
learning. In this paper, we develop a new algorithm, called H2O+, which offers
great flexibility to bridge various choices of offline and online learning
methods, while also accounting for dynamics gaps between the real and
simulation environment. Through extensive simulation and real-world robotics
experiments, we demonstrate superior performance and flexibility over advanced
cross-domain online and offline RL algorithms.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 08:58:22 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Niu",
"Haoyi",
""
],
[
"Ji",
"Tianying",
""
],
[
"Liu",
"Bingqi",
""
],
[
"Zhao",
"Haocheng",
""
],
[
"Zhu",
"Xiangyu",
""
],
[
"Zheng",
"Jianying",
""
],
[
"Huang",
"Pengfei",
""
],
[
"Zhou",
"Guyue",
""
],
[
"Hu",
"Jianming",
""
],
[
"Zhan",
"Xianyuan",
""
]
] |
new_dataset
| 0.978143 |
2309.12731
|
Dave Raggett
|
Dave Raggett
|
Defeasible Reasoning with Knowledge Graphs
|
Accepted for: Knowledge Graph and Semantic Web Conference
(KGSWC-2023), 13-15 September, 2023, Zaragoza, Spain
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Human knowledge is subject to uncertainties, imprecision, incompleteness and
inconsistencies. Moreover, the meaning of many everyday terms is dependent on
the context. That poses a huge challenge for the Semantic Web. This paper
introduces work on an intuitive notation and model for defeasible reasoning
with imperfect knowledge, and relates it to previous work on argumentation
theory. PKN is to N3 as defeasible reasoning is to deductive logic. Further
work is needed on an intuitive syntax for describing reasoning strategies and
tactics in declarative terms, drawing upon the AIF ontology for inspiration.
The paper closes with observations on symbolic approaches in the era of large
language models.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 09:27:26 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Raggett",
"Dave",
""
]
] |
new_dataset
| 0.992117 |
2309.12732
|
Lefteris Moussiades Dr
|
Lefteris Moussiades and George Zografos
|
OpenAi's GPT4 as coding assistant
|
10 pages
| null | null | null |
cs.AI cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Lately, Large Language Models have been widely used in code generation. GPT4
is considered the most potent Large Language Model from OpenAI. In this paper,
we examine GPT3.5 and GPT4 as coding assistants. More specifically, we have
constructed appropriate tests to check whether the two systems can a) answer
typical questions that can arise during the code development, b) produce
reliable code, and c) contribute to code debugging. The test results are
impressive. The performance of GPT4 is outstanding and signals an increase in
the productivity of programmers and the reorganization of software development
procedures based on these new tools.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 09:31:39 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Moussiades",
"Lefteris",
""
],
[
"Zografos",
"George",
""
]
] |
new_dataset
| 0.952912 |
2309.12768
|
Asra Aslam
|
Doris Antensteiner, Marah Halawa, Asra Aslam, Ivaxi Sheth, Sachini
Herath, Ziqi Huang, Sunnie S. Y. Kim, Aparna Akula, Xin Wang
|
WiCV@CVPR2023: The Eleventh Women In Computer Vision Workshop at the
Annual CVPR Conference
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present the details of Women in Computer Vision Workshop -
WiCV 2023, organized alongside the hybrid CVPR 2023 in Vancouver, Canada. WiCV
aims to amplify the voices of underrepresented women in the computer vision
community, fostering increased visibility in both academia and industry. We
believe that such events play a vital role in addressing gender imbalances
within the field. The annual WiCV@CVPR workshop offers a) an opportunity for
collaboration between researchers from minority groups, b) mentorship for
female junior researchers, c) financial support to presenters to alleviate
financial burdens, and d) a diverse array of role models who can inspire
younger researchers at the outset of their careers. In this paper, we present a
comprehensive report on the workshop program, historical trends from the past
WiCV@CVPR events, and a summary of statistics related to presenters, attendees,
and sponsorship for the WiCV 2023 workshop.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 10:15:38 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Antensteiner",
"Doris",
""
],
[
"Halawa",
"Marah",
""
],
[
"Aslam",
"Asra",
""
],
[
"Sheth",
"Ivaxi",
""
],
[
"Herath",
"Sachini",
""
],
[
"Huang",
"Ziqi",
""
],
[
"Kim",
"Sunnie S. Y.",
""
],
[
"Akula",
"Aparna",
""
],
[
"Wang",
"Xin",
""
]
] |
new_dataset
| 0.975259 |
2309.12781
|
Liming Xu
|
Liming Xu, Stephen Mak, Stefan Schoepf, Michael Ostroumov, and
Alexandra Brintrup
|
AgentChat: Multi-Agent Collaborative Logistics for Carbon Reduction
|
This paper includes 12 pages, 14 figures, and has been submitted to
IEEE for possible publication
| null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Heavy Goods Vehicles (HGVs) are the second largest source of greenhouse gas
emissions in transportation, after cars and taxis. However, HGVs are
inefficiently utilised, with more than one-third of their weight capacity not
being used during travel. In this paper, we thus address collaborative
logistics, an effective pathway to enhance HGVs' utilisation and reduce carbon
emissions. We investigate a multi-agent system approach to facilitate
collaborative logistics, particularly carrier collaboration. We propose a
simple yet effective multi-agent collaborative logistics (MACL) framework,
representing key stakeholders as intelligent agents. Furthermore, we utilise
the MACL framework in conjunction with a proposed system architecture to create
an integrated collaborative logistics testbed. This testbed, consisting of a
physical system and its digital replica, is a tailored cyber-physical system or
digital twin for collaborative logistics. Through a demonstration, we show the
utility of the testbed for studying collaborative logistics.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 10:46:45 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Xu",
"Liming",
""
],
[
"Mak",
"Stephen",
""
],
[
"Schoepf",
"Stefan",
""
],
[
"Ostroumov",
"Michael",
""
],
[
"Brintrup",
"Alexandra",
""
]
] |
new_dataset
| 0.992308 |
2309.12786
|
Florian T. Pokorny
|
Muhammad Zahid and Florian T. Pokorny
|
CloudGripper: An Open Source Cloud Robotics Testbed for Robotic
Manipulation Research, Benchmarking and Data Collection at Scale
|
Under review at IEEE ICRA 2024
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CloudGripper, an open source cloud robotics testbed, consisting of
a scalable, space and cost-efficient design constructed as a rack of 32 small
robot arm work cells. Each robot work cell is fully enclosed and features
individual lighting, a low-cost custom 5 degree of freedom Cartesian robot arm
with an attached parallel jaw gripper and a dual camera setup for
experimentation. The system design is focused on continuous operation and
features a 10 Gbit/s network connectivity allowing for high throughput
remote-controlled experimentation and data collection for robotic manipulation.
CloudGripper furthermore is intended to form a community testbed to study the
challenges of large scale machine learning and cloud and edge-computing in the
context of robotic manipulation. In this work, we describe the mechanical
design of the system, its initial software stack and evaluate the repeatability
of motions executed by the proposed robot arm design. A local network API
throughput and latency analysis is also provided. CloudGripper-Rope-100, a
dataset of more than a hundred hours of randomized rope pushing interactions
and approximately 4 million camera images, is collected and serves as a proof of
concept demonstrating data collection capabilities. A project website with more
information is available at https://cloudgripper.org.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 10:54:07 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Zahid",
"Muhammad",
""
],
[
"Pokorny",
"Florian T.",
""
]
] |
new_dataset
| 0.999381 |
2309.12810
|
Daria Stetsenko
|
Inez Okulska, Daria Stetsenko, Anna Ko{\l}os, Agnieszka Karli\'nska,
Kinga G{\l}\k{a}bi\'nska, Adam Nowakowski
|
StyloMetrix: An Open-Source Multilingual Tool for Representing
Stylometric Vectors
|
26 pages, 6 figures, pre-print for the conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This work aims to provide an overview of the open-source multilanguage tool
called StyloMetrix. It offers stylometric text representations that cover
various aspects of grammar, syntax and lexicon. StyloMetrix covers four
languages: Polish as the primary language, English, Ukrainian and Russian. The
normalized output of each feature can become a fruitful source for machine
learning models and a valuable addition to the embeddings layer for any deep
learning algorithm. We strive to provide a concise but exhaustive overview of
the application of the StyloMetrix vectors as well as explain the sets of the
developed linguistic features. The experiments have shown promising results in
supervised content classification with simple algorithms such as Random Forest
Classifier, Voting Classifier, Logistic Regression and others. The deep
learning assessments have unveiled the usefulness of the StyloMetrix vectors at
enhancing an embedding layer extracted from Transformer architectures. The
StyloMetrix has proven itself to be a formidable source for the machine
learning and deep learning algorithms to execute different classification
tasks.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 11:53:47 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Okulska",
"Inez",
""
],
[
"Stetsenko",
"Daria",
""
],
[
"Kołos",
"Anna",
""
],
[
"Karlińska",
"Agnieszka",
""
],
[
"Głąbińska",
"Kinga",
""
],
[
"Nowakowski",
"Adam",
""
]
] |
new_dataset
| 0.955498 |
2309.12825
|
Botian Xu
|
Botian Xu, Feng Gao, Chao Yu, Ruize Zhang, Yi Wu, Yu Wang
|
OmniDrones: An Efficient and Flexible Platform for Reinforcement
Learning in Drone Control
|
Submitted to IEEE RA-L
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we introduce OmniDrones, an efficient and flexible platform
tailored for reinforcement learning in drone control, built on Nvidia's
Omniverse Isaac Sim. It employs a bottom-up design approach that allows users
to easily design and experiment with various application scenarios on top of
GPU-parallelized simulations. It also offers a range of benchmark tasks,
presenting challenges ranging from single-drone hovering to over-actuated
system tracking. In summary, we propose an open-sourced drone simulation
platform, equipped with an extensive suite of tools for drone learning. It
includes 4 drone models, 5 sensor modalities, 4 control modes, over 10
benchmark tasks, and a selection of widely used RL baselines. To showcase the
capabilities of OmniDrones and to support future research, we also provide
preliminary results on these benchmark tasks. We hope this platform will
encourage further studies on applying RL to practical drone systems.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 12:26:36 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Xu",
"Botian",
""
],
[
"Gao",
"Feng",
""
],
[
"Yu",
"Chao",
""
],
[
"Zhang",
"Ruize",
""
],
[
"Wu",
"Yi",
""
],
[
"Wang",
"Yu",
""
]
] |
new_dataset
| 0.96145 |
2309.12842
|
Tianbo Pan
|
Tianbo Pan, Zidong Cao, Lin Wang
|
SRFNet: Monocular Depth Estimation with Fine-grained Structure via
Spatial Reliability-oriented Fusion of Frames and Events
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular depth estimation is a crucial task to measure distance relative to
a camera, which is important for applications such as robot navigation and
self-driving. Traditional frame-based methods suffer from performance drops due
to the limited dynamic range and motion blur. Therefore, recent works leverage
novel event cameras to complement or guide the frame modality via frame-event
feature fusion. However, event streams exhibit spatial sparsity, leaving some
areas unperceived, especially in regions with marginal light changes.
Therefore, direct fusion methods, e.g., RAMNet, often ignore the contribution
of the most confident regions of each modality. This leads to structural
ambiguity in the modality fusion process, thus degrading the depth estimation
performance. In this paper, we propose a novel Spatial Reliability-oriented
Fusion Network (SRFNet), which can estimate depth with fine-grained structure at
both daytime and nighttime. Our method consists of two key technical
components. Firstly, we propose an attention-based interactive fusion (AIF)
module that applies spatial priors of events and frames as the initial masks
and learns the consensus regions to guide the inter-modal feature fusion. The
fused features are then fed back to enhance the frame and event feature
learning. Meanwhile, it utilizes an output head to generate a fused mask, which
is iteratively updated for learning consensual spatial priors. Secondly, we
propose the Reliability-oriented Depth Refinement (RDR) module to estimate
dense depth with the fine-grained structure based on the fused features and
masks. We evaluate the effectiveness of our method on the synthetic and
real-world datasets, which shows that, even without pretraining, our method
outperforms the prior methods, e.g., RAMNet, especially in night scenes. Our
project homepage: https://vlislab22.github.io/SRFNet.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 12:59:39 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Pan",
"Tianbo",
""
],
[
"Cao",
"Zidong",
""
],
[
"Wang",
"Lin",
""
]
] |
new_dataset
| 0.996578 |
2309.12876
|
Ciro Russo
|
Ciro Russo, Alessandro Bria, Claudio Marrocco
|
Gravity Network for end-to-end small lesion detection
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a novel one-stage end-to-end detector specifically
designed to detect small lesions in medical images. Precise localization of
small lesions presents challenges due to their appearance and the diverse
contextual backgrounds in which they are found. To address this, our approach
introduces a new type of pixel-based anchor that dynamically moves towards the
targeted lesion for detection. We refer to this new architecture as GravityNet,
and the novel anchors as gravity points since they appear to be "attracted" by
the lesions. We conducted experiments on two well-established medical problems
involving small lesions to evaluate the performance of the proposed approach:
microcalcifications detection in digital mammograms and microaneurysms
detection in digital fundus images. Our method demonstrates promising results
in effectively detecting small lesions in these medical imaging tasks.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 14:02:22 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Russo",
"Ciro",
""
],
[
"Bria",
"Alessandro",
""
],
[
"Marrocco",
"Claudio",
""
]
] |
new_dataset
| 0.986689 |
2309.12908
|
Francesco Bariatti
|
Francesco Bariatti, Peggy Cellier, S\'ebastien Ferr\'e
|
KG-MDL: Mining Graph Patterns in Knowledge Graphs with the MDL Principle
| null | null | null | null |
cs.AI cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, increasingly more data are available as knowledge graphs (KGs).
While this data model supports advanced reasoning and querying, KGs remain
difficult to mine due to their size and complexity. Graph mining approaches can
be used to extract patterns from KGs. However, this presents two main issues.
First, graph mining approaches tend to extract too many patterns for a human
analyst to interpret (pattern explosion). Second, real-life KGs tend to differ
from the graphs usually treated in graph mining: they are multigraphs, their
vertex degrees tend to follow a power-law, and the way in which they model
knowledge can produce spurious patterns. Recently, a graph mining approach
named GraphMDL+ has been proposed to tackle the problem of pattern explosion,
using the Minimum Description Length (MDL) principle. However, GraphMDL+, like
other graph mining approaches, is not suited for KGs without adaptations. In
this paper we propose KG-MDL, a graph pattern mining approach based on the MDL
principle that, given a KG, generates a human-sized and descriptive set of
graph patterns, and does so in a parameter-less and anytime way. We report on
experiments on medium-sized KGs showing that our approach generates sets of
patterns that are both small enough to be interpreted by humans and descriptive
of the KG. We show that the extracted patterns highlight relevant
characteristics of the data: both of the schema used to create the data, and of
the concrete facts it contains. We also discuss the issues related to mining
graph patterns on knowledge graphs, as opposed to other types of graph data.
|
[
{
"version": "v1",
"created": "Fri, 22 Sep 2023 14:52:10 GMT"
}
] | 2023-09-25T00:00:00 |
[
[
"Bariatti",
"Francesco",
""
],
[
"Cellier",
"Peggy",
""
],
[
"Ferré",
"Sébastien",
""
]
] |
new_dataset
| 0.994032 |