Columns and types: id (string, 9-10 chars); submitter (string, 2-52 chars, nullable ⌀); authors (string, 4-6.51k chars); title (string, 4-246 chars); comments (string, 1-523 chars, nullable ⌀); journal-ref (string, 4-345 chars, nullable ⌀); doi (string, 11-120 chars, nullable ⌀); report-no (string, 2-243 chars, nullable ⌀); categories (string, 5-98 chars); license (string, 9 classes); abstract (string, 33-3.33k chars); versions (list); update_date (timestamp[s]); authors_parsed (list); prediction (string, 1 class); probability (float64, 0.95-1).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
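Below, each record repeats these fields in order. As a minimal sketch, assuming the records are exported as JSON Lines under a hypothetical file name, they can be loaded and filtered like this:

```python
# Sketch only: the file name is an assumption; this dump does not name its source file.
import pandas as pd

df = pd.read_json("arxiv_metadata_with_predictions.jsonl", lines=True)

# Keep rows classified as introducing a new dataset with high confidence.
new_datasets = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.95)]
print(new_datasets[["id", "title", "categories"]].head())
```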
2305.13469
|
Saurabh Srivastava
|
Saurabh Srivastava, Gaurav Singh, Shou Matsumoto, Ali Raz, Paulo
Costa, Joshua Poore, Ziyu Yao
|
MAILEX: Email Event and Argument Extraction
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present the first dataset, MAILEX, for performing event
extraction from conversational email threads. To this end, we first propose a
new taxonomy covering 10 event types and 76 arguments in the email domain. Our
final dataset includes $\sim$4K emails annotated with $\sim$9K event instances.
To understand the task challenges, we conducted a series of experiments
comparing two commonly-seen lines of approaches for event extraction, i.e.,
sequence labeling and generative end-to-end extraction (including few-shot
GPT-3.5). Our results showed that the task of email event extraction is far
from being addressed, due to challenges lying in, e.g., extracting
non-continuous, shared trigger spans, extracting non-named entity arguments,
and modeling the email conversational history. Our work thus calls for more
investigation of this domain-specific event extraction task in the
future.\footnote{The source code and dataset can be obtained from
\url{https://github.com/salokr/Email-Event-Extraction}.}
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 20:28:23 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Srivastava",
"Saurabh",
""
],
[
"Singh",
"Gaurav",
""
],
[
"Matsumoto",
"Shou",
""
],
[
"Raz",
"Ali",
""
],
[
"Costa",
"Paulo",
""
],
[
"Poore",
"Joshua",
""
],
[
"Yao",
"Ziyu",
""
]
] |
new_dataset
| 0.999297 |
2305.13486
|
Pengyu Nie
|
Yu Liu, Zachary Thurston, Alan Han, Pengyu Nie, Milos Gligoric,
Owolabi Legunsen
|
pytest-inline: An Inline Testing Tool for Python
|
Accepted as a tool demo paper at ICSE DEMO 2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present pytest-inline, the first inline testing framework for Python. We
recently proposed inline tests to make it easier to test individual program
statements. But, there is no framework-level support for developers to write
inline tests in Python. To fill this gap, we design and implement pytest-inline
as a plugin for pytest, the most popular Python testing framework. Using
pytest-inline, a developer can write an inline test by assigning test inputs to
variables in a target statement and specifying the expected test output. Then,
pytest-inline runs each inline test and fails if the target statement's output
does not match the expected output. In this paper, we describe our design of
pytest-inline, the testing features that it provides, and the intended use
cases. Our evaluation on inline tests that we wrote for 80 target statements
from 31 open-source Python projects shows that using pytest-inline incurs
negligible overhead, at 0.012x. pytest-inline is integrated into the pytest-dev
organization, and a video demo is at
https://www.youtube.com/watch?v=pZgiAxR_uJg.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 20:58:44 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Liu",
"Yu",
""
],
[
"Thurston",
"Zachary",
""
],
[
"Han",
"Alan",
""
],
[
"Nie",
"Pengyu",
""
],
[
"Gligoric",
"Milos",
""
],
[
"Legunsen",
"Owolabi",
""
]
] |
new_dataset
| 0.999496 |
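The pytest-inline record above describes writing an inline test by binding test inputs to variables in a target statement and stating the expected output. A minimal sketch of that idea follows; the import path and method names are assumptions inferred from the abstract, not a verified API reference.

```python
# Illustrative only: `inline`, `itest`, `given`, and `check_eq` are assumed names.
from inline import itest

def slugify(title: str) -> str:
    cleaned = title.strip().lower().replace(" ", "-")
    # Inline test attached to the statement above: bind a test input to the
    # input variable and state the expected output; the plugin runs it as a test.
    itest().given(title, " Hello World ").check_eq(cleaned, "hello-world")
    return cleaned
```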
2305.13504
|
Dharma Kc
|
Dharma KC, Clayton T. Morrison
|
Neural Machine Translation for Code Generation
|
33 pages, 1 figure
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Neural machine translation (NMT) methods developed for natural language
processing have been shown to be highly successful in automating translation
from one natural language to another. Recently, these NMT methods have been
adapted to the generation of program code. In NMT for code generation, the task
is to generate output source code that satisfies constraints expressed in the
input. In the literature, a variety of different input scenarios have been
explored, including generating code based on natural language description,
lower-level representations such as binary or assembly (neural decompilation),
partial representations of source code (code completion and repair), and source
code in another language (code translation). In this paper we survey the NMT
for code generation literature, cataloging the variety of methods that have
been explored according to input and output representations, model
architectures, optimization techniques used, data sets, and evaluation methods.
We discuss the limitations of existing methods and future research directions.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 21:43:12 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"KC",
"Dharma",
""
],
[
"Morrison",
"Clayton T.",
""
]
] |
new_dataset
| 0.970429 |
2305.13565
|
Sangli Teng
|
Sangli Teng, Ashkan Jasour, Ram Vasudevan, Maani Ghaffari
|
Convex Geometric Motion Planning on Lie Groups via Moment Relaxation
|
Accepted to Robotics: Science and Systems (RSS), 2023
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reports a novel result: with proper robot models on matrix Lie
groups, one can formulate the kinodynamic motion planning problem for rigid
body systems as \emph{exact} polynomial optimization problems that can be
relaxed as semidefinite programming (SDP). Due to the nonlinear rigid body
dynamics, the motion planning problem for rigid body systems is nonconvex.
Existing global optimization-based methods do not properly deal with the
configuration space of the 3D rigid body; thus, they do not scale well to
long-horizon planning problems. We use Lie groups as the configuration space in
our formulation and apply the variational integrator to formulate the forced
rigid body systems as quadratic polynomials. Then we leverage Lasserre's
hierarchy to obtain the globally optimal solution via SDP. By constructing the
motion planning problem in a sparse manner, the results show that the proposed
algorithm has \emph{linear} complexity with respect to the planning horizon.
This paper demonstrates the proposed method can provide rank-one optimal
solutions at relaxation order two for most of the testing cases of 1) 3D drone
landing using the full dynamics model and 2) inverse kinematics for serial
manipulators.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 00:42:17 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Teng",
"Sangli",
""
],
[
"Jasour",
"Ashkan",
""
],
[
"Vasudevan",
"Ram",
""
],
[
"Ghaffari",
"Maani",
""
]
] |
new_dataset
| 0.967012 |
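The moment-relaxation abstract above relies on lifting a polynomial optimization problem to a semidefinite program. The toy sketch below shows only that machinery (a moment matrix constrained to be positive semidefinite) with CVXPY; it is not the paper's Lie-group formulation.

```python
import cvxpy as cp

# Toy problem: minimize x^2 - 2x. Lift to moments m1 ~ x, m2 ~ x^2 and require
# the moment matrix M = [[1, m1], [m1, m2]] to be positive semidefinite.
M = cp.Variable((2, 2), symmetric=True)
prob = cp.Problem(cp.Minimize(M[1, 1] - 2 * M[0, 1]),
                  [M >> 0, M[0, 0] == 1])
prob.solve()
print("relaxed optimum:", prob.value)   # -1.0, matching the true minimum at x = 1
print("recovered x:", M.value[0, 1])    # the rank-one moment matrix recovers x
```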
2305.13573
|
Jintang Li
|
Sheng Tian, Jihai Dong, Jintang Li, Wenlong Zhao, Xiaolong Xu, Baokun
wang, Bowen Song, Changhua Meng, Tianyi Zhang, Liang Chen
|
SAD: Semi-Supervised Anomaly Detection on Dynamic Graphs
|
Accepted to IJCAI'23. Code will be available at
https://github.com/D10Andy/SAD
| null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anomaly detection aims to distinguish abnormal instances that deviate
significantly from the majority of benign ones. As instances that appear in the
real world are naturally connected and can be represented with graphs, graph
neural networks become increasingly popular in tackling the anomaly detection
problem. Despite the promising results, research on anomaly detection has
almost exclusively focused on static graphs while the mining of anomalous
patterns from dynamic graphs is rarely studied but has significant application
value. In addition, anomaly detection is typically tackled from semi-supervised
perspectives due to the lack of sufficient labeled data. However, most proposed
methods are limited to merely exploiting labeled data, leaving a large number
of unlabeled samples unexplored. In this work, we present semi-supervised
anomaly detection (SAD), an end-to-end framework for anomaly detection on
dynamic graphs. By a combination of a time-equipped memory bank and a
pseudo-label contrastive learning module, SAD is able to fully exploit the
potential of large unlabeled samples and uncover underlying anomalies on
evolving graph streams. Extensive experiments on four real-world datasets
demonstrate that SAD efficiently discovers anomalies from dynamic graphs and
outperforms existing advanced methods even when provided with only a little
labeled data.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 01:05:34 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Tian",
"Sheng",
""
],
[
"Dong",
"Jihai",
""
],
[
"Li",
"Jintang",
""
],
[
"Zhao",
"Wenlong",
""
],
[
"Xu",
"Xiaolong",
""
],
[
"wang",
"Baokun",
""
],
[
"Song",
"Bowen",
""
],
[
"Meng",
"Changhua",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Chen",
"Liang",
""
]
] |
new_dataset
| 0.962014 |
2305.13602
|
Haoqin Tu
|
Haoqin Tu, Yitong Li, Fei Mi, Zhongliang Yang
|
ReSee: Responding through Seeing Fine-grained Visual Knowledge in
Open-domain Dialogue
|
15 pages, preprint
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Incorporating visual knowledge into text-only dialogue systems has become a
potential direction to imitate the way humans think, imagine, and communicate.
However, existing multimodal dialogue systems are either confined by the scale
and quality of available datasets or the coarse concept of visual knowledge. To
address these issues, we provide a new paradigm of constructing multimodal
dialogues as well as two datasets extended from text-only dialogues under such
paradigm (ReSee-WoW, ReSee-DD). We propose to explicitly split the visual
knowledge into finer granularity (``turn-level'' and ``entity-level''). To
further boost the accuracy and diversity of augmented visual information, we
retrieve them from the Internet or a large image dataset. To demonstrate the
superiority and universality of the provided visual knowledge, we propose a
simple but effective framework ReSee to add visual representation into vanilla
dialogue models by modality concatenations. We also conduct extensive
experiments and ablations w.r.t. different model configurations and visual
knowledge settings. Encouraging empirical results not only demonstrate the
effectiveness of introducing visual knowledge at both the entity and turn levels
but also verify that the proposed model ReSee outperforms several state-of-the-art
methods on automatic and human evaluations. By leveraging text and vision
knowledge, ReSee can produce informative responses with real-world visual
concepts.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 02:08:56 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Tu",
"Haoqin",
""
],
[
"Li",
"Yitong",
""
],
[
"Mi",
"Fei",
""
],
[
"Yang",
"Zhongliang",
""
]
] |
new_dataset
| 0.992007 |
2305.13611
|
Yue Lu
|
Congqi Cao, Yue Lu, Peng Wang and Yanning Zhang
|
A New Comprehensive Benchmark for Semi-supervised Video Anomaly
Detection and Anticipation
|
CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Semi-supervised video anomaly detection (VAD) is a critical task in the
intelligent surveillance system. However, an essential type of anomaly in VAD
named scene-dependent anomaly has not received the attention of researchers.
Moreover, there is no research investigating anomaly anticipation, a more
significant task for preventing the occurrence of anomalous events. To this
end, we propose a new comprehensive dataset, NWPU Campus, containing 43 scenes,
28 classes of abnormal events, and 16 hours of videos. At present, it is the
largest semi-supervised VAD dataset with the largest number of scenes and
classes of anomalies, the longest duration, and the only one considering the
scene-dependent anomaly. Meanwhile, it is also the first dataset proposed for
video anomaly anticipation. We further propose a novel model capable of
detecting and anticipating anomalous events simultaneously. Compared with 7
outstanding VAD algorithms in recent years, our method can cope with
scene-dependent anomaly detection and anomaly anticipation both well, achieving
state-of-the-art performance on ShanghaiTech, CUHK Avenue, IITB Corridor and
the newly proposed NWPU Campus datasets consistently. Our dataset and code are
available at: https://campusvad.github.io.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 02:20:12 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Cao",
"Congqi",
""
],
[
"Lu",
"Yue",
""
],
[
"Wang",
"Peng",
""
],
[
"Zhang",
"Yanning",
""
]
] |
new_dataset
| 0.999811 |
2305.13614
|
Siyuan Chen
|
Siyuan Chen, Mengyue Wu, Kenny Q. Zhu, Kunyao Lan, Zhiling Zhang,
Lyuchun Cui
|
LLM-empowered Chatbots for Psychiatrist and Patient Simulation:
Application and Evaluation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Empowering chatbots in the field of mental health is receiving an increasing
amount of attention, yet the development and evaluation of chatbots in
psychiatric outpatient scenarios remain underexplored. In this work, we focus
on exploring the potential of ChatGPT in powering chatbots for psychiatrist and
patient simulation. We collaborate with psychiatrists to identify objectives
and iteratively develop the dialogue system to closely align with real-world
scenarios. In the evaluation experiments, we recruit real psychiatrists and
patients to engage in diagnostic conversations with the chatbots, collecting
their ratings for assessment. Our findings demonstrate the feasibility of using
ChatGPT-powered chatbots in psychiatric scenarios and explore the impact of
prompt designs on chatbot behavior and user experience.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 02:25:01 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Chen",
"Siyuan",
""
],
[
"Wu",
"Mengyue",
""
],
[
"Zhu",
"Kenny Q.",
""
],
[
"Lan",
"Kunyao",
""
],
[
"Zhang",
"Zhiling",
""
],
[
"Cui",
"Lyuchun",
""
]
] |
new_dataset
| 0.980772 |
2305.13627
|
Samuel Cahyawijaya
|
Samuel Cahyawijaya, Holy Lovenia, Tiezheng Yu, Willy Chung, Pascale
Fung
|
Instruct-Align: Teaching Novel Languages to LLMs through
Alignment-based Cross-Lingual Instruction
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Instruction-tuned large language models (LLMs) have shown remarkable
generalization capability over multiple tasks in multiple languages.
Nevertheless, their generalization towards different languages varies,
especially for underrepresented or even unseen languages. Prior
works on adapting new languages to LLMs find that naively adapting new
languages to instruction-tuned LLMs will result in catastrophic forgetting,
which in turn causes the loss of multitasking ability in these LLMs. To tackle
this, we propose the Instruct-Align a.k.a (IA)$^1$ framework, which enables
instruction-tuned LLMs to learn cross-lingual alignment between unseen and
previously learned languages via alignment-based cross-lingual
instruction-tuning. Our preliminary result on BLOOMZ-560M shows that (IA)$^1$
is able to learn a new language effectively with only a limited amount of
parallel data and at the same time prevent catastrophic forgetting by applying
continual instruction-tuning through experience replay. Our work contributes to
the progression of language adaptation methods for instruction-tuned LLMs and
opens up the possibility of adapting underrepresented low-resource languages
into existing instruction-tuned LLMs. Our code will be publicly released upon
acceptance.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 02:51:34 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Cahyawijaya",
"Samuel",
""
],
[
"Lovenia",
"Holy",
""
],
[
"Yu",
"Tiezheng",
""
],
[
"Chung",
"Willy",
""
],
[
"Fung",
"Pascale",
""
]
] |
new_dataset
| 0.983765 |
2305.13631
|
Weixi Feng
|
Siqi Liu, Weixi Feng, Wenhu Chen, William Yang Wang
|
EDIS: Entity-Driven Image Search over Multimodal Web Content
| null | null | null | null |
cs.CL cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Making image retrieval methods practical for real-world search applications
requires significant progress in dataset scales, entity comprehension, and
multimodal information fusion. In this work, we introduce
\textbf{E}ntity-\textbf{D}riven \textbf{I}mage \textbf{S}earch (EDIS), a
challenging dataset for cross-modal image search in the news domain. EDIS
consists of 1 million web images from actual search engine results and curated
datasets, with each image paired with a textual description. Unlike datasets
that assume a small set of single-modality candidates, EDIS reflects real-world
web image search scenarios by including a million multimodal image-text pairs
as candidates. EDIS encourages the development of retrieval models that
simultaneously address cross-modal information fusion and matching. To achieve
accurate ranking results, a model must: 1) understand named entities and events
from text queries, 2) ground entities onto images or text descriptions, and 3)
effectively fuse textual and visual representations. Our experimental results
show that EDIS challenges state-of-the-art methods with dense entities and a
large-scale candidate set. The ablation study also proves that fusing textual
features with visual features is critical in improving retrieval results.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 02:59:19 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Liu",
"Siqi",
""
],
[
"Feng",
"Weixi",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.999627 |
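EDIS asks retrieval models to ground entity-rich text queries against multimodal candidates. A hedged sketch of that kind of cross-modal scoring with a public CLIP checkpoint is shown below; the image path and query are placeholders, and this is not the paper's own model.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("candidate.jpg")                    # hypothetical candidate image
query = "politician visiting a flood-affected region"  # entity-rich text query
inputs = processor(text=[query], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
print("query-image similarity:", outputs.logits_per_image.item())
```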
2305.13646
|
Tirthankar Roy
|
Sinan Rasiya Koya, Kanak Kanti Kar, Shivendra Srivastava, Tsegaye
Tadesse, Mark Svoboda, Tirthankar Roy
|
An Autoencoder-based Snow Drought Index
| null | null | null | null |
cs.LG physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In several regions across the globe, snow has a significant impact on
hydrology. The amounts of water that infiltrate the ground and flow as runoff
are driven by the melting of snow. Therefore, it is crucial to study the
magnitude and effect of snowmelt. Snow droughts, resulting from reduced snow
storage, can drastically impact the water supplies in basins where snow
predominates, such as in the western United States. Hence, it is important to
detect the time and severity of snow droughts efficiently. We propose Snow
Drought Response Index or SnoDRI, a novel indicator that could be used to
identify and quantify snow drought occurrences. Our index is calculated using
cutting-edge ML algorithms from various snow-related variables. The
self-supervised learning of an autoencoder is combined with mutual information
in the model. In this study, we use random forests for feature extraction for
SnoDRI and assess the importance of each variable. We use reanalysis data
(NLDAS-2) from 1981 to 2021 for the Pacific United States to study the efficacy
of the new snow drought index. We evaluate the index by confirming the
coincidence of its interpretation and the actual snow drought incidents.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 03:41:45 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Koya",
"Sinan Rasiya",
""
],
[
"Kar",
"Kanak Kanti",
""
],
[
"Srivastava",
"Shivendra",
""
],
[
"Tadesse",
"Tsegaye",
""
],
[
"Svoboda",
"Mark",
""
],
[
"Roy",
"Tirthankar",
""
]
] |
new_dataset
| 0.999081 |
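The SnoDRI abstract mentions using random forests to assess the importance of each snow-related variable. A minimal, self-contained sketch of that step on synthetic data follows; the variable names are illustrative, not the study's actual inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # stand-ins for e.g. SWE, snowfall, temperature, runoff
y = 0.7 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.1, size=500)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, score in zip(["swe", "snowfall", "temperature", "runoff"],
                       forest.feature_importances_):
    print(f"{name}: {score:.3f}")
```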
2305.13675
|
Timothy Schott
|
Tim Schott, Daniel Furman, and Shreshta Bhat
|
Polyglot or Not? Measuring Multilingual Encyclopedic Knowledge Retrieval
from Foundation Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we evaluate the capacity for foundation models to retrieve
encyclopedic knowledge across a wide range of languages, topics, and contexts.
To support this effort, we 1) produce a new dataset containing 303k factual
associations in 20 different languages, 2) formulate a new counterfactual
knowledge assessment, Polyglot or Not, and 3) benchmark 5 foundation models in
a multilingual setting and a diverse set of 20 models in an English-only
setting. We observed significant accuracy differences in models of interest,
with Meta's LLaMA topping both the multilingual and English-only assessments.
Error analysis reveals a significant deficiency in LLaMA's ability to retrieve
facts in languages written in the Cyrillic script and gaps in its understanding
of facts based on the location and gender of entailed subjects. Ultimately, we
argue that the promise of utilizing foundation language models as bona fide
polyglots is greatly diminished when they are tasked with retrieving
information in languages other than English. Supporting code
(https://github.com/daniel-furman/Polyglot-or-Not) and dataset
(https://huggingface.co/datasets/Polyglot-or-Not/Fact-Completion) are openly
released.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 04:31:39 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Schott",
"Tim",
""
],
[
"Furman",
"Daniel",
""
],
[
"Bhat",
"Shreshta",
""
]
] |
new_dataset
| 0.999755 |
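The abstract above links a public Hugging Face dataset. Assuming that repository id is current, it can be pulled with the `datasets` library; split and column names are not specified here, so the sketch only inspects them.

```python
from datasets import load_dataset

# Repository id taken from the abstract's URL; splits/columns are inspected, not assumed.
facts = load_dataset("Polyglot-or-Not/Fact-Completion")
print(facts)
```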
2305.13700
|
Chenglong Wang
|
Chenglong Wang, Jiangyan Yi, Jianhua Tao, Chuyuan Zhang, Shuai Zhang
and Xun Chen
|
Detection of Cross-Dataset Fake Audio Based on Prosodic and
Pronunciation Features
|
Interspeech2023
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing fake audio detection systems perform well in in-domain testing, but
still face many challenges in out-of-domain testing. This is due to the
mismatch between the training and test data, as well as the poor
generalizability of features extracted from limited views. To address this, we
propose multi-view features for fake audio detection, which aim to capture more
generalized features from prosodic, pronunciation, and wav2vec dimensions.
Specifically, the phoneme duration features are extracted from a pre-trained
model based on a large amount of speech data. For the pronunciation features, a
Conformer-based phoneme recognition model is first trained, keeping the
acoustic encoder part as a deeply embedded feature extractor. Furthermore, the
prosodic and pronunciation features are fused with wav2vec features based on an
attention mechanism to improve the generalization of fake audio detection
models. Results show that the proposed approach achieves significant
performance gains in several cross-dataset experiments.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 05:27:39 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Wang",
"Chenglong",
""
],
[
"Yi",
"Jiangyan",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Zhang",
"Chuyuan",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Chen",
"Xun",
""
]
] |
new_dataset
| 0.996977 |
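The fake-audio record above fuses prosodic and pronunciation features with wav2vec features via attention. A rough PyTorch sketch of attention-based fusion in that spirit follows; dimensions and the specific fusion pattern are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

wav2vec_feats = torch.randn(1, 200, 256)   # (batch, frames, dim)
prosody_feats = torch.randn(1, 200, 256)
pronun_feats = torch.randn(1, 200, 256)

attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
aux = torch.cat([prosody_feats, pronun_feats], dim=1)   # stack auxiliary views
fused, _ = attn(query=wav2vec_feats, key=aux, value=aux)
print(fused.shape)   # torch.Size([1, 200, 256])
```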
2305.13703
|
Vered Shwartz
|
EunJeong Hwang and Vered Shwartz
|
MemeCap: A Dataset for Captioning and Interpreting Memes
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memes are a widely popular tool for web users to express their thoughts using
visual metaphors. Understanding memes requires recognizing and interpreting
visual metaphors with respect to the text inside or around the meme, often
while employing background knowledge and reasoning abilities. We present the
task of meme captioning and release a new dataset, MemeCap. Our dataset
contains 6.3K memes along with the title of the post containing the meme, the
meme captions, the literal image caption, and the visual metaphors. Despite the
recent success of vision and language (VL) models on tasks such as image
captioning and visual question answering, our extensive experiments using
state-of-the-art VL models show that they still struggle with visual metaphors,
and perform substantially worse than humans.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 05:41:18 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Hwang",
"EunJeong",
""
],
[
"Shwartz",
"Vered",
""
]
] |
new_dataset
| 0.99982 |
2305.13713
|
Yuki Saito
|
Yuki Saito, Eiji Iimori, Shinnosuke Takamichi, Kentaro Tachibana,
Hiroshi Saruwatari
|
CALLS: Japanese Empathetic Dialogue Speech Corpus of Complaint Handling
and Attentive Listening in Customer Center
|
5 pages, accepted for INTERSPEECH2023
| null | null | null |
cs.SD cs.CL cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present CALLS, a Japanese speech corpus that considers phone calls in a
customer center as a new domain of empathetic spoken dialogue. The existing
STUDIES corpus covers only empathetic dialogue between a teacher and student in
a school. To extend the application range of empathetic dialogue speech
synthesis (EDSS), we designed our corpus to include the same female speaker as
the STUDIES teacher, acting as an operator in simulated phone calls. We
describe a corpus construction methodology and analyze the recorded speech. We
also conduct EDSS experiments using the CALLS and STUDIES corpora to
investigate the effect of domain differences. The results show that mixing the
two corpora during training causes biased improvements in the quality of
synthetic speech due to the different degrees of expressiveness. Our project
page of the corpus is http://sython.org/Corpus/STUDIES-2.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 06:04:50 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Saito",
"Yuki",
""
],
[
"Iimori",
"Eiji",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Tachibana",
"Kentaro",
""
],
[
"Saruwatari",
"Hiroshi",
""
]
] |
new_dataset
| 0.999559 |
2305.13724
|
Yuki Saito
|
Yuki Saito, Shinnosuke Takamichi, Eiji Iimori, Kentaro Tachibana,
Hiroshi Saruwatari
|
ChatGPT-EDSS: Empathetic Dialogue Speech Synthesis Trained from
ChatGPT-derived Context Word Embeddings
|
5 pages, accepted for INTERSPEECH 2023
| null | null | null |
cs.SD cs.CL cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose ChatGPT-EDSS, an empathetic dialogue speech synthesis (EDSS)
method using ChatGPT for extracting dialogue context. ChatGPT is a chatbot that
can deeply understand the content and purpose of an input prompt and
appropriately respond to the user's request. We focus on ChatGPT's reading
comprehension and introduce it to EDSS, a task of synthesizing speech that can
empathize with the interlocutor's emotion. Our method first gives chat history
to ChatGPT and asks it to generate three words representing the intention,
emotion, and speaking style for each line in the chat. Then, it trains an EDSS
model using the embeddings of ChatGPT-derived context words as the conditioning
features. The experimental results demonstrate that our method performs
comparably to ones using emotion labels or neural network-derived context
embeddings learned from chat histories. The collected ChatGPT-derived context
information is available at
https://sarulab-speech.github.io/demo_ChatGPT_EDSS/.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 06:19:37 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Saito",
"Yuki",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Iimori",
"Eiji",
""
],
[
"Tachibana",
"Kentaro",
""
],
[
"Saruwatari",
"Hiroshi",
""
]
] |
new_dataset
| 0.959514 |
2305.13733
|
Hongru Wang
|
Rui Wang, Hongru Wang, Fei Mi, Yi Chen, Ruifeng Xu, Kam-Fai Wong
|
Self-Critique Prompting with Large Language Models for Inductive
Instructions
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Numerous works have been proposed to improve or evaluate the capabilities of
large language models (LLMs) to fulfill user instructions. However, they neglect the
possibility that user inputs may inherently contain incorrect information due
to users' false beliefs or malicious intents. In this way, blindly adhering to
users' false content will cause deception and harm. To address this problem, we
propose a challenging benchmark consisting of Inductive Instructions (INDust)
to evaluate whether LLMs could resist these instructions. The INDust includes
15K instructions across three categories: Fact-Checking Instructions, Questions
based on False Premises, and Creative Instructions based on False Premises. Our
experiments on several strong LLMs reveal that current LLMs can be easily
deceived by INDust into generating misleading and malicious statements. Hence
we employ Self-Critique prompting to encourage LLMs to critique not only
themselves, as in previous works, but also the users, which shows remarkable
improvement in handling inductive instructions under both zero-shot and
few-shot settings.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 06:38:20 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Wang",
"Rui",
""
],
[
"Wang",
"Hongru",
""
],
[
"Mi",
"Fei",
""
],
[
"Chen",
"Yi",
""
],
[
"Xu",
"Ruifeng",
""
],
[
"Wong",
"Kam-Fai",
""
]
] |
new_dataset
| 0.951306 |
2305.13740
|
Yiming Ai
|
Yiming Ai, Zhiwei He, Kai Yu, Rui Wang
|
TeCS: A Dataset and Benchmark for Tense Consistency of Machine
Translation
|
10 pages, accepted in main conference of ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Tense inconsistency frequently occurs in machine translation. However, there
are few criteria to assess the model's mastery of tense prediction from a
linguistic perspective. In this paper, we present a parallel tense test set
containing 552 French-English utterances. We also introduce a corresponding
benchmark, tense prediction accuracy. With the tense test set and the
benchmark, researchers are able to measure the tense consistency performance of
machine translation systems for the first time.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 06:51:48 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Ai",
"Yiming",
""
],
[
"He",
"Zhiwei",
""
],
[
"Yu",
"Kai",
""
],
[
"Wang",
"Rui",
""
]
] |
new_dataset
| 0.999801 |
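TeCS introduces tense prediction accuracy as its benchmark metric. Read as a plain agreement ratio between predicted and reference tenses, it could be computed as in the sketch below; this is an interpretation of the metric, not the authors' released scorer.

```python
def tense_accuracy(predicted_tenses, reference_tenses):
    """Fraction of utterances whose predicted tense matches the reference tense."""
    assert len(predicted_tenses) == len(reference_tenses)
    correct = sum(p == r for p, r in zip(predicted_tenses, reference_tenses))
    return correct / len(reference_tenses)

print(tense_accuracy(["past", "present", "past"], ["past", "past", "past"]))  # ~0.667
```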
2305.13776
|
Rishabh Gupta
|
Rishabh Gupta, Shaily Desai, Manvi Goel, Anil Bandhakavi, Tanmoy
Chakraborty and Md. Shad Akhtar
|
Counterspeeches up my sleeve! Intent Distribution Learning and
Persistent Fusion for Intent-Conditioned Counterspeech Generation
|
ACL 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Counterspeech has been demonstrated to be an efficacious approach for
combating hate speech. While various conventional and controlled approaches
have been studied in recent years to generate counterspeech, a counterspeech
with a certain intent may not be sufficient in every scenario. Due to the
complex and multifaceted nature of hate speech, utilizing multiple forms of
counter-narratives with varying intents may be advantageous in different
circumstances. In this paper, we explore intent-conditioned counterspeech
generation. At first, we develop IntentCONAN, a diversified intent-specific
counterspeech dataset with 6831 counterspeeches conditioned on five intents,
i.e., informative, denouncing, question, positive, and humour. Subsequently, we
propose QUARC, a two-stage framework for intent-conditioned counterspeech
generation. QUARC leverages vector-quantized representations learned for each
intent category along with PerFuMe, a novel fusion module to incorporate
intent-specific information into the model. Our evaluation demonstrates that
QUARC outperforms several baselines by an average of 10% across evaluation
metrics. An extensive human evaluation supplements our hypothesis of better and
more appropriate responses than comparative systems.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 07:45:17 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Gupta",
"Rishabh",
""
],
[
"Desai",
"Shaily",
""
],
[
"Goel",
"Manvi",
""
],
[
"Bandhakavi",
"Anil",
""
],
[
"Chakraborty",
"Tanmoy",
""
],
[
"Akhtar",
"Md. Shad",
""
]
] |
new_dataset
| 0.994528 |
2305.13786
|
Viorica Patraucean Dr
|
Viorica P\u{a}tr\u{a}ucean, Lucas Smaira, Ankush Gupta, Adri\`a
Recasens Continente, Larisa Markeeva, Dylan Banarse, Skanda Koppula, Joseph
Heyward, Mateusz Malinowski, Yi Yang, Carl Doersch, Tatiana Matejovicova,
Yury Sulsky, Antoine Miech, Alex Frechette, Hanna Klimczak, Raphael Koster,
Junlin Zhang, Stephanie Winkler, Yusuf Aytar, Simon Osindero, Dima Damen,
Andrew Zisserman, Jo\~ao Carreira
|
Perception Test: A Diagnostic Benchmark for Multimodal Video Models
|
25 pages, 11 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel multimodal video benchmark - the Perception Test - to
evaluate the perception and reasoning skills of pre-trained multimodal models
(e.g. Flamingo, BEiT-3, or GPT-4). Compared to existing benchmarks that focus
on computational tasks (e.g. classification, detection or tracking), the
Perception Test focuses on skills (Memory, Abstraction, Physics, Semantics) and
types of reasoning (descriptive, explanatory, predictive, counterfactual)
across video, audio, and text modalities, to provide a comprehensive and
efficient evaluation tool. The benchmark probes pre-trained models for their
transfer capabilities, in a zero-shot / few-shot or limited finetuning regime.
For these purposes, the Perception Test introduces 11.6k real-world videos, 23s
average length, designed to show perceptually interesting situations, filmed by
around 100 participants worldwide. The videos are densely annotated with six
types of labels (multiple-choice and grounded video question-answers, object
and point tracks, temporal action and sound segments), enabling both language
and non-language evaluations. The fine-tuning and validation splits of the
benchmark are publicly available (CC-BY license), in addition to a challenge
server with a held-out test split. Human baseline results compared to
state-of-the-art video QA models show a significant gap in performance (91.4%
vs 43.6%), suggesting that there is significant room for improvement in
multimodal video understanding.
Dataset, baselines code, and challenge server are available at
https://github.com/deepmind/perception_test
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 07:54:37 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Pătrăucean",
"Viorica",
""
],
[
"Smaira",
"Lucas",
""
],
[
"Gupta",
"Ankush",
""
],
[
"Continente",
"Adrià Recasens",
""
],
[
"Markeeva",
"Larisa",
""
],
[
"Banarse",
"Dylan",
""
],
[
"Koppula",
"Skanda",
""
],
[
"Heyward",
"Joseph",
""
],
[
"Malinowski",
"Mateusz",
""
],
[
"Yang",
"Yi",
""
],
[
"Doersch",
"Carl",
""
],
[
"Matejovicova",
"Tatiana",
""
],
[
"Sulsky",
"Yury",
""
],
[
"Miech",
"Antoine",
""
],
[
"Frechette",
"Alex",
""
],
[
"Klimczak",
"Hanna",
""
],
[
"Koster",
"Raphael",
""
],
[
"Zhang",
"Junlin",
""
],
[
"Winkler",
"Stephanie",
""
],
[
"Aytar",
"Yusuf",
""
],
[
"Osindero",
"Simon",
""
],
[
"Damen",
"Dima",
""
],
[
"Zisserman",
"Andrew",
""
],
[
"Carreira",
"João",
""
]
] |
new_dataset
| 0.998779 |
2305.13819
|
Yi Huang
|
Yi Huang, Jiancheng Huang, Jianzhuang Liu, Yu Dong, Jiaxi Lv, Shifeng
Chen
|
WaveDM: Wavelet-Based Diffusion Models for Image Restoration
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent diffusion-based methods for many image restoration tasks outperform
traditional models, but they encounter the problem of long inference times. To
tackle it, this paper proposes a Wavelet-Based Diffusion Model (WaveDM) with an
Efficient Conditional Sampling (ECS) strategy. WaveDM learns the distribution
of clean images in the wavelet domain conditioned on the wavelet spectrum of
degraded images after wavelet transform, which is more time-saving in each step
of sampling than modeling in the spatial domain. In addition, ECS follows the
same procedure as the deterministic implicit sampling in the initial sampling
period and then stops to predict clean images directly, which reduces the
number of total sampling steps to around 5. Evaluations on four benchmark
datasets including image raindrop removal, defocus deblurring, demoir\'eing,
and denoising demonstrate that WaveDM achieves state-of-the-art performance
with the efficiency that is comparable to traditional one-pass methods and over
100 times faster than existing image restoration methods using vanilla
diffusion models.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 08:41:04 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Huang",
"Yi",
""
],
[
"Huang",
"Jiancheng",
""
],
[
"Liu",
"Jianzhuang",
""
],
[
"Dong",
"Yu",
""
],
[
"Lv",
"Jiaxi",
""
],
[
"Chen",
"Shifeng",
""
]
] |
new_dataset
| 0.994186 |
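WaveDM models clean images in the wavelet domain rather than the spatial domain. The sketch below only illustrates that domain change with PyWavelets on a random array; the diffusion model itself is not reproduced.

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")   # low-frequency + 3 high-frequency sub-bands
print(cA.shape, cH.shape)                     # each sub-band is 128x128
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(image, reconstructed))      # the transform is invertible
```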
2305.13844
|
Shohei Higashiyama
|
Shohei Higashiyama, Hiroki Ouchi, Hiroki Teranishi, Hiroyuki Otomo,
Yusuke Ide, Aitaro Yamamoto, Hiroyuki Shindo, Yuki Matsuda, Shoko Wakamiya,
Naoya Inoue, Ikuya Yamada, Taro Watanabe
|
Arukikata Travelogue Dataset with Geographic Entity Mention,
Coreference, and Link Annotation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Geoparsing is a fundamental technique for analyzing geo-entity information in
text. We focus on document-level geoparsing, which considers geographic
relatedness among geo-entity mentions, and present a Japanese travelogue
dataset designed for evaluating document-level geoparsing systems. Our dataset
comprises 200 travelogue documents with rich geo-entity information: 12,171
mentions, 6,339 coreference clusters, and 2,551 geo-entities linked to
geo-database entries.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 09:07:42 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Higashiyama",
"Shohei",
""
],
[
"Ouchi",
"Hiroki",
""
],
[
"Teranishi",
"Hiroki",
""
],
[
"Otomo",
"Hiroyuki",
""
],
[
"Ide",
"Yusuke",
""
],
[
"Yamamoto",
"Aitaro",
""
],
[
"Shindo",
"Hiroyuki",
""
],
[
"Matsuda",
"Yuki",
""
],
[
"Wakamiya",
"Shoko",
""
],
[
"Inoue",
"Naoya",
""
],
[
"Yamada",
"Ikuya",
""
],
[
"Watanabe",
"Taro",
""
]
] |
new_dataset
| 0.999749 |
2305.13858
|
Yufei Xie
|
Yufei Xie, Shaoman Li and Penghui Lin
|
Producing a Standard Dataset of Speed Climbing Training Videos Using
Deep Learning Techniques
|
2023 3rd International Conference on Innovative Talents Training and
Sustainable Development
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This dissertation presents a methodology for recording speed climbing
training sessions with multiple cameras and annotating the videos with relevant
data, including body position, hand and foot placement, and timing. The
annotated data is then analyzed using deep learning techniques to create a
standard dataset of speed climbing training videos. The results demonstrate the
potential of the new dataset for improving speed climbing training and
research, including identifying areas for improvement, creating personalized
training plans, and analyzing the effects of different training methods. The
findings will also be applied to the training process of the Jiangxi climbing
team through further empirical research to validate them and to further
explore the feasibility of this study.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 09:27:17 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Xie",
"Yufei",
""
],
[
"Li",
"Shaoman",
""
],
[
"Lin",
"Penghui",
""
]
] |
new_dataset
| 0.989627 |
2305.13876
|
Taiki Miyanishi
|
Taiki Miyanishi, Daichi Azuma, Shuhei Kurita, Motoki Kawanabe
|
Cross3DVG: Baseline and Dataset for Cross-Dataset 3D Visual Grounding on
Different RGB-D Scans
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Cross3DVG, a novel task for cross-dataset visual grounding in 3D
scenes, revealing that existing 3D visual grounding models trained on
restricted 3D resources easily overfit to a specific 3D dataset. To
facilitate Cross3DVG, we have created a large-scale 3D visual grounding dataset
containing more than 63k diverse descriptions of 3D objects within 1,380 indoor
RGB-D scans from 3RScan with human annotations, paired with the existing 52k
descriptions on ScanRefer. We perform Cross3DVG by training a model on the
source 3D visual grounding dataset and then evaluating it on the target dataset
constructed in different ways (e.g., different sensors, 3D reconstruction
methods, and language annotators) without using target labels. We conduct
comprehensive experiments using established visual grounding models, as well as
a CLIP-based 2D-3D integration method, designed to bridge the gaps between 3D
datasets. By performing Cross3DVG tasks, we found that (i) cross-dataset 3D
visual grounding has significantly lower performance than learning and
evaluation with a single dataset, suggesting much room for improvement in
cross-dataset generalization of 3D visual grounding, (ii) better detectors and
transformer-based localization modules for 3D grounding are beneficial for
enhancing 3D grounding performance and (iii) fusing 2D-3D data using CLIP
demonstrates further performance improvements. Our Cross3DVG task will provide
a benchmark for developing robust 3D visual grounding models capable of
handling diverse 3D scenes while leveraging deep language understanding.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 09:52:49 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Miyanishi",
"Taiki",
""
],
[
"Azuma",
"Daichi",
""
],
[
"Kurita",
"Shuhei",
""
],
[
"Kawanabe",
"Motoki",
""
]
] |
new_dataset
| 0.999825 |
2305.13877
|
Arseny Moskvichev
|
Arseny Moskvichev and Ky-Vinh Mai
|
Narrative XL: A Large-scale Dataset For Long-Term Memory Models
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite their tremendous successes, most large language models do not have
any long-term memory mechanisms, which restricts their applications. Overcoming
this limitation would not only require changes to the typical transformer
architectures or training procedures, but also a dataset on which these new
models could be trained and evaluated. We argue that existing resources lack a
few key properties, and that at present, there are no naturalistic datasets of
sufficient scale to train (and not only evaluate) long-term memory language
models. We then present our solution that capitalizes on the advances in
short-term memory language models to create such a dataset. Using GPT 3.5, we
summarized each scene in 1500 hand-curated books from Project Gutenberg, which
resulted in approximately 150 scene-level summaries per book. We then created a
number of reading comprehension questions based on these summaries, including
three types of multiple-choice scene recognition questions, as well as
free-form narrative reconstruction questions. Each book is thus associated with
more than 500 reading comprehension questions. Crucially, most questions have a
known ``retention demand'', indicating how long-term of a memory is needed to
answer it, which should aid long-term memory performance evaluation. We
validate our data in three small-scale experiments: one with human labelers,
and two with existing language models. We show that our questions 1) adequately
represent the source material 2) can be used to diagnose the model's memory
capacity 3) are not trivial for modern language models even when the memory
demand does not exceed those models' context lengths. Lastly, we provide our
code which can be used to further expand the dataset in an automated manner.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 09:55:32 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Moskvichev",
"Arseny",
""
],
[
"Mai",
"Ky-Vinh",
""
]
] |
new_dataset
| 0.999587 |
2305.13884
|
Thanh Le-Cong
|
Truong Giang Nguyen, Thanh Le-Cong, Hong Jin Kang, Ratnadira
Widyasari, Chengran Yang, Zhipeng Zhao, Bowen Xu, Jiayuan Zhou, Xin Xia,
Ahmed E. Hassan, Xuan-Bach D. Le, David Lo
|
Multi-Granularity Detector for Vulnerability Fixes
| null |
IEEE Transactions on Software Engineering, 2023
| null | null |
cs.CR cs.AI cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
With the increasing reliance on Open Source Software, users are exposed to
third-party library vulnerabilities. Software Composition Analysis (SCA) tools
have been created to alert users of such vulnerabilities. SCA requires the
identification of vulnerability-fixing commits. Prior works have proposed
methods that can automatically identify such vulnerability-fixing commits.
However, identifying such commits is highly challenging, as only a very small
minority of commits are vulnerability fixing. Moreover, code changes can be
noisy and difficult to analyze. We observe that noise can occur at different
levels of detail, making it challenging to detect vulnerability fixes
accurately.
To address these challenges and boost the effectiveness of prior works, we
propose MiDas (Multi-Granularity Detector for Vulnerability Fixes). Unlike
prior works, MiDas constructs different neural networks for each level of code
change granularity, corresponding to commit-level, file-level, hunk-level, and
line-level, following their natural organization. It then utilizes an ensemble
model that combines all base models to generate the final prediction. This
design allows MiDas to better handle the noisy and highly imbalanced nature of
vulnerability-fixing commit data. Additionally, to reduce the human effort
required to inspect code changes, we have designed an effort-aware adjustment
for MiDas's outputs based on commit length. The evaluation results demonstrate
that MiDas outperforms the current state-of-the-art baseline in terms of AUC by
4.9% and 13.7% on Java and Python-based datasets, respectively. Furthermore, in
terms of two effort-aware metrics, EffortCost@L and Popt@L, MiDas also
outperforms the state-of-the-art baseline, achieving improvements of up to
28.2% and 15.9% on Java, and 60% and 51.4% on Python, respectively.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 10:06:28 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Nguyen",
"Truong Giang",
""
],
[
"Le-Cong",
"Thanh",
""
],
[
"Kang",
"Hong Jin",
""
],
[
"Widyasari",
"Ratnadira",
""
],
[
"Yang",
"Chengran",
""
],
[
"Zhao",
"Zhipeng",
""
],
[
"Xu",
"Bowen",
""
],
[
"Zhou",
"Jiayuan",
""
],
[
"Xia",
"Xin",
""
],
[
"Hassan",
"Ahmed E.",
""
],
[
"Le",
"Xuan-Bach D.",
""
],
[
"Lo",
"David",
""
]
] |
new_dataset
| 0.998807 |
2305.13891
|
Sunyou Hwang
|
Tom Suys, Sunyou Hwang, Guido C. H. E. de Croon, Bart D. W. Remes
|
Autonomous Control for Orographic Soaring of Fixed-Wing UAVs
|
6+1 pages, 9 figures, accepted to ICRA 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel controller for fixed-wing UAVs that enables autonomous
soaring in an orographic wind field, extending flight endurance. Our method
identifies soaring regions and addresses position control challenges by
introducing a target gradient line (TGL) on which the UAV achieves an
equilibrium soaring position, where sink rate and updraft are balanced.
Experimental testing validates the controller's effectiveness in maintaining
autonomous soaring flight without using any thrust in a non-static wind field.
We also demonstrate a single degree of control freedom in a soaring position
through manipulation of the TGL.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 10:14:49 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Suys",
"Tom",
""
],
[
"Hwang",
"Sunyou",
""
],
[
"de Croon",
"Guido C. H. E.",
""
],
[
"Remes",
"Bart D. W.",
""
]
] |
new_dataset
| 0.967369 |
2305.13902
|
Seung Jae Lee
|
Hyunwoo Kang, Jaeho Shin, Jaewook Shin, Youngseok Jang, Seung Jae Lee
|
Design and Operation of Autonomous Wheelchair Towing Robot
|
Submitted to Intelligent Service Robotics
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this study, a new concept of a wheelchair-towing robot for the facile
electrification of manual wheelchairs is introduced. The development of this
concept includes the design of towing robot hardware and an autonomous driving
algorithm to ensure the safe transportation of patients to their intended
destinations inside the hospital. We developed a novel docking mechanism to
facilitate easy docking and separation between the towing robot and the manual
wheelchair, which is connected to the front caster wheel of the manual
wheelchair. The towing robot has a mecanum wheel drive, enabling the robot to
move with a high degree of freedom in the standalone driving mode while
adhering to kinematic constraints in the docking mode. Our novel towing robot
features a camera sensor that can observe the ground ahead which allows the
robot to autonomously follow color-coded wayfinding lanes installed in hospital
corridors. This study introduces dedicated image processing techniques for
capturing the lanes and control algorithms for effectively tracing a path to
achieve autonomous path following. The autonomous towing performance of our
proposed platform was validated by a real-world experiment in which a hospital
environment with colored lanes was created.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 10:25:31 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Kang",
"Hyunwoo",
""
],
[
"Shin",
"Jaeho",
""
],
[
"Shin",
"Jaewook",
""
],
[
"Jang",
"Youngseok",
""
],
[
"Lee",
"Seung Jae",
""
]
] |
new_dataset
| 0.984534 |
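The towing-robot record above follows color-coded lanes using a forward-facing camera. A simplified OpenCV sketch of color-lane segmentation and centroid extraction in that spirit is given below; the file name and HSV thresholds are placeholders, not the authors' values.

```python
import cv2
import numpy as np

frame = cv2.imread("corridor.jpg")                    # hypothetical camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower, upper = np.array([20, 80, 80]), np.array([35, 255, 255])  # rough "yellow" band
mask = cv2.inRange(hsv, lower, upper)

m = cv2.moments(mask)
if m["m00"] > 0:
    cx = int(m["m10"] / m["m00"])                     # lane centroid column
    print("steer toward image column:", cx)
```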
2305.13913
|
Yun Li
|
Yun Li, Hongwei Liu, Sihem Mesnager
|
Constructions of Constant Dimension Subspace Codes
|
This article was submitted to Designs, Codes and Cryptography on
November 22nd, 2022
| null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Subspace codes have important applications in random network coding. It is
interesting to construct subspace codes with both sizes, and the minimum
distances are as large as possible. In particular, cyclic constant dimension
subspaces codes have additional properties which can be used to make encoding
and decoding more efficient. In this paper, we construct large cyclic constant
dimension subspace codes with minimum distances $2k-2$ and $2k$. These codes
are contained in $\mathcal{G}_q(n, k)$, where $\mathcal{G}_q(n, k)$ denotes the
set of all $k$-dimensional subspaces of $\mathbb{F}_{q^n}$. Consequently, some
results in \cite{FW}, \cite{NXG}, and \cite{ZT} are extended.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 10:37:00 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Li",
"Yun",
""
],
[
"Liu",
"Hongwei",
""
],
[
"Mesnager",
"Sihem",
""
]
] |
new_dataset
| 0.961742 |
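For context on the minimum distances $2k-2$ and $2k$ quoted above, the standard subspace metric (a known definition, not quoted from this paper's text) is:

```latex
d_S(U, V) = \dim U + \dim V - 2\dim(U \cap V), \qquad U, V \in \mathcal{G}_q(n, k).
% For constant dimension k, d_S(U, V) = 2k - 2\dim(U \cap V): distance 2k - 2 means
% any two codewords intersect in dimension at most 1, and 2k means trivial intersection.
```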
2305.13945
|
Puyu Yang
|
Puyu Yang, Ahad Shoaib, Robert West, Giovanni Colavizza
|
Wikipedia and open access
|
16 pages, 8 figures
| null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
Wikipedia is a well-known platform for disseminating knowledge, and
scientific sources, such as journal articles, play a critical role in
supporting its mission. The open access movement aims to make scientific
knowledge openly available, and we might intuitively expect open access to help
further Wikipedia's mission. However, the extent of this relationship remains
largely unknown. To fill this gap, we analyze a large dataset of citations from
Wikipedia and model the role of open access in Wikipedia's citation patterns.
We find that open-access articles are extensively cited in Wikipedia, and
increasingly so. What is more, they show a 15% higher likelihood of being cited in
Wikipedia when compared to closed-access articles, after controlling for
confounding factors. This open-access citation effect is particularly strong
for articles with low citation counts, including recently published ones. Our
results show that open access plays a key role in the dissemination of
scientific knowledge, including by providing Wikipedia editors timely access to
novel results. These findings have important implications for researchers,
policymakers, and practitioners in the field of information science and
technology.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 11:10:27 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Yang",
"Puyu",
""
],
[
"Shoaib",
"Ahad",
""
],
[
"West",
"Robert",
""
],
[
"Colavizza",
"Giovanni",
""
]
] |
new_dataset
| 0.996804 |
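The analysis above estimates a higher citation likelihood for open-access articles after controlling for confounders. A hedged sketch of such a logistic regression on synthetic data follows; the variables and controls are illustrative, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "cited_in_wikipedia": rng.integers(0, 2, 1000),
    "open_access": rng.integers(0, 2, 1000),
    "citation_count": rng.poisson(10, 1000),
})
model = smf.logit(
    "cited_in_wikipedia ~ open_access + np.log1p(citation_count)", data=df
).fit(disp=0)
print(model.params["open_access"])   # log-odds effect of open access, control held fixed
```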
2305.13977
|
Huajun Long
|
Huajun Long, Jie Li, Rui Li, Xinfeng Liu, Jingyuan Cheng
|
Dual-modality Smart Shoes for Quantitative Assessment of Hemiplegic
Patients' Lower Limbs' Muscle Strength
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stroke can lead to the impaired motor ability of the patient's lower limbs
and hemiplegia. Accurate assessment of the lower limbs' motor ability is
important for diagnosis and rehabilitation. To digitalize such assessment so
that each test can be traced back any time and subjectivity can be avoided, we
test how dual-modality smart shoes equipped with pressure-sensitive insoles and
inertial measurement units can be used for this purpose. A 5m walking test
protocol, including the left and right turns, is designed. Data are collected
from 23 patients and 17 healthy subjects. For the lower limbs' motor ability,
the tests are observed by two physicians and assessed using the five graded
Medical Research Council scale for muscle examination. The average of two
physicians' scores for the same patient is used as the ground truth. Using the
feature set we developed, 100\% accuracy is achieved in classifying the
patients and healthy subjects. For patients' muscle strength, a mean absolute
error of 0.143 and a maximum error of 0.395 is achieved using our feature set
and the regression method, closer to the ground truth than the scores from each
physician (mean absolute error: 0.217, maximum error: 0.5). We thus validate
the possibility of using such smart shoes to objectively and accurately
evaluate the lower limbs' muscle strength of the stroke patients.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 11:58:45 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Long",
"Huajun",
""
],
[
"Li",
"Jie",
""
],
[
"Li",
"Rui",
""
],
[
"Liu",
"Xinfeng",
""
],
[
"Cheng",
"Jingyuan",
""
]
] |
new_dataset
| 0.999283 |
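The smart-shoes study above regresses muscle-strength scores from gait features and reports mean absolute error. A minimal sketch of that evaluation loop on synthetic data follows; the features, model, and score range are illustrative only.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(40, 12))                                    # pressure/IMU gait features
y = np.clip(3 + X[:, 0] + rng.normal(scale=0.3, size=40), 0, 5)  # MRC-like 0-5 score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pred = Ridge().fit(X_tr, y_tr).predict(X_te)
print("MAE:", mean_absolute_error(y_te, pred))
```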
2305.13989
|
David Adelani
|
Cheikh M. Bamba Dione, David Adelani, Peter Nabende, Jesujoba Alabi,
Thapelo Sindane, Happy Buzaaba, Shamsuddeen Hassan Muhammad, Chris Chinenye
Emezue, Perez Ogayo, Anuoluwapo Aremu, Catherine Gitau, Derguene Mbaye,
Jonathan Mukiibi, Blessing Sibanda, Bonaventure F. P. Dossou, Andiswa Bukula,
Rooweither Mabuya, Allahsera Auguste Tapo, Edwin Munkoh-Buabeng, victoire
Memdjokam Koagne, Fatoumata Ouoba Kabore, Amelia Taylor, Godson Kalipe,
Tebogo Macucwa, Vukosi Marivate, Tajuddeen Gwadabe, Mboning Tchiaze Elvis,
Ikechukwu Onyenwe, Gratien Atindogbe, Tolulope Adelani, Idris Akinade,
Olanrewaju Samuel, Marien Nahimana, Th\'eog\`ene Musabeyezu, Emile
Niyomutabazi, Ester Chimhenga, Kudzai Gotosa, Patrick Mizha, Apelete Agbolo,
Seydou Traore, Chinedu Uchechukwu, Aliyu Yusuf, Muhammad Abdullahi and
Dietrich Klakow
|
MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African
Languages
|
Accepted to ACL 2023 (Main conference)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present MasakhaPOS, the largest part-of-speech (POS)
dataset for 20 typologically diverse African languages. We discuss the
challenges in annotating POS for these languages using the UD (universal
dependencies) guidelines. We conducted extensive POS baseline experiments using
conditional random field and several multilingual pre-trained language models.
We applied various cross-lingual transfer models trained with data available in
UD. Evaluating on the MasakhaPOS dataset, we show that choosing the best
transfer language(s) in both single-source and multi-source setups greatly
improves the POS tagging performance of the target languages, in particular
when combined with cross-lingual parameter-efficient fine-tuning methods.
Crucially, transferring knowledge from a language that matches the language
family and morphosyntactic properties seems more effective for POS tagging in
unseen languages.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 12:15:33 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Dione",
"Cheikh M. Bamba",
""
],
[
"Adelani",
"David",
""
],
[
"Nabende",
"Peter",
""
],
[
"Alabi",
"Jesujoba",
""
],
[
"Sindane",
"Thapelo",
""
],
[
"Buzaaba",
"Happy",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Emezue",
"Chris Chinenye",
""
],
[
"Ogayo",
"Perez",
""
],
[
"Aremu",
"Anuoluwapo",
""
],
[
"Gitau",
"Catherine",
""
],
[
"Mbaye",
"Derguene",
""
],
[
"Mukiibi",
"Jonathan",
""
],
[
"Sibanda",
"Blessing",
""
],
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Bukula",
"Andiswa",
""
],
[
"Mabuya",
"Rooweither",
""
],
[
"Tapo",
"Allahsera Auguste",
""
],
[
"Munkoh-Buabeng",
"Edwin",
""
],
[
"Koagne",
"victoire Memdjokam",
""
],
[
"Kabore",
"Fatoumata Ouoba",
""
],
[
"Taylor",
"Amelia",
""
],
[
"Kalipe",
"Godson",
""
],
[
"Macucwa",
"Tebogo",
""
],
[
"Marivate",
"Vukosi",
""
],
[
"Gwadabe",
"Tajuddeen",
""
],
[
"Elvis",
"Mboning Tchiaze",
""
],
[
"Onyenwe",
"Ikechukwu",
""
],
[
"Atindogbe",
"Gratien",
""
],
[
"Adelani",
"Tolulope",
""
],
[
"Akinade",
"Idris",
""
],
[
"Samuel",
"Olanrewaju",
""
],
[
"Nahimana",
"Marien",
""
],
[
"Musabeyezu",
"Théogène",
""
],
[
"Niyomutabazi",
"Emile",
""
],
[
"Chimhenga",
"Ester",
""
],
[
"Gotosa",
"Kudzai",
""
],
[
"Mizha",
"Patrick",
""
],
[
"Agbolo",
"Apelete",
""
],
[
"Traore",
"Seydou",
""
],
[
"Uchechukwu",
"Chinedu",
""
],
[
"Yusuf",
"Aliyu",
""
],
[
"Abdullahi",
"Muhammad",
""
],
[
"Klakow",
"Dietrich",
""
]
] |
new_dataset
| 0.999505 |
2305.14004
|
Ayush Maheshwari
|
Ayush Maheshwari, Ashim Gupta, Amrith Krishna, Ganesh Ramakrishnan, G.
Anil Kumar, Jitin Singla
|
S\={a}mayik: A Benchmark and Dataset for English-Sanskrit Translation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sanskrit is a low-resource language with a rich heritage. Digitized Sanskrit
corpora reflective of the contemporary usage of Sanskrit, particularly in prose,
are heavily under-represented at present, and no such English-Sanskrit parallel
dataset is publicly available. We release a dataset, S\={a}mayik, of more than
42,000 parallel English-Sanskrit sentences, from four
different corpora that aim to bridge this gap. Moreover, we also release
benchmarks adapted from existing multilingual pretrained models for
Sanskrit-English translation. We include training splits from our contemporary
dataset and the Sanskrit-English parallel sentences from the training split of
Itih\={a}sa, a previously released classical era machine translation dataset
containing Sanskrit.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 12:32:24 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Maheshwari",
"Ayush",
""
],
[
"Gupta",
"Ashim",
""
],
[
"Krishna",
"Amrith",
""
],
[
"Ramakrishnan",
"Ganesh",
""
],
[
"Kumar",
"G. Anil",
""
],
[
"Singla",
"Jitin",
""
]
] |
new_dataset
| 0.99987 |
2305.14008
|
Alvari Sepp\"anen
|
Alvari Sepp\"anen, Risto Ojala, Kari Tammi
|
Multi-Echo Denoising in Adverse Weather
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Adverse weather can introduce noise into light detection and ranging (LiDAR)
data. This is a problem since LiDAR is used in many outdoor applications, e.g.,
object detection and mapping. We propose the task of multi-echo denoising, where the
goal is to pick the echo that represents the objects of interest and discard
other echoes. Thus, the idea is to pick points from alternative echoes that are
not available in standard strongest echo point clouds due to the noise. In an
intuitive sense, we are trying to see through the adverse weather. To achieve
this goal, we propose a novel self-supervised deep learning method and the
characteristics similarity regularization method to boost its performance.
Based on extensive experiments on a semi-synthetic dataset, our method achieves
superior performance compared to the state-of-the-art in self-supervised
adverse weather denoising (23% improvement). Moreover, the experiments with a
real multi-echo adverse weather dataset prove the efficacy of multi-echo
denoising. Our work enables more reliable point cloud acquisition in adverse
weather and thus promises safer autonomous driving and driving assistance
systems in such conditions. The code is available at
https://github.com/alvariseppanen/SMEDNet
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 12:40:28 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Seppänen",
"Alvari",
""
],
[
"Ojala",
"Risto",
""
],
[
"Tammi",
"Kari",
""
]
] |
new_dataset
| 0.97344 |
2305.14010
|
Wenhao Yu
|
Wenhao Yu, Meng Jiang, Peter Clark, Ashish Sabharwal
|
IfQA: A Dataset for Open-domain Question Answering under Counterfactual
Presuppositions
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Although counterfactual reasoning is a fundamental aspect of intelligence,
the lack of large-scale counterfactual open-domain question-answering (QA)
benchmarks makes it difficult to evaluate and improve models on this ability.
To address this void, we introduce the first such dataset, named IfQA, where
each question is based on a counterfactual presupposition via an "if" clause.
For example, if Los Angeles was on the east coast of the U.S., what would be
the time difference between Los Angeles and Paris? Such questions require
models to go beyond retrieving direct factual knowledge from the Web: they must
identify the right information to retrieve and reason about an imagined
situation that may even go against the facts built into their parameters. The
IfQA dataset contains over 3,800 questions that were annotated by
crowdworkers on relevant Wikipedia passages. Empirical analysis reveals that
the IfQA dataset is highly challenging for existing open-domain QA methods,
including supervised retrieve-then-read pipeline methods (EM score 36.2), as
well as recent few-shot approaches such as chain-of-thought prompting with
GPT-3 (EM score 27.4). The unique challenges posed by the IfQA benchmark will
push open-domain QA research on both retrieval and counterfactual reasoning
fronts.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 12:43:19 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Yu",
"Wenhao",
""
],
[
"Jiang",
"Meng",
""
],
[
"Clark",
"Peter",
""
],
[
"Sabharwal",
"Ashish",
""
]
] |
new_dataset
| 0.999871 |
2305.14014
|
Shuai Zhao
|
Shuai Zhao, Xiaohan Wang, Linchao Zhu, Yi Yang
|
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained
Vision-Language Model
|
Preprint, work in progress
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Pre-trained vision-language models are the de-facto foundation models for
various downstream tasks. However, this trend has not extended to the field of
scene text recognition (STR), despite the potential of CLIP to serve as a
powerful scene text reader. CLIP can robustly identify regular (horizontal) and
irregular (rotated, curved, blurred, or occluded) text in natural images. With
such merits, we introduce CLIP4STR, a simple yet effective STR method built
upon image and text encoders of CLIP. It has two encoder-decoder branches: a
visual branch and a cross-modal branch. The visual branch provides an initial
prediction based on the visual feature, and the cross-modal branch refines this
prediction by addressing the discrepancy between the visual feature and text
semantics. To fully leverage the capabilities of both branches, we design a
dual predict-and-refine decoding scheme for inference. CLIP4STR achieves new
state-of-the-art performance on 11 STR benchmarks. Additionally, a
comprehensive empirical study is provided to enhance the understanding of the
adaptation of CLIP to STR. We believe our method establishes a simple but
strong baseline for future STR research with VL models.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 12:51:20 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Zhao",
"Shuai",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Yang",
"Yi",
""
]
] |
new_dataset
| 0.999095 |
2305.14072
|
Samarth Bhargav
|
Samarth Bhargav, Anne Schuth, Claudia Hauff
|
When the Music Stops: Tip-of-the-Tongue Retrieval for Music
| null | null |
10.1145/3539618.3592086
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
We present a study of Tip-of-the-tongue (ToT) retrieval for music, where a
searcher is trying to find an existing music entity, but is unable to succeed
as they cannot accurately recall important identifying information. ToT
information needs are characterized by complexity, verbosity, uncertainty, and
possible false memories. We make four contributions. (1) We collect a dataset -
$ToT_{Music}$ - of 2,278 information needs and ground truth answers. (2) We
introduce a schema for these information needs and show that they often involve
multiple modalities encompassing several Music IR subtasks such as lyric
search, audio-based search, audio fingerprinting, and text search. (3) We
underscore the difficulty of this task by benchmarking a standard text
retrieval approach on this dataset. (4) We investigate the efficacy of query
reformulations generated by a large language model (LLM), and show that they
are not as effective as simply employing the entire information need as a query
- leaving several open questions for future research.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 13:50:06 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Bhargav",
"Samarth",
""
],
[
"Schuth",
"Anne",
""
],
[
"Hauff",
"Claudia",
""
]
] |
new_dataset
| 0.999669 |
2305.14100
|
Ren Li
|
Ren Li, Beno\^it Guillard, Pascal Fua
|
ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many approaches to draping individual garments on human body models are
realistic, fast, and yield outputs that are differentiable with respect to the
body shape on which they are draped. However, none of them can handle
multi-layered clothing, which is prevalent in everyday dress. In this paper, we
introduce a parametric garment representation model that can. As in models used
by clothing designers, each garment consists of individual 2D panels. Their 2D
shape is defined by a Signed Distance Function and 3D shape by a 2D to 3D
mapping. The 2D parameterization enables easy detection of potential collisions
and the 3D parameterization handles complex shapes effectively. We show that
this combination is faster and yields higher quality reconstructions than
purely implicit surface representations, and makes the recovery of layered
garments from images possible thanks to its differentiability. Furthermore, it
supports rapid editing of garment shapes and texture by modifying individual 2D
panels.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 14:23:48 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Li",
"Ren",
""
],
[
"Guillard",
"Benoît",
""
],
[
"Fua",
"Pascal",
""
]
] |
new_dataset
| 0.999307 |
2305.14196
|
Uri Shaham
|
Uri Shaham and Maor Ivgi and Avia Efrat and Jonathan Berant and Omer
Levy
|
ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding
| null | null | null | null |
cs.CL cs.AI cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce ZeroSCROLLS, a zero-shot benchmark for natural language
understanding over long texts, which contains only test sets, without training
or development data. We adapt six tasks from the SCROLLS benchmark, and add
four new datasets, including two novel information fusing tasks, such as
aggregating the percentage of positive reviews. Using ZeroSCROLLS, we conduct a
comprehensive evaluation of both open-source and closed large language models,
finding that Claude outperforms ChatGPT, and that GPT-4 achieves the highest
average score. However, there is still room for improvement on multiple open
challenges in ZeroSCROLLS, such as aggregation tasks, where models struggle to
pass the naive baseline. As the state of the art is a moving target, we invite
researchers to evaluate their ideas on the live ZeroSCROLLS leaderboard.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 16:15:31 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Shaham",
"Uri",
""
],
[
"Ivgi",
"Maor",
""
],
[
"Efrat",
"Avia",
""
],
[
"Berant",
"Jonathan",
""
],
[
"Levy",
"Omer",
""
]
] |
new_dataset
| 0.980852 |
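One of the "information fusing" aggregation tasks named in the ZeroSCROLLS record above is computing the percentage of positive reviews. A toy sketch of that aggregation target, with invented reviews and labels, purely to make the task concrete:

```python
# Toy illustration of the aggregation target: share of positive reviews.
reviews = [
    ("Great battery life", "positive"),
    ("Stopped working after a week", "negative"),
    ("Does exactly what it promises", "positive"),
]

positive = sum(1 for _, label in reviews if label == "positive")
print(f"{100 * positive / len(reviews):.0f}% positive")  # 67% positive
```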
2305.14201
|
Tiedong Liu
|
Tiedong Liu and Bryan Kian Hsiang Low
|
Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Goat, a fine-tuned LLaMA model that significantly outperforms
GPT-4 on a range of arithmetic tasks. Fine-tuned on a synthetically generated
dataset, Goat achieves state-of-the-art performance on BIG-bench arithmetic
sub-task. In particular, the zero-shot Goat-7B matches or even surpasses the
accuracy achieved by the few-shot PaLM-540B. Surprisingly, Goat can achieve
near-perfect accuracy on large-number addition and subtraction through
supervised fine-tuning only, which is almost impossible with previous
pretrained language models, such as Bloom, OPT, GPT-NeoX, etc. We attribute
Goat's exceptional performance to LLaMA's consistent tokenization of numbers.
To tackle more challenging tasks like large-number multiplication and division,
we propose an approach that classifies tasks based on their learnability, and
subsequently decomposes unlearnable tasks, such as multi-digit multiplication
and division, into a series of learnable tasks by leveraging basic arithmetic
principles. We thoroughly examine the performance of our model, offering a
comprehensive evaluation of the effectiveness of our proposed decomposition
steps. Additionally, Goat-7B can be easily trained using LoRA on a 24GB VRAM
GPU, facilitating reproducibility for other researchers. We release our model,
dataset, and the Python script for dataset generation.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 16:20:30 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Liu",
"Tiedong",
""
],
[
"Low",
"Bryan Kian Hsiang",
""
]
] |
new_dataset
| 0.999154 |
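The Goat record above describes decomposing unlearnable tasks such as multi-digit multiplication into a series of learnable ones via basic arithmetic principles. A rough sketch of one such decomposition, not the paper's exact recipe, splits a multiplication into single-digit partial products followed by running additions:

```python
# Illustrative decomposition of multi-digit multiplication into simpler steps:
# one partial product per digit of the multiplier, then cumulative additions.
def decompose_multiplication(a: int, b: int):
    steps, total = [], 0
    for power, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * (10 ** power)   # single-digit multiply plus shift
        total += partial                      # multi-digit addition
        steps.append(f"{a} * {digit} * 10^{power} = {partial}; running sum = {total}")
    return steps, total

steps, result = decompose_multiplication(1234, 56)
assert result == 1234 * 56
print("\n".join(steps))
```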
2305.14202
|
Silei Xu
|
Silei Xu, Theo Culhane, Meng-Hsi Wu, Sina J. Semnani, Monica S. Lam
|
Complementing GPT-3 with Few-Shot Sequence-to-Sequence Semantic Parsing
over Wikidata
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
As the largest knowledge base, Wikidata is a massive source of knowledge,
complementing large language models with well-structured data. In this paper,
we present WikiWebQuestions, a high-quality knowledge base question answering
benchmark for Wikidata. This new benchmark uses real-world human data with
SPARQL annotation to facilitate a more accurate comparison with large language
models utilizing the up-to-date answers from Wikidata. Additionally, a baseline
for this benchmark is established with an effective training data synthesis
methodology and WikiSP, a Seq2Seq semantic parser, that handles large noisy
knowledge graphs. Experimental results illustrate the effectiveness of this
methodology, achieving 69% and 59% answer accuracy in the dev set and test set,
respectively. We show that we can pair semantic parsers with GPT-3 to provide
a combination of verifiable results and qualified guesses that offer useful
answers to 97% of the questions in the dev set of our benchmark.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 16:20:43 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Xu",
"Silei",
""
],
[
"Culhane",
"Theo",
""
],
[
"Wu",
"Meng-Hsi",
""
],
[
"Semnani",
"Sina J.",
""
],
[
"Lam",
"Monica S.",
""
]
] |
new_dataset
| 0.95798 |
2305.14207
|
Jun Cen
|
Jun Cen, Yizheng Wu, Kewei Wang, Xingyi Li, Jingkang Yang, Yixuan Pei,
Lingdong Kong, Ziwei Liu, Qifeng Chen
|
SAD: Segment Any RGBD
|
Technical report of Segment Any RGBD. Project url:
https://github.com/Jun-CEN/SegmentAnyRGBD
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Segment Anything Model (SAM) has demonstrated its effectiveness in
segmenting any part of 2D RGB images. However, SAM exhibits a stronger emphasis
on texture information while paying less attention to geometry information when
segmenting RGB images. To address this limitation, we propose the Segment Any
RGBD (SAD) model, which is specifically designed to extract geometry
information directly from images. Inspired by the natural ability of humans to
identify objects through the visualization of depth maps, SAD utilizes SAM to
segment the rendered depth map, thus providing cues with enhanced geometry
information and mitigating the issue of over-segmentation. We further include
the open-vocabulary semantic segmentation in our framework, so that the 3D
panoptic segmentation is fulfilled. The project is available on
https://github.com/Jun-CEN/SegmentAnyRGBD.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 16:26:56 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Cen",
"Jun",
""
],
[
"Wu",
"Yizheng",
""
],
[
"Wang",
"Kewei",
""
],
[
"Li",
"Xingyi",
""
],
[
"Yang",
"Jingkang",
""
],
[
"Pei",
"Yixuan",
""
],
[
"Kong",
"Lingdong",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Chen",
"Qifeng",
""
]
] |
new_dataset
| 0.999787 |
2305.14214
|
Benjamin Minixhofer
|
Benjamin Minixhofer, Jonas Pfeiffer, Ivan Vuli\'c
|
CompoundPiece: Evaluating and Improving Decompounding Performance of
Language Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While many languages possess processes of joining two or more words to create
compound words, previous studies have typically been limited to languages
with excessively productive compound formation (e.g., German, Dutch) and there
is no public dataset containing compound and non-compound words across a large
number of languages. In this work, we systematically study decompounding, the
task of splitting compound words into their constituents, at a wide scale. We
first address the data gap by introducing a dataset of 255k compound and
non-compound words across 56 diverse languages obtained from Wiktionary. We
then use this dataset to evaluate an array of Large Language Models (LLMs) on
the decompounding task. We find that LLMs perform poorly, especially on words
which are tokenized unfavorably by subword tokenization. We thus introduce a
novel methodology to train dedicated models for decompounding. The proposed
two-stage procedure relies on a fully self-supervised objective in the first
stage, while the second, supervised learning stage optionally fine-tunes the
model on the annotated Wiktionary data. Our self-supervised models outperform
the prior best unsupervised decompounding models by 13.9% accuracy on average.
Our fine-tuned models outperform all prior (language-specific) decompounding
tools. Furthermore, we use our models to leverage decompounding during the
creation of a subword tokenizer, which we refer to as CompoundPiece.
CompoundPiece tokenizes compound words more favorably on average, leading to
improved performance on decompounding over an otherwise equivalent model using
SentencePiece tokenization.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 16:32:27 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Minixhofer",
"Benjamin",
""
],
[
"Pfeiffer",
"Jonas",
""
],
[
"Vulić",
"Ivan",
""
]
] |
new_dataset
| 0.982065 |
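To make the decompounding task in the record above concrete, here is a naive dictionary-based splitter. The vocabulary is invented, and the paper's models are learned rather than rule-based; this is only a sketch of the task definition.

```python
# Greedy recursive decompounding against a toy vocabulary (illustration only).
def decompound(word, vocab):
    """Split `word` into known constituents if possible, else return []."""
    if word in vocab:
        return [word]
    for i in range(1, len(word)):
        left = word[:i]
        if left not in vocab:
            continue
        rest = decompound(word[i:], vocab)
        if rest:
            return [left] + rest
    return []

vocab = {"haupt", "bahn", "hof"}
print(decompound("hauptbahnhof", vocab))  # ['haupt', 'bahn', 'hof']
print(decompound("website", vocab))       # [] (not decomposable with this vocabulary)
```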
2305.14225
|
Kung-Hsiang Huang
|
Kung-Hsiang Huang, Hou Pong Chan, Kathleen McKeown, Heng Ji
|
ManiTweet: A New Benchmark for Identifying Manipulation of News on
Social Media
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Considerable advancements have been made to tackle the misrepresentation of
information derived from reference articles in the domains of fact-checking and
faithful summarization. However, an unaddressed aspect remains - the
identification of social media posts that manipulate information within
associated news articles. This task presents a significant challenge, primarily
due to the prevalence of personal opinions in such posts. We present a novel
task, identifying manipulation of news on social media, which aims to detect
manipulation in social media posts and identify manipulated or inserted
information. To study this task, we have proposed a data collection schema and
curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and
corresponding articles. Our analysis demonstrates that this task is highly
challenging, with large language models (LLMs) yielding unsatisfactory
performance. Additionally, we have developed a simple yet effective basic model
that outperforms LLMs significantly on the ManiTweet dataset. Finally, we have
conducted an exploratory analysis of human-written tweets, unveiling intriguing
connections between manipulation and the domain and factuality of news
articles, as well as revealing that manipulated sentences are more likely to
encapsulate the main story or consequences of a news outlet.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 16:40:07 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Huang",
"Kung-Hsiang",
""
],
[
"Chan",
"Hou Pong",
""
],
[
"McKeown",
"Kathleen",
""
],
[
"Ji",
"Heng",
""
]
] |
new_dataset
| 0.99951 |
2305.14235
|
Ruochen Zhang
|
Ruochen Zhang, Samuel Cahyawijaya, Jan Christian Blaise Cruz and Alham
Fikri Aji
|
Multilingual Large Language Models Are Not (Yet) Code-Switchers
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Multilingual Large Language Models (LLMs) have recently shown great
capability in various tasks, exhibiting state-of-the-art performance using
few-shot or zero-shot prompting methods. While these models have been
extensively studied in tasks where inputs are assumed to be in a single
language, less attention has been paid to exploring their performance when
inputs involve code-switching (CSW). In this paper, we provide an extensive
empirical study of various multilingual LLMs and benchmark their performance in
three tasks: sentiment analysis, machine translation, and word-level language
identification. Our findings indicate that despite multilingual LLMs showing
promising outcomes in certain tasks when using zero-/few-shot prompting, their
performance still falls short on average when compared to smaller finetuned
models. We argue that LLMs that are "multilingual" are not necessarily
code-switching compatible and extensive future research is required to fully
bridge this gap.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 16:50:48 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Zhang",
"Ruochen",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Cruz",
"Jan Christian Blaise",
""
],
[
"Aji",
"Alham Fikri",
""
]
] |
new_dataset
| 0.958166 |
2305.14251
|
Sewon Min
|
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang
Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi
|
FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long
Form Text Generation
|
23 pages, 7 figures
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evaluating the factuality of long-form text generated by large language
models (LMs) is non-trivial because (1) generations often contain a mixture of
supported and unsupported pieces of information, making binary judgments of
quality inadequate, and (2) human evaluation is time-consuming and costly. In
this paper, we introduce FActScore (Factual precision in Atomicity Score), a
new evaluation that breaks a generation into a series of atomic facts and
computes the percentage of atomic facts supported by a reliable knowledge
source. We conduct an extensive human evaluation to obtain FActScores of people
biographies generated by several state-of-the-art commercial LMs --
InstructGPT, ChatGPT, and the retrieval-augmented PerplexityAI -- and report
new analysis demonstrating the need for such a fine-grained score (e.g.,
ChatGPT only achieves 58%). Since human evaluation is costly, we also introduce
an automated model that estimates FActScore, using retrieval and a strong
language model, with less than a 2% error rate. Finally, we use this automated
metric to evaluate 6,500 generations from a new set of 13 recent LMs that would
have cost $26K if evaluated by humans, with various findings: GPT-4 and ChatGPT
are more factual than public models, and Vicuna and Alpaca are some of the best
public models.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:06:00 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Min",
"Sewon",
""
],
[
"Krishna",
"Kalpesh",
""
],
[
"Lyu",
"Xinxi",
""
],
[
"Lewis",
"Mike",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Koh",
"Pang Wei",
""
],
[
"Iyyer",
"Mohit",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
new_dataset
| 0.995443 |
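As a minimal sketch of the metric described in the FActScore record above, i.e., the fraction of atomic facts supported by a knowledge source: the fact list and verifier below are placeholders, whereas the paper extracts atomic facts with an LM and verifies them with retrieval over Wikipedia.

```python
# FActScore-style score: fraction of atomic facts judged as supported.
from typing import Callable, Iterable

def factscore(atomic_facts: Iterable[str],
              is_supported: Callable[[str], bool]) -> float:
    facts = list(atomic_facts)
    if not facts:
        return 0.0
    return sum(is_supported(f) for f in facts) / len(facts)

# Toy verifier standing in for retrieval plus an LM judge.
known = {"Paris is the capital of France"}
facts = ["Paris is the capital of France", "Paris is in Spain"]
print(factscore(facts, lambda f: f in known))  # 0.5
```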
2305.14260
|
Yue Fan
|
Yue Fan, Kaizhi Zheng, Jing Gu, Xin Eric Wang
|
R2H: Building Multimodal Navigation Helpers that Respond to Help
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to assist humans during a navigation task in a supportive role is
crucial for intelligent agents. Such agents, equipped with environment
knowledge and conversational abilities, can guide individuals through
unfamiliar terrains by generating natural language responses to their
inquiries, grounded in the visual information of their surroundings. However,
these multimodal conversational navigation helpers are still underdeveloped.
This paper proposes a new benchmark, Respond to Help (R2H), to build multimodal
navigation helpers that can respond to help, based on existing dialog-based
embodied datasets. R2H mainly includes two tasks: (1) Respond to Dialog History
(RDH), which assesses the helper agent's ability to generate informative
responses based on a given dialog history, and (2) Respond during Interaction
(RdI), which evaluates the helper agent's ability to maintain effective and
consistent cooperation with a task performer agent during navigation in
real-time. Furthermore, we propose a novel task-oriented multimodal response
generation model that can see and respond, named SeeRee, as the navigation
helper to guide the task performer in embodied tasks. Through both automatic
and human evaluations, we show that SeeRee produces more effective and
informative responses than baseline methods in assisting the task performer
with different navigation tasks. Project website:
https://sites.google.com/view/respond2help/home.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:12:09 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Fan",
"Yue",
""
],
[
"Zheng",
"Kaizhi",
""
],
[
"Gu",
"Jing",
""
],
[
"Wang",
"Xin Eric",
""
]
] |
new_dataset
| 0.999362 |
2305.14289
|
Xili Yi
|
Xili Yi, Nima Fazeli
|
Precise Object Sliding with Top Contact via Asymmetric Dual Limit
Surfaces
|
10 pages, 11 figures, accepted in Robotics: Science and Systems (RSS
2023), Daegu, Republic of Korea
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we discuss the mechanics and planning algorithms to slide an
object on a horizontal planar surface via frictional patch contact made with
its top surface. Here, we propose an asymmetric dual limit surface model to
determine slip boundary conditions for both the top and bottom contact. With
this model, we obtain a range of twists that can keep the object in sticking
contact with the robot end-effector while slipping on the supporting plane.
Based on these constraints, we derive a planning algorithm to slide objects
with only top contact to arbitrary goal poses without slippage between end
effector and the object. We validate the proposed model empirically and
demonstrate its predictive accuracy on a variety of object geometries and
motions. We also evaluate the planning algorithm over a variety of objects and
goals and demonstrate an orientation error improvement of 90\% when compared
to naive linear path planners.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:33:37 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Yi",
"Xili",
""
],
[
"Fazeli",
"Nima",
""
]
] |
new_dataset
| 0.99708 |
2305.14292
|
Sina Semnani
|
Sina J. Semnani, Violet Z. Yao, Heidi C. Zhang, Monica S. Lam
|
WikiChat: A Few-Shot LLM-Based Chatbot Grounded with Wikipedia
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite recent advances in Large Language Models (LLMs), users still cannot
trust the information provided in their responses. LLMs cannot speak accurately
about events that occurred after their training, which are often topics of
great interest to users, and, as we show in this paper, they are highly prone
to hallucination when talking about less popular (tail) topics. This paper
presents WikiChat, a few-shot LLM-based chatbot that is grounded with live
information from Wikipedia. Through many iterations of experimentation, we have
crafted a pipeline based on information retrieval that (1) uses LLMs to suggest
interesting and relevant facts that are individually verified against
Wikipedia, (2) retrieves additional up-to-date information, and (3) composes
coherent and engaging time-aware responses. We propose a novel hybrid
human-and-LLM evaluation methodology to analyze the factuality and
conversationality of LLM-based chatbots. We focus on evaluating important but
previously neglected issues such as conversing about recent and tail topics. We
evaluate WikiChat against strong fine-tuned and LLM-based baselines across a
diverse set of conversation topics. We find that WikiChat outperforms all
baselines in terms of the factual accuracy of its claims, by up to 12.1%, 28.3%
and 32.7% on head, recent and tail topics, while matching GPT-3.5 in terms of
providing natural, relevant, non-repetitive and informational responses.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:37:36 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Semnani",
"Sina J.",
""
],
[
"Yao",
"Violet Z.",
""
],
[
"Zhang",
"Heidi C.",
""
],
[
"Lam",
"Monica S.",
""
]
] |
new_dataset
| 0.997669 |
2305.14298
|
En Yu
|
En Yu, Tiancai Wang, Zhuoling Li, Yuang Zhang, Xiangyu Zhang, Wenbing
Tao
|
MOTRv3: Release-Fetch Supervision for End-to-End Multi-Object Tracking
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although end-to-end multi-object trackers like MOTR enjoy the merits of
simplicity, they suffer from the conflict between detection and association
seriously, resulting in unsatisfactory convergence dynamics. While MOTRv2
partly addresses this problem, it demands an additional detection network for
assistance. In this work, we are the first to reveal that this conflict
arises from the unfair label assignment between detect queries and track
queries during training, where these detect queries recognize targets and track
queries associate them. Based on this observation, we propose MOTRv3, which
balances the label assignment process using the developed release-fetch
supervision strategy. In this strategy, labels are first released for detection
and gradually fetched back for association. Besides, another two strategies
named pseudo label distillation and track group denoising are designed to
further improve the supervision for detection and association. Without the
assistance of an extra detection network during inference, MOTRv3 achieves
impressive performance across diverse benchmarks, e.g., MOT17, DanceTrack.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:40:13 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Yu",
"En",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Li",
"Zhuoling",
""
],
[
"Zhang",
"Yuang",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Tao",
"Wenbing",
""
]
] |
new_dataset
| 0.990818 |
2305.14303
|
Yilun Zhao
|
Yilun Zhao, Zhenting Qi, Linyong Nan, Boyu Mi, Yixin Liu, Weijin Zou,
Simeng Han, Xiangru Tang, Yumo Xu, Arman Cohan, Dragomir Radev
|
QTSumm: A New Benchmark for Query-Focused Table Summarization
|
work in progress
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
People primarily consult tables to conduct data analysis or answer specific
questions. Text generation systems that can provide accurate table summaries
tailored to users' information needs can facilitate more efficient access to
relevant data insights. However, existing table-to-text generation studies
primarily focus on converting tabular data into coherent statements, rather
than addressing information-seeking purposes. In this paper, we define a new
query-focused table summarization task, where text generation models have to
perform human-like reasoning and analysis over the given table to generate a
tailored summary, and we introduce a new benchmark named QTSumm for this task.
QTSumm consists of 5,625 human-annotated query-summary pairs over 2,437 tables
on diverse topics. Moreover, we investigate state-of-the-art models (i.e., text
generation, table-to-text generation, and large language models) on the QTSumm
dataset. Experimental results and manual analysis reveal that our benchmark
presents significant challenges in table-to-text generation for future
research.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:43:51 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Zhao",
"Yilun",
""
],
[
"Qi",
"Zhenting",
""
],
[
"Nan",
"Linyong",
""
],
[
"Mi",
"Boyu",
""
],
[
"Liu",
"Yixin",
""
],
[
"Zou",
"Weijin",
""
],
[
"Han",
"Simeng",
""
],
[
"Tang",
"Xiangru",
""
],
[
"Xu",
"Yumo",
""
],
[
"Cohan",
"Arman",
""
],
[
"Radev",
"Dragomir",
""
]
] |
new_dataset
| 0.996324 |
2305.14321
|
William Brannon
|
William Brannon, Suyash Fulay, Hang Jiang, Wonjune Kang, Brandon Roy,
Jad Kabbara, Deb Roy
|
ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and
Text Embeddings
|
3 figures, 9 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose ConGraT (Contrastive Graph-Text pretraining), a general,
self-supervised method for jointly learning separate representations of texts
and nodes in a parent (or ``supervening'') graph, where each text is associated
with one of the nodes. Datasets fitting this paradigm are common, from social
media (users and posts), to citation networks over articles, to link graphs
over web pages. We expand on prior work by providing a general,
self-supervised, joint pretraining method, one which does not depend on
particular dataset structure or a specific task. Our method uses two separate
encoders for graph nodes and texts, which are trained to align their
representations within a common latent space. Training uses a batch-wise
contrastive learning objective inspired by prior work on joint text and image
encoding. As graphs are more structured objects than images, we also extend the
training objective to incorporate information about node similarity and
plausible next guesses in matching nodes and texts. Experiments on various
datasets reveal that ConGraT outperforms strong baselines on various downstream
tasks, including node and text category classification and link prediction.
Code and certain datasets are available at
https://github.com/wwbrannon/congrat.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:53:30 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Brannon",
"William",
""
],
[
"Fulay",
"Suyash",
""
],
[
"Jiang",
"Hang",
""
],
[
"Kang",
"Wonjune",
""
],
[
"Roy",
"Brandon",
""
],
[
"Kabbara",
"Jad",
""
],
[
"Roy",
"Deb",
""
]
] |
new_dataset
| 0.992203 |
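The ConGraT record above trains node and text encoders with a batch-wise contrastive objective inspired by joint text-image encoding. A CLIP-style sketch of such a symmetric InfoNCE loss follows; shapes and the temperature value are assumptions, and the paper additionally incorporates node-similarity terms not shown here.

```python
# Symmetric batch-wise contrastive loss between node and text embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(node_emb, text_emb, temperature=0.07):
    node_emb = F.normalize(node_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = node_emb @ text_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(node_emb.size(0))         # i-th node pairs with i-th text
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```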
2305.14326
|
Chan Young Park
|
Lucille Njoo, Chan Young Park, Octavia Stappart, Marvin Thielk, Yi Chu
and Yulia Tsvetkov
|
TalkUp: A Novel Dataset Paving the Way for Understanding Empowering
Language
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Empowering language is important in many real-world contexts, from education
to workplace dynamics to healthcare. Though language technologies are growing
more prevalent in these contexts, empowerment has not been studied in NLP, and
moreover, it is inherently challenging to operationalize because of its subtle,
implicit nature. This work presents the first computational exploration of
empowering language. We first define empowerment detection as a new task,
grounding it in linguistic and social psychology literature. We then
crowdsource a novel dataset of Reddit posts labeled for empowerment, reasons
why these posts are empowering to readers, and the social relationships between
posters and readers. Our preliminary analyses show that this dataset, which we
call TalkUp, can be used to train language models that capture empowering and
disempowering language. More broadly, as it is rich with the ambiguities and
diverse interpretations of real-world language, TalkUp provides an avenue to
explore implication, presuppositions, and how social context influences the
meaning of language.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:55:34 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Njoo",
"Lucille",
""
],
[
"Park",
"Chan Young",
""
],
[
"Stappart",
"Octavia",
""
],
[
"Thielk",
"Marvin",
""
],
[
"Chu",
"Yi",
""
],
[
"Tsvetkov",
"Yulia",
""
]
] |
new_dataset
| 0.999711 |
2305.14327
|
Da Yin
|
Da Yin, Xiao Liu, Fan Yin, Ming Zhong, Hritik Bansal, Jiawei Han,
Kai-Wei Chang
|
Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation
|
Work in progress
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Instruction tuning has emerged to enhance the capabilities of large language
models (LLMs) in providing appropriate outputs based on input instructions.
However, existing methods for collecting instruction-tuning data suffer from
limitations in scalability and affordability. In this paper, we propose
Dynosaur, a dynamic growth paradigm for instruction-tuning data curation. Built
upon the metadata of existing NLP datasets, we generate multiple task
instructions applicable to various NLP datasets and determine the relevant data
fields for constructing instruction-tuning data with LLMs. Dynosaur offers
several advantages: 1) lower generation costs (less than $12 for generating
800K instruction-tuning data), 2) good quality of instruction-tuning data
(better performance than Alpaca and Instruction GPT-4 on Super-NI with
comparable data sizes), and 3) the ability to grow dynamically by incorporating
new datasets from Huggingface Datasets Platform. We further investigate
continual learning as an approach to learning with the ever-growing
instruction-tuning dataset. We demonstrate that replay methods not only help
mitigate forgetting issues but also help generalize to unseen tasks better. As a
novel continual learning scenario for instruction tuning, selecting tasks based
on instruction representations can be an effective replaying strategy. Code and
data are released at \url{https://github.com/WadeYin9712/Dynosaur}.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:56:26 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Yin",
"Da",
""
],
[
"Liu",
"Xiao",
""
],
[
"Yin",
"Fan",
""
],
[
"Zhong",
"Ming",
""
],
[
"Bansal",
"Hritik",
""
],
[
"Han",
"Jiawei",
""
],
[
"Chang",
"Kai-Wei",
""
]
] |
new_dataset
| 0.999522 |
2305.14341
|
Lucy Lu Wang
|
Yue Guo, Tal August, Gondy Leroy, Trevor Cohen, Lucy Lu Wang
|
APPLS: A Meta-evaluation Testbed for Plain Language Summarization
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While there has been significant development of models for Plain Language
Summarization (PLS), evaluation remains a challenge. This is in part because
PLS involves multiple, interrelated language transformations (e.g., adding
background explanations, removing specialized terminology). No metrics are
explicitly engineered for PLS, and the suitability of other text generation
evaluation metrics remains unclear. To address these concerns, our study
presents a granular meta-evaluation testbed, APPLS, designed to evaluate
existing metrics for PLS. Drawing on insights from previous research, we define
controlled perturbations for our testbed along four criteria that a metric of
plain language should capture: informativeness, simplification, coherence, and
faithfulness. Our analysis of metrics using this testbed reveals that current
metrics fail to capture simplification, signaling a crucial gap. In response,
we introduce POMME, a novel metric designed to assess text simplification in
PLS. We demonstrate its correlation with simplification perturbations and
validate across a variety of datasets. Our research contributes the first
meta-evaluation testbed for PLS and a comprehensive evaluation of existing
metrics, offering insights with relevance to other text generation tasks.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:59:19 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Guo",
"Yue",
""
],
[
"August",
"Tal",
""
],
[
"Leroy",
"Gondy",
""
],
[
"Cohen",
"Trevor",
""
],
[
"Wang",
"Lucy Lu",
""
]
] |
new_dataset
| 0.99022 |
2305.14344
|
Agrim Gupta
|
Agrim Gupta, Jiajun Wu, Jia Deng, Li Fei-Fei
|
Siamese Masked Autoencoders
|
Project page https://siam-mae-video.github.io/
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Establishing correspondence between images or scenes is a significant
challenge in computer vision, especially given occlusions, viewpoint changes,
and varying object appearances. In this paper, we present Siamese Masked
Autoencoders (SiamMAE), a simple extension of Masked Autoencoders (MAE) for
learning visual correspondence from videos. SiamMAE operates on pairs of
randomly sampled video frames and asymmetrically masks them. These frames are
processed independently by an encoder network, and a decoder composed of a
sequence of cross-attention layers is tasked with predicting the missing
patches in the future frame. By masking a large fraction ($95\%$) of patches in
the future frame while leaving the past frame unchanged, SiamMAE encourages the
network to focus on object motion and learn object-centric representations.
Despite its conceptual simplicity, features learned via SiamMAE outperform
state-of-the-art self-supervised methods on video object segmentation, pose
keypoint propagation, and semantic part propagation tasks. SiamMAE achieves
competitive results without relying on data augmentation, handcrafted
tracking-based pretext tasks, or other techniques to prevent representational
collapse.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:59:46 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Gupta",
"Agrim",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Deng",
"Jia",
""
],
[
"Fei-Fei",
"Li",
""
]
] |
new_dataset
| 0.995714 |
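The SiamMAE record above hinges on asymmetric masking: the past frame stays intact while roughly 95% of the future frame's patches are hidden and later predicted by a cross-attention decoder. A small sketch of that masking step, with tensor shapes and patch counts as assumptions:

```python
# Keep the past frame intact; keep only ~5% of future-frame patches visible.
import torch

def asymmetric_mask(past_patches, future_patches, mask_ratio=0.95):
    """Patches are (num_patches, dim); returns visible future patches and their indices."""
    n = future_patches.size(0)
    n_keep = max(1, int(round(n * (1 - mask_ratio))))
    keep = torch.randperm(n)[:n_keep]          # random subset left visible
    return past_patches, future_patches[keep], keep

past, future = torch.randn(196, 768), torch.randn(196, 768)
_, visible_future, kept = asymmetric_mask(past, future)  # decoder predicts the rest
```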
1910.07351
|
Naman Jain
|
Monarch Parmar, Naman Jain, Pranjali Jain, P Jayakrishna Sahit, Soham
Pachpande, Shruti Singh and Mayank Singh
|
NLPExplorer: Exploring the Universe of NLP Papers
|
42nd European Conference on Information Retrieval Research, ECIR 2020
| null |
10.1007/978-3-030-45442-5_61
| null |
cs.IR cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the current research trends, problems, and their innovative
solutions remains a bottleneck due to the ever-increasing volume of scientific
articles. In this paper, we propose NLPExplorer, a completely automatic portal
for indexing, searching, and visualizing Natural Language Processing (NLP)
research volume. NLPExplorer presents interesting insights from papers,
authors, venues, and topics. In contrast to previous topic modelling based
approaches, we manually curate five coarse-grained non-exclusive topical
categories namely Linguistic Target (Syntax, Discourse, etc.), Tasks (Tagging,
Summarization, etc.), Approaches (unsupervised, supervised, etc.), Languages
(English, Chinese, etc.) and Dataset types (news, clinical notes, etc.). Some of
the novel features include a list of young popular authors, popular URLs, and
datasets, a list of topically diverse papers and recent popular papers. Also,
it provides temporal statistics such as yearwise popularity of topics,
datasets, and seminal papers. To facilitate future research and system
development, we make all the processed datasets accessible through API calls.
The current system is available at http://lingo.iitgn.ac.in:5001/
|
[
{
"version": "v1",
"created": "Wed, 16 Oct 2019 13:57:15 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 18:04:37 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Parmar",
"Monarch",
""
],
[
"Jain",
"Naman",
""
],
[
"Jain",
"Pranjali",
""
],
[
"Sahit",
"P Jayakrishna",
""
],
[
"Pachpande",
"Soham",
""
],
[
"Singh",
"Shruti",
""
],
[
"Singh",
"Mayank",
""
]
] |
new_dataset
| 0.984372 |
2007.06343
|
Rahul Tallamraju
|
Rahul Tallamraju, Nitin Saini, Elia Bonetto, Michael Pabst, Yu Tang
Liu, Michael J. Black and Aamir Ahmad
|
AirCapRL: Autonomous Aerial Human Motion Capture using Deep
Reinforcement Learning
|
Article accepted for publication in Robotics and Automation Letters
(RA-L) and IROS 2020. 8 Pages, 8 figures
| null |
10.1109/LRA.2020.3013906
| null |
cs.RO cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, we introduce a deep reinforcement learning (RL) based
multi-robot formation controller for the task of autonomous aerial human motion
capture (MoCap). We focus on vision-based MoCap, where the objective is to
estimate the trajectory of body pose and shape of a single moving person using
multiple micro aerial vehicles. State-of-the-art solutions to this problem are
based on classical control methods, which depend on hand-crafted system and
observation models. Such models are difficult to derive and generalize across
different systems. Moreover, the non-linearity and non-convexities of these
models lead to sub-optimal controls. In our work, we formulate this problem as
a sequential decision making task to achieve the vision-based motion capture
objectives, and solve it using a deep neural network-based RL method. We
leverage proximal policy optimization (PPO) to train a stochastic decentralized
control policy for formation control. The neural network is trained in a
parallelized setup in synthetic environments. We performed extensive simulation
experiments to validate our approach. Finally, real-robot experiments
demonstrate that our policies generalize to real world conditions. Video Link:
https://bit.ly/38SJfjo Supplementary: https://bit.ly/3evfo1O
|
[
{
"version": "v1",
"created": "Mon, 13 Jul 2020 12:30:31 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Aug 2020 11:10:52 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Tallamraju",
"Rahul",
""
],
[
"Saini",
"Nitin",
""
],
[
"Bonetto",
"Elia",
""
],
[
"Pabst",
"Michael",
""
],
[
"Liu",
"Yu Tang",
""
],
[
"Black",
"Michael J.",
""
],
[
"Ahmad",
"Aamir",
""
]
] |
new_dataset
| 0.997055 |
2111.08692
|
Daniel Lemire
|
Daniel Lemire
|
Unicode at Gigabytes per Second
|
SPIRE 2021: String Processing and Information Retrieval
|
Software: Practice and Experience, Volume 52, Issue 2, February 2022
|
10.1007/978-3-030-86692-1_2
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We often represent text using Unicode formats (UTF-8 and UTF-16). The UTF-8
format is increasingly popular, especially on the web (XML, HTML, JSON, Rust,
Go, Swift, Ruby). The UTF-16 format is most common in Java, .NET, and inside
operating systems such as Windows.
Software systems frequently have to convert text from one Unicode format to
the other. While recent disks have bandwidths of 5 GiB/s or more, conventional
approaches transcode non-ASCII text at a fraction of a gigabyte per second.
We show that we can validate and transcode Unicode text at gigabytes per
second on current systems (x64 and ARM) without sacrificing safety. Our
open-source library can be ten times faster than the popular ICU library on
non-ASCII strings and even faster on ASCII strings.
|
[
{
"version": "v1",
"created": "Sun, 14 Nov 2021 23:20:22 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 17:16:53 GMT"
},
{
"version": "v3",
"created": "Sat, 20 May 2023 02:00:24 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Lemire",
"Daniel",
""
]
] |
new_dataset
| 0.999537 |
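To make the transcoding task in the record above concrete, here is the UTF-8 to UTF-16 round trip in plain Python. The paper's contribution is performing this validation and conversion with SIMD at gigabytes per second, which this correctness-only sketch does not attempt.

```python
# UTF-8 -> UTF-16 (little endian) transcoding round trip, correctness only.
utf8_bytes = "naïve – 文字".encode("utf-8")

text = utf8_bytes.decode("utf-8")          # validates and decodes UTF-8
utf16_bytes = text.encode("utf-16-le")     # transcode to UTF-16
assert utf16_bytes.decode("utf-16-le") == text
```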
2112.11691
|
Xu Yan
|
Xu Yan, Zhihao Yuan, Yuhao Du, Yinghong Liao, Yao Guo, Zhen Li,
Shuguang Cui
|
Comprehensive Visual Question Answering on Point Clouds through
Compositional Scene Manipulation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Question Answering on 3D Point Cloud (VQA-3D) is an emerging yet
challenging field that aims at answering various types of textual questions
given an entire point cloud scene. To tackle this problem, we propose the
CLEVR3D, a large-scale VQA-3D dataset consisting of 171K questions from 8,771
3D scenes. Specifically, we develop a question engine leveraging 3D scene graph
structures to generate diverse reasoning questions, covering the questions of
objects' attributes (i.e., size, color, and material) and their spatial
relationships. Through such a manner, we initially generated 44K questions from
1,333 real-world scenes. Moreover, a more challenging setup is proposed to
remove the confounding bias and adjust the context from a common-sense layout.
Such a setup requires the network to achieve comprehensive visual understanding
when the 3D scene is different from the general co-occurrence context (e.g.,
chairs always exist with tables). To this end, we further introduce the
compositional scene manipulation strategy and generate 127K questions from
7,438 augmented 3D scenes, which can improve VQA-3D models for real-world
comprehension. Built upon the proposed dataset, we baseline several VQA-3D
models, where experimental results verify that the CLEVR3D can significantly
boost other 3D scene understanding tasks. Our code and dataset will be made
publicly available at https://github.com/yanx27/CLEVR3D.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 06:43:21 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Dec 2021 09:13:52 GMT"
},
{
"version": "v3",
"created": "Mon, 22 May 2023 02:55:52 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Yan",
"Xu",
""
],
[
"Yuan",
"Zhihao",
""
],
[
"Du",
"Yuhao",
""
],
[
"Liao",
"Yinghong",
""
],
[
"Guo",
"Yao",
""
],
[
"Li",
"Zhen",
""
],
[
"Cui",
"Shuguang",
""
]
] |
new_dataset
| 0.99887 |
2202.10206
|
Qian Ren
|
Qian Ren, Yue Li, Yingjun Wu, Yuchen Wu, Hong Lei, Lei Wang, Bangdao
Chen
|
DECLOAK: Enable Secure and Cheap Multi-Party Transactions on Legacy
Blockchains by a Minimally Trusted TEE Network
|
arXiv admin note: text overlap with arXiv:2106.13926
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
As the confidentiality and scalability of smart contracts have become a
crucial demand of blockchains, off-chain contract execution frameworks have
been promising. Some have recently expanded off-chain contracts to Multi-Party
Computation (MPC), which seek to transition the on-chain states by off-chain
MPC. The most general problem among these solutions is MPT, since its off-chain
MPC takes on- and off-chain inputs, delivers on- and off-chain outputs, and can
be publicly verified by the blockchain, thus capable of covering more
scenarios. However, existing Multi-Party Transaction (MPT) solutions lack at
least one of data availability, financial fairness, delivery fairness, and
delivery atomicity. These properties are crucially valued by communities, e.g.,
the Ethereum community, or users. Even worse, these solutions require high-cost
interactions between the blockchain and off-chain systems.
This paper proposes a novel MPT-enabled off-chain contract execution
framework, DECLOAK. DECLOAK is the first to achieve data availability of MPT,
and our method can apply to other fields that seek to persist user data
on-chain. Moreover, DECLOAK solves all mentioned shortcomings with even lower
gas costs and weaker assumptions. Specifically, DECLOAK tolerates all but one
Byzantine party and TEE executors. Evaluated on 10 MPTs, DECLOAK reduces the
gas cost of the SOTA, Cloak, by 65.6%. Consequently, we are the first not only
to achieve this level of secure MPT under practical assumptions, but also to
demonstrate that an MPT can be evaluated at a gas cost comparable to that of a
normal Ethereum transaction. Moreover, the cost advantage of DECLOAK increases
as the number of MPT parties grows.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 13:31:54 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 11:55:10 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Ren",
"Qian",
""
],
[
"Li",
"Yue",
""
],
[
"Wu",
"Yingjun",
""
],
[
"Wu",
"Yuchen",
""
],
[
"Lei",
"Hong",
""
],
[
"Wang",
"Lei",
""
],
[
"Chen",
"Bangdao",
""
]
] |
new_dataset
| 0.996246 |
2206.03891
|
Carlos Hinojosa
|
Carlos Hinojosa, Miguel Marquez, Henry Arguello, Ehsan Adeli, Li
Fei-Fei, Juan Carlos Niebles
|
PrivHAR: Recognizing Human Actions From Privacy-preserving Lens
|
Oral paper presented at European Conference on Computer Vision (ECCV)
2022, in Tel Aviv, Israel
|
Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv,
Israel, October 23--27, 2022, Proceedings, Part IV
|
10.1007/978-3-031-19772-7_19
| null |
cs.CV cs.AI cs.CR cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The accelerated use of digital cameras prompts an increasing concern about
privacy and security, particularly in applications such as action recognition.
In this paper, we propose an optimizing framework to provide robust visual
privacy protection along the human action recognition pipeline. Our framework
parameterizes the camera lens to successfully degrade the quality of the videos
to inhibit privacy attributes and protect against adversarial attacks while
maintaining relevant features for activity recognition. We validate our
approach with extensive simulations and hardware experiments.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 13:43:29 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Jan 2023 15:49:27 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Hinojosa",
"Carlos",
""
],
[
"Marquez",
"Miguel",
""
],
[
"Arguello",
"Henry",
""
],
[
"Adeli",
"Ehsan",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Niebles",
"Juan Carlos",
""
]
] |
new_dataset
| 0.985242 |
2208.04159
|
Ningning Wang
|
Ningning Wang, Guodong Li, Sihuang Hu, Min Ye
|
Constructing MSR codes with subpacketization $2^{n/3}$ for $k+1$ helper
nodes
| null |
IEEE Transactions on Information Theory (Volume: 69, Issue: 6,
June 2023)
|
10.1109/TIT.2023.3238759
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Wang et al. (IEEE Transactions on Information Theory, vol. 62, no. 8, 2016)
proposed an explicit construction of an $(n=k+2,k)$ Minimum Storage
Regenerating (MSR) code with $2$ parity nodes and subpacketization $2^{k/3}$.
The number of helper nodes for this code is $d=k+1=n-1$, and this code has the
smallest subpacketization among all the existing explicit constructions of MSR
codes with the same $n,k$ and $d$. In this paper, we present a new construction
of MSR codes for a wider range of parameters. More precisely, we still fix
$d=k+1$, but we allow the code length $n$ to be any integer satisfying $n\ge
k+2$. The field size of our code is linear in $n$, and the subpacketization of
our code is $2^{n/3}$. This value is slightly larger than the subpacketization
of the construction by Wang et al. because their code construction only
guarantees optimal repair for all the systematic nodes while our code
construction guarantees optimal repair for all nodes.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 13:59:11 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 14:58:30 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Wang",
"Ningning",
""
],
[
"Li",
"Guodong",
""
],
[
"Hu",
"Sihuang",
""
],
[
"Ye",
"Min",
""
]
] |
new_dataset
| 0.985286 |
2209.03416
|
Sophia Sanborn
|
Sophia Sanborn, Christian Shewmake, Bruno Olshausen, Christopher
Hillar
|
Bispectral Neural Networks
| null |
The Eleventh International Conference on Learning Representations
(2023)
| null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a neural network architecture, Bispectral Neural Networks (BNNs)
for learning representations that are invariant to the actions of compact
commutative groups on the space over which a signal is defined. The model
incorporates the ansatz of the bispectrum, an analytically defined group
invariant that is complete -- that is, it preserves all signal structure while
removing only the variation due to group actions. Here, we demonstrate that
BNNs are able to simultaneously learn groups, their irreducible
representations, and corresponding equivariant and complete-invariant maps
purely from the symmetries implicit in data. Further, we demonstrate that the
completeness property endows these networks with strong invariance-based
adversarial robustness. This work establishes Bispectral Neural Networks as a
powerful computational primitive for robust invariant representation learning.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 18:34:48 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Sep 2022 18:38:48 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Oct 2022 15:00:56 GMT"
},
{
"version": "v4",
"created": "Sun, 19 Mar 2023 16:34:47 GMT"
},
{
"version": "v5",
"created": "Fri, 19 May 2023 19:17:35 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Sanborn",
"Sophia",
""
],
[
"Shewmake",
"Christian",
""
],
[
"Olshausen",
"Bruno",
""
],
[
"Hillar",
"Christopher",
""
]
] |
new_dataset
| 0.990692 |
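The bispectrum ansatz in the record above has a simple classical instance for the cyclic translation group: B(k1, k2) = F(k1) F(k2) conj(F(k1 + k2)), which is unchanged by circular shifts of the signal. A quick numerical check of that invariance (this is the textbook invariant, not the learned network):

```python
# Translation-group bispectrum and a check of its shift invariance.
import numpy as np

def bispectrum(x):
    F = np.fft.fft(x)
    k = np.arange(len(x))
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % len(x)])

x = np.random.randn(16)
assert np.allclose(bispectrum(x), bispectrum(np.roll(x, 5)))
```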
2211.00313
|
Guang Li
|
Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
|
RGMIM: Region-Guided Masked Image Modeling for Learning Meaningful
Representation from X-Ray Images
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: Self-supervised learning has been gaining attention in the medical
field for its potential to improve computer-aided diagnosis. One popular method
of self-supervised learning is masked image modeling (MIM), which involves
masking a subset of input pixels and predicting the masked pixels. However,
traditional MIM methods typically use a random masking strategy, which may not
be ideal for medical images that often have a small region of interest for
disease detection. To address this issue, this work aims to improve MIM for
medical images and evaluate its effectiveness in an open X-ray image dataset.
Methods: In this paper, we present a novel method called region-guided masked
image modeling (RGMIM) for learning meaningful representation from X-ray
images. Our method adopts a new masking strategy that utilizes organ mask
information to identify valid regions for learning more meaningful
representations. The proposed method was contrasted with five self-supervised
learning techniques (MAE, SKD, Cross, BYOL, and SimSiam). We conducted
quantitative evaluations on an open lung X-ray image dataset as well as masking
ratio hyperparameter studies. Results: When using the entire training set,
RGMIM outperformed other comparable methods, achieving a 0.962 lung disease
detection accuracy. Specifically, RGMIM significantly improved performance in
small data volumes, such as 5% and 10% of the training set (846 and 1,693
images) compared to other methods, and achieved a 0.957 detection accuracy even
when only 50% of the training set was used. Conclusions: RGMIM can mask more
valid regions, facilitating the learning of discriminative representations and
the subsequent high-accuracy lung disease detection. RGMIM outperforms other
state-of-the-art self-supervised learning methods in experiments, particularly
when limited training data is used.
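To make the region-guided masking idea concrete, a minimal sketch of the patch-selection step is given below. The thresholds, patch size, and function name are hypothetical placeholders; the paper's actual masking strategy and model details may differ.

```python
import numpy as np

def region_guided_patch_mask(organ_mask, patch_size=16, mask_ratio=0.6, min_overlap=0.5):
    """Select patches to mask only inside the organ region (illustrative sketch).

    organ_mask: (H, W) binary array marking the region of interest (e.g., lungs).
    Returns a boolean (H // patch_size, W // patch_size) grid; True = masked patch.
    """
    h, w = organ_mask.shape
    gh, gw = h // patch_size, w // patch_size
    # Fraction of organ pixels covered by each patch.
    patches = organ_mask[:gh * patch_size, :gw * patch_size].astype(float)
    overlap = patches.reshape(gh, patch_size, gw, patch_size).mean(axis=(1, 3))
    candidates = np.argwhere(overlap >= min_overlap)      # patches inside the valid region
    n_mask = int(round(mask_ratio * len(candidates)))
    chosen = candidates[np.random.choice(len(candidates), size=n_mask, replace=False)]
    grid = np.zeros((gh, gw), dtype=bool)
    grid[chosen[:, 0], chosen[:, 1]] = True
    return grid
```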
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 07:41:03 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 01:55:07 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Apr 2023 10:06:36 GMT"
},
{
"version": "v4",
"created": "Sun, 21 May 2023 14:36:59 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Li",
"Guang",
""
],
[
"Togo",
"Ren",
""
],
[
"Ogawa",
"Takahiro",
""
],
[
"Haseyama",
"Miki",
""
]
] |
new_dataset
| 0.965846 |
2211.05705
|
Bobo Li
|
Bobo Li, Hao Fei, Fei Li, Yuhan Wu, Jinsong Zhang, Shengqiong Wu,
Jingye Li, Yijiang Liu, Lizi Liao, Tat-Seng Chua and Donghong Ji
|
DiaASQ : A Benchmark of Conversational Aspect-based Sentiment Quadruple
Analysis
|
Accepted to Findings of ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development of aspect-based sentiment analysis (ABSA) within recent
decades shows great potential for real-world applications. Current ABSA works,
however, are mostly limited to the scenario of a single text piece, leaving the
study in dialogue contexts unexplored. To bridge the gap between fine-grained
sentiment analysis and conversational opinion mining, in this work, we
introduce a novel task of conversational aspect-based sentiment quadruple
analysis, namely DiaASQ, aiming to detect the quadruple of
target-aspect-opinion-sentiment in a dialogue. We manually construct a
large-scale high-quality DiaASQ dataset in both Chinese and English languages.
We develop a neural model to benchmark the task, which effectively performs
end-to-end quadruple prediction and incorporates rich dialogue-specific and
discourse feature representations for
better cross-utterance quadruple extraction. We hope the new benchmark will
spur more advancements in the sentiment analysis community.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 17:18:20 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Nov 2022 03:30:25 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Dec 2022 05:06:05 GMT"
},
{
"version": "v4",
"created": "Mon, 22 May 2023 10:49:20 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Li",
"Bobo",
""
],
[
"Fei",
"Hao",
""
],
[
"Li",
"Fei",
""
],
[
"Wu",
"Yuhan",
""
],
[
"Zhang",
"Jinsong",
""
],
[
"Wu",
"Shengqiong",
""
],
[
"Li",
"Jingye",
""
],
[
"Liu",
"Yijiang",
""
],
[
"Liao",
"Lizi",
""
],
[
"Chua",
"Tat-Seng",
""
],
[
"Ji",
"Donghong",
""
]
] |
new_dataset
| 0.997345 |
2211.08675
|
Hyoukjun Kwon
|
Hyoukjun Kwon, Krishnakumar Nair, Jamin Seo, Jason Yik, Debabrata
Mohapatra, Dongyuan Zhan, Jinook Song, Peter Capak, Peizhao Zhang, Peter
Vajda, Colby Banbury, Mark Mazumder, Liangzhen Lai, Ashish Sirasao, Tushar
Krishna, Harshit Khaitan, Vikas Chandra, Vijay Janapa Reddi
|
XRBench: An Extended Reality (XR) Machine Learning Benchmark Suite for
the Metaverse
| null | null | null | null |
cs.LG cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Real-time multi-task multi-model (MTMM) workloads, a new form of deep
learning inference workloads, are emerging for application areas like extended
reality (XR) to support metaverse use cases. These workloads combine user
interactivity with computationally complex machine learning (ML) activities.
Compared to standard ML applications, these ML workloads present unique
difficulties and constraints. Real-time MTMM workloads impose heterogeneity and
concurrency requirements on future ML systems and devices, necessitating the
development of new capabilities. This paper begins with a discussion of the
various characteristics of these real-time MTMM ML workloads and presents an
ontology for evaluating the performance of future ML hardware for XR systems.
Next, we present XRBENCH, a collection of MTMM ML tasks, models, and usage
scenarios that execute these models in three representative ways: cascaded,
concurrent, and cascaded-concurrent for XR use cases. Finally, we emphasize the
need for new metrics that capture the requirements properly. We hope that our
work will stimulate research and lead to the development of a new generation of
ML systems for XR use cases. XRBench is available as an open-source project:
https://github.com/XRBench
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 05:08:42 GMT"
},
{
"version": "v2",
"created": "Sat, 20 May 2023 00:16:23 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Kwon",
"Hyoukjun",
""
],
[
"Nair",
"Krishnakumar",
""
],
[
"Seo",
"Jamin",
""
],
[
"Yik",
"Jason",
""
],
[
"Mohapatra",
"Debabrata",
""
],
[
"Zhan",
"Dongyuan",
""
],
[
"Song",
"Jinook",
""
],
[
"Capak",
"Peter",
""
],
[
"Zhang",
"Peizhao",
""
],
[
"Vajda",
"Peter",
""
],
[
"Banbury",
"Colby",
""
],
[
"Mazumder",
"Mark",
""
],
[
"Lai",
"Liangzhen",
""
],
[
"Sirasao",
"Ashish",
""
],
[
"Krishna",
"Tushar",
""
],
[
"Khaitan",
"Harshit",
""
],
[
"Chandra",
"Vikas",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
] |
new_dataset
| 0.952289 |
2212.10325
|
Hongyi Yuan
|
Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, Songfang Huang
|
SeqDiffuSeq: Text Diffusion with Encoder-Decoder Transformers
|
Under Review
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion models, a new generative modelling paradigm, have achieved great
success in image, audio, and video generation. However, considering the
discrete categorical nature of text, it is not trivial to extend continuous
diffusion models to natural language, and text diffusion models are less
studied. Sequence-to-sequence text generation is one of the essential natural
language processing topics. In this work, we apply diffusion models to approach
sequence-to-sequence text generation, and explore whether the superior
generation performance of diffusion models can transfer to the natural language
domain. We propose SeqDiffuSeq, a text diffusion model for sequence-to-sequence
generation. SeqDiffuSeq uses an encoder-decoder Transformers architecture to
model denoising function. In order to improve generation quality, SeqDiffuSeq
combines the self-conditioning technique and a newly proposed adaptive noise
schedule technique. The adaptive noise schedule distributes the difficulty of
denoising evenly across time steps and assigns exclusive noise schedules to
tokens at different positions. Experimental results illustrate the
good performance on sequence-to-sequence generation in terms of text quality
and inference time.
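As a rough illustration of position-dependent noise schedules (not the adaptive calibration procedure used by SeqDiffuSeq, which updates schedules from training statistics), one can simply give each token position its own beta sequence:

```python
import torch

def per_position_noise_schedule(num_steps: int, seq_len: int,
                                start: float = 1e-4, end: float = 0.05) -> torch.Tensor:
    """Return a (num_steps, seq_len) tensor of betas, one schedule per token position.

    Illustrative only: later positions get slightly steeper schedules, so the
    difficulty of denoising is spread differently across positions and time steps.
    """
    steps = torch.linspace(0.0, 1.0, num_steps).unsqueeze(1)            # (T, 1)
    position_scale = torch.linspace(1.0, 1.5, seq_len).unsqueeze(0)     # (1, L)
    betas = start + (end - start) * steps * position_scale
    return betas.clamp(max=0.999)

betas = per_position_noise_schedule(num_steps=2000, seq_len=64)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # per-position alpha-bar_t used in q(x_t | x_0)
```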
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 15:16:24 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 07:43:22 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2023 15:52:02 GMT"
},
{
"version": "v4",
"created": "Tue, 9 May 2023 10:44:48 GMT"
},
{
"version": "v5",
"created": "Mon, 22 May 2023 17:31:46 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Yuan",
"Hongyi",
""
],
[
"Yuan",
"Zheng",
""
],
[
"Tan",
"Chuanqi",
""
],
[
"Huang",
"Fei",
""
],
[
"Huang",
"Songfang",
""
]
] |
new_dataset
| 0.995673 |
2301.00511
|
Shouguo Yang
|
Shouguo Yang, Chaopeng Dong, Yang Xiao, Yiran Cheng, Zhiqiang Shi, Zhi
Li, and Limin Sun
|
Asteria-Pro: Enhancing Deep-Learning Based Binary Code Similarity
Detection by Incorporating Domain Knowledge
|
arXiv admin note: text overlap with arXiv:2108.06082
| null | null | null |
cs.SE cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The widespread code reuse allows vulnerabilities to proliferate among a vast
variety of firmware. There is an urgent need to detect such vulnerable code
effectively and efficiently. By measuring code similarities, AI-based binary
code similarity detection is applied to detecting vulnerable code at scale.
Existing studies have proposed various function features to capture the
commonality for similarity detection. Nevertheless, the significant code
syntactic variability induced by the diversity of IoT hardware architectures
diminishes the accuracy of binary code similarity detection. In our earlier
study and the tool Asteria, we adopt a Tree-LSTM network to summarize function
semantics as function commonality, and the evaluation results indicate
strong performance. However, it still has utility concerns due to excessive
time costs and inadequate precision while searching for large-scale firmware
bugs.
To this end, we propose a novel deep learning enhancement architecture by
incorporating domain knowledge-based pre-filtration and re-ranking modules, and
we develop a prototype based on Asteria called Asteria-Pro. The pre-filtration
module eliminates dissimilar functions to speed up subsequent deep learning
model calculations, while the re-ranking module raises the rankings of
vulnerable functions among the candidates generated by the deep learning model.
Our evaluation indicates that the pre-filtration module cuts the calculation
time by 96.9% and re-ranking improves MRR and Recall by 23.71% and 36.4%. By
incorporating the pre-filtration and re-ranking modules, Asteria-Pro
outperforms existing state-of-the-art approaches in the bug search task by a
significant margin. We conduct a large-scale real-world firmware bug
search in which Asteria-Pro detects 1,482 vulnerable functions with a high
precision of 91.65%.
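The overall prefilter-score-rerank flow can be pictured with the following sketch. All feature choices, thresholds, and weights here are hypothetical placeholders, not the actual modules of Asteria-Pro.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / max(len(a | b), 1)

def search_similar_functions(query, candidates, deep_score, top_k=50,
                             prefilter_threshold=0.1, boost_weight=0.2):
    """Illustrative three-stage pipeline: cheap pre-filtration, deep scoring, re-ranking.

    query / candidates: dicts with hypothetical keys
      "callees" - set of callee names (cheap syntactic feature),
      "strings" - set of referenced string literals (stand-in domain signal).
    deep_score(query, cand) -> float is the expensive learned similarity model.
    """
    # 1) Pre-filtration: drop clearly dissimilar functions using a cheap feature.
    survivors = [c for c in candidates
                 if jaccard(query["callees"], c["callees"]) > prefilter_threshold]

    # 2) Expensive deep similarity only on the surviving set.
    scored = [(c, deep_score(query, c)) for c in survivors]

    # 3) Re-rank with a lightweight domain-knowledge boost.
    def final_score(item):
        cand, score = item
        return score + boost_weight * jaccard(query["strings"], cand["strings"])

    return sorted(scored, key=final_score, reverse=True)[:top_k]
```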
|
[
{
"version": "v1",
"created": "Mon, 2 Jan 2023 03:16:26 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 02:01:35 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Yang",
"Shouguo",
""
],
[
"Dong",
"Chaopeng",
""
],
[
"Xiao",
"Yang",
""
],
[
"Cheng",
"Yiran",
""
],
[
"Shi",
"Zhiqiang",
""
],
[
"Li",
"Zhi",
""
],
[
"Sun",
"Limin",
""
]
] |
new_dataset
| 0.982769 |
2301.05935
|
Jorge Calvo-Zaragoza
|
Enrique Vidal, Alejandro H. Toselli, Antonio R\'ios-Vila, Jorge
Calvo-Zaragoza
|
End-to-End Page-Level Assessment of Handwritten Text Recognition
|
Published in Pattern Recognition
| null |
10.1016/j.patcog.2023.109695
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The evaluation of Handwritten Text Recognition (HTR) systems has
traditionally used metrics based on the edit distance between HTR and ground
truth (GT) transcripts, at both the character and word levels. This is very
adequate when the experimental protocol assumes that both GT and HTR text lines
are the same, which allows edit distances to be independently computed for each
given line. Driven by recent advances in pattern recognition, HTR systems
increasingly face the end-to-end page-level transcription of a document, where
the precision of locating the different text lines and their corresponding
reading order (RO) play a key role. In such a case, the standard metrics do not
take into account the inconsistencies that might appear. In this paper, the
problem of evaluating HTR systems at the page level is introduced in detail. We
analyse the convenience of using a two-fold evaluation, where the transcription
accuracy and the RO goodness are considered separately. Different alternatives
are proposed, analysed and empirically compared both through partially
simulated and through real, full end-to-end experiments. Results support the
validity of the proposed two-fold evaluation approach. An important conclusion
is that such an evaluation can be adequately achieved by just two simple and
well-known metrics: the Word Error Rate (WER), that takes transcription
sequentiality into account, and the here re-formulated Bag of Words Word Error
Rate (bWER), that ignores order. While the latter directly and very accurately
assesses intrinsic word recognition errors, the difference between both metrics
gracefully correlates with the Normalised Spearman's Foot Rule Distance (NSFD),
a metric which explicitly measures RO errors associated with layout analysis
flaws.
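For concreteness, an order-free word error rate can be computed from word multisets as sketched below; this is one plausible formulation for illustration, and the exact definition adopted in the paper may differ in details.

```python
from collections import Counter

def bag_of_words_wer(reference: list, hypothesis: list) -> float:
    """Order-free word error rate: unmatched words are paired as substitutions,
    the remainder counted as deletions or insertions, normalised by reference length."""
    ref_counts, hyp_counts = Counter(reference), Counter(hypothesis)
    common = sum((ref_counts & hyp_counts).values())   # words matched regardless of order
    unmatched_ref = len(reference) - common            # deletions + substitutions
    unmatched_hyp = len(hypothesis) - common           # insertions + substitutions
    errors = max(unmatched_ref, unmatched_hyp)         # subs + leftover ins or del
    return errors / max(len(reference), 1)

# Scrambled order but same words: bWER is 0 while an ordinary ordered WER would not be.
print(bag_of_words_wer("the cat sat".split(), "sat the cat".split()))  # 0.0
```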
|
[
{
"version": "v1",
"created": "Sat, 14 Jan 2023 15:43:07 GMT"
},
{
"version": "v2",
"created": "Sun, 21 May 2023 07:41:53 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Vidal",
"Enrique",
""
],
[
"Toselli",
"Alejandro H.",
""
],
[
"Ríos-Vila",
"Antonio",
""
],
[
"Calvo-Zaragoza",
"Jorge",
""
]
] |
new_dataset
| 0.996882 |
2302.01825
|
Wangmeng Xiang
|
Hanyuan Chen, Jun-Yan He, Wangmeng Xiang, Zhi-Qi Cheng, Wei Liu,
Hanbing Liu, Bin Luo, Yifeng Geng, Xuansong Xie
|
HDFormer: High-order Directed Transformer for 3D Human Pose Estimation
|
Accepted to IJCAI 2023; 9 pages, 5 figures, 7 tables; the code is at
https://github.com/hyer/HDFormer
|
In the 32nd international Joint Conference on Artificial
Intelligence (IJCAI 2023)
| null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human pose estimation is a challenging task due to its structured data
sequence nature. Existing methods primarily focus on pair-wise interaction of
body joints, which is insufficient for scenarios involving overlapping joints
and rapidly changing poses. To overcome these issues, we introduce a novel
approach, the High-order Directed Transformer (HDFormer), which leverages
high-order bone and joint relationships for improved pose estimation.
Specifically, HDFormer incorporates both self-attention and high-order
attention to formulate a multi-order attention module. This module facilitates
first-order "joint$\leftrightarrow$joint", second-order
"bone$\leftrightarrow$joint", and high-order "hyperbone$\leftrightarrow$joint"
interactions, effectively addressing issues in complex and occlusion-heavy
situations. In addition, modern CNN techniques are integrated into the
transformer-based architecture, balancing the trade-off between performance and
efficiency. HDFormer significantly outperforms state-of-the-art (SOTA) models
on Human3.6M and MPI-INF-3DHP datasets, requiring only 1/10 of the parameters
and significantly lower computational costs. Moreover, HDFormer demonstrates
broad real-world applicability, enabling real-time, accurate 3D pose
estimation. The source code is in https://github.com/hyer/HDFormer
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 16:00:48 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 06:32:17 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Chen",
"Hanyuan",
""
],
[
"He",
"Jun-Yan",
""
],
[
"Xiang",
"Wangmeng",
""
],
[
"Cheng",
"Zhi-Qi",
""
],
[
"Liu",
"Wei",
""
],
[
"Liu",
"Hanbing",
""
],
[
"Luo",
"Bin",
""
],
[
"Geng",
"Yifeng",
""
],
[
"Xie",
"Xuansong",
""
]
] |
new_dataset
| 0.998154 |
2303.00807
|
Jon Saad-Falcon
|
Jon Saad-Falcon, Omar Khattab, Keshav Santhanam, Radu Florian, Martin
Franz, Salim Roukos, Avirup Sil, Md Arafat Sultan, Christopher Potts
|
UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and
Distillation of Rerankers
| null | null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Many information retrieval tasks require large labeled datasets for
fine-tuning. However, such datasets are often unavailable, and their utility
for real-world applications can diminish quickly due to domain shifts. To
address this challenge, we develop and motivate a method for using large
language models (LLMs) to generate large numbers of synthetic queries cheaply.
The method begins by generating a small number of synthetic queries using an
expensive LLM. After that, a much less expensive one is used to create large
numbers of synthetic queries, which are used to fine-tune a family of reranker
models. These rerankers are then distilled into a single efficient retriever
for use in the target domain. We show that this technique boosts zero-shot
accuracy in long-tail domains, even where only 2K synthetic queries are used
for fine-tuning, and that it achieves substantially lower latency than standard
reranking methods. We make our end-to-end approach, including our synthetic
datasets and replication code, publicly available on Github:
https://github.com/primeqa/primeqa.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 20:21:23 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 17:59:22 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Saad-Falcon",
"Jon",
""
],
[
"Khattab",
"Omar",
""
],
[
"Santhanam",
"Keshav",
""
],
[
"Florian",
"Radu",
""
],
[
"Franz",
"Martin",
""
],
[
"Roukos",
"Salim",
""
],
[
"Sil",
"Avirup",
""
],
[
"Sultan",
"Md Arafat",
""
],
[
"Potts",
"Christopher",
""
]
] |
new_dataset
| 0.98814 |
2303.10981
|
Federico Califano
|
Federico Califano
|
Passivity-Preserving Safety-Critical Control using Control Barrier
Functions
| null | null | null | null |
cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this letter we propose a holistic analysis merging the techniques of
passivity-based control (PBC) and control barrier functions (CBF). We
constructively find conditions under which passivity of the closed-loop system
is preserved under CBF-based safety-critical control. The results provide an
energetic interpretation of safety-critical control schemes, and induce novel
passive designs which are less conservative than standard methods based on
damping injection. The results are specialised to port-Hamiltonian systems and
simulations are performed on a cart-pole system.
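For reference, the generic CBF-based safety filter (not necessarily the passivity-preserving variant developed here) solves, at each state $x$, the quadratic program $u^*(x)=\arg\min_{u}\ \tfrac{1}{2}\|u-u_{\mathrm{nom}}(x)\|^2$ subject to $L_f h(x)+L_g h(x)\,u\ge -\alpha(h(x))$, where $h$ is the control barrier function for the dynamics $\dot{x}=f(x)+g(x)u$, $L_f h$ and $L_g h$ are its Lie derivatives, and $\alpha$ is an extended class-$\mathcal{K}$ function; the constraint renders the safe set $\{x: h(x)\ge 0\}$ forward invariant.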
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 10:06:29 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 14:20:05 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Califano",
"Federico",
""
]
] |
new_dataset
| 0.976239 |
2303.12067
|
Sayed Erfan Arefin
|
Sayed Erfan Arefin
|
Simple Two-wheel Self-Balancing Robot Implementation
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Cyber-physical systems (CPS) are an emerging field of technology
that combines the physical and digital worlds by allowing for seamless
interaction and communication between the two. One of the key characteristics
of a CPS is its ability to take input from its environment and use that
information to produce an output through actuators in the physical world. A
balancing robot is a prime example of a CPS, as it uses input from its sensors
to continually monitor its orientation and take action to prevent falling over
by generating thrust through its wheels or manipulating its inertia. In this
specific project, a two-wheel self-balancing robot was developed, utilizing the
concept of a reverse pendulum. A reverse pendulum by default is inherently
unstable and requires an external force to maintain its balance. In this case,
the balancing robot produces this external force through the use of wheels and
motors. To achieve precise balancing, stepper motors were utilized in the
design of the robot. Additionally, the robot has the capability to move in four
basic directions and the movement is controlled through an app connected to the
robot via Bluetooth. This allows for remote control and monitoring of the
robot's movements and actions. Overall, the development of this two-wheel
self-balancing robot serves as a demonstration of the potential and
capabilities of cyber-physical systems technology.
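The control loop of such a robot is commonly a PID law on the measured tilt angle; the paper does not specify its controller or gains, so the following is only a generic sketch with hypothetical sensor and motor interfaces.

```python
import time

def balance_loop(read_tilt_deg, set_motor_speed, kp=25.0, ki=0.8, kd=1.2, dt=0.005):
    """Generic PID balancing loop for a two-wheel robot (illustrative only).

    read_tilt_deg(): returns the current tilt from vertical, in degrees.
    set_motor_speed(v): drives both stepper motors with signed speed v.
    """
    integral, prev_error = 0.0, 0.0
    while True:
        error = 0.0 - read_tilt_deg()            # target is upright (0 degrees)
        integral += error * dt
        derivative = (error - prev_error) / dt
        set_motor_speed(kp * error + ki * integral + kd * derivative)
        prev_error = error
        time.sleep(dt)
```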
|
[
{
"version": "v1",
"created": "Sun, 22 Jan 2023 00:49:37 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 14:51:20 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Arefin",
"Sayed Erfan",
""
]
] |
new_dataset
| 0.969001 |
2303.16509
|
Animesh Karnewar
|
Animesh Karnewar, Andrea Vedaldi, David Novotny, Niloy Mitra
|
HoloDiffusion: Training a 3D Diffusion Model using 2D Images
|
CVPR 2023 conference; project page at:
https://holodiffusion.github.io/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion models have emerged as the best approach for generative modeling of
2D images. Part of their success is due to the possibility of training them on
millions if not billions of images with a stable learning objective. However,
extending these models to 3D remains difficult for two reasons. First, finding
a large quantity of 3D training data is much more complex than for 2D images.
Second, while it is conceptually trivial to extend the models to operate on 3D
rather than 2D grids, the associated cubic growth in memory and compute
complexity makes this infeasible. We address the first challenge by introducing
a new diffusion setup that can be trained, end-to-end, with only posed 2D
images for supervision; and the second challenge by proposing an image
formation model that decouples model memory from spatial memory. We evaluate
our method on real-world data, using the CO3D dataset which has not been used
to train 3D generative models before. We show that our diffusion models are
scalable, train robustly, and are competitive in terms of sample quality and
fidelity to existing approaches for 3D generative modeling.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 07:35:56 GMT"
},
{
"version": "v2",
"created": "Sun, 21 May 2023 22:38:07 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Karnewar",
"Animesh",
""
],
[
"Vedaldi",
"Andrea",
""
],
[
"Novotny",
"David",
""
],
[
"Mitra",
"Niloy",
""
]
] |
new_dataset
| 0.982373 |
2304.10573
|
Philippe Hansen-Estruch
|
Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub
Grudzien Kuba, Sergey Levine
|
IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion
Policies
|
9 Pages, 4 Figures, 3 Tables
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Effective offline RL methods require properly handling out-of-distribution
actions. Implicit Q-learning (IQL) addresses this by training a Q-function
using only dataset actions through a modified Bellman backup. However, it is
unclear which policy actually attains the values represented by this implicitly
trained Q-function. In this paper, we reinterpret IQL as an actor-critic method
by generalizing the critic objective and connecting it to a
behavior-regularized implicit actor. This generalization shows how the induced
actor balances reward maximization and divergence from the behavior policy,
with the specific loss choice determining the nature of this tradeoff. Notably,
this actor can exhibit complex and multimodal characteristics, suggesting
issues with the conditional Gaussian actor fit with advantage weighted
regression (AWR) used in prior methods. Instead, we propose using samples from
a diffusion parameterized behavior policy and weights computed from the critic
to then importance sample our intended policy. We introduce Implicit Diffusion
Q-learning (IDQL), combining our general IQL critic with the policy extraction
method. IDQL maintains the ease of implementation of IQL while outperforming
prior offline RL methods and demonstrating robustness to hyperparameters. Code
is available at https://github.com/philippe-eecs/IDQL.
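The extraction step can be pictured as resampling behavior-policy actions with critic-derived weights. The sketch below is a simplified illustration; the softmax-of-advantage weighting shown is one plausible choice rather than the exact scheme used by IDQL.

```python
import torch

@torch.no_grad()
def extract_action(state, behavior_policy, q_net, v_net, num_samples=32, temperature=1.0):
    """Sample candidate actions from the (diffusion) behavior policy, then pick one
    with probability proportional to an advantage-based weight (illustrative only).

    state: 1D tensor. behavior_policy.sample(states) -> (N, action_dim) actions.
    """
    states = state.unsqueeze(0).expand(num_samples, -1)        # (N, state_dim)
    actions = behavior_policy.sample(states)                   # (N, action_dim)
    advantage = (q_net(states, actions) - v_net(states)).reshape(-1)
    weights = torch.softmax(advantage / temperature, dim=0)
    idx = torch.multinomial(weights, num_samples=1).item()
    return actions[idx]
```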
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 18:04:09 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 18:31:04 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Hansen-Estruch",
"Philippe",
""
],
[
"Kostrikov",
"Ilya",
""
],
[
"Janner",
"Michael",
""
],
[
"Kuba",
"Jakub Grudzien",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.981588 |
2304.11719
|
Wei Yao
|
Jie Shao, Wei Yao, Puzuo Wang, Zhiyi He, Lei Luo
|
Urban GeoBIM construction by integrating semantic LiDAR point clouds
with as-designed BIM models
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Developments in three-dimensional real worlds promote the integration of
geoinformation and building information models (BIM) known as GeoBIM in urban
construction. Light detection and ranging (LiDAR) integrated with global
navigation satellite systems can provide geo-referenced spatial information.
However, constructing detailed urban GeoBIM poses challenges in terms of LiDAR
data quality. BIM models designed from software are rich in geometrical
information but often lack accurate geo-referenced locations. In this paper, we
propose a complementary strategy that integrates LiDAR point clouds with
as-designed BIM models for reconstructing urban scenes. A state-of-the-art deep
learning framework and graph theory are first combined for LiDAR point cloud
segmentation. A coarse-to-fine matching program is then developed to integrate
object point clouds with corresponding BIM models. Results show the overall
segmentation accuracy of LiDAR datasets reaches up to 90%, and average
positioning accuracies of BIM models are 0.023 m for pole-like objects and
0.156 m for buildings, demonstrating the effectiveness of the method in
segmentation and matching processes. This work offers a practical solution for
rapid and accurate urban GeoBIM construction.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 18:16:14 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 05:21:06 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Shao",
"Jie",
""
],
[
"Yao",
"Wei",
""
],
[
"Wang",
"Puzuo",
""
],
[
"He",
"Zhiyi",
""
],
[
"Luo",
"Lei",
""
]
] |
new_dataset
| 0.997701 |
2305.07185
|
Lili Yu
|
Lili Yu, D\'aniel Simig, Colin Flaherty, Armen Aghajanyan, Luke
Zettlemoyer, Mike Lewis
|
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Autoregressive transformers are spectacular models for short sequences but
scale poorly to long sequences such as high-resolution images, podcasts, code,
or books. We propose Megabyte, a multi-scale decoder architecture that enables
end-to-end differentiable modeling of sequences of over one million bytes.
Megabyte segments sequences into patches and uses a local submodel within
patches and a global model between patches. This enables sub-quadratic
self-attention, much larger feedforward layers for the same compute, and
improved parallelism during decoding -- unlocking better performance at reduced
cost for both training and generation. Extensive experiments show that Megabyte
allows byte-level models to perform competitively with subword models on long
context language modeling, achieve state-of-the-art density estimation on
ImageNet, and model audio from raw files. Together, these results establish the
viability of tokenization-free autoregressive sequence modeling at scale.
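The patch decomposition at the heart of the architecture can be sketched in a few lines (shapes only; the real model adds byte embeddings, padding, and the global-to-local conditioning described in the paper):

```python
import torch

def patchify_bytes(byte_ids: torch.Tensor, patch_size: int = 8) -> torch.Tensor:
    """Split a byte sequence into fixed-size patches.

    byte_ids: (batch, seq_len) integer tensor of byte values, with seq_len divisible
    by patch_size for simplicity. Returns (batch, num_patches, patch_size): the global
    model attends over num_patches patch representations, while a small local model
    predicts the patch_size bytes inside each patch.
    """
    b, n = byte_ids.shape
    assert n % patch_size == 0
    return byte_ids.view(b, n // patch_size, patch_size)

patches = patchify_bytes(torch.randint(0, 256, (2, 1024)), patch_size=8)
print(patches.shape)  # torch.Size([2, 128, 8])
```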
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 00:55:41 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 21:09:11 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Yu",
"Lili",
""
],
[
"Simig",
"Dániel",
""
],
[
"Flaherty",
"Colin",
""
],
[
"Aghajanyan",
"Armen",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Lewis",
"Mike",
""
]
] |
new_dataset
| 0.99982 |
2305.07922
|
Yue Wang
|
Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi D.Q. Bui, Junnan Li,
Steven C.H. Hoi
|
CodeT5+: Open Code Large Language Models for Code Understanding and
Generation
|
26 pages, preprint
| null | null | null |
cs.CL cs.LG cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) pretrained on vast source code have achieved
prominent progress in code intelligence. However, existing code LLMs have two
main limitations in terms of architecture and pretraining tasks. First, they
often adopt a specific architecture (encoder-only or decoder-only) or rely on a
unified encoder-decoder network for different downstream tasks. The former
paradigm is limited by inflexibility in applications while in the latter, the
model is treated as a single system for all tasks, leading to suboptimal
performance on a subset of tasks. Secondly, they often employ a limited set of
pretraining objectives which might not be relevant to some downstream tasks and
hence result in substantial performance degradation. To address these limitations,
we propose ``CodeT5+'', a family of encoder-decoder LLMs for code in which
component modules can be flexibly combined to suit a wide range of downstream
code tasks. Such flexibility is enabled by our proposed mixture of pretraining
objectives to mitigate the pretrain-finetune discrepancy. These objectives
cover span denoising, contrastive learning, text-code matching, and causal LM
pretraining tasks, on both unimodal and bimodal multilingual code corpora.
Furthermore, we propose to initialize CodeT5+ with frozen off-the-shelf LLMs
without training from scratch to efficiently scale up our models, and explore
instruction-tuning to align with natural language instructions. We extensively
evaluate CodeT5+ on over 20 code-related benchmarks in different settings,
including zero-shot, finetuning, and instruction-tuning. We observe
state-of-the-art (SoTA) model performance on various code-related tasks, such
as code generation and completion, math programming, and text-to-code retrieval
tasks. Particularly, our instruction-tuned CodeT5+ 16B achieves new SoTA
results on HumanEval code generation task against other open code LLMs.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 14:23:07 GMT"
},
{
"version": "v2",
"created": "Sat, 20 May 2023 07:27:15 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Wang",
"Yue",
""
],
[
"Le",
"Hung",
""
],
[
"Gotmare",
"Akhilesh Deepak",
""
],
[
"Bui",
"Nghi D. Q.",
""
],
[
"Li",
"Junnan",
""
],
[
"Hoi",
"Steven C. H.",
""
]
] |
new_dataset
| 0.999216 |
2305.09418
|
Dominic Williams
|
Dominic Williams, Fraser Macfarlane, Avril Britten
|
Leaf Only SAM: A Segment Anything Pipeline for Zero-Shot Automated Leaf
Segmentation
|
9 pages, 4 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Segment Anything Model (SAM) is a new foundation model that can be used as a
zero-shot object segmentation method with the use of either guide prompts such
as bounding boxes, polygons, or points. Alternatively, additional post
processing steps can be used to identify objects of interest after segmenting
everything in an image. Here we present a method using segment anything
together with a series of post processing steps to segment potato leaves,
called Leaf Only SAM. The advantage of this proposed method is that it does not
require any training data to produce its results, so it has many applications
across the field of plant phenotyping where there is limited high quality
annotated data available. We compare the performance of Leaf Only SAM to a Mask
R-CNN model which has been fine-tuned on our small novel potato leaf dataset.
On the evaluation dataset, Leaf Only SAM finds an average recall of 63.2 and an
average precision of 60.3, compared to recall of 78.7 and precision of 74.7 for
Mask R-CNN. Leaf Only SAM does not perform better than the fine-tuned Mask
R-CNN model on our data, but the SAM based model does not require any extra
training or annotation of our new dataset. This shows there is potential to use
SAM as a zero-shot classifier with the addition of post processing steps.
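One of the post-processing steps can be illustrated as a colour- and size-based filter over SAM's mask proposals; the criteria and thresholds below are hypothetical stand-ins rather than the pipeline's actual rules.

```python
import numpy as np

def keep_leaf_like_masks(image_rgb, masks, min_green_fraction=0.6, min_area=500):
    """Filter segment-everything mask proposals down to leaf-like regions (sketch).

    image_rgb: (H, W, 3) uint8 image. masks: list of boolean (H, W) arrays.
    A mask is kept if it is large enough and mostly covers pixels whose green
    channel dominates red and blue.
    """
    r = image_rgb[..., 0].astype(int)
    g = image_rgb[..., 1].astype(int)
    b = image_rgb[..., 2].astype(int)
    greenish = (g > r) & (g > b)
    kept = []
    for m in masks:
        if m.sum() < min_area:
            continue
        if greenish[m].mean() >= min_green_fraction:
            kept.append(m)
    return kept
```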
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 13:16:33 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 09:53:21 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Williams",
"Dominic",
""
],
[
"Macfarlane",
"Fraser",
""
],
[
"Britten",
"Avril",
""
]
] |
new_dataset
| 0.960091 |
2305.10263
|
Yuqi Ren
|
Chuang Liu, Renren Jin, Yuqi Ren, Linhao Yu, Tianyu Dong, Xiaohan
Peng, Shuting Zhang, Jianxiang Peng, Peiyi Zhang, Qingqing Lyu, Xiaowen Su,
Qun Liu, Deyi Xiong
|
M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark
for Chinese Large Language Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have recently made tremendous progress in a variety of
aspects, e.g., cross-task generalization, instruction following.
Comprehensively evaluating the capability of large language models in multiple
tasks is of great importance. In this paper, we propose M3KE, a Massive
Multi-Level Multi-Subject Knowledge Evaluation benchmark, which is developed to
measure knowledge acquired by Chinese large language models by testing their
multitask accuracy in zero- and few-shot settings. We have collected 20,477
questions from 71 tasks. Our selection covers all major levels of the Chinese
education system, ranging from primary school to college, as well as a wide
variety of subjects, including humanities, history, politics, law, education,
psychology, science, technology, art and religion. All questions are
multiple-choice questions with four options, hence guaranteeing a standardized
and unified assessment process. We've assessed a number of state-of-the-art
open-source Chinese large language models on the proposed benchmark. The size
of these models varies from 335M to 130B parameters. Experimental results
demonstrate that they perform significantly worse than GPT-3.5, which reaches an
accuracy of ~48% on M3KE. The dataset is available at
https://github.com/tjunlp-lab/M3KE.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 14:56:31 GMT"
},
{
"version": "v2",
"created": "Sun, 21 May 2023 03:57:11 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Liu",
"Chuang",
""
],
[
"Jin",
"Renren",
""
],
[
"Ren",
"Yuqi",
""
],
[
"Yu",
"Linhao",
""
],
[
"Dong",
"Tianyu",
""
],
[
"Peng",
"Xiaohan",
""
],
[
"Zhang",
"Shuting",
""
],
[
"Peng",
"Jianxiang",
""
],
[
"Zhang",
"Peiyi",
""
],
[
"Lyu",
"Qingqing",
""
],
[
"Su",
"Xiaowen",
""
],
[
"Liu",
"Qun",
""
],
[
"Xiong",
"Deyi",
""
]
] |
new_dataset
| 0.999446 |
2305.10853
|
Gabriela Ben Melech
|
Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will
Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias
Muller, Vasudev Lal
|
LDM3D: Latent Diffusion Model for 3D
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that
generates both image and depth map data from a given text prompt, allowing
users to generate RGBD images from text prompts. The LDM3D model is fine-tuned
on a dataset of tuples containing an RGB image, depth map and caption, and
validated through extensive experiments. We also develop an application called
DepthFusion, which uses the generated RGB images and depth maps to create
immersive and interactive 360-degree-view experiences using TouchDesigner. This
technology has the potential to transform a wide range of industries, from
entertainment and gaming to architecture and design. Overall, this paper
presents a significant contribution to the field of generative AI and computer
vision, and showcases the potential of LDM3D and DepthFusion to revolutionize
content creation and digital experiences. A short video summarizing the
approach can be found at https://t.ly/tdi2.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 10:15:06 GMT"
},
{
"version": "v2",
"created": "Sun, 21 May 2023 20:26:30 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Stan",
"Gabriela Ben Melech",
""
],
[
"Wofk",
"Diana",
""
],
[
"Fox",
"Scottie",
""
],
[
"Redden",
"Alex",
""
],
[
"Saxton",
"Will",
""
],
[
"Yu",
"Jean",
""
],
[
"Aflalo",
"Estelle",
""
],
[
"Tseng",
"Shao-Yen",
""
],
[
"Nonato",
"Fabio",
""
],
[
"Muller",
"Matthias",
""
],
[
"Lal",
"Vasudev",
""
]
] |
new_dataset
| 0.999577 |
2305.11481
|
Wenxuan Wang
|
Wenxuan Wang, Jing Liu, Xingjian He, Yisi Zhang, Chen Chen, Jiachen
Shen, Yan Zhang, Jiangyun Li
|
CM-MaskSD: Cross-Modality Masked Self-Distillation for Referring Image
Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Referring image segmentation (RIS) is a fundamental vision-language task that
intends to segment a desired object from an image based on a given natural
language expression. Due to the essentially distinct data properties between
image and text, most existing methods either introduce complex designs
towards fine-grained vision-language alignment or lack required dense
alignment, resulting in scalability issues or mis-segmentation problems such as
over- or under-segmentation. To achieve effective and efficient fine-grained
feature alignment in the RIS task, we explore the potential of masked
multimodal modeling coupled with self-distillation and propose a novel
cross-modality masked self-distillation framework named CM-MaskSD, in which our
method inherits the transferred knowledge of image-text semantic alignment from
CLIP model to realize fine-grained patch-word feature alignment for better
segmentation accuracy. Moreover, our CM-MaskSD framework can considerably boost
model performance in a nearly parameter-free manner, since it shares weights
between the main segmentation branch and the introduced masked
self-distillation branches, and solely introduces negligible parameters for
coordinating the multimodal features. Comprehensive experiments on three
benchmark datasets (i.e. RefCOCO, RefCOCO+, G-Ref) for the RIS task
convincingly demonstrate the superiority of our proposed framework over
previous state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 07:17:27 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 05:02:36 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Wang",
"Wenxuan",
""
],
[
"Liu",
"Jing",
""
],
[
"He",
"Xingjian",
""
],
[
"Zhang",
"Yisi",
""
],
[
"Chen",
"Chen",
""
],
[
"Shen",
"Jiachen",
""
],
[
"Zhang",
"Yan",
""
],
[
"Li",
"Jiangyun",
""
]
] |
new_dataset
| 0.974053 |
2305.11747
|
Junyi Li
|
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie and Ji-Rong Wen
|
HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large
Language Models
|
Working in progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs), such as ChatGPT, are prone to generate
hallucinations, i.e., content that conflicts with the source or cannot be
verified by factual knowledge. To understand what types of content and to
which extent LLMs are apt to hallucinate, we introduce the Hallucination
Evaluation for Large Language Models (HaluEval) benchmark, a large collection
of generated and human-annotated hallucinated samples for evaluating the
performance of LLMs in recognizing hallucination. To generate these samples, we
propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering.
In addition, we hire human labelers to annotate the hallucinations in
ChatGPT responses. The empirical results suggest that ChatGPT is likely to
generate hallucinated content in specific topics by fabricating unverifiable
information (i.e., about $11.4\%$ of user queries). Moreover, existing LLMs face
great challenges in recognizing hallucinations in text. However, our
experiments also show that hallucination recognition can be improved by
providing external knowledge or adding reasoning steps. Our benchmark can be
accessed at https://github.com/RUCAIBox/HaluEval.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 15:36:27 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 13:36:09 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Li",
"Junyi",
""
],
[
"Cheng",
"Xiaoxue",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Nie",
"Jian-Yun",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
new_dataset
| 0.996946 |
2305.11871
|
Bala Murugan MS
|
Srija Santhanam, Kavipriya P, Balamurugan MS, Manoj Kumar Rajagopal
|
Amity -- A Hybrid Mental Health Application
|
eighteen pages and seven figure
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wellness, in simple terms, combines physical, social, and mental wellbeing.
While mental health is often neglected, long-term success in a person's life is
largely determined by their psychological health and contentment. For a person in
distress, professional mental health services are quite expensive, unpopular,
and invite a lot of hesitation. Hence, it would be effective to use an Android
application that can offer day to day therapeutic assistance, meditation
sessions, and guidance since it can cater to a massive community instantly. In
this paper, we propose a mobile and web application AMITY with a chat group and
chatbot created using a machine learning approach. We have also built a dataset
to train the chatbot model that we propose in this paper. We briefly introduce
the dataset and the machine learning model in section 3. In section 4, we
include the architecture and the development details of the Hybrid application.
Next, we present our results on usability and the efficiency of the idea we
propose.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 06:26:53 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Santhanam",
"Srija",
""
],
[
"P",
"Kavipriya",
""
],
[
"MS",
"Balamurugan",
""
],
[
"Rajagopal",
"Manoj Kumar",
""
]
] |
new_dataset
| 0.995949 |
2305.11889
|
Chadnra Sekhar Sanaboina Dr
|
Chandra Sekhar Sanaboina, Harish Bommidi
|
An Automated Power Conservation System (APCS) using Particle Photon and
Smartphone
|
8 Pages
| null |
10.26438/ijcse/v6i11.983990
| null |
cs.HC cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Nowadays, people use electricity in all aspects of their lives so that
electricity consumption increases gradually. There can be wastage of
electricity due to various reasons, such as human negligence, daylighting, etc.
Hence, conservation of energy is the need of the day. This paper deals with the
fabrication of an "Automated Power Conservation System (APCS)" that has
multiple benefits like saving on power consumption, thereby saving on
electricity bills of the organization, eliminating the human involvement and
manpower often required to manually toggle lights and electrical
devices on/off, and, last but most importantly, conserving precious natural
resources by reducing electrical energy consumption. Two IR sensors are used in
this project to detect the presence of a person in the classroom. When a
person's presence is detected, the APCS automatically turns on the fans and
lights in that classroom, and in their absence these are automatically turned
off, thus paving the easiest way
to conserve power. This hardware is integrated with the Android app, where the
user can get data on his smartphone regarding the number of fans and lights
that are turned on at a particular instance of time. The user can also switch
on/off the fans and lights from anywhere in the world by using the Android App.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 01:55:13 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Sanaboina",
"Chandra Sekhar",
""
],
[
"Bommidi",
"Harish",
""
]
] |
new_dataset
| 0.968308 |
2305.11891
|
Roberto Del Prete Mr
|
Gabriele Meoni and Roberto Del Prete and Federico Serva and Alix De
Beussche and Olivier Colin and Nicolas Long\'ep\'e
|
THRawS: A Novel Dataset for Thermal Hotspots Detection in Raw Sentinel-2
Data
|
13 pages, 7 figures, 3 tables
| null | null | null |
cs.CV eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, most of the datasets leveraging space-borne Earth Observation (EO)
data are based on higher-level products, which are ortho-rectified,
coregistered, calibrated, and further processed to mitigate the impact of noise
and distortions. Nevertheless, given the growing interest to apply Artificial
Intelligence (AI) onboard satellites for time-critical applications, such as
natural disaster response, providing raw satellite images could be useful to
foster the research on energy-efficient pre-processing algorithms and AI models
for onboard-satellite applications. In this framework, we present THRawS, the
first dataset composed of Sentinel-2 (S-2) raw data containing warm temperature
hotspots (wildfires and volcanic eruptions). To foster the realisation of
robust AI architectures, the dataset gathers data from all over the globe.
Furthermore, we designed a custom methodology to identify events in raw data
starting from the corresponding Level-1C (L1C) products. Indeed, given the
availability of state-of-the-art algorithms for thermal anomalies detection on
the L1C tiles, we detect such events on the latter and then re-project
them on the corresponding raw images. Additionally, to deal with unprocessed
data, we devise a lightweight coarse coregistration and georeferencing
strategy. The developed dataset comprises more than 100 samples
containing wildfires, volcanic eruptions, and event-free volcanic areas to
enable both warm-events detection and general classification applications.
Finally, we compare performances between the proposed coarse spatial
coregistration technique and the SuperGlue Deep Neural Network method to
highlight the different constraints in terms of timing and quality of spatial
registration to minimise the spatial displacement error for a specific scene.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 09:54:21 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Meoni",
"Gabriele",
""
],
[
"Del Prete",
"Roberto",
""
],
[
"Serva",
"Federico",
""
],
[
"De Beussche",
"Alix",
""
],
[
"Colin",
"Olivier",
""
],
[
"Longépé",
"Nicolas",
""
]
] |
new_dataset
| 0.999847 |
2305.11946
|
Hong Xu
|
Hong Xu and Shireen Y. Elhabian
|
Image2SSM: Reimagining Statistical Shape Models from Images with Radial
Basis Functions
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Statistical shape modeling (SSM) is an essential tool for analyzing
variations in anatomical morphology. In a typical SSM pipeline, 3D anatomical
images, gone through segmentation and rigid registration, are represented using
lower-dimensional shape features, on which statistical analysis can be
performed. Various methods for constructing compact shape representations have
been proposed, but they involve laborious and costly steps. We propose
Image2SSM, a novel deep-learning-based approach for SSM that leverages
image-segmentation pairs to learn a radial-basis-function (RBF)-based
representation of shapes directly from images. This RBF-based shape
representation offers a rich self-supervised signal for the network to estimate
a continuous, yet compact representation of the underlying surface that can
adapt to complex geometries in a data-driven manner. Image2SSM can characterize
populations of biological structures of interest by constructing statistical
landmark-based shape models of ensembles of anatomical shapes while requiring
minimal parameter tuning and no user assistance. Once trained, Image2SSM can be
used to infer low-dimensional shape representations from new unsegmented
images, paving the way toward scalable approaches for SSM, especially when
dealing with large cohorts. Experiments on synthetic and real datasets show the
efficacy of the proposed method compared to the state-of-the-art
correspondence-based method for SSM.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 18:08:10 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Xu",
"Hong",
""
],
[
"Elhabian",
"Shireen Y.",
""
]
] |
new_dataset
| 0.975654 |
2305.11980
|
Alaa Maalouf
|
Alaa Maalouf and Murad Tukan and Vladimir Braverman and Daniela Rus
|
AutoCoreset: An Automatic Practical Coreset Construction Framework
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A coreset is a tiny weighted subset of an input set, that closely resembles
the loss function, with respect to a certain set of queries. Coresets became
prevalent in machine learning as they have shown to be advantageous for many
applications. While coreset research is an active research area, unfortunately,
coresets are constructed in a problem-dependent manner, where for each problem,
a new coreset construction algorithm is usually suggested, a process that may
take time or may be hard for new researchers in the field. Even the generic
frameworks require additional (problem-dependent) computations or proofs to be
done by the user. Besides, many problems do not have (provable) small coresets,
limiting their applicability. To this end, we suggest an automatic practical
framework for constructing coresets, which requires (only) the input data and
the desired cost function from the user, without the need for any other
task-related computation to be done by the user. To do so, we reduce the
problem of approximating a loss function to an instance of vector summation
approximation, where the vectors we aim to sum are loss vectors of a specific
subset of the queries, such that we aim to approximate the image of the
function on this subset. We show that while this set is limited, the coreset is
quite general. An extensive experimental study on various machine learning
applications is also conducted. Finally, we provide a ``plug and play" style
implementation, proposing a user-friendly system that can be easily used to
apply coresets for many problems. Full open source code can be found at
\href{https://github.com/alaamaalouf/AutoCoreset}{\text{https://github.com/alaamaalouf/AutoCoreset}}.
We believe that these contributions enable future research and easier use and
applications of coresets.
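The reduction to vector summation can be illustrated with a simple importance-sampling sketch: form a matrix whose rows are per-point loss vectors over a sample of queries, then pick a weighted subset whose weighted sum approximates the column sums. This generic sketch is for intuition only and is not the framework's actual construction.

```python
import numpy as np

def vector_sum_coreset(loss_matrix: np.ndarray, coreset_size: int, seed: int = 0):
    """loss_matrix[i, j] = loss of data point i on sampled query j.

    Samples points with probability proportional to their row norms (a standard
    importance-sampling heuristic) and returns (indices, weights) such that the
    weighted sum of the selected rows is an unbiased estimate of loss_matrix.sum(axis=0).
    """
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(loss_matrix, axis=1)
    probs = norms / norms.sum()
    idx = rng.choice(len(loss_matrix), size=coreset_size, replace=True, p=probs)
    weights = 1.0 / (coreset_size * probs[idx])
    return idx, weights

# Sanity check on random data: the weighted subset sum tracks the full column sums.
L = np.abs(np.random.randn(10_000, 32))
idx, w = vector_sum_coreset(L, coreset_size=500)
approx = (w[:, None] * L[idx]).sum(axis=0)
print(np.linalg.norm(approx - L.sum(axis=0)) / np.linalg.norm(L.sum(axis=0)))
```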
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 19:59:52 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Maalouf",
"Alaa",
""
],
[
"Tukan",
"Murad",
""
],
[
"Braverman",
"Vladimir",
""
],
[
"Rus",
"Daniela",
""
]
] |
new_dataset
| 0.999506 |
2305.11981
|
Lukas Daniel Klausner
|
Lukas Daniel Klausner, Maximilian Heimst\"adt, Leonhard Dobusch
|
"Sch\"one neue Lieferkettenwelt": Workers' Voice und Arbeitsstandards in
Zeiten algorithmischer Vorhersage
|
21 pages, in German
|
Soziale Standards in globalen Lieferketten: Internationale
Richtlinien, unternehmerische Verantwortung und die Stimme der Beschaeftigten
(= Forschung aus der Hans-Boeckler-Stiftung 200), transcript Verlag,
Bielefeld 2023, 97-114
| null | null |
cs.CY cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The complexity and increasingly tight coupling of supply chains poses a major
logistical challenge for leading companies. Another challenge is that leading
companies -- under pressure from consumers, a critical public and legislative
measures such as supply chain laws -- have to take more responsibility than
before for their suppliers' labour standards. In this paper, we discuss a new
approach that leading companies are using to try to address these challenges:
algorithmic prediction of business risks, but also environmental and social
risks. We describe the technical and cultural conditions for algorithmic
prediction and explain how -- from the perspective of leading companies -- it
helps to address both challenges. We then develop scenarios on how and with
what kind of social consequences algorithmic prediction can be used by leading
companies. From the scenarios, we derive policy options for different
stakeholder groups to help develop algorithmic prediction towards improving
labour standards and worker voice.
--
Die Komplexit\"at und zunehmend enge Kopplung vieler Lieferketten stellt eine
gro{\ss}e logistische Herausforderung f\"ur Leitunternehmen dar. Eine weitere
Herausforderung besteht darin, dass Leitunternehmen -- gedr\"angt durch
Konsument:innen, eine kritische \"Offentlichkeit und gesetzgeberische
Ma{\ss}nahmen wie die Lieferkettengesetze -- st\"arker als bisher Verantwortung
f\"ur Arbeitsstandards in ihren Zulieferbetrieben \"ubernehmen m\"ussen. In
diesem Beitrag diskutieren wir einen neuen Ansatz, mit dem Leitunternehmen
versuchen, diese Herausforderungen zu bearbeiten: die algorithmische Vorhersage
von betriebswirtschaftlichen, aber auch \"okologischen und sozialen Risiken.
Wir beschreiben die technischen und kulturellen Bedingungen f\"ur
algorithmische Vorhersage und erkl\"aren, wie diese -- aus Perspektive von
Leitunternehmen -- bei der Bearbeitung beider Herausforderungen hilft.
Anschlie{\ss}end entwickeln wir Szenarien, wie und mit welchen sozialen
Konsequenzen algorithmische Vorhersage durch Leitunternehmen eingesetzt werden
kann. Aus den Szenarien leiten wir Handlungsoptionen f\"ur verschiedene
Stakeholder-Gruppen ab, die dabei helfen sollen, algorithmische Vorhersage im
Sinne einer Verbesserung von Arbeitsstandards und Workers' Voice
weiterzuentwickeln.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 20:01:26 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Klausner",
"Lukas Daniel",
""
],
[
"Heimstädt",
"Maximilian",
""
],
[
"Dobusch",
"Leonhard",
""
]
] |
new_dataset
| 0.999412 |
2305.12002
|
Xuanyu Zhang
|
Xuanyu Zhang and Qing Yang and Dongliang Xu
|
XuanYuan 2.0: A Large Chinese Financial Chat Model with Hundreds of
Billions Parameters
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, pre-trained language models have undergone rapid development
with the emergence of large-scale models. However, there is a lack of
open-sourced chat models specifically designed for the Chinese language,
especially in the field of Chinese finance, at the scale of hundreds of
billions. To address this gap, we introduce XuanYuan 2.0, the largest Chinese
chat model to date, built upon the BLOOM-176B architecture. Additionally, we
propose a novel training method called hybrid-tuning to mitigate catastrophic
forgetting. By combining general-domain with domain-specific knowledge and
integrating the stages of pre-training and fine-tuning, XuanYuan 2.0 is capable
of providing accurate and contextually appropriate responses in the Chinese
financial domain.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 21:01:20 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Zhang",
"Xuanyu",
""
],
[
"Yang",
"Qing",
""
],
[
"Xu",
"Dongliang",
""
]
] |
new_dataset
| 0.99678 |
2305.12010
|
Rachel Kurchin
|
Anant Thazhemadam, Dhairya Gandhi, Venkatasubramanian Viswanathan,
Rachel C. Kurchin
|
Chemellia: An Ecosystem for Atomistic Scientific Machine Learning
| null | null | null | null |
cs.CE cond-mat.mtrl-sci cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Chemellia is an open-source framework for atomistic machine learning in the
Julia programming language. The framework takes advantage of Julia's high speed
as well as the ability to share and reuse code and interfaces through the
paradigm of multiple dispatch. Chemellia is designed to make use of existing
interfaces and avoid ``reinventing the wheel'' wherever possible. A key aspect
of the Chemellia ecosystem is the ChemistryFeaturization interface for defining
and encoding features -- it is designed to maximize interoperability between
featurization schemes and elements thereof, to maintain provenance of encoded
features, and to ensure easy decodability and reconfigurability to enable
feature engineering experiments. This embodies the overall design principles of
the Chemellia ecosystem: separation of concerns, interoperability, and
transparency. We illustrate these principles by discussing the implementation
of crystal graph convolutional neural networks for material property
prediction.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 21:37:37 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Thazhemadam",
"Anant",
""
],
[
"Gandhi",
"Dhairya",
""
],
[
"Viswanathan",
"Venkatasubramanian",
""
],
[
"Kurchin",
"Rachel C.",
""
]
] |
new_dataset
| 0.999438 |
2305.12023
|
\'Edouard Bonnet
|
\'Edouard Bonnet and Julien Duron
|
Stretch-width
|
28 pages, 12 figures
| null | null | null |
cs.DM cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new parameter, called stretch-width, that we show sits
strictly between clique-width and twin-width. Unlike the reduced parameters
[BKW '22], planar graphs and polynomial subdivisions do not have bounded
stretch-width. This leaves open the possibility of efficient algorithms for a
broad fragment of problems within Monadic Second-Order (MSO) logic on graphs of
bounded stretch-width. In this direction, we prove that graphs of bounded
maximum degree and bounded stretch-width have at most logarithmic treewidth. As
a consequence, in classes of bounded stretch-width, Maximum Independent Set can
be solved in subexponential time $2^{O(n^{4/5} \log n)}$ on $n$-vertex graphs,
and, if further the maximum degree is bounded, Existential Counting Modal Logic
[Pilipczuk '11] can be model-checked in polynomial time. We also give a
polynomial-time $O(\text{OPT}^2)$-approximation for the stretch-width of
symmetric $0,1$-matrices or ordered graphs. Somewhat unexpectedly, we prove
that exponential subdivisions of bounded-degree graphs have bounded
stretch-width. This allows us to complement the logarithmic upper bound on
treewidth with a matching lower bound. We leave open the existence of an
efficient approximation algorithm for the stretch-width of unordered graphs,
whether the exponential subdivisions of all graphs have bounded stretch-width,
and whether graphs of bounded stretch-width have logarithmic clique-width (or
rank-width).
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 22:31:05 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Bonnet",
"Édouard",
""
],
[
"Duron",
"Julien",
""
]
] |
new_dataset
| 0.99945 |
2305.12029
|
Hua Shen
|
Hua Shen, Vicky Zayats, Johann C. Rocholl, Daniel D. Walker, Dirk
Padfield
|
MultiTurnCleanup: A Benchmark for Multi-Turn Spoken Conversational
Transcript Cleanup
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current disfluency detection models focus on individual utterances each from
a single speaker. However, numerous discontinuity phenomena in spoken
conversational transcripts occur across multiple turns, hampering human
readability and the performance of downstream NLP tasks. This study addresses
these phenomena by proposing an innovative Multi-Turn Cleanup task for spoken
conversational transcripts and collecting a new dataset, MultiTurnCleanup. We
design a data labeling schema to collect the high-quality dataset and provide
extensive data analysis. Furthermore, we leverage two modeling approaches for
experimental evaluation as benchmarks for future research.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 22:50:02 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Shen",
"Hua",
""
],
[
"Zayats",
"Vicky",
""
],
[
"Rocholl",
"Johann C.",
""
],
[
"Walker",
"Daniel D.",
""
],
[
"Padfield",
"Dirk",
""
]
] |
new_dataset
| 0.999557 |
2305.12036
|
Monika Kwiatkowski
|
Monika Kwiatkowski, Simon Matern, Olaf Hellwich
|
SIDAR: Synthetic Image Dataset for Alignment & Restoration
| null | null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image alignment and image restoration are classical computer vision tasks.
However, there is still a lack of datasets that provide enough data to train
and evaluate end-to-end deep learning models. Obtaining ground-truth data for
image alignment requires sophisticated structure-from-motion methods or optical
flow systems that often do not provide enough data variance, i.e., typically
providing a high number of image correspondences, while introducing only a few
changes of scenery within the underlying image sequences. Alternative
approaches utilize random perspective distortions on existing image data.
However, this only provides trivial distortions, lacking the complexity and
variance of real-world scenarios. Instead, our proposed data augmentation helps
to overcome the issue of data scarcity by using 3D rendering: images are added
as textures onto a plane, then varying lighting conditions, shadows, and
occlusions are added to the scene. The scene is rendered from multiple
viewpoints, generating perspective distortions more consistent with real-world
scenarios, with homographies closely resembling those of camera projections
rather than randomized homographies. For each scene, we provide a sequence of
distorted images with corresponding occlusion masks, homographies, and
ground-truth labels. The resulting dataset can serve as a training and
evaluation set for a multitude of tasks involving image alignment and artifact
removal, such as deep homography estimation, dense image matching, 2D bundle
adjustment, inpainting, shadow removal, denoising, content retrieval, and
background subtraction. Our data generation pipeline is customizable and can be
applied to any existing dataset, serving as a data augmentation to further
improve the feature learning of any existing method.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 23:32:06 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Kwiatkowski",
"Monika",
""
],
[
"Matern",
"Simon",
""
],
[
"Hellwich",
"Olaf",
""
]
] |
new_dataset
| 0.999807 |
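The SIDAR record above contrasts its 3D-rendered pipeline with the simpler practice of applying random perspective distortions to existing images. Below is a minimal sketch of that simpler baseline only — warping an image with a random homography while keeping the ground-truth matrix as a label. The file name, corner-jitter scale, and output size are assumptions, and the sketch does not reproduce SIDAR's lighting, shadow, or occlusion rendering.

```python
# Minimal sketch of the baseline augmentation the SIDAR abstract contrasts with:
# warp an existing image with a random homography and keep the ground-truth
# matrix as a label. Input file and jitter scale are assumptions.
import numpy as np
import cv2

def random_homography_pair(image, max_shift=0.15, seed=None):
    """Return (warped_image, H), where H maps original pixel coords to warped ones."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Jitter each corner by up to max_shift of the image dimensions.
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)) * np.float32([w, h])
    dst = src + jitter.astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(image, H, (w, h))
    return warped, H

if __name__ == "__main__":
    img = cv2.imread("example.jpg")  # hypothetical input image
    warped, H = random_homography_pair(img, seed=0)
    print("ground-truth homography:\n", H)
```

Homographies produced this way are exactly the "trivial distortions" the abstract argues lack real-world complexity, which is what motivates rendering the image as a texture on a plane with varying lighting and occlusions instead.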
2305.12050
|
Vijayaraghavan Murali
|
Vijayaraghavan Murali, Chandra Maddila, Imad Ahmad, Michael Bolin,
Daniel Cheng, Negar Ghorbani, Renuka Fernandez, Nachiappan Nagappan
|
CodeCompose: A Large-Scale Industrial Deployment of AI-assisted Code
Authoring
| null | null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The rise of large language models (LLMs) has unlocked various applications of
this technology in software development. In particular, generative LLMs have
been shown to effectively power AI-based code authoring tools that can suggest
entire statements or blocks of code during code authoring. In this paper we
present CodeCompose, an AI-assisted code authoring tool developed and deployed
at Meta internally. CodeCompose is based on the InCoder LLM that merges
generative capabilities with bi-directionality. We have scaled up CodeCompose
to serve tens of thousands of developers at Meta, across 10+ programming
languages and several coding surfaces.
We discuss unique challenges in terms of user experience and metrics that
arise when deploying such tools in large-scale industrial settings. We present
our experience in making design decisions about the model and system
architecture for CodeCompose that addresses these challenges. Finally, we
present metrics from our large-scale deployment of CodeCompose that shows its
impact on Meta's internal code authoring experience over a 15-day time window,
where 4.5 million suggestions were made by CodeCompose. Quantitative metrics
reveal that (i) CodeCompose has an acceptance rate of 22% across several
languages, and (ii) 8% of the code typed by users of CodeCompose comes from
accepting its code suggestions. Qualitative feedback indicates an
overwhelming 91.5% positive reception for CodeCompose. In addition to assisting
with code authoring, CodeCompose is also introducing other positive side
effects such as encouraging developers to generate more in-code documentation,
helping them with the discovery of new APIs, etc.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 00:45:15 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Murali",
"Vijayaraghavan",
""
],
[
"Maddila",
"Chandra",
""
],
[
"Ahmad",
"Imad",
""
],
[
"Bolin",
"Michael",
""
],
[
"Cheng",
"Daniel",
""
],
[
"Ghorbani",
"Negar",
""
],
[
"Fernandez",
"Renuka",
""
],
[
"Nagappan",
"Nachiappan",
""
]
] |
new_dataset
| 0.993362 |
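Taken together, the deployment figures in the CodeCompose record imply roughly 0.22 × 4.5 million ≈ 1 million accepted suggestions over the 15-day window; this estimate is derived here from the abstract's own numbers and is not a figure reported in the record.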
2305.12092
|
Mike Zhang
|
Mike Zhang and Rob van der Goot and Barbara Plank
|
ESCOXLM-R: Multilingual Taxonomy-driven Pre-training for the Job Market
Domain
|
Accepted at ACL2023 (Main)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing number of benchmarks for Natural Language Processing (NLP)
tasks in the computational job market domain highlights the demand for methods
that can handle job-related tasks such as skill extraction, skill
classification, job title classification, and de-identification. While some
approaches have been developed that are specific to the job market domain,
there is a lack of generalized, multilingual models and benchmarks for these
tasks. In this study, we introduce a language model called ESCOXLM-R, based on
XLM-R, which uses domain-adaptive pre-training on the European Skills,
Competences, Qualifications and Occupations (ESCO) taxonomy, covering 27
languages. The pre-training objectives for ESCOXLM-R include dynamic masked
language modeling and a novel additional objective for inducing multilingual
taxonomical ESCO relations. We comprehensively evaluate the performance of
ESCOXLM-R on 6 sequence labeling and 3 classification tasks in 4 languages and
find that it achieves state-of-the-art results on 6 out of 9 datasets. Our
analysis reveals that ESCOXLM-R performs better on short spans and outperforms
XLM-R on entity-level and surface-level span-F1, likely due to ESCO containing
short skill and occupation titles, and encoding information on the
entity-level.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 04:50:20 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Zhang",
"Mike",
""
],
[
"van der Goot",
"Rob",
""
],
[
"Plank",
"Barbara",
""
]
] |
new_dataset
| 0.998923 |
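The ESCOXLM-R record describes domain-adaptive pre-training of XLM-R on ESCO text with masked language modeling plus a novel taxonomical-relation objective. The sketch below covers only the standard MLM continuation step, using Hugging Face transformers and datasets; the relation objective is not implemented, and the corpus file name and hyperparameters are assumptions rather than the paper's settings.

```python
# Hedged sketch of domain-adaptive MLM pre-training of XLM-R on ESCO-style text.
# The paper's additional taxonomical-relation objective is NOT implemented;
# corpus path and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Hypothetical text file with one ESCO skill/occupation description per line.
dataset = load_dataset("text", data_files={"train": "esco_descriptions.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="escoxlmr-sketch",
                         per_device_train_batch_size=16,
                         num_train_epochs=1,
                         learning_rate=5e-5)
trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```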
2305.12107
|
Yi Zhong
|
Yi Zhong, Chen Zhang, Xule Liu, Chenxi Sun, Weishan Deng, Haifeng Hu,
Zhongqian Sun
|
EE-TTS: Emphatic Expressive TTS with Linguistic Information
|
Accepted by INTERSPEECH2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While current TTS systems perform well in synthesizing high-quality speech,
producing highly expressive speech remains a challenge. Emphasis, as a critical
factor in determining the expressiveness of speech, has attracted more
attention nowadays. Previous works usually enhance the emphasis by adding
intermediate features, but they cannot guarantee the overall expressiveness of
the speech. To resolve this matter, we propose Emphatic Expressive TTS
(EE-TTS), which leverages multi-level linguistic information from syntax and
semantics. EE-TTS contains an emphasis predictor that can identify appropriate
emphasis positions from text and a conditioned acoustic model to synthesize
expressive speech with emphasis and linguistic information. Experimental
results indicate that EE-TTS outperforms baseline with MOS improvements of 0.49
and 0.67 in expressiveness and naturalness. EE-TTS also shows strong
generalization across different datasets according to AB test results.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 05:58:56 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Zhong",
"Yi",
""
],
[
"Zhang",
"Chen",
""
],
[
"Liu",
"Xule",
""
],
[
"Sun",
"Chenxi",
""
],
[
"Deng",
"Weishan",
""
],
[
"Hu",
"Haifeng",
""
],
[
"Sun",
"Zhongqian",
""
]
] |
new_dataset
| 0.987192 |
2305.12160
|
Christopher McLaughlin Danforth
|
Kelsey Linnell, Mikaela Fudolig, Laura Bloomfield, Thomas McAndrew,
Taylor H. Ricketts, Jarlath P. M. O'Neil-Dunne, Peter Sheridan Dodds,
Christopher M. Danforth
|
Park visitation and walkshed demographics in the United States
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
A large and growing body of research demonstrates the value of local parks to
mental and physical well-being. Recently, researchers have begun using passive
digital data sources to investigate equity in usage: exactly who is benefiting
from parks? Early studies suggest that park visitation differs according to
demographic features, and that the demographic composition of a park's
surrounding neighborhood may be related to the utilization a park receives.
Employing a data set of park visitations generated by observations of roughly
50 million mobile devices in the US in 2019, we assess the ability of the
demographic composition of a park's walkshed to predict its yearly visitation.
Predictive models are constructed using Support Vector Regression, LASSO,
Elastic Net, and Random Forests. Surprisingly, our results suggest that the
demographic composition of a park's walkshed demonstrates little to no utility
for predicting visitation.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 10:39:07 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Linnell",
"Kelsey",
""
],
[
"Fudolig",
"Mikaela",
""
],
[
"Bloomfield",
"Laura",
""
],
[
"McAndrew",
"Thomas",
""
],
[
"Ricketts",
"Taylor H.",
""
],
[
"O'Neil-Dunne",
"Jarlath P. M.",
""
],
[
"Dodds",
"Peter Sheridan",
""
],
[
"Danforth",
"Christopher M.",
""
]
] |
new_dataset
| 0.996169 |
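The park-visitation record names four regressors — Support Vector Regression, LASSO, Elastic Net, and Random Forests — for predicting yearly visitation from walkshed demographics. Below is a hedged scikit-learn sketch of that comparison on synthetic stand-in data; the feature dimensionality, hyperparameters, and scoring choice are assumptions, and the study's mobility data is not reproduced.

```python
# Hedged sketch of the model comparison described in the park-visitation
# abstract: predict yearly visitation from walkshed demographics with SVR,
# LASSO, Elastic Net, and Random Forests. The data here is synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))   # stand-in demographic features per walkshed
y = rng.normal(size=500)        # stand-in (log) yearly visitation counts

models = {
    "SVR": make_pipeline(StandardScaler(), SVR()),
    "LASSO": make_pipeline(StandardScaler(), Lasso(alpha=0.1)),
    "Elastic Net": make_pipeline(StandardScaler(), ElasticNet(alpha=0.1)),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {r2.mean():.3f}")
```

On random data the cross-validated R^2 is near zero by construction; the sketch only illustrates the comparison workflow, not the study's finding about demographic predictors.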
2305.12173
|
Simon Jeanteur
|
Simon Jeanteur, Laura Kovács, Matteo Maffei and Michael Rawson
|
CryptoVampire: Automated Reasoning for the Complete Symbolic Attacker
Cryptographic Model
| null | null | null | null |
cs.CR cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Cryptographic protocols are extremely hard to design and prove correct, as
witnessed by the ever-growing list of attacks even on protocol standards. Using
the symbolic model of cryptography, protocols are proven correct against an
idealized cryptographic model, which abstracts away from the algebraic
properties of cryptographic schemes and thus misses attacks. On the other hand,
existing computational models of cryptography only support interactive proofs
and/or are limited to stateless protocols. A promising approach is given by the
computationally complete symbolic attacker (CCSA) model, formalized in the BC
logic, which aims at bridging and getting the best of the two worlds, obtaining
cryptographic guarantees by symbolic protocol analysis. While machine-checked
security proofs are provided in this domain, such efforts require expert
knowledge both in the cryptographic space as well as on the reasoning side.
In this paper, we present the CryptoVampire framework, providing the first
fully automated setting for deriving proofs of trace properties in the BC
logic. CryptoVampire brings a first-order formalization of protocol properties,
by proposing tailored handling of subterm relations. In addition, CryptoVampire
implements specialized reasoning techniques, saturation algorithms, and
heuristics, allowing the direct integration of CryptoVampire within the
landscape of automated theorem proving. Our experimental results showcase the
effectiveness of CryptoVampire, providing also automation support for existing
approaches in the area.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 11:26:51 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Jeanteur",
"Simon",
""
],
[
"Kovács",
"Laura",
""
],
[
"Maffei",
"Matteo",
""
],
[
"Rawson",
"Michael",
""
]
] |
new_dataset
| 0.997037 |