id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
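Each row below is a single arXiv record scored by a classifier (the prediction and probability columns). As a minimal sketch of how records with this schema might be consumed, assuming the dump has been exported as JSON Lines with the column names above (the file path, probability cutoff, and selected columns are illustrative assumptions, not part of the dataset):

```python
import pandas as pd

# Assumption: the records were exported to a JSON Lines file using the
# column names from the schema above; the path is illustrative.
df = pd.read_json("arxiv_new_dataset_predictions.jsonl", lines=True)

# Keep only the highest-confidence rows. Per the schema, every row is
# labeled "new_dataset" with probability in [0.95, 1]; the 0.99 cutoff
# here is an arbitrary example.
high_conf = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]

# "versions" is a list of {"version", "created"} dicts; take the first
# submission timestamp of each paper for a quick timeline view.
high_conf = high_conf.assign(
    first_submitted=high_conf["versions"].map(lambda v: v[0]["created"])
)

print(
    high_conf[["id", "title", "categories", "probability", "first_submitted"]]
    .sort_values("probability", ascending=False)
    .head(10)
)
```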
2305.02364
|
Silin Gao
|
Silin Gao, Beatriz Borges, Soyoung Oh, Deniz Bayazit, Saya Kanno,
Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut
|
PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging
Narratives
|
ACL 2023, long paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sustaining coherent and engaging narratives requires dialogue or storytelling
agents to understand how the personas of speakers or listeners ground the
narrative. Specifically, these agents must infer personas of their listeners to
produce statements that cater to their interests. They must also learn to
maintain consistent speaker personas for themselves throughout the narrative,
so that their counterparts feel involved in a realistic conversation or story.
However, personas are diverse and complex: they entail large quantities of
rich interconnected world knowledge that is challenging to robustly represent
in general narrative systems (e.g., a singer is good at singing, and may have
attended a conservatoire). In this work, we construct a new large-scale persona
commonsense knowledge graph, PeaCoK, containing ~100K human-validated persona
facts. Our knowledge graph schematizes five dimensions of persona knowledge
identified in previous studies of human interactive behaviours, and distils
facts in this schema from both existing commonsense knowledge graphs and
large-scale pretrained language models. Our analysis indicates that PeaCoK
contains rich and precise world persona inferences that help downstream systems
generate more consistent and engaging narratives.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 18:02:22 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 08:45:23 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Gao",
"Silin",
""
],
[
"Borges",
"Beatriz",
""
],
[
"Oh",
"Soyoung",
""
],
[
"Bayazit",
"Deniz",
""
],
[
"Kanno",
"Saya",
""
],
[
"Wakaki",
"Hiromi",
""
],
[
"Mitsufuji",
"Yuki",
""
],
[
"Bosselut",
"Antoine",
""
]
] |
new_dataset
| 0.999412 |
2305.10683
|
Zhenhailong Wang
|
Zhenhailong Wang, Ansel Blume, Sha Li, Genglin Liu, Jaemin Cho, Zineng
Tang, Mohit Bansal, Heng Ji
|
Paxion: Patching Action Knowledge in Video-Language Foundation Models
|
under review
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Action knowledge involves the understanding of textual, visual, and temporal
aspects of actions. We introduce the Action Dynamics Benchmark (ActionBench)
containing two carefully designed probing tasks: Action Antonym and Video
Reversal, which target multimodal alignment capabilities and temporal
understanding skills of the model, respectively. Despite recent video-language
models' (VidLM) impressive performance on various benchmark tasks, our
diagnostic tasks reveal their surprising deficiency (near-random performance)
in action knowledge, suggesting that current models rely on object recognition
abilities as a shortcut for action understanding. To remedy this, we propose a
novel framework, Paxion, along with a new Discriminative Video Dynamics
Modeling (DVDM) objective. The Paxion framework utilizes a Knowledge Patcher
network to encode new action knowledge and a Knowledge Fuser component to
integrate the Patcher into frozen VidLMs without compromising their existing
capabilities. Due to limitations of the widely-used Video-Text Contrastive
(VTC) loss for learning action knowledge, we introduce the DVDM objective to
train the Knowledge Patcher. DVDM forces the model to encode the correlation
between the action text and the correct ordering of video frames. Our extensive
analyses show that Paxion and DVDM together effectively fill the gap in action
knowledge understanding (~50% to 80%), while maintaining or improving
performance on a wide spectrum of both object- and action-centric downstream
tasks.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 03:53:59 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 22:58:17 GMT"
},
{
"version": "v3",
"created": "Fri, 26 May 2023 00:14:50 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Wang",
"Zhenhailong",
""
],
[
"Blume",
"Ansel",
""
],
[
"Li",
"Sha",
""
],
[
"Liu",
"Genglin",
""
],
[
"Cho",
"Jaemin",
""
],
[
"Tang",
"Zineng",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Ji",
"Heng",
""
]
] |
new_dataset
| 0.999743 |
2305.10688
|
Yingce Xia
|
Zequn Liu, Wei Zhang, Yingce Xia, Lijun Wu, Shufang Xie, Tao Qin, Ming
Zhang and Tie-Yan Liu
|
MolXPT: Wrapping Molecules with Text for Generative Pre-training
|
Accepted to ACL 2023; add more details about MoleculeNet finetune
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative pre-trained Transformer (GPT) has demonstrated great success
in natural language processing, and related techniques have been adapted to
molecular modeling. Considering that text is the most important record for
scientific discovery, in this paper, we propose MolXPT, a unified language
model of text and molecules pre-trained on SMILES (a sequence representation of
molecules) wrapped by text. Briefly, we detect the molecule names in each
sequence and replace them with the corresponding SMILES. In this way, the SMILES
could leverage the information from surrounding text, and vice versa. The above
wrapped sequences, text sequences from PubMed and SMILES sequences from PubChem
are all fed into a language model for pre-training. Experimental results
demonstrate that MolXPT outperforms strong baselines of molecular property
prediction on MoleculeNet, performs comparably to the best model in
text-molecule translation while using less than half of its parameters, and
enables zero-shot molecular generation without finetuning.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 03:58:19 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 04:35:46 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Liu",
"Zequn",
""
],
[
"Zhang",
"Wei",
""
],
[
"Xia",
"Yingce",
""
],
[
"Wu",
"Lijun",
""
],
[
"Xie",
"Shufang",
""
],
[
"Qin",
"Tao",
""
],
[
"Zhang",
"Ming",
""
],
[
"Liu",
"Tie-Yan",
""
]
] |
new_dataset
| 0.998193 |
2305.12442
|
Detai Xin
|
Detai Xin, Shinnosuke Takamichi, Ai Morimatsu, Hiroshi Saruwatari
|
Laughter Synthesis using Pseudo Phonetic Tokens with a Large-scale
In-the-wild Laughter Corpus
|
Accepted by INTERSPEECH 2023
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a large-scale in-the-wild Japanese laughter corpus and a laughter
synthesis method. Previous work on laughter synthesis lacks not only data but
also proper ways to represent laughter. To solve these problems, we first
propose an in-the-wild corpus comprising 3.5 hours of laughter, which is to
our best knowledge the largest laughter corpus designed for laughter synthesis.
We then propose pseudo phonetic tokens (PPTs) to represent laughter by a
sequence of discrete tokens, which are obtained by training a clustering model
on features extracted from laughter by a pretrained self-supervised model.
Laughter can then be synthesized by feeding PPTs into a text-to-speech system.
We further show PPTs can be used to train a language model for unconditional
laughter generation. Results of comprehensive subjective and objective
evaluations demonstrate that the proposed method significantly outperforms a
baseline method, and can generate natural laughter unconditionally.
|
[
{
"version": "v1",
"created": "Sun, 21 May 2023 12:25:25 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 13:17:11 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Xin",
"Detai",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Morimatsu",
"Ai",
""
],
[
"Saruwatari",
"Hiroshi",
""
]
] |
new_dataset
| 0.999065 |
2305.13527
|
Tollef Emil Jørgensen
|
Tollef Emil Jørgensen and Andre Kåsen
|
Aligning the Norwegian UD Treebank with Entity and Coreference
Information
|
4 pages, 1 table. Appendix: 3 tables and 5 data examples
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents a merged collection of entity and coreference annotated
data grounded in the Universal Dependencies (UD) treebanks for the two written
forms of Norwegian: Bokmål and Nynorsk. The aligned and converted corpora
are the Norwegian Named Entities (NorNE) and Norwegian Anaphora Resolution
Corpus (NARC). While NorNE is aligned with an older version of the treebank,
NARC is misaligned and requires extensive transformation from the original
annotations to the UD structure and CoNLL-U format. We here demonstrate the
conversion and alignment processes, along with an analysis of discovered issues
and errors in the data - some of which include data split overlaps in the
original treebank. These procedures and the developed system may prove helpful
for future corpus alignment and coreference annotation endeavors. The merged
corpora comprise the first Norwegian UD treebank enriched with named entities
and coreference information.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 22:44:53 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 22:36:36 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Jørgensen",
"Tollef Emil",
""
],
[
"Kåsen",
"Andre",
""
]
] |
new_dataset
| 0.984827 |
2305.15732
|
Ming Gao
|
Ming Gao, YanWu Xu, Yang Zhao, Tingbo Hou, Chenkai Zhao, Mingming Gong
|
CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer
|
17 pages, 14 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a novel language-guided 3D arbitrary neural style
transfer method (CLIP3Dstyler). We aim at stylizing any 3D scene with an
arbitrary style from a text description, and synthesizing the novel stylized
view, which is more flexible than the image-conditioned style transfer.
Compared with the previous 2D method CLIPStyler, we are able to stylize a 3D
scene and generalize to novel scenes without re-train our model. A
straightforward solution is to combine previous image-conditioned 3D style
transfer and text-conditioned 2D style transfer \bigskip methods. However, such
a solution cannot achieve our goal due to two main challenges. First, there is
no multi-modal model matching point clouds and language at different feature
scales (low-level, high-level). Second, we observe a style mixing issue when we
stylize the content with different style conditions from text prompts. To
address the first issue, we propose a 3D stylization framework to match the
point cloud features with text features in local and global views. For the
second issue, we propose an improved directional divergence loss to make
arbitrary text styles more distinguishable as a complement to our framework. We
conduct extensive experiments to show the effectiveness of our model on
text-guided 3D scene style transfer.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 05:30:13 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 03:23:20 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Gao",
"Ming",
""
],
[
"Xu",
"YanWu",
""
],
[
"Zhao",
"Yang",
""
],
[
"Hou",
"Tingbo",
""
],
[
"Zhao",
"Chenkai",
""
],
[
"Gong",
"Mingming",
""
]
] |
new_dataset
| 0.990684 |
2305.16314
|
Congyue Deng
|
Congyue Deng, Jiahui Lei, Bokui Shen, Kostas Daniilidis, Leonidas
Guibas
|
Banana: Banach Fixed-Point Network for Pointcloud Segmentation with
Inter-Part Equivariance
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Equivariance has gained strong interest as a desirable network property that
inherently ensures robust generalization. However, when dealing with complex
systems such as articulated objects or multi-object scenes, effectively
capturing inter-part transformations poses a challenge, as it becomes entangled
with the overall structure and local transformations. The interdependence of
part assignment and per-part group action necessitates a novel equivariance
formulation that allows for their co-evolution. In this paper, we present
Banana, a Banach fixed-point network for equivariant segmentation with
inter-part equivariance by construction. Our key insight is to iteratively
solve a fixed-point problem, where point-part assignment labels and per-part
SE(3)-equivariance co-evolve simultaneously. We provide theoretical derivations
of both per-step equivariance and global convergence, which induces an
equivariant final convergent state. Our formulation naturally provides a strict
definition of inter-part equivariance that generalizes to unseen inter-part
configurations. Through experiments conducted on both articulated objects and
multi-object scans, we demonstrate the efficacy of our approach in achieving
strong generalization under inter-part transformations, even when confronted
with substantial changes in pointcloud geometry and topology.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:59:32 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 14:28:26 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Deng",
"Congyue",
""
],
[
"Lei",
"Jiahui",
""
],
[
"Shen",
"Bokui",
""
],
[
"Daniilidis",
"Kostas",
""
],
[
"Guibas",
"Leonidas",
""
]
] |
new_dataset
| 0.986779 |
2305.16355
|
Yixuan Su
|
Yixuan Su and Tian Lan and Huayang Li and Jialu Xu and Yan Wang and
Deng Cai
|
PandaGPT: One Model To Instruction-Follow Them All
|
Technical report, work in progress. Our project page is at
https://panda-gpt.github.io/
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present PandaGPT, an approach to emPower large lANguage moDels with visual
and Auditory instruction-following capabilities. Our pilot experiments show
that PandaGPT can perform complex tasks such as detailed image description
generation, writing stories inspired by videos, and answering questions about
audio. More interestingly, PandaGPT can take multimodal inputs simultaneously
and compose their semantics naturally. For example, PandaGPT can connect how
objects look in an image/video and how they sound in audio. To do so,
PandaGPT combines the multimodal encoders from ImageBind and the large language
models from Vicuna. Notably, only aligned image-text pairs are required for the
training of PandaGPT. Thanks to the strong capability of ImageBind in embedding
data from different modalities into the same space, PandaGPT displays emergent,
i.e. zero-shot, cross-modal behaviors for data other than image and text (e.g.,
video, audio, depth, thermal, and IMU). We hope that PandaGPT serves as an
initial step toward building AGI that can perceive and understand inputs in
different modalities holistically, as we humans do. Our project page is at
https://panda-gpt.github.io/.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 04:16:07 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Su",
"Yixuan",
""
],
[
"Lan",
"Tian",
""
],
[
"Li",
"Huayang",
""
],
[
"Xu",
"Jialu",
""
],
[
"Wang",
"Yan",
""
],
[
"Cai",
"Deng",
""
]
] |
new_dataset
| 0.988612 |
2305.16357
|
Himanshu Gupta
|
Ujjwala Anantheswaran and Himanshu Gupta and Mihir Parmar and Kuntal
Kumar Pal and Chitta Baral
|
EDM3: Event Detection as Multi-task Text Generation
|
9 pages, 4 figures, 10 tables, 5 Page appendix
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event detection refers to identifying event occurrences in a text and
comprises two subtasks: event identification and classification. We present
EDM3, a novel approach for Event Detection that formulates three generative
tasks: identification, classification, and combined detection. We show that
EDM3 helps to learn transferable knowledge that can be leveraged to perform
Event Detection and its subtasks concurrently, mitigating the error propagation
inherent in pipelined approaches. Unlike previous dataset- or domain-specific
approaches, EDM3 utilizes the existing knowledge of language models, allowing
it to be trained over any classification schema. We evaluate EDM3 on multiple
event detection datasets: RAMS, WikiEvents, MAVEN, and MLEE, showing that EDM3
outperforms 1) single-task performance by 8.4% on average and 2) multi-task
performance without instructional prompts by 2.4% on average. We obtain SOTA
results on RAMS (71.3% vs. 65.1% F-1) and competitive performance on other
datasets. We analyze our approach to demonstrate its efficacy in low-resource
and multi-sentence settings. We also show the effectiveness of this approach on
non-standard event configurations such as multi-word and multi-class event
triggers. Overall, our results show that EDM3 is a promising approach for Event
Detection that has the potential for real-world applications.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 06:25:16 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Anantheswaran",
"Ujjwala",
""
],
[
"Gupta",
"Himanshu",
""
],
[
"Parmar",
"Mihir",
""
],
[
"Pal",
"Kuntal Kumar",
""
],
[
"Baral",
"Chitta",
""
]
] |
new_dataset
| 0.999598 |
2305.16371
|
Eunseop Yoon
|
Eunseop Yoon, Hee Suk Yoon, John Harvill, Mark Hasegawa-Johnson and
Chang D. Yoo
|
INTapt: Information-Theoretic Adversarial Prompt Tuning for Enhanced
Non-Native Speech Recognition
|
ACL2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic Speech Recognition (ASR) systems have attained unprecedented
performance with large speech models pre-trained based on self-supervised
speech representation learning. However, these pre-trained speech models suffer
from representational bias as they tend to better represent those prominent
accents (i.e., native (L1) English accent) in the pre-training speech corpus
than less represented accents, resulting in a deteriorated performance for
non-native (L2) English accents. Although there have been some approaches to
mitigate this issue, all of these methods require updating the pre-trained
model weights. In this paper, we propose Information Theoretic Adversarial
Prompt Tuning (INTapt), which introduces prompts concatenated to the original
input that can re-modulate the attention of the pre-trained model such that the
corresponding input resembles a native (L1) English speech without updating the
backbone weights. INTapt is trained simultaneously in the following two
manners: (1) adversarial training to reduce accent feature dependence between
the original input and the prompt-concatenated input and (2) training to
minimize CTC loss for improving ASR performance to a prompt-concatenated input.
Experimental results show that INTapt improves the performance of L2 English
and increases feature similarity between L2 and L1 accents.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 13:06:01 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Yoon",
"Eunseop",
""
],
[
"Yoon",
"Hee Suk",
""
],
[
"Harvill",
"John",
""
],
[
"Hasegawa-Johnson",
"Mark",
""
],
[
"Yoo",
"Chang D.",
""
]
] |
new_dataset
| 0.996934 |
2305.16373
|
Zhengyuan Shi
|
Zhengyuan Shi, Hongyang Pan, Sadaf Khan, Min Li, Yi Liu, Junhua Huang,
Hui-Ling Zhen, Mingxuan Yuan, Zhufei Chu and Qiang Xu
|
DeepGate2: Functionality-Aware Circuit Representation Learning
| null | null | null | null |
cs.LG cs.AI cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Circuit representation learning aims to obtain neural representations of
circuit elements and has emerged as a promising research direction that can be
applied to various EDA and logic reasoning tasks. Existing solutions, such as
DeepGate, have the potential to embed both circuit structural information and
functional behavior. However, their capabilities are limited due to weak
supervision or flawed model design, resulting in unsatisfactory performance in
downstream tasks. In this paper, we introduce DeepGate2, a novel
functionality-aware learning framework that significantly improves upon the
original DeepGate solution in terms of both learning effectiveness and
efficiency. Our approach involves using pairwise truth table differences
between sampled logic gates as training supervision, along with a well-designed
and scalable loss function that explicitly considers circuit functionality.
Additionally, we consider inherent circuit characteristics and design an
efficient one-round graph neural network (GNN), resulting in an order of
magnitude faster learning speed than the original DeepGate solution.
Experimental results demonstrate significant improvements in two practical
downstream tasks: logic synthesis and Boolean satisfiability solving. The code
is available at https://github.com/cure-lab/DeepGate2
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 13:51:12 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Shi",
"Zhengyuan",
""
],
[
"Pan",
"Hongyang",
""
],
[
"Khan",
"Sadaf",
""
],
[
"Li",
"Min",
""
],
[
"Liu",
"Yi",
""
],
[
"Huang",
"Junhua",
""
],
[
"Zhen",
"Hui-Ling",
""
],
[
"Yuan",
"Mingxuan",
""
],
[
"Chu",
"Zhufei",
""
],
[
"Xu",
"Qiang",
""
]
] |
new_dataset
| 0.989859 |
2305.16389
|
Saeed Ahmadi
|
Peyman Khordadpour, Saeed Ahmadi
|
FIDS: Fuzzy Intrusion Detection System for simultaneous detection of
DoS/DDoS attacks in Cloud computing
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cloud computing is a model that simplifies user access to data and computing
power on demand. The main objective of cloud computing is to accommodate users' growing
needs by decreasing dependence on human resources, minimizing expenses, and
enhancing the speed of data access. Nevertheless, preserving security and
privacy in cloud computing systems poses notable challenges. This issue arises
because these systems have a distributed structure, which is susceptible to
unsanctioned access - a fundamental problem. In the context of cloud computing,
the provision of services on demand makes them targets for common assaults like
Denial of Service (DoS) attacks, which include Economic Denial of
Sustainability (EDoS) and Distributed Denial of Service (DDoS). These
onslaughts can be classified into three categories: bandwidth consumption
attacks, specific application attacks, and connection layer attacks. Most of
the studies conducted in this arena have concentrated on a singular type of
attack, with the concurrent detection of multiple DoS attacks often overlooked.
This article proposes a suitable method to identify four types of assaults:
HTTP, Database, TCP SYN, and DNS Flood. The aim is to present a universal
algorithm that performs effectively in detecting all four attacks instead of
using separate algorithms for each one. In this technique, seventeen server
parameters like memory usage, CPU usage, and input/output counts are extracted
and monitored for changes, identifying the failure point using the CUSUM
algorithm to calculate the likelihood of each attack. Subsequently, a fuzzy
neural network is employed to determine the occurrence of an attack. When
compared to the Snort software, the proposed method's results show a
significant improvement in the average detection rate, jumping from 57% to 95%.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 18:00:10 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Khordadpour",
"Peyman",
""
],
[
"Ahmadi",
"Saeed",
""
]
] |
new_dataset
| 0.968915 |
2305.16509
|
Ming-Chang Lee
|
Ming-Chang Lee and Jia-Chun Lin
|
RoLA: A Real-Time Online Lightweight Anomaly Detection System for
Multivariate Time Series
|
10 pages, 4 figures, 4 tables, the 18th International Conference on
Software Technologies (ICSOFT 2023)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A multivariate time series refers to observations of two or more variables
taken from a device or a system simultaneously over time. There is an
increasing need to monitor multivariate time series and detect anomalies in
real time to ensure proper system operation and good service quality. It is
also highly desirable to have a lightweight anomaly detection system that
considers correlations between different variables, adapts to changes in the
pattern of the multivariate time series, offers immediate responses, and
provides supportive information regarding detection results based on
unsupervised learning and online model training. In the past decade, many
multivariate time series anomaly detection approaches have been introduced.
However, they are unable to offer all the above-mentioned features. In this
paper, we propose RoLA, a real-time online lightweight anomaly detection system
for multivariate time series based on a divide-and-conquer strategy, parallel
processing, and the majority rule. RoLA employs multiple lightweight anomaly
detectors to monitor multivariate time series in parallel, determine the
correlations between variables dynamically on the fly, and then jointly detect
anomalies based on the majority rule in real time. To demonstrate the
performance of RoLA, we conducted an experiment based on a public dataset
provided by the FerryBox of the One Ocean Expedition. The results show that
RoLA provides satisfactory detection accuracy and lightweight performance.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 22:32:45 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Lee",
"Ming-Chang",
""
],
[
"Lin",
"Jia-Chun",
""
]
] |
new_dataset
| 0.999165 |
2305.16510
|
Mihir Vinay Kulkarni
|
Mihir Kulkarni, Theodor J. L. Forgaard, Kostas Alexis
|
Aerial Gym -- Isaac Gym Simulator for Aerial Robots
|
4 pages, 3 figures. To appear in the ICRA 2023 workshop on The Role
of Robotics Simulators for Unmanned Aerial Vehicles. Code available at
https://github.com/ntnu-arl/aerial_gym_simulator
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing learning-based methods for navigation of aerial robots is an
intensive data-driven process that requires highly parallelized simulation. The
full utilization of such simulators is hindered by the lack of parallelized
high-level control methods that imitate the real-world robot interface.
Responding to this need, we develop the Aerial Gym simulator that can simulate
millions of multirotor vehicles in parallel with nonlinear geometric controllers
for the Special Euclidean Group SE(3) for attitude, velocity and position
tracking. We also develop functionalities for managing a large number of
obstacles in the environment, enabling rapid randomization for learning of
navigation tasks. In addition, we also provide sample environments having
robots with simulated cameras capable of capturing RGB, depth, segmentation and
optical flow data in obstacle-rich environments. This simulator is a step
towards developing a - currently missing - highly parallelized aerial robot
simulation with geometric controllers at a large scale, while also providing a
customizable obstacle randomization functionality for navigation tasks. We
provide training scripts with compatible reinforcement learning frameworks to
navigate the robot to a goal setpoint based on attitude and velocity command
interfaces. Finally, we open source the simulator and aim to develop it further
to speed up rendering using alternate kernel-based frameworks in order to
parallelize ray-casting for depth images thus supporting a larger number of
robots.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 22:34:10 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Kulkarni",
"Mihir",
""
],
[
"Forgaard",
"Theodor J. L.",
""
],
[
"Alexis",
"Kostas",
""
]
] |
new_dataset
| 0.963162 |
2305.16585
|
Kuan-Hao Huang
|
Kuan-Hao Huang, Varun Iyer, I-Hung Hsu, Anoop Kumar, Kai-Wei Chang,
Aram Galstyan
|
ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR
Back-Translation
|
ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Paraphrase generation is a long-standing task in natural language processing
(NLP). Supervised paraphrase generation models, which rely on human-annotated
paraphrase pairs, are cost-inefficient and hard to scale up. On the other hand,
automatically annotated paraphrase pairs (e.g., by machine back-translation),
usually suffer from the lack of syntactic diversity -- the generated paraphrase
sentences are very similar to the source sentences in terms of syntax. In this
work, we present ParaAMR, a large-scale syntactically diverse paraphrase
dataset created by abstract meaning representation back-translation. Our
quantitative analysis, qualitative examples, and human evaluation demonstrate
that the paraphrases of ParaAMR are syntactically more diverse compared to
existing large-scale paraphrase datasets while preserving good semantic
similarity. In addition, we show that ParaAMR can be used to improve on three
NLP tasks: learning sentence embeddings, syntactically controlled paraphrase
generation, and data augmentation for few-shot learning. Our results thus
showcase the potential of ParaAMR for improving various NLP applications.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 02:27:33 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Huang",
"Kuan-Hao",
""
],
[
"Iyer",
"Varun",
""
],
[
"Hsu",
"I-Hung",
""
],
[
"Kumar",
"Anoop",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Galstyan",
"Aram",
""
]
] |
new_dataset
| 0.999811 |
2305.16591
|
Tao Xiao
|
Tao Xiao, Sebastian Baltes, Hideaki Hata, Christoph Treude, Raula
Gaikovina Kula, Takashi Ishio, Kenichi Matsumoto
|
18 Million Links in Commit Messages: Purpose, Evolution, and Decay
| null |
Empir Software Eng 28, 91 (2023)
|
10.1007/s10664-023-10325-8
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Commit messages contain diverse and valuable types of knowledge in all
aspects of software maintenance and evolution. Links are an example of such
knowledge. Previous work on "9.6 million links in source code comments" showed
that links are prone to decay, become outdated, and lack bidirectional
traceability. We conducted a large-scale study of 18,201,165 links from commits
in 23,110 GitHub repositories to investigate whether they suffer the same fate.
Results show that referencing external resources is prevalent and that the most
frequent domains other than github.com are the external domains of Stack
Overflow and Google Code. Similarly, links serve as source code context to
commit messages, with inaccessible links being frequent. Although repeatedly
referencing links is rare (4%), 14% of links that are prone to evolve, e.g.,
tutorials, articles, and software homepages, become unavailable over time.
Furthermore, we find that 70% of the distinct
links suffer from decay; the domains that occur the most frequently are related
to Subversion repositories. We summarize that links in commits share the same
fate as links in code, opening up avenues for future work.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 02:32:52 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Xiao",
"Tao",
""
],
[
"Baltes",
"Sebastian",
""
],
[
"Hata",
"Hideaki",
""
],
[
"Treude",
"Christoph",
""
],
[
"Kula",
"Raula Gaikovina",
""
],
[
"Ishio",
"Takashi",
""
],
[
"Matsumoto",
"Kenichi",
""
]
] |
new_dataset
| 0.994238 |
2305.16638
|
Shenglong Zhang
|
Shenglong Zhang and Ying Liu
|
Adversarial Multi-task Learning for End-to-end Metaphor Detection
|
Findings of ACL 2023 Accepted
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Metaphor detection (MD) suffers from limited training data. In this paper, we
start with a linguistic rule called the Metaphor Identification Procedure and
then propose a novel multi-task learning framework to transfer knowledge in
basic sense discrimination (BSD) to MD. BSD is constructed from word sense
disambiguation (WSD), which has copious amounts of data. We leverage
adversarial training to align the data distributions of MD and BSD in the same
feature space, so task-invariant representations can be learned. To capture
fine-grained alignment patterns, we utilize the multi-mode structures of MD and
BSD. Our method is totally end-to-end and can mitigate the data scarcity
problem in MD. Competitive results are reported on four public datasets. Our
code and datasets are available.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 05:28:00 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Zhang",
"Shenglong",
""
],
[
"Liu",
"Ying",
""
]
] |
new_dataset
| 0.969736 |
2305.16648
|
Kent Chang
|
Kent K. Chang, Danica Chen, David Bamman
|
Dramatic Conversation Disentanglement
|
25 pages, 5 figures, accepted to ACL 2023 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new dataset for studying conversation disentanglement in movies
and TV series. While previous work has focused on conversation disentanglement
in IRC chatroom dialogues, movies and TV shows provide a space for studying
complex pragmatic patterns of floor and topic change in face-to-face
multi-party interactions. In this work, we draw on theoretical research in
sociolinguistics, sociology, and film studies to operationalize a
conversational thread (including the notion of a floor change) in dramatic
texts, and use that definition to annotate a dataset of 10,033 dialogue turns
(comprising 2,209 threads) from 831 movies. We compare the performance of
several disentanglement models on this dramatic dataset, and apply the
best-performing model to disentangle 808 movies. We see that, contrary to
expectation, average thread lengths do not decrease significantly over the past
40 years, and characters portrayed by actors who are women, while
underrepresented, initiate more new conversational threads relative to their
speaking time.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 05:39:49 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Chang",
"Kent K.",
""
],
[
"Chen",
"Danica",
""
],
[
"Bamman",
"David",
""
]
] |
new_dataset
| 0.999781 |
2305.16651
|
William Held
|
Will Held, Caleb Ziems, Diyi Yang
|
TADA: Task-Agnostic Dialect Adapters for English
|
5 Pages; ACL Findings Paper 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models, the dominant starting point for Natural Language
Processing (NLP) applications, fail at a higher rate for speakers of English
dialects other than Standard American English (SAE). Prior work addresses this
using task-specific data or synthetic data augmentation, both of which require
intervention for each dialect and task pair. This poses a scalability issue
that prevents the broad adoption of robust dialectal English NLP. We introduce
a simple yet effective method for task-agnostic dialect adaptation by aligning
non-SAE dialects using adapters and composing them with task-specific adapters
from SAE. Task-Agnostic Dialect Adapters (TADA) improve dialectal robustness on
4 dialectal variants of the GLUE benchmark without task-specific supervision.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 05:45:03 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Held",
"Will",
""
],
[
"Ziems",
"Caleb",
""
],
[
"Yang",
"Diyi",
""
]
] |
new_dataset
| 0.986725 |
2305.16698
|
Yonghui Wang
|
Yonghui Wang, Wengang Zhou, Yunyao Mao, Houqiang Li
|
Detect Any Shadow: Segment Anything for Video Shadow Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segment anything model (SAM) has achieved great success in the field of
natural image segmentation. Nevertheless, SAM tends to classify shadows as
background, resulting in poor segmentation performance for shadow detection
task. In this paper, we propose a simple but effective approach for fine-tuning
SAM to detect shadows. Additionally, we combine it with a long short-term
attention mechanism to extend its capabilities to video shadow detection.
Specifically, we first fine-tune SAM by utilizing shadow data
combined with sparse prompts and apply the fine-tuned model to detect a
specific frame (e.g., first frame) in the video with a little user assistance.
Subsequently, using the detected frame as a reference, we employ a long
short-term network to learn spatial correlations between distant frames and
temporal consistency between contiguous frames, thereby achieving shadow
information propagation across frames. Extensive experimental results
demonstrate that our method outperforms the state-of-the-art techniques, with
improvements of 17.2% and 3.3% in terms of MAE and IoU, respectively,
validating the effectiveness of our method.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 07:39:10 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Wang",
"Yonghui",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Mao",
"Yunyao",
""
],
[
"Li",
"Houqiang",
""
]
] |
new_dataset
| 0.984416 |
2305.16713
|
Jeeho Hyun
|
Jeeho Hyun, Sangyun Kim, Giyoung Jeon, Seung Hwan Kim, Kyunghoon Bae,
Byung Jun Kang
|
ReConPatch : Contrastive Patch Representation Learning for Industrial
Anomaly Detection
|
10 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anomaly detection is crucial to the advanced identification of product
defects such as incorrect parts, misaligned components, and damages in
industrial manufacturing. Due to the rare observations and unknown types of
defects, anomaly detection is considered to be challenging in machine learning.
To overcome this difficulty, recent approaches utilize the common visual
representations from natural image datasets and distill the relevant features.
However, existing approaches still suffer from a discrepancy between the pre-trained
features and the target data, or require input augmentation that must be
carefully designed particularly for the industrial dataset. In this paper, we
introduce ReConPatch, which constructs discriminative features for anomaly
detection by training a linear modulation attached to a pre-trained model.
ReConPatch employs contrastive representation learning to collect and
distribute features in a way that produces a target-oriented and easily
separable representation. To address the absence of labeled pairs for the
contrastive learning, we utilize two similarity measures, pairwise and
contextual similarities, between data representations as a pseudo-label. Unlike
previous work, ReConPatch achieves robust anomaly detection performance without
extensive input augmentation. Our method achieves the state-of-the-art anomaly
detection performance (99.72%) for the widely used and challenging MVTec AD
dataset.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 07:59:36 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Hyun",
"Jeeho",
""
],
[
"Kim",
"Sangyun",
""
],
[
"Jeon",
"Giyoung",
""
],
[
"Kim",
"Seung Hwan",
""
],
[
"Bae",
"Kyunghoon",
""
],
[
"Kang",
"Byung Jun",
""
]
] |
new_dataset
| 0.969198 |
2305.16740
|
Royi Rassin
|
Royi Rassin, Yoav Goldberg, Reut Tsarfaty
|
Conjunct Resolution in the Face of Verbal Omissions
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Verbal omissions are complex syntactic phenomena in VP coordination
structures. They occur when verbs and (some of) their arguments are omitted
from subsequent clauses after being explicitly stated in an initial clause.
Recovering these omitted elements is necessary for accurate interpretation of
the sentence, and while humans easily and intuitively fill in the missing
information, state-of-the-art models continue to struggle with this task.
Previous work is limited to small-scale datasets, synthetic data creation
methods, and to resolution methods at the dependency-graph level. In this work
we propose a conjunct resolution task that operates directly on the text and
makes use of a split-and-rephrase paradigm in order to recover the missing
elements in the coordination structure. To this end, we first formulate a
pragmatic framework of verbal omissions which describes the different types of
omissions, and develop an automatic scalable collection method. Based on this
method, we curate a large dataset, containing over 10K examples of
naturally-occurring verbal omissions with crowd-sourced annotations of the
resolved conjuncts. We train various neural baselines for this task, and show
that while our best method obtains decent performance, it leaves ample space
for improvement. We propose our dataset, metrics and models as a starting point
for future research on this topic.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 08:44:02 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Rassin",
"Royi",
""
],
[
"Goldberg",
"Yoav",
""
],
[
"Tsarfaty",
"Reut",
""
]
] |
new_dataset
| 0.993896 |
2305.16752
|
Alexandros Evangelidis
|
Severin Bals, Alexandros Evangelidis, Kush Grover, Jan Kretinsky,
Jakob Waibel
|
MULTIGAIN 2.0: MDP controller synthesis for multiple mean-payoff, LTL
and steady-state constraints
| null | null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present MULTIGAIN 2.0, a major extension to the controller synthesis tool
MultiGain, built on top of the probabilistic model checker PRISM. This new
version extends MultiGain's multi-objective capabilities, by allowing for the
formal verification and synthesis of controllers for probabilistic systems with
multi-dimensional long-run average reward structures, steady-state constraints,
and linear temporal logic properties. Additionally, MULTIGAIN 2.0 provides an
approach for finding finite memory solutions and the capability for two- and
three-dimensional visualization of Pareto curves to facilitate trade-off
analysis in multi-objective scenarios.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 08:59:51 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Bals",
"Severin",
""
],
[
"Evangelidis",
"Alexandros",
""
],
[
"Grover",
"Kush",
""
],
[
"Kretinsky",
"Jan",
""
],
[
"Waibel",
"Jakob",
""
]
] |
new_dataset
| 0.998204 |
2305.16833
|
Wei Chen
|
Wei Chen, Shiqi Wei, Zhongyu Wei, Xuanjing Huang
|
KNSE: A Knowledge-aware Natural Language Inference Framework for
Dialogue Symptom Status Recognition
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Symptom diagnosis in medical conversations aims to correctly extract both
symptom entities and their status from the doctor-patient dialogue. In this
paper, we propose a novel framework called KNSE for symptom status recognition
(SSR), where the SSR is formulated as a natural language inference (NLI) task.
For each mentioned symptom in a dialogue window, we first generate knowledge
about the symptom and hypothesis about status of the symptom, to form a
(premise, knowledge, hypothesis) triplet. The BERT model is then used to encode
the triplet, which is further processed by modules including utterance
aggregation, self-attention, cross-attention, and GRU to predict the symptom
status. Benefiting from the NLI formalization, the proposed framework can
encode more informative prior knowledge to better localize and track symptom
status, which can effectively improve the performance of symptom status
recognition. Preliminary experiments on Chinese medical dialogue datasets show
that KNSE outperforms previous competitive baselines and has advantages in
cross-disease and cross-symptom scenarios.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 11:23:26 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Chen",
"Wei",
""
],
[
"Wei",
"Shiqi",
""
],
[
"Wei",
"Zhongyu",
""
],
[
"Huang",
"Xuanjing",
""
]
] |
new_dataset
| 0.999556 |
2305.16835
|
Pinxue Guo
|
Pinxue Guo, Tony Huang, Peiyang He, Xuefeng Liu, Tianjun Xiao, Zhaoyu
Chen, Wenqiang Zhang
|
OpenVIS: Open-vocabulary Video Instance Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose and study a new computer vision task named open-vocabulary video
instance segmentation (OpenVIS), which aims to simultaneously segment, detect,
and track arbitrary objects in a video according to corresponding text
descriptions. Compared to the original video instance segmentation, OpenVIS
enables users to identify objects of desired categories, regardless of whether
those categories were included in the training dataset. To achieve this goal,
we propose a two-stage pipeline for proposing high-quality class-agnostic
object masks and predicting their corresponding categories via pre-trained VLM.
Specifically, we first employ a query-based mask proposal network to generate
masks of all potential objects, where we replace the original class head with
an instance head trained with a binary object loss, thereby enhancing the
class-agnostic mask proposal ability. Then, we introduce a proposal
post-processing approach to adapt the proposals better to the pre-trained VLMs,
avoiding distortion and unnatural proposal inputs. Meanwhile, to facilitate
research on this new task, we also propose an evaluation benchmark that
utilizes off-the-shelf datasets to comprehensively assess its performance.
Experimentally, the proposed OpenVIS exhibits a remarkable 148% improvement
compared to the fully-supervised baselines on BURST, which have been trained on
all categories.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 11:25:59 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Guo",
"Pinxue",
""
],
[
"Huang",
"Tony",
""
],
[
"He",
"Peiyang",
""
],
[
"Liu",
"Xuefeng",
""
],
[
"Xiao",
"Tianjun",
""
],
[
"Chen",
"Zhaoyu",
""
],
[
"Zhang",
"Wenqiang",
""
]
] |
new_dataset
| 0.999894 |
2305.16868
|
Wanxin Li
|
Wanxin Li, Collin Meese, Zijia Zhong, Hao Guo, Mark Nejad
|
Location-aware Verification for Autonomous Truck Platooning Based on
Blockchain and Zero-knowledge Proof
|
Published in 2021 IEEE International Conference on Blockchain and
Cryptocurrency (ICBC). arXiv admin note: text overlap with arXiv:2010.14037
| null |
10.1109/ICBC51069.2021.9461116
| null |
cs.NI cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Platooning technologies enable trucks to drive cooperatively and
automatically, which bring benefits including less fuel consumption, more road
capacity and safety. In order to establish trust during dynamic platoon
formation, ensure vehicular data integrity, and guard platoons against
potential attackers, it is pivotal to verify any given vehicle's identity
information before granting it access to join a platoon. To address this
concern in dynamic truck platooning, we present a novel location-aware and
privacy-preserving verification protocol based on zero-knowledge proof and
permissioned blockchain. By performing the verification process within the
spatially-local area defined by a given platoon, our system can provide lower
latency and communication overhead compared to a location-agnostic blockchain
system. We prototype the proposed system and perform benchmark tests on the
Hyperledger platform. The experimental results show that our system is suitable
for real-world truck platooning.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 12:20:07 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Li",
"Wanxin",
""
],
[
"Meese",
"Collin",
""
],
[
"Zhong",
"Zijia",
""
],
[
"Guo",
"Hao",
""
],
[
"Nejad",
"Mark",
""
]
] |
new_dataset
| 0.964863 |
2305.16893
|
Ivan Homoliak Ph.D.
|
Ivan Homoliak, Martin Perešíni, Patrik Holop, Jakub Handzuš,
Fran Casino
|
CBDC-AquaSphere: Interoperable Central Bank Digital Currency Built on
Trusted Computing and Blockchain
| null | null | null | null |
cs.DC cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The adoption of decentralized, tamper-proof ledger systems is paving the way
for new applications and opportunities in different contexts. While most
research aims to improve their scalability, privacy, and governance issues,
interoperability has received less attention. Executing transactions across
various blockchains is notably instrumental in unlocking the potential of novel
applications, particularly in the financial sector, where their potential would
otherwise be significantly diminished. Therefore, interoperable ledgers are
crucial to ensure the expansion and further adoption of such a technology in
various contexts.
In this paper, we present a protocol that uses a combination of trusted
execution environment (TEE) and blockchains to enable interoperability over
independent semi-centralized CBDC ledgers, guaranteeing the atomicity of
inter-bank transfers. Our interoperability protocol uses a custom adaptation of
atomic swap protocol and is executed by any pair of CBDC instances to realize a
one-way transfer. It ensures features such as atomicity, verifiability,
correctness, censorship resistance, and privacy while offering high scalability
in terms of the number of CBDC instances. Our approach enables two possible
deployment scenarios that can be combined: (1) CBDC instances represent central
banks of multiple countries, and (2) CBDC instances represent the set of retail
banks and a paramount central bank of a single country. We provide a detailed
description of our protocol as well as an extensive analysis of its benefits,
features, and security.
In this WIP paper, we present a proof-of-concept implementation and a partial
evaluation; a more extensive evaluation is left for future work.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 12:54:00 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Homoliak",
"Ivan",
""
],
[
"Perešíni",
"Martin",
""
],
[
"Holop",
"Patrik",
""
],
[
"Handzuš",
"Jakub",
""
],
[
"Casino",
"Fran",
""
]
] |
new_dataset
| 0.998041 |
2305.16907
|
Nils M\"uller
|
Nils M\"uller, Kaibin Bao, J\"org Matthes, Kai Heussen
|
CyPhERS: A Cyber-Physical Event Reasoning System providing real-time
situational awareness for attack and fault response
|
Article submitted to Computers in Industry
| null | null | null |
cs.CR cs.SY eess.SP eess.SY stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Cyber-physical systems (CPSs) constitute the backbone of critical
infrastructures such as power grids or water distribution networks. Operating
failures in these systems can cause serious risks for society. To avoid or
minimize downtime, operators require real-time awareness about critical
incidents. However, online event identification in CPSs is challenged by the
complex interdependency of numerous physical and digital components, requiring
to take cyber attacks and physical failures equally into account. The online
event identification problem is further complicated through the lack of
historical observations of critical but rare events, and the continuous
evolution of cyber attack strategies. This work introduces and demonstrates
CyPhERS, a Cyber-Physical Event Reasoning System. CyPhERS provides real-time
information pertaining to the occurrence, location, physical impact, and root
cause of potentially critical events in CPSs, without the need for historical
event observations. The key novelty of CyPhERS is the capability to generate
informative and interpretable event signatures of known and unknown types of
both cyber attacks and physical failures. The concept is evaluated and
benchmarked on a demonstration case that comprises a multitude of attack and
fault events targeting various components of a CPS. The results demonstrate
that the event signatures provide relevant and inferable information on both
known and unknown event types.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 13:21:37 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Müller",
"Nils",
""
],
[
"Bao",
"Kaibin",
""
],
[
"Matthes",
"Jörg",
""
],
[
"Heussen",
"Kai",
""
]
] |
new_dataset
| 0.990543 |
2305.16927
|
Wanxin Li
|
Wanxin Li, Collin Meese, Mark Nejad, Hao Guo
|
P-CFT: A Privacy-preserving and Crash Fault Tolerant Consensus Algorithm
for Permissioned Blockchains
|
Published in 2021 4th International Conference on Hot
Information-Centric Networking (HotICN)
| null |
10.1109/HotICN53262.2021.9680829
| null |
cs.CR cs.DC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consensus algorithms play a critical role in blockchains and directly impact
their performance. During consensus processing, nodes need to validate and
order the pending transactions into a new block, which requires verifying the
application-specific data encapsulated within a transaction. This exposes the
underlying data to the consensus nodes, presenting privacy concerns. Existing
consensus algorithms focus on realizing application security and performance
goals, but lack privacy-by-design properties or are resource-heavy and intended
for securing permissionless blockchain networks. In this paper, we propose
P-CFT, a zero-knowledge and crash fault tolerant consensus algorithm for
permissioned blockchains. The proposed consensus algorithm provides inherent
data privacy directly to the consensus layer, while still providing guarantees
of crash fault tolerance. We conduct experiments using the Hyperledger Ursa
cryptographic library, and the results show promise for integrating P-CFT into
existing permissioned blockchain systems requiring privacy-preserving and crash
fault tolerant features.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 13:38:37 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Li",
"Wanxin",
""
],
[
"Meese",
"Collin",
""
],
[
"Nejad",
"Mark",
""
],
[
"Guo",
"Hao",
""
]
] |
new_dataset
| 0.996109 |
2305.16957
|
Vineet Bhat
|
Vineet Bhat, Preethi Jyothi and Pushpak Bhattacharyya
|
DisfluencyFixer: A tool to enhance Language Learning through Speech To
Speech Disfluency Correction
|
To be published in Interspeech 2023 - Show and Tell Demonstrations
| null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Conversational speech often consists of deviations from the speech plan,
producing disfluent utterances that affect downstream NLP tasks. Removing these
disfluencies is necessary to create fluent and coherent speech. This paper
presents DisfluencyFixer, a tool that performs speech-to-speech disfluency
correction in English and Hindi using a pipeline of Automatic Speech
Recognition (ASR), Disfluency Correction (DC) and Text-To-Speech (TTS) models.
Our proposed system removes disfluencies from input speech and returns fluent
speech as output along with its transcript, disfluency type and total
disfluency count in source utterance, providing a one-stop destination for
language learners to improve the fluency of their speech. We evaluate the
performance of our tool subjectively and receive scores of 4.26, 4.29 and 4.42
out of 5 in ASR performance, DC performance and ease-of-use of the system. Our
tool can be accessed openly at the following link.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 14:13:38 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Bhat",
"Vineet",
""
],
[
"Jyothi",
"Preethi",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
new_dataset
| 0.961987 |
2305.16976
|
Erick Lavoie
|
Erick Lavoie
|
GOC-Ledger: State-based Conflict-Free Replicated Ledger from Grow-Only
Counters
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Conventional blockchains use consensus algorithms that totally order updates
across all accounts, which is stronger than necessary to implement a replicated
ledger. This makes updates slower and more expensive than necessary. More
recent consensus-free replicated ledgers forego consensus algorithms, with
significant increase in performance and decrease in infrastructure costs.
However, current designs are based around reliable broadcast of update
operations to all replicas which require reliable message delivery and
reasoning over operation histories to establish convergence and safety.
In this paper, we present a replicated ledger as a state-based conflict-free
replicated data type (CRDT) based on grow-only counters. This design provides
two major benefits: 1) it requires a weaker eventual transitive delivery of the
latest state rather than reliable broadcast of all update operations to all
replicas; 2) eventual convergence and safety properties can be proven easily
without having to reason over operation histories: convergence comes from the
composition of grow-only counters, themselves CRDTs, and safety properties can
be expressed over the state of counters, locally and globally. In addition,
applications that tolerate temporary negative balances require no additional
mechanisms and applications that require strictly non-negative balances can be
supported by enforcing sequential updates to the same account across replicas.
Our design is sufficient when executing on replicas that might crash and
recover, as common in deployments in which all replicas are managed by trusted
entities. It may also provide a good foundation to explore new mechanisms for
tolerating adversarial replicas.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 14:30:45 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Lavoie",
"Erick",
""
]
] |
new_dataset
| 0.999089 |
2305.16980
|
Peter Conwell
|
Peter R. Conwell, Kaushik Chakram, Valeria J. Villegas-Medina
|
Spawning Nodes Generate Deterministic Scale-Free Networks
| null | null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present a deterministic vertex spawning model that yields a
scale-free network. The model specifies that a parent vertex produces a child
vertex in a time interval approximately proportional to the current time and
inversely proportional to the number of edges currently connected to the
parent. Each spawned offspring maintains an undirected edge with its parent. No
information about the network as a whole is required to obtain scale-invariant
behavior. Although the algorithm is deterministic, the number of nodes spawning
in a small time interval quickly becomes randomized. We show theoretically and
with simulations that such a spawned network will have a degree distribution
obeying a power law with exponent 2.5. Simulations show that the distribution
matches a Zipf distribution.
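A rough simulation of the stated spawning rule is sketched below; the proportionality constant, initial conditions and event scheduling are our own assumptions, since the abstract does not fix them.

```python
# Illustrative event-driven simulation of the spawning rule (assumed details).
import heapq
from collections import Counter

def spawn_network(max_nodes: int = 2000, c: float = 1.0, t0: float = 1.0):
    degree = {0: 1}  # seed node given degree 1 to avoid division by zero (assumption)
    events = [(t0 + c * t0 / degree[0], 0)]  # (next spawn time, parent id)
    next_id = 1
    while next_id < max_nodes:
        t, parent = heapq.heappop(events)
        child, next_id = next_id, next_id + 1
        degree[parent] += 1        # new undirected edge parent-child
        degree[child] = 1
        # Next spawn interval ~ current time / current degree, for both nodes.
        heapq.heappush(events, (t + c * t / degree[parent], parent))
        heapq.heappush(events, (t + c * t / degree[child], child))
    return degree

if __name__ == "__main__":
    hist = Counter(spawn_network().values())
    for k in sorted(hist):
        print(k, hist[k])  # heavy-tailed degree histogram
```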
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 14:35:02 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Conwell",
"Peter R.",
""
],
[
"Chakram",
"Kaushik",
""
],
[
"Villegas-Medina",
"Valeria J.",
""
]
] |
new_dataset
| 0.993189 |
2305.17029
|
Xin Zhou
|
Xin Zhou and Adam J. Spiers
|
InstaGrasp: An Entirely 3D Printed Adaptive Gripper with TPU Soft
Elements and Minimal Assembly Time
|
7 pages, 13 figures, Manipulation and Touch Lab (MTL), Imperial
College London
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Fabricating existing and popular open-source adaptive robotic grippers
commonly involves using multiple professional machines, purchasing a wide range
of parts, and tedious, time-consuming assembly processes. This poses a
significant barrier to entry for some robotics researchers and drives others to
opt for expensive commercial alternatives. To provide both parties with an
easier and cheaper (under 100GBP) solution, we propose a novel adaptive gripper
design where every component (with the exception of actuators and the screws
that come packaged with them) can be fabricated on a hobby-grade 3D printer,
via a combination of inexpensive and readily available PLA and TPU filaments.
This approach means that the gripper's tendons, flexure joints and finger pads
are now printed, as a replacement for traditional string-tendons and molded
urethane flexures and pads. A push-fit system results in an assembly time of
under 10 minutes. The gripper design is also highly modular and requires only a
few minutes to replace any part, leading to extremely user-friendly maintenance
and part modifications. An extensive stress test has shown a level of
durability more than suitable for research, whilst grasping experiments (with
perturbations) using items from the YCB object set have also proven its
mechanical adaptability to be highly satisfactory.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 15:39:05 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Zhou",
"Xin",
""
],
[
"Spiers",
"Adam J.",
""
]
] |
new_dataset
| 0.999104 |
2305.17071
|
Jinhang Zuo
|
Jinhang Zuo, Zhiyao Zhang, Zhiyong Wang, Shuai Li, Mohammad
Hajiesmaili, Adam Wierman
|
Adversarial Attacks on Online Learning to Rank with Click Feedback
| null | null | null | null |
cs.LG cs.CR cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online learning to rank (OLTR) is a sequential decision-making problem where
a learning agent selects an ordered list of items and receives feedback through
user clicks. Although potential attacks against OLTR algorithms may cause
serious losses in real-world applications, little is known about adversarial
attacks on OLTR. This paper studies attack strategies against multiple variants
of OLTR. Our first result provides an attack strategy against the UCB algorithm
on classical stochastic bandits with binary feedback, which solves the key
issues caused by bounded and discrete feedback that previous works can not
handle. Building on this result, we design attack algorithms against UCB-based
OLTR algorithms in position-based and cascade models. Finally, we propose a
general attack strategy against any algorithm under the general click model.
Each attack algorithm manipulates the learning agent into choosing the target
attack item $T-o(T)$ times, incurring a cumulative cost of $o(T)$. Experiments
on synthetic and real data further validate the effectiveness of our proposed
attack algorithms.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 16:28:26 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Zuo",
"Jinhang",
""
],
[
"Zhang",
"Zhiyao",
""
],
[
"Wang",
"Zhiyong",
""
],
[
"Li",
"Shuai",
""
],
[
"Hajiesmaili",
"Mohammad",
""
],
[
"Wierman",
"Adam",
""
]
] |
new_dataset
| 0.990093 |
2305.17100
|
Kai Zhang
|
Kai Zhang, Jun Yu, Zhiling Yan, Yixin Liu, Eashan Adhikarla, Sunyang
Fu, Xun Chen, Chen Chen, Yuyin Zhou, Xiang Li, Lifang He, Brian D. Davison,
Quanzheng Li, Yong Chen, Hongfang Liu, Lichao Sun
|
BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained
Transformer for Vision, Language, and Multimodal Tasks
|
work in progress
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a unified and generalist Biomedical Generative
Pre-trained Transformer (BiomedGPT) model, which leverages self-supervision on
large and diverse datasets to accept multi-modal inputs and perform a range of
downstream tasks. Our experiments demonstrate that BiomedGPT delivers expansive
and inclusive representations of biomedical data, outperforming the majority of
preceding state-of-the-art models across five distinct tasks with 20 public
datasets spanning over 15 unique biomedical modalities. Through the ablation
study, we also showcase the efficacy of our multi-modal and multi-task
pretraining approach in transferring knowledge to previously unseen data.
Overall, our work presents a significant step forward in developing unified and
generalist models for biomedicine, with far-reaching implications for improving
healthcare outcomes.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 17:14:43 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Zhang",
"Kai",
""
],
[
"Yu",
"Jun",
""
],
[
"Yan",
"Zhiling",
""
],
[
"Liu",
"Yixin",
""
],
[
"Adhikarla",
"Eashan",
""
],
[
"Fu",
"Sunyang",
""
],
[
"Chen",
"Xun",
""
],
[
"Chen",
"Chen",
""
],
[
"Zhou",
"Yuyin",
""
],
[
"Li",
"Xiang",
""
],
[
"He",
"Lifang",
""
],
[
"Davison",
"Brian D.",
""
],
[
"Li",
"Quanzheng",
""
],
[
"Chen",
"Yong",
""
],
[
"Liu",
"Hongfang",
""
],
[
"Sun",
"Lichao",
""
]
] |
new_dataset
| 0.996442 |
2305.17110
|
Bingjie Tang
|
Bingjie Tang, Michael A. Lin, Iretiayo Akinola, Ankur Handa, Gaurav S.
Sukhatme, Fabio Ramos, Dieter Fox, Yashraj Narang
|
IndustReal: Transferring Contact-Rich Assembly Tasks from Simulation to
Reality
|
Accepted to Robotics: Science and Systems (RSS) 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic assembly is a longstanding challenge, requiring contact-rich
interaction and high precision and accuracy. Many applications also require
adaptivity to diverse parts, poses, and environments, as well as low cycle
times. In other areas of robotics, simulation is a powerful tool to develop
algorithms, generate datasets, and train agents. However, simulation has had a
more limited impact on assembly. We present IndustReal, a set of algorithms,
systems, and tools that solve assembly tasks in simulation with reinforcement
learning (RL) and successfully achieve policy transfer to the real world.
Specifically, we propose 1) simulation-aware policy updates, 2)
signed-distance-field rewards, and 3) sampling-based curricula for robotic RL
agents. We use these algorithms to enable robots to solve contact-rich pick,
place, and insertion tasks in simulation. We then propose 4) a policy-level
action integrator to minimize error at policy deployment time. We build and
demonstrate a real-world robotic assembly system that uses the trained policies
and action integrator to achieve repeatable performance in the real world.
Finally, we present hardware and software tools that allow other researchers to
reproduce our system and results. For videos and additional details, please see
http://sites.google.com/nvidia.com/industreal .
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 17:20:02 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Tang",
"Bingjie",
""
],
[
"Lin",
"Michael A.",
""
],
[
"Akinola",
"Iretiayo",
""
],
[
"Handa",
"Ankur",
""
],
[
"Sukhatme",
"Gaurav S.",
""
],
[
"Ramos",
"Fabio",
""
],
[
"Fox",
"Dieter",
""
],
[
"Narang",
"Yashraj",
""
]
] |
new_dataset
| 0.99964 |
1910.05483
|
Xiaoke Shen
|
Xiaoke Shen and Ioannis Stamos
|
Frustum VoxNet for 3D object detection from RGB-D or Depth images
|
Update for v3: Added 2D detection performance of using RGBDHS as
input in appendix. page 8, add Acknowledgement. page 10, add Supplementary
Material. The paper got accepted by 2020 Winter Conference on Applications of
Computer Vision (WACV '20). The first arxiv version can be found here:
arXiv:1910.05483
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, there have been a plethora of classification and detection systems
from RGB as well as 3D images. In this work, we describe a new 3D object
detection system from an RGB-D or depth-only point cloud. Our system first
detects objects in 2D (either RGB or pseudo-RGB constructed from depth). The
next step is to detect 3D objects within the 3D frustums these 2D detections
define. This is achieved by voxelizing parts of the frustums (since frustums
can be really large), instead of using the whole frustums as done in earlier
work. The main novelty of our system has to do with determining which parts (3D
proposals) of the frustums to voxelize, thus allowing us to provide high
resolution representations around the objects of interest. It also allows our
system to have reduced memory requirements. These 3D proposals are fed to an
efficient ResNet-based 3D Fully Convolutional Network (FCN). Our 3D detection
system is fast and can be integrated into a robotics platform. With respect to
systems that do not perform voxelization (such as PointNet), our methods can
operate without requiring subsampling of the datasets. We have also
introduced a pipelining approach that further improves the efficiency of our
system. Results on the SUN RGB-D dataset show that our system, which is based on a
small network, can process 20 frames per second with comparable detection
results to the state-of-the-art, achieving a 2 times speedup.
|
[
{
"version": "v1",
"created": "Sat, 12 Oct 2019 04:06:46 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Feb 2020 23:59:10 GMT"
},
{
"version": "v3",
"created": "Thu, 25 May 2023 02:56:37 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Shen",
"Xiaoke",
""
],
[
"Stamos",
"Ioannis",
""
]
] |
new_dataset
| 0.99952 |
2205.15416
|
Md Saef Ullah Miah
|
Md. Ariful Islam, Md. Antonin Islam, Md. Amzad Hossain Jacky, Md.
Al-Amin, M. Saef Ullah Miah, Md Muhidul Islam Khan, Md. Iqbal Hossain
|
Distributed Ledger Technology based Integrated Healthcare Solution for
Bangladesh
|
21 pages, 16 figures, 4 tables
| null |
10.1109/ACCESS.2023.3279724
| null |
cs.IR cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Healthcare data is sensitive and requires strong protection. Encrypted
electronic health records (EHRs) contain personal and sensitive data such as
names and addresses. Secure, controlled access to patient data benefits all
stakeholders. This
paper proposes a blockchain-based distributed healthcare application platform
for Bangladeshi public and private healthcare providers. Using data
immutability and smart contracts, the suggested application framework allows
users to create safe digital agreements for commerce or collaboration. Thus,
all enterprises may securely collaborate using the same blockchain network,
gaining data openness and read/write capacity. The proposed application
consists of various application interfaces for various system users. For data
integrity, privacy, permission and service availability, the proposed solution
leverages Hyperledger fabric and Blockchain as a Service. Everyone will also
have their own profile in the portal. A unique identity for each person and the
installation of digital information centres across the country have greatly
eased the process. The proposed system will collect systematic health data from
each person, which will be beneficial for research institutes and health-related
organisations. A national data warehouse in Bangladesh is feasible for this
application, and it is also possible to keep the health sector transparent by
analysing data stored in this warehouse with data-cleaning and analytics
algorithms drawn from data science. Given that Bangladesh has both public and
private health care, a straightforward digital strategy for all organisations
is essential.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 20:26:31 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Islam",
"Md. Ariful",
""
],
[
"Islam",
"Md. Antonin",
""
],
[
"Jacky",
"Md. Amzad Hossain",
""
],
[
"Al-Amin",
"Md.",
""
],
[
"Miah",
"M. Saef Ullah",
""
],
[
"Khan",
"Md Muhidul Islam",
""
],
[
"Hossain",
"Md. Iqbal",
""
]
] |
new_dataset
| 0.985578 |
2206.00859
|
Chenglong Li
|
Chenglong Li, Xiaobin Yang, Guohao Wang, Aihua Zheng, Chang Tan,
Ruoran Jia, and Jin Tang
|
Disentangled Generation Network for Enlarged License Plate Recognition
and A Unified Dataset
|
Submission to CVIU
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
License plate recognition plays a critical role in many practical
applications, but license plates of large vehicles are difficult to recognize
due to factors such as low resolution, contamination, low
illumination, and occlusion, to name a few. To overcome the above factors, the
transportation management department generally introduces the enlarged license
plate behind the rear of a vehicle. However, enlarged license plates have high
diversity as they are non-standard in position, size, and style. Furthermore,
the background regions contain a variety of noisy information which greatly
disturbs the recognition of license plate characters. Existing works have not
studied this challenging problem. In this work, we first address the enlarged
license plate recognition problem and contribute a dataset containing 9342
images, which cover most of the challenges of real scenes. However, the created
data are still insufficient to train deep methods of enlarged license plate
recognition, and building large-scale training data is time-consuming and
labor-intensive. To handle this problem, we propose a novel task-level
disentanglement generation framework based on the Disentangled Generation
Network (DGNet), which disentangles the generation into the text generation and
background generation in an end-to-end manner to effectively ensure diversity
and integrity, for robust enlarged license plate recognition. Extensive
experiments on the created dataset are conducted, and we demonstrate the
effectiveness of the proposed approach in three representative text recognition
frameworks.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 03:26:50 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 14:03:01 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Li",
"Chenglong",
""
],
[
"Yang",
"Xiaobin",
""
],
[
"Wang",
"Guohao",
""
],
[
"Zheng",
"Aihua",
""
],
[
"Tan",
"Chang",
""
],
[
"Jia",
"Ruoran",
""
],
[
"Tang",
"Jin",
""
]
] |
new_dataset
| 0.999759 |
2209.07753
|
Jacky Liang
|
Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian
Ichter, Pete Florence, Andy Zeng
|
Code as Policies: Language Model Programs for Embodied Control
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) trained on code completion have been shown to be
capable of synthesizing simple Python programs from docstrings [1]. We find
that these code-writing LLMs can be re-purposed to write robot policy code,
given natural language commands. Specifically, policy code can express
functions or feedback loops that process perception outputs (e.g., from object
detectors [2], [3]) and parameterize control primitive APIs. When provided as
input several example language commands (formatted as comments) followed by
corresponding policy code (via few-shot prompting), LLMs can take in new
commands and autonomously re-compose API calls to generate new policy code
respectively. By chaining classic logic structures and referencing third-party
libraries (e.g., NumPy, Shapely) to perform arithmetic, LLMs used in this way
can write robot policies that (i) exhibit spatial-geometric reasoning, (ii)
generalize to new instructions, and (iii) prescribe precise values (e.g.,
velocities) to ambiguous descriptions ("faster") depending on context (i.e.,
behavioral commonsense). This paper presents code as policies: a robot-centric
formulation of language model generated programs (LMPs) that can represent
reactive policies (e.g., impedance controllers), as well as waypoint-based
policies (vision-based pick and place, trajectory-based control), demonstrated
across multiple real robot platforms. Central to our approach is prompting
hierarchical code-gen (recursively defining undefined functions), which can
write more complex code and also improves state-of-the-art to solve 39.8% of
problems on the HumanEval [1] benchmark. Code and videos are available at
https://code-as-policies.github.io
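To make the few-shot prompting pattern concrete, here is a toy illustration written by us (not the authors' actual prompt or robot API): commands appear as comments, each followed by policy code that calls assumed perception and control primitives, and the model is asked to continue the prompt for a new command.

```python
# Toy illustration of the "code as policies" prompting pattern. The robot
# primitives (get_object_positions, pick_place) are hypothetical stand-ins.
import numpy as np

FEW_SHOT_PROMPT = '''
# move the red block 10 cm to the left of the blue bowl.
positions = get_object_positions()
target = positions["blue bowl"] + np.array([-0.10, 0.0, 0.0])
pick_place("red block", target)

# put the green block on top of the red block.
'''  # an LLM would be asked to continue this text with new policy code

def get_object_positions():
    # Stubbed perception output so the demonstrated policy runs as plain Python.
    return {"red block": np.array([0.20, 0.10, 0.00]),
            "blue bowl": np.array([0.50, 0.30, 0.00])}

def pick_place(obj_name: str, target_xyz: np.ndarray) -> None:
    print(f"pick {obj_name} -> place at {np.round(target_xyz, 3)}")

if __name__ == "__main__":
    # Execute the first demonstrated command directly.
    positions = get_object_positions()
    pick_place("red block", positions["blue bowl"] + np.array([-0.10, 0.0, 0.0]))
```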
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 07:17:23 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 23:31:52 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Mar 2023 04:02:50 GMT"
},
{
"version": "v4",
"created": "Thu, 25 May 2023 03:50:11 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Liang",
"Jacky",
""
],
[
"Huang",
"Wenlong",
""
],
[
"Xia",
"Fei",
""
],
[
"Xu",
"Peng",
""
],
[
"Hausman",
"Karol",
""
],
[
"Ichter",
"Brian",
""
],
[
"Florence",
"Pete",
""
],
[
"Zeng",
"Andy",
""
]
] |
new_dataset
| 0.984361 |
2210.02671
|
William Merrill
|
William Merrill and Ashish Sabharwal
|
A Logic for Expressing Log-Precision Transformers
|
May 24, 2023: Restructured version of old preprint
| null | null | null |
cs.LG cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One way to interpret the reasoning power of transformer-based language models
is to describe the types of logical rules they can resolve over some input
text. Recently, Chiang et al. (2023) showed that finite-precision transformers
can be equivalently expressed in a generalization of first-order logic.
However, finite-precision transformers are a weak transformer variant because,
as we show, a single head can only attend to a constant number of tokens and,
in particular, cannot represent uniform attention. Since attending broadly is a
core capability for transformers, we ask whether a minimally more expressive
model that can attend universally can also be characterized in logic. To this
end, we analyze transformers whose forward pass is computed in $\log n$
precision on contexts of length $n$. We prove that any log-precision
transformer can be equivalently expressed as a first-order logic sentence that,
in addition to standard universal and existential quantifiers, may also contain
majority-vote quantifiers. This is the tightest known upper bound and first
logical characterization of log-precision transformers.
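For intuition, the following is our own toy example (under the standard reading of a majority quantifier, not an example from the paper) of the kind of sentence such a logic can express over an input string: a strict majority of positions carry the token $a$, and some $a$ is immediately followed by a $b$.

```latex
% Illustrative first-order + majority sentence over strings (assumed
% standard semantics of the majority quantifier M).
\[
  \mathsf{M}\, i.\; Q_a(i)
  \;\wedge\;
  \exists i\, \exists j.\; \bigl(Q_a(i) \wedge Q_b(j) \wedge j = i + 1\bigr)
\]
```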
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 04:18:09 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2022 04:54:05 GMT"
},
{
"version": "v3",
"created": "Sat, 28 Jan 2023 04:12:54 GMT"
},
{
"version": "v4",
"created": "Wed, 24 May 2023 18:20:44 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Merrill",
"William",
""
],
[
"Sabharwal",
"Ashish",
""
]
] |
new_dataset
| 0.997815 |
2211.11772
|
Dorottya Demszky
|
Dorottya Demszky and Heather Hill
|
The NCTE Transcripts: A Dataset of Elementary Math Classroom Transcripts
|
18th Workshop on Innovative Use of NLP for Building Educational
Applications
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Classroom discourse is a core medium of instruction - analyzing it can
provide a window into teaching and learning as well as driving the development
of new tools for improving instruction. We introduce the largest dataset of
mathematics classroom transcripts available to researchers, and demonstrate how
this data can help improve instruction. The dataset consists of 1,660 45-60
minute long 4th and 5th grade elementary mathematics observations collected by
the National Center for Teacher Effectiveness (NCTE) between 2010-2013. The
anonymized transcripts represent data from 317 teachers across 4 school
districts that serve largely historically marginalized students. The
transcripts come with rich metadata, including turn-level annotations for
dialogic discourse moves, classroom observation scores, demographic
information, survey responses and student test scores. We demonstrate that our
natural language processing model, trained on our turn-level annotations, can
learn to identify dialogic discourse moves and these moves are correlated with
better classroom observation scores and learning outcomes. This dataset opens
up several possibilities for researchers, educators and policymakers to learn
about and improve K-12 instruction. The dataset can be found at
https://github.com/ddemszky/classroom-transcript-analysis.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 19:00:01 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 18:41:18 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Demszky",
"Dorottya",
""
],
[
"Hill",
"Heather",
""
]
] |
new_dataset
| 0.999894 |
2212.05598
|
Mathias Gehrig
|
Mathias Gehrig and Davide Scaramuzza
|
Recurrent Vision Transformers for Object Detection with Event Cameras
| null |
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
Vancouver, 2023
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present Recurrent Vision Transformers (RVTs), a novel backbone for object
detection with event cameras. Event cameras provide visual information with
sub-millisecond latency at a high-dynamic range and with strong robustness
against motion blur. These unique properties offer great potential for
low-latency object detection and tracking in time-critical scenarios. Prior
work in event-based vision has achieved outstanding detection performance but
at the cost of substantial inference time, typically beyond 40 milliseconds. By
revisiting the high-level design of recurrent vision backbones, we reduce
inference time by a factor of 6 while retaining similar performance. To achieve
this, we explore a multi-stage design that utilizes three key concepts in each
stage: First, a convolutional prior that can be regarded as a conditional
positional embedding. Second, local and dilated global self-attention for
spatial feature interaction. Third, recurrent temporal feature aggregation to
minimize latency while retaining temporal information. RVTs can be trained from
scratch to reach state-of-the-art performance on event-based object detection -
achieving an mAP of 47.2% on the Gen1 automotive dataset. At the same time,
RVTs offer fast inference (<12 ms on a T4 GPU) and favorable parameter
efficiency (5 times fewer than prior art). Our study brings new insights into
effective design choices that can be fruitful for research beyond event-based
vision.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 20:28:59 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2023 16:38:14 GMT"
},
{
"version": "v3",
"created": "Thu, 25 May 2023 09:17:11 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Gehrig",
"Mathias",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.996759 |
2302.07324
|
Chenglei Si
|
Chenglei Si, Zhengyan Zhang, Yingfa Chen, Xiaozhi Wang, Zhiyuan Liu,
Maosong Sun
|
READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input
Noises
|
ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
For many real-world applications, the user-generated inputs usually contain
various noises due to speech recognition errors caused by linguistic
variations or typographical errors (typos). Thus, it is crucial to test model
performance on data with realistic input noises to ensure robustness and
fairness. However, little study has been done to construct such benchmarks for
Chinese, where various language-specific input noises happen in the real world.
In order to fill this important gap, we construct READIN: a Chinese multi-task
benchmark with REalistic And Diverse Input Noises. READIN contains four diverse
tasks and requests annotators to re-enter the original test data with two
commonly used Chinese input methods: Pinyin input and speech input. We designed
our annotation pipeline to maximize diversity, for example by instructing the
annotators to use diverse input method editors (IMEs) for keyboard noises and
recruiting speakers from diverse dialect groups for speech noises. We
experiment with a series of strong pretrained language models as well as robust
training methods, and find that these models often suffer significant
performance drops on READIN even with robustness methods like data
augmentation. As the first large-scale attempt in creating a benchmark with
noises geared towards user-generated inputs, we believe that READIN serves as
an important complement to existing Chinese NLP benchmarks. The source code and
dataset can be obtained from https://github.com/thunlp/READIN.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 20:14:39 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 01:04:08 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Si",
"Chenglei",
""
],
[
"Zhang",
"Zhengyan",
""
],
[
"Chen",
"Yingfa",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.999655 |
2302.08345
|
Giulia Clerici
|
Giulia Clerici, Pierre Laforgue, Nicol\`o Cesa-Bianchi
|
Linear Bandits with Memory: from Rotting to Rising
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nonstationary phenomena, such as satiation effects in recommendations, have
mostly been modeled using bandits with finitely many arms. However, the richer
action space provided by linear bandits is often preferred in practice. In this
work, we introduce a novel nonstationary linear bandit model, where current
rewards are influenced by the learner's past actions in a fixed-size window.
Our model, which recovers stationary linear bandits as a special case,
leverages two parameters: the window size $m \ge 0$, and an exponent $\gamma$
that captures the rotting ($\gamma < 0$) or rising ($\gamma > 0$) nature of the
phenomenon. When both $m$ and $\gamma$ are known, we propose and analyze a
variant of OFUL which minimizes regret against cycling policies. By choosing
the cycle length so as to trade-off approximation and estimation errors, we
then prove a bound of order
$\sqrt{d}\,(m+1)^{\frac{1}{2}+\max\{\gamma,0\}}\,T^{3/4}$ (ignoring log
factors) on the regret against the optimal sequence of actions, where $T$ is
the horizon and $d$ is the dimension of the linear action space. Through a
bandit model selection approach, our results are extended to the case where $m$
and $\gamma$ are unknown. Finally, we complement our theoretical results with
experiments against natural baselines.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 15:02:07 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 07:53:34 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Clerici",
"Giulia",
""
],
[
"Laforgue",
"Pierre",
""
],
[
"Cesa-Bianchi",
"Nicolò",
""
]
] |
new_dataset
| 0.984095 |
2302.08624
|
Kevin Scaria
|
Kevin Scaria and Himanshu Gupta and Siddharth Goyal and Saurabh Arjun
Sawant and Swaroop Mishra and Chitta Baral
|
InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis
|
4 pages, 2 figures, 5 tables, 5 appendix pages
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present InstructABSA, Aspect Based Sentiment Analysis
(ABSA) using the instruction learning paradigm for the ABSA subtasks: Aspect
Term Extraction (ATE), Aspect Term Sentiment Classification (ATSC), and Joint
Task modeling. Our method introduces positive, negative, and neutral examples
to each training sample, and instruction-tunes the model (Tk-Instruct) on the ABSA
subtasks, yielding significant performance improvements. Experimental results
on the Sem Eval 2014, 15, and 16 datasets demonstrate that InstructABSA
outperforms the previous state-of-the-art (SOTA) approaches on the three ABSA
subtasks (ATE, ATSC, and Joint Task) by a significant margin, outperforming 7x
larger models. In particular, InstructABSA surpasses the SOTA on the Rest14 ATE
subtask by 5.69% points, Rest15 ATSC subtask by 9.59% points, and on the Lapt14
Joint Task by 3.37% points. Our results also suggest a strong generalization
ability to new domains across all three subtasks.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 23:29:22 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2023 06:53:41 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Apr 2023 04:44:43 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Apr 2023 05:57:12 GMT"
},
{
"version": "v5",
"created": "Thu, 25 May 2023 02:13:10 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Scaria",
"Kevin",
""
],
[
"Gupta",
"Himanshu",
""
],
[
"Goyal",
"Siddharth",
""
],
[
"Sawant",
"Saurabh Arjun",
""
],
[
"Mishra",
"Swaroop",
""
],
[
"Baral",
"Chitta",
""
]
] |
new_dataset
| 0.993064 |
2303.09165
|
Hui Tang
|
Hui Tang and Kui Jia
|
A New Benchmark: On the Utility of Synthetic Data with Blender for Bare
Supervised Learning and Downstream Domain Adaptation
|
24 pages, 14 figures, 5 tables, accepted by the IEEE/CVF Conference
on Computer Vision and Pattern Recognition (CVPR), 2023. The proposed new
synthetic-to-real benchmark S2RDA is available at
https://pan.baidu.com/s/1fHHaqrEHbUZLXEg9XKpgSg?pwd=w9wa. The project page is
available at https://huitangtang.github.io/On_the_Utility_of_Synthetic_Data/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning in computer vision has achieved great success with the price of
large-scale labeled training data. However, exhaustive data annotation is
impracticable for each task of all domains of interest, due to high labor costs
and unguaranteed labeling accuracy. Besides, the uncontrollable data collection
process produces non-IID training and test data, where undesired duplication
may exist. All these nuisances may hinder the verification of typical theories
and exposure to new findings. To circumvent them, an alternative is to generate
synthetic data via 3D rendering with domain randomization. We in this work push
forward along this line by doing profound and extensive research on bare
supervised learning and downstream domain adaptation. Specifically, under the
well-controlled, IID data setting enabled by 3D rendering, we systematically
verify the typical, important learning insights, e.g., shortcut learning, and
discover new laws of various data regimes and network architectures in
generalization. We further investigate the effect of image formation factors on
generalization, e.g., object scale, material texture, illumination, camera
viewpoint, and background in a 3D scene. Moreover, we use the
simulation-to-reality adaptation as a downstream task for comparing the
transferability between synthetic and real data when used for pre-training,
which demonstrates that synthetic data pre-training is also promising to
improve real test results. Lastly, to promote future research, we develop a new
large-scale synthetic-to-real benchmark for image classification, termed S2RDA,
which provides more significant challenges for transfer from simulation to
reality. The code and datasets are available at
https://github.com/huitangtang/On_the_Utility_of_Synthetic_Data.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 09:03:52 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2023 09:02:33 GMT"
},
{
"version": "v3",
"created": "Mon, 15 May 2023 10:37:28 GMT"
},
{
"version": "v4",
"created": "Thu, 25 May 2023 14:42:33 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Tang",
"Hui",
""
],
[
"Jia",
"Kui",
""
]
] |
new_dataset
| 0.963232 |
2303.17580
|
Yongliang Shen
|
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting
Zhuang
|
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging
Face
| null | null | null | null |
cs.CL cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Solving complicated AI tasks with different domains and modalities is a key
step toward artificial general intelligence. While there are abundant AI models
available for different domains and modalities, they cannot handle complicated
AI tasks. Considering large language models (LLMs) have exhibited exceptional
ability in language understanding, generation, interaction, and reasoning, we
advocate that LLMs could act as a controller to manage existing AI models to
solve complicated AI tasks and language could be a generic interface to empower
this. Based on this philosophy, we present HuggingGPT, a framework that
leverages LLMs (e.g., ChatGPT) to connect various AI models in machine learning
communities (e.g., Hugging Face) to solve AI tasks. Specifically, we use
ChatGPT to conduct task planning when receiving a user request, select models
according to their function descriptions available in Hugging Face, execute
each subtask with the selected AI model, and summarize the response according
to the execution results. By leveraging the strong language capability of
ChatGPT and abundant AI models in Hugging Face, HuggingGPT is able to cover
numerous sophisticated AI tasks in different modalities and domains and achieve
impressive results in language, vision, speech, and other challenging tasks,
which paves a new way towards artificial general intelligence.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:48:28 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Apr 2023 17:24:47 GMT"
},
{
"version": "v3",
"created": "Thu, 25 May 2023 15:50:20 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Shen",
"Yongliang",
""
],
[
"Song",
"Kaitao",
""
],
[
"Tan",
"Xu",
""
],
[
"Li",
"Dongsheng",
""
],
[
"Lu",
"Weiming",
""
],
[
"Zhuang",
"Yueting",
""
]
] |
new_dataset
| 0.993842 |
2305.04693
|
Zita Abreu
|
Zita Abreu, Julia Lieb, Joachim Rosenthal
|
Binary convolutional codes with optimal column distances
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
There exists a large literature of construction of convolutional codes with
maximal or near maximal free distance. Much less is known about constructions
of convolutional codes having optimal or near optimal column distances. In this
paper, a new construction of convolutional codes over the binary field with
optimal column distances is presented.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 13:25:38 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 09:46:03 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Abreu",
"Zita",
""
],
[
"Lieb",
"Julia",
""
],
[
"Rosenthal",
"Joachim",
""
]
] |
new_dataset
| 0.99684 |
2305.05379
|
Kaushik Moudgalya
|
Kaushik Moudgalya, Ankit Ramakrishnan, Vamsikrishna Chemudupati, and
Xing Han Lu
|
TASTY: A Transformer based Approach to Space and Time complexity
| null | null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Code based Language Models (LMs) have shown very promising results in the
field of software engineering with applications such as code refinement, code
completion and generation. However, the task of time and space complexity
classification from code has not been extensively explored due to a lack of
datasets, with prior endeavors being limited to Java. In this project, we aim
to address these gaps by creating a labelled dataset of code snippets spanning
multiple languages (Python and C++ datasets currently, with C, C#, and
JavaScript datasets being released shortly). We find that existing time
complexity calculation libraries and tools only apply to a limited number of
use-cases. The lack of a well-defined rule based system motivates the
application of several recently proposed code-based LMs. We demonstrate the
effectiveness of dead code elimination and increasing the maximum sequence
length of LMs. In addition to time complexity, we propose to use LMs to find
space complexities from code, and to the best of our knowledge, this is the
first attempt to do so. Furthermore, we introduce a novel code comprehension
task, called cross-language transfer, where we fine-tune the LM on one language
and run inference on another. Finally, we visualize the activation of the
attention-fed classification head of our LMs using Non-negative Matrix
Factorization (NMF) to interpret our results.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 03:37:44 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2023 03:08:04 GMT"
},
{
"version": "v3",
"created": "Thu, 25 May 2023 01:57:21 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Moudgalya",
"Kaushik",
""
],
[
"Ramakrishnan",
"Ankit",
""
],
[
"Chemudupati",
"Vamsikrishna",
""
],
[
"Lu",
"Xing Han",
""
]
] |
new_dataset
| 0.998104 |
2305.06586
|
Zhiyu Chen
|
Besnik Fetahu, Sudipta Kar, Zhiyu Chen, Oleg Rokhlenko, Shervin
Malmasi
|
SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition
(MultiCoNER 2)
|
SemEval-2023 (co-located with ACL-2023 in Toronto, Canada)
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the findings of SemEval-2023 Task 2 on Fine-grained Multilingual
Named Entity Recognition (MultiCoNER 2). Divided into 13 tracks, the task
focused on methods to identify complex fine-grained named entities (like
WRITTENWORK, VEHICLE, MUSICALGRP) across 12 languages, in both monolingual and
multilingual scenarios, as well as noisy settings. The task used the MultiCoNER
V2 dataset, composed of 2.2 million instances in Bangla, Chinese, English,
Farsi, French, German, Hindi, Italian, Portuguese, Spanish, Swedish, and
Ukrainian. MultiCoNER 2 was one of the most popular tasks of SemEval-2023. It
attracted 842 submissions from 47 teams, and 34 teams submitted system papers.
Results showed that complex entity types such as media titles and product names
were the most challenging. Methods fusing external knowledge into transformer
models achieved the best performance, and the largest gains were on the
Creative Work and Group classes, which are still challenging even with external
knowledge. Some fine-grained classes proved to be more challenging than others,
such as SCIENTIST, ARTWORK, and PRIVATECORP. We also observed that noisy data
has a significant impact on model performance, with an average drop of 10% on
the noisy subset. The task highlights the need for future research on improving
NER robustness on noisy data containing complex entities.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 05:56:08 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 17:54:06 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Fetahu",
"Besnik",
""
],
[
"Kar",
"Sudipta",
""
],
[
"Chen",
"Zhiyu",
""
],
[
"Rokhlenko",
"Oleg",
""
],
[
"Malmasi",
"Shervin",
""
]
] |
new_dataset
| 0.999654 |
2305.10974
|
Risheng Liu
|
Xingyuan Li and Jinyuan Liu and Yixin Lei and Long Ma and Xin Fan and
Risheng Liu
|
MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in
Adverse Scenes
|
10 pages, 5 figures, 3 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D object detection plays a crucial role in numerous intelligent vision
systems. Detection in the open world inevitably encounters various adverse
scenes, such as dense fog, heavy rain, and low light conditions. Although
existing efforts primarily focus on diversifying network architecture or
training schemes, resulting in significant progress in 3D object detection,
most of these learnable modules fail in adverse scenes, thereby hindering
detection performance. To address this issue, this paper proposes a monocular
3D detection model designed to perceive twin depth in adverse scenes, termed
MonoTDP, which effectively mitigates the degradation of detection performance
in various harsh environments. Specifically, we first introduce an adaptive
learning strategy to aid the model in handling uncontrollable weather
conditions, significantly resisting degradation caused by various degrading
factors. Then, to address the depth/content loss in adverse regions, we propose
a novel twin depth perception module that simultaneously estimates scene and
object depth, enabling the integration of scene-level features and object-level
features. Additionally, we assemble a new adverse 3D object detection dataset
encompassing a wide range of challenging scenes, including rainy, foggy, and
low light weather conditions, with each type of scene containing 7,481 images.
Experimental results demonstrate that our proposed method outperforms current
state-of-the-art approaches by an average of 3.12% in terms of AP_R40 for the
car category across various adverse environments.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 13:42:02 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 06:12:02 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Li",
"Xingyuan",
""
],
[
"Liu",
"Jinyuan",
""
],
[
"Lei",
"Yixin",
""
],
[
"Ma",
"Long",
""
],
[
"Fan",
"Xin",
""
],
[
"Liu",
"Risheng",
""
]
] |
new_dataset
| 0.999672 |
2305.11175
|
Wenhai Wang
|
Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang
Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, Jifeng Dai
|
VisionLLM: Large Language Model is also an Open-Ended Decoder for
Vision-Centric Tasks
|
Technical Report
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have notably accelerated progress towards
artificial general intelligence (AGI), with their impressive zero-shot capacity
for user-tailored tasks, endowing them with immense potential across a range of
applications. However, in the field of computer vision, despite the
availability of numerous powerful vision foundation models (VFMs), they are
still restricted to tasks in a pre-defined form, struggling to match the
open-ended task capabilities of LLMs. In this work, we present an LLM-based
framework for vision-centric tasks, termed VisionLLM. This framework provides a
unified perspective for vision and language tasks by treating images as a
foreign language and aligning vision-centric tasks with language tasks that can
be flexibly defined and managed using language instructions. An LLM-based
decoder can then make appropriate predictions based on these instructions for
open-ended tasks. Extensive experiments show that the proposed VisionLLM can
achieve different levels of task customization through language instructions,
from fine-grained object-level to coarse-grained task-level customization, all
with good results. It's noteworthy that, with a generalist LLM-based framework,
our model can achieve over 60\% mAP on COCO, on par with detection-specific
models. We hope this model can set a new baseline for generalist vision and
language models. The demo shall be released based on
https://github.com/OpenGVLab/InternGPT. The code shall be released at
https://github.com/OpenGVLab/VisionLLM.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 17:59:42 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 15:02:07 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Wang",
"Wenhai",
""
],
[
"Chen",
"Zhe",
""
],
[
"Chen",
"Xiaokang",
""
],
[
"Wu",
"Jiannan",
""
],
[
"Zhu",
"Xizhou",
""
],
[
"Zeng",
"Gang",
""
],
[
"Luo",
"Ping",
""
],
[
"Lu",
"Tong",
""
],
[
"Zhou",
"Jie",
""
],
[
"Qiao",
"Yu",
""
],
[
"Dai",
"Jifeng",
""
]
] |
new_dataset
| 0.969773 |
2305.11996
|
Su-Kyoung Kim
|
Niklas Kueper, Kartik Chari, Judith B\"utef\"ur, Julia Habenicht, Su
Kyoung Kim, Tobias Rossol, Marc Tabie, Frank Kirchner, and Elsa Andrea
Kirchner
|
EEG and EMG dataset for the detection of errors introduced by an active
orthosis device
|
Revised references to our datasets, general corrections to typos, and
latex template format changes, Overall Content unchanged
| null | null | null |
cs.HC cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a dataset containing recordings of the
electroencephalogram (EEG) and the electromyogram (EMG) from eight subjects who
were assisted in moving their right arm by an active orthosis device. The
supported movements were elbow joint movements, i.e., flexion and extension of
the right arm. While the orthosis was actively moving the subject's arm, some
errors were deliberately introduced for a short duration of time. During this
time, the orthosis moved in the opposite direction. In this paper, we explain
the experimental setup and present some behavioral analyses across all
subjects. Additionally, we present an average event-related potential analysis
for one subject to offer insights into the data quality and the EEG activity
caused by the error introduction. The dataset described herein is openly
accessible. The aim of this study was to provide a dataset to the research
community, particularly for the development of new methods in the asynchronous
detection of erroneous events from the EEG. We are especially interested in the
tactile and haptic-mediated recognition of errors, which has not yet been
sufficiently investigated in the literature. We hope that the detailed
description of the orthosis and the experiment will enable its reproduction and
facilitate a systematic investigation of the influencing factors in the
detection of erroneous behavior of assistive systems by a large community.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 20:42:28 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 10:33:50 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Kueper",
"Niklas",
""
],
[
"Chari",
"Kartik",
""
],
[
"Bütefür",
"Judith",
""
],
[
"Habenicht",
"Julia",
""
],
[
"Kim",
"Su Kyoung",
""
],
[
"Rossol",
"Tobias",
""
],
[
"Tabie",
"Marc",
""
],
[
"Kirchner",
"Frank",
""
],
[
"Kirchner",
"Elsa Andrea",
""
]
] |
new_dataset
| 0.999652 |
2305.13137
|
Kari Ali Noriy
|
Kari Ali Noriy, Xiaosong Yang, Jian Jun Zhang
|
EMNS /Imz/ Corpus: An emotive single-speaker dataset for narrative
storytelling in games, television and graphic novels
|
Dataset download link: https://openslr.elda.org/136/
| null | null | null |
cs.CL cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing adoption of text-to-speech technologies has led to a growing
demand for natural and emotive voices that adapt to a conversation's context
and emotional tone. The Emotive Narrative Storytelling (EMNS) corpus is a
unique speech dataset created to enhance conversations' expressiveness and
emotive quality in interactive narrative-driven systems. The corpus consists of
a 2.3-hour recording featuring a female speaker delivering labelled utterances.
It encompasses eight acted emotional states, evenly distributed with a variance
of 0.68%, along with expressiveness levels and natural language descriptions
with word emphasis labels. The evaluation of audio samples from different
datasets revealed that the EMNS corpus achieved the highest average scores in
accurately conveying emotions and demonstrating expressiveness. It outperformed
other datasets in conveying shared emotions and achieved comparable levels of
genuineness. A classification task confirmed the accurate representation of
intended emotions in the corpus, with participants recognising the recordings
as genuine and expressive. Additionally, the availability of the dataset
collection tool under the Apache 2.0 License simplifies remote speech data
collection for researchers.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 15:32:32 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 16:17:24 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Noriy",
"Kari Ali",
""
],
[
"Yang",
"Xiaosong",
""
],
[
"Zhang",
"Jian Jun",
""
]
] |
new_dataset
| 0.999749 |
2305.14635
|
Yan Zhou
|
Yan Zhou, Qingkai Fang, Yang Feng
|
CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation
|
ACL 2023 main conference
| null | null | null |
cs.CL cs.AI cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
End-to-end speech translation (ST) is the task of translating speech signals
in the source language into text in the target language. As a cross-modal task,
end-to-end ST is difficult to train with limited data. Existing methods often
try to transfer knowledge from machine translation (MT), but their performances
are restricted by the modality gap between speech and text. In this paper, we
propose Cross-modal Mixup via Optimal Transport (CMOT) to overcome the modality
gap. We find the alignment between speech and text sequences via optimal
transport and then mix up the sequences from different modalities at a token
level using the alignment. Experiments on the MuST-C ST benchmark demonstrate
that CMOT achieves an average BLEU of 30.0 in 8 translation directions,
outperforming previous methods. Further analysis shows CMOT can adaptively find
the alignment between modalities, which helps alleviate the modality gap
between speech and text. Code is publicly available at
https://github.com/ictnlp/CMOT.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 02:13:48 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 08:55:41 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Zhou",
"Yan",
""
],
[
"Fang",
"Qingkai",
""
],
[
"Feng",
"Yang",
""
]
] |
new_dataset
| 0.995936 |
2305.15570
|
Susheela Sharma
|
Susheela Sharma, Ji H. Park, Jordan P. Amadio, Mohsen Khadem, and
Farshid Alambeigi
|
A Novel Concentric Tube Steerable Drilling Robot for Minimally Invasive
Treatment of Spinal Tumors Using Cavity and U-shape Drilling Techniques
|
7 pages, 8 figures, Accepted for Publication at the 2023
International Conference on Robotics and Automation
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we present the design, fabrication, and evaluation of a novel
flexible, yet structurally strong, Concentric Tube Steerable Drilling Robot
(CT-SDR) to improve minimally invasive treatment of spinal tumors. Inspired by
concentric tube robots, the proposed two degree-of-freedom (DoF) CT-SDR, for
the first time, not only allows a surgeon to intuitively and quickly drill
smooth planar and out-of-plane J- and U-shape curved trajectories, but it
also enables drilling cavities through hard tissue in a minimally invasive
fashion. We successfully evaluated the performance and efficacy of the proposed
CT-SDR in drilling various planar and out-of-plane J-shape branch, U-shape, and
cavity drilling scenarios on simulated bone materials.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 21:05:29 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Sharma",
"Susheela",
""
],
[
"Park",
"Ji H.",
""
],
[
"Amadio",
"Jordan P.",
""
],
[
"Khadem",
"Mohsen",
""
],
[
"Alambeigi",
"Farshid",
""
]
] |
new_dataset
| 0.998938 |
2305.15589
|
Levent Guvenc
|
Murat Gozu, Mumin Tolga Emirler, Ismail Meric Can Uygan, Tevfik Ali
Boke, Levent Guvenc, Bilin Aksun-Guvenc
|
Automated Driving Architecture and Operation of a Light Commercial
Vehicle
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper is on the automated driving architecture and operation of a light
commercial vehicle. Simple longitudinal and lateral dynamic models of the
vehicle and a more detailed CarSim model are developed and used in simulations
and controller design and evaluation. Experimental validation is used to make
sure that the models used represent the actual response of the vehicle as
closely as possible. The vehicle is made drive-by-wire by interfacing with the
existing throttle-by-wire, by adding an active vacuum booster for brake-by-wire
and by adding a steering actuator for steer-by-wire operation. Vehicle
localization is achieved by using a GPS sensor integrated with a six-axis IMU
with a built-in INS algorithm and a digital compass for heading information.
Front looking radar, lidar and camera are used for environmental sensing.
Communication with the road infrastructure and other vehicles is made possible
by a vehicle to vehicle communication modem. A dedicated computer under real
time Linux is used to collect, process and distribute sensor information. A
dSPACE MicroAutoBox is used for drive-by-wire controls. CACC based longitudinal
control and path tracking of a map of GPS waypoints are used to present the
operation of this automated driving vehicle.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 21:56:18 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Gozu",
"Murat",
""
],
[
"Emirler",
"Mumin Tolga",
""
],
[
"Uygan",
"Ismail Meric Can",
""
],
[
"Boke",
"Tevfik Ali",
""
],
[
"Guvenc",
"Levent",
""
],
[
"Aksun-Guvenc",
"Bilin",
""
]
] |
new_dataset
| 0.978338 |
2305.15627
|
Lijun Ji
|
Shuhui Yu and Lijun Ji
|
New constructions of cyclic subspace codes
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A subspace of a finite field is called a Sidon space if the product of any
two of its nonzero elements is unique up to a scalar multiplier from the base
field. Sidon spaces, introduced by Roth et al. (IEEE Trans Inf Theory 64(6):
4412-4422, 2018), have a close connection with optimal full-length orbit codes.
In this paper, we present two constructions of Sidon spaces. The union of Sidon
spaces from the first construction yields cyclic subspace codes in
$\mathcal{G}_{q}(n,k)$ with minimum distance $2k-2$ and size $r(\lceil
\frac{n}{2rk} \rceil
-1)((q^{k}-1)^{r}(q^{n}-1)+\frac{(q^{k}-1)^{r-1}(q^{n}-1)}{q-1})$, where $k|n$,
$r\geq 2$ and $n\geq (2r+1)k$, $\mathcal{G}_{q}(n,k)$ is the set of all
$k$-dimensional subspaces of $\mathbb{F}_{q}^{n}$. The union of Sidon spaces
from the second construction gives cyclic subspace codes in
$\mathcal{G}_{q}(n,k)$ with minimum distance $2k-2$ and size $\lfloor
\frac{(r-1)(q^{k}-2)(q^{k}-1)^{r-1}(q^{n}-1)}{2}\rfloor$ where $n= 2rk$ and
$r\geq 2$. Our cyclic subspace codes have larger sizes than those in the
literature, in particular, in the case of $n=4k$, the size of our resulting
code is within a factor of $\frac{1}{2}+o_{k}(1)$ of the sphere-packing bound
as $k$ goes to infinity.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 00:32:35 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Yu",
"Shuhui",
""
],
[
"Ji",
"Lijun",
""
]
] |
new_dataset
| 0.982706 |
2305.15667
|
Ruixuan Liu
|
Ruixuan Liu, Yifan Sun, Changliu Liu
|
Robotic LEGO Assembly and Disassembly from Human Demonstration
| null | null | null | null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies automatic prototyping using LEGO. To satisfy individual
needs and self-sustainability, this paper presents a framework that learns the
assembly and disassembly sequences from human demonstrations. In addition, a
digital twin is developed to verify the correctness of robot learning before
deploying to the real world. Moreover, an end-effector tool (EOT) is designed,
which allows large industrial robots to easily manipulate LEGO bricks. The
proposed system is deployed to a FANUC LR-mate 200id/7L robot. Experiments
demonstrate that the proposed system can effectively learn the assembly and
disassembly tasks from human demonstrations, and the learned tasks are realized
by the FANUC robot.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 02:39:14 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Liu",
"Ruixuan",
""
],
[
"Sun",
"Yifan",
""
],
[
"Liu",
"Changliu",
""
]
] |
new_dataset
| 0.99964 |
2305.15727
|
Zhiwen Fan
|
Zhiwen Fan, Panwang Pan, Peihao Wang, Yifan Jiang, Dejia Xu, Hanwen
Jiang, Zhangyang Wang
|
POPE: 6-DoF Promptable Pose Estimation of Any Object, in Any Scene, with
One Reference
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the significant progress in six degrees-of-freedom (6DoF) object pose
estimation, existing methods have limited applicability in real-world scenarios
involving embodied agents and downstream 3D vision tasks. These limitations
mainly come from the necessity of 3D models, closed-category detection, and a
large number of densely annotated support views. To mitigate this issue, we
propose a general paradigm for object pose estimation, called Promptable Object
Pose Estimation (POPE). The proposed approach POPE enables zero-shot 6DoF
object pose estimation for any target object in any scene, while only a single
reference is adopted as the support view. To achieve this, POPE leverages the
power of a pre-trained large-scale 2D foundation model and employs a framework
with hierarchical feature representation and 3D geometry principles. Moreover,
it estimates the relative camera pose between object prompts and the target
object in new views, enabling both two-view and multi-view 6DoF pose estimation
tasks. Comprehensive experimental results demonstrate that POPE exhibits
unrivaled robust performance in zero-shot settings, by achieving a significant
reduction in the averaged Median Pose Error by 52.38% and 50.47% on the LINEMOD
and OnePose datasets, respectively. We also conduct more challenging tests on
casually captured images (see Figure 1), which further demonstrate the
robustness of POPE. The project page can be found at
https://paulpanwang.github.io/POPE/.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 05:19:17 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Fan",
"Zhiwen",
""
],
[
"Pan",
"Panwang",
""
],
[
"Wang",
"Peihao",
""
],
[
"Jiang",
"Yifan",
""
],
[
"Xu",
"Dejia",
""
],
[
"Jiang",
"Hanwen",
""
],
[
"Wang",
"Zhangyang",
""
]
] |
new_dataset
| 0.971633 |
2305.15728
|
Jiancheng An
|
Jiancheng An, Chau Yuen, Chongwen Huang, Merouane Debbah, H. Vincent
Poor, Lajos Hanzo
|
A Tutorial on Holographic MIMO Communications--Part I: Channel Modeling
and Channel Estimation
|
15 pages, 3 figures, accepted by IEEE CL
| null |
10.1109/LCOMM.2023.3278683
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
By integrating a nearly infinite number of reconfigurable elements into a
finite space, a spatially continuous array aperture is formed for holographic
multiple-input multiple-output (HMIMO) communications. This three-part tutorial
aims to provide an overview of the latest advances in HMIMO communications.
As Part I of the tutorial, this letter first introduces the fundamental concept
of HMIMO and reviews the recent progress in HMIMO channel modeling, followed by
a suite of efficient channel estimation approaches. Finally, numerical results
are provided to demonstrate the statistical consistency of the newly advocated
HMIMO channel model with conventional ones and to evaluate the performance
of the channel estimators. Parts II and III of the tutorial will delve into the
performance analysis and holographic beamforming, and detail the interplay of
HMIMO with emerging technologies.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 05:20:06 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"An",
"Jiancheng",
""
],
[
"Yuen",
"Chau",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Debbah",
"Merouane",
""
],
[
"Poor",
"H. Vincent",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.954674 |
2305.15740
|
Gwantae Kim
|
Gwantae Kim, Seonghyeok Noh, Insung Ham and Hanseok Ko
|
MPE4G: Multimodal Pretrained Encoder for Co-Speech Gesture Generation
|
5 pages, 3 figures
|
ICASSP 2023
| null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
When virtual agents interact with humans, gestures are crucial to delivering
their intentions with speech. Previous multimodal co-speech gesture generation
models required encoded features of all modalities to generate gestures. If
some input modalities are removed or contain noise, the model may not generate
the gestures properly. To acquire robust and generalized encodings, we propose
a novel framework with a multimodal pre-trained encoder for co-speech gesture
generation. In the proposed method, the multi-head-attention-based encoder is
trained with self-supervised learning to contain the information on each
modality. Moreover, we collect full-body gestures that consist of 3D joint
rotations to improve visualization and apply gestures to the extensible body
model. Through a series of experiments and human evaluation, the proposed
method renders realistic co-speech gestures not only when all input modalities
are given but also when the input modalities are missing or noisy.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 05:42:58 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Kim",
"Gwantae",
""
],
[
"Noh",
"Seonghyeok",
""
],
[
"Ham",
"Insung",
""
],
[
"Ko",
"Hanseok",
""
]
] |
new_dataset
| 0.999307 |
2305.15753
|
Ruidong Chen
|
Weizhi Nie, Ruidong Chen, Weijie Wang, Bruno Lepri, Nicu Sebe
|
T2TD: Text-3D Generation Model based on Prior Knowledge Guidance
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, 3D models have been utilized in many applications, such as
autonomous driving, 3D reconstruction, VR, and AR. However, the availability of
3D model data does not meet practical demands. Thus, generating high-quality 3D
models efficiently from textual descriptions is a promising but challenging way
to solve this problem. In this paper, inspired by the ability of human beings
to complement visual information details from ambiguous descriptions based on
their own experience, we propose a novel text-3D generation model (T2TD), which
introduces the related shapes or textual information as the prior knowledge to
improve the performance of the 3D generation model. In this process, we first
introduce the text-3D knowledge graph to save the relationship between 3D
models and textual semantic information, which can provide the related shapes
to guide the target 3D model generation. Second, we integrate an effective
causal inference model to select useful feature information from these related
shapes, which removes the unrelated shape information and only maintains
feature information that is strongly relevant to the textual description.
Meanwhile, to effectively integrate multi-modal prior knowledge into textual
information, we adopt a novel multi-layer transformer structure to
progressively fuse related shape and textual information, which can effectively
compensate for the lack of structural information in the text and enhance the
final performance of the 3D generation model. The final experimental results
demonstrate that our approach significantly improves 3D model generation
quality and outperforms the SOTA methods on the text2shape datasets.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 06:05:52 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Nie",
"Weizhi",
""
],
[
"Chen",
"Ruidong",
""
],
[
"Wang",
"Weijie",
""
],
[
"Lepri",
"Bruno",
""
],
[
"Sebe",
"Nicu",
""
]
] |
new_dataset
| 0.995275 |
2305.15760
|
Tahir Javed
|
Tahir Javed, Sakshi Joshi, Vignesh Nagarajan, Sai Sundaresan, Janki
Nawale, Abhigyan Raman, Kaushal Bhogale, Pratyush Kumar, Mitesh M. Khapra
|
Svarah: Evaluating English ASR Systems on Indian Accents
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
India is the second largest English-speaking country in the world with a
speaker base of roughly 130 million. Thus, it is imperative that automatic
speech recognition (ASR) systems for English should be evaluated on Indian
accents. Unfortunately, Indian speakers find a very poor representation in
existing English ASR benchmarks such as LibriSpeech, Switchboard, Speech Accent
Archive, etc. In this work, we address this gap by creating Svarah, a benchmark
that contains 9.6 hours of transcribed English audio from 117 speakers across
65 geographic locations throughout India, resulting in a diverse range of
accents. Svarah comprises both read speech and spontaneous conversational data,
covering various domains, such as history, culture, tourism, etc., ensuring a
diverse vocabulary. We evaluate 6 open source ASR models and 2 commercial ASR
systems on Svarah and show that there is clear scope for improvement on Indian
accents. Svarah as well as all our code will be publicly available.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 06:20:29 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Javed",
"Tahir",
""
],
[
"Joshi",
"Sakshi",
""
],
[
"Nagarajan",
"Vignesh",
""
],
[
"Sundaresan",
"Sai",
""
],
[
"Nawale",
"Janki",
""
],
[
"Raman",
"Abhigyan",
""
],
[
"Bhogale",
"Kaushal",
""
],
[
"Kumar",
"Pratyush",
""
],
[
"Khapra",
"Mitesh M.",
""
]
] |
new_dataset
| 0.998289 |
2305.15765
|
Wenhao Cheng
|
Wenhao Cheng, Junbo Yin, Wei Li, Ruigang Yang and Jianbing Shen
|
Language-Guided 3D Object Detection in Point Cloud for Autonomous
Driving
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the problem of 3D referring expression comprehension
(REC) in the autonomous driving scenario, which aims to ground a natural
language expression to the targeted region in LiDAR point clouds. Previous
approaches for REC
usually focus on the 2D or 3D-indoor domain, which is not suitable for
accurately predicting the location of the queried 3D region in an autonomous
driving scene. In addition, the upper-bound limitation and the heavy
computation cost motivate us to explore a better solution. In this work, we
propose a new multi-modal visual grounding task, termed LiDAR Grounding. Then
we devise a Multi-modal Single Shot Grounding (MSSG) approach with an effective
token fusion strategy. It jointly learns the LiDAR-based object detector with
the language features and predicts the targeted region directly from the
detector without any post-processing. Moreover, the image feature can be
flexibly integrated into our approach to provide rich texture and color
information. The cross-modal learning enforces the detector to concentrate on
important regions in the point cloud by considering the informative language
expressions, thus leading to much better accuracy and efficiency. Extensive
experiments on the Talk2Car dataset demonstrate the effectiveness of the
proposed methods. Our work offers a deeper insight into the LiDAR-based
grounding task and we expect it presents a promising direction for the
autonomous driving community.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 06:22:10 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Cheng",
"Wenhao",
""
],
[
"Yin",
"Junbo",
""
],
[
"Li",
"Wei",
""
],
[
"Yang",
"Ruigang",
""
],
[
"Shen",
"Jianbing",
""
]
] |
new_dataset
| 0.996194 |
2305.15780
|
Gilles Dowek
|
Gilles Dowek (LOGICAL)
|
What is a Theory ?
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deduction modulo is a way to express a theory using computation rules instead
of axioms. We present in this paper an extension of deduction modulo, called
Polarized deduction modulo, where some rules can only be used at positive
occurrences, while others can only be used at negative ones. We show that all
theories in propositional calculus can be expressed in this framework and that
cuts can always be eliminated with such theories.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 06:48:52 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Dowek",
"Gilles",
"",
"LOGICAL"
]
] |
new_dataset
| 0.998521 |
2305.15801
|
Aristotelis Lazaridis
|
Vasileios Moschopoulos, Pantelis Kyriakidis, Aristotelis Lazaridis,
Ioannis Vlahavas
|
Lucy-SKG: Learning to Play Rocket League Efficiently Using Deep
Reinforcement Learning
|
24 pages, 11 figures
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A successful tactic that is followed by the scientific community for
advancing AI is to treat games as problems, which has been proven to lead to
various breakthroughs. We adapt this strategy in order to study Rocket League,
a widely popular but rather under-explored 3D multiplayer video game with a
distinct physics engine and complex dynamics that pose a significant challenge
in developing efficient and high-performance game-playing agents. In this
paper, we present Lucy-SKG, a Reinforcement Learning-based model that learned
how to play Rocket League in a sample-efficient manner, outperforming by a
notable margin the two highest-ranking bots in this game, namely Necto (2022
bot champion) and its successor Nexto, thus becoming a state-of-the-art agent.
Our contributions include: a) the development of a reward analysis and
visualization library, b) novel parameterizable reward shape functions that
capture the utility of complex reward types via our proposed Kinesthetic Reward
Combination (KRC) technique, and c) design of auxiliary neural architectures
for training on reward prediction and state representation tasks in an
on-policy fashion for enhanced efficiency in learning speed and performance. By
performing thorough ablation studies for each component of Lucy-SKG, we showed
their independent effectiveness in overall performance. In doing so, we
demonstrate the prospects and challenges of using sample-efficient
Reinforcement Learning techniques for controlling complex dynamical systems
under competitive team-based multiplayer conditions.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 07:33:17 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Moschopoulos",
"Vasileios",
""
],
[
"Kyriakidis",
"Pantelis",
""
],
[
"Lazaridis",
"Aristotelis",
""
],
[
"Vlahavas",
"Ioannis",
""
]
] |
new_dataset
| 0.992566 |
2305.15809
|
Heiko Koziolek
|
Heiko Koziolek, Sten Gruener, Virendra Ashiwal
|
ChatGPT for PLC/DCS Control Logic Generation
|
8 pages, 6 figures
| null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) providing generative AI have become popular to
support software engineers in creating, summarizing, optimizing, and
documenting source code. It is still unknown how LLMs can support control
engineers using typical control programming languages in programming tasks.
Researchers have explored GitHub CoPilot or DeepMind AlphaCode for source code
generation but did not yet tackle control logic programming. The contribution
of this paper is an exploratory study, for which we created 100 LLM prompts in
10 representative categories to analyze control logic generation for PLCs
and DCS from natural language. We tested the prompts by generating answers with
ChatGPT using the GPT-4 LLM. It generated syntactically correct IEC 61131-3
Structured Text code in many cases and demonstrated useful reasoning skills
that could boost control engineer productivity. Our prompt collection is the
basis for a more formal LLM benchmark to test and compare such models for
control logic generation.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 07:46:53 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Koziolek",
"Heiko",
""
],
[
"Gruener",
"Sten",
""
],
[
"Ashiwal",
"Virendra",
""
]
] |
new_dataset
| 0.993742 |
2305.15858
|
Marwan Dhuheir
|
Marwan Dhuheir, Aiman Erbad, Sinan Sabeeh
|
LLHR: Low Latency and High Reliability CNN Distributed Inference for
Resource-Constrained UAV Swarms
|
arXiv admin note: substantial text overlap with arXiv:2212.11201
|
In 2023 IEEE Wireless Communications and Networking Conference
(WCNC) 2023 Mar 26 (pp. 1-6). IEEE
|
10.1109/WCNC55385.2023.10118908
| null |
cs.DC cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Recently, Unmanned Aerial Vehicles (UAVs) have shown impressive performance
in many critical applications, such as surveillance, search and rescue
operations, environmental monitoring, etc. In many of these applications, the
UAVs capture images as well as other sensory data and then send the data
processing requests to remote servers. Nevertheless, this approach is not
always practical in real-time-based applications due to unstable connections,
limited bandwidth, limited energy, and strict end-to-end latency. One promising
solution is to divide the inference requests into subtasks that can be
distributed among UAVs in a swarm based on the available resources. Moreover,
these tasks create intermediate results that need to be transmitted reliably as
the swarm moves to cover the area. Our system model deals with real-time
requests, aiming to find the optimal transmission power that guarantees higher
reliability and low latency. We formulate the Low Latency and High-Reliability
(LLHR) distributed inference as an optimization problem, and due to the
complexity of the problem, we divide it into three subproblems. In the first
subproblem, we find the optimal transmit power of the connected UAVs with
guaranteed transmission reliability. The second subproblem aims to find the
optimal positions of the UAVs in the grid, while the last subproblem finds the
optimal placement of the CNN layers in the available UAVs. We conduct extensive
simulations and compare our work to two baseline models demonstrating that our
model outperforms the competing models.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 08:47:16 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Dhuheir",
"Marwan",
""
],
[
"Erbad",
"Aiman",
""
],
[
"Sabeeh",
"Sinan",
""
]
] |
new_dataset
| 0.997838 |
2305.15993
|
Kubilay Can Demir
|
Kubilay Can Demir, Tobias Weise, Matthias May, Axel Schmid, Andreas
Maier, Seung Hee Yang
|
PoCaPNet: A Novel Approach for Surgical Phase Recognition Using Speech
and X-Ray Images
|
5 Pages, 3 figures, INTERSPEECH 2023
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Surgical phase recognition is a challenging and necessary task for the
development of context-aware intelligent systems that can support medical
personnel for better patient care and effective operating room management. In
this paper, we present a surgical phase recognition framework that employs a
Multi-Stage Temporal Convolution Network using speech and X-Ray images for the
first time. We evaluate our proposed approach using our dataset that comprises
31 port-catheter placement operations and report 82.56 \% frame-wise accuracy
with eight surgical phases. Additionally, we investigate the design choices in
the temporal model and solutions for the class-imbalance problem. Our
experiments demonstrate that speech and X-Ray data can be effectively utilized
for surgical phase recognition, providing a foundation for the development of
speech assistants in operating rooms of the future.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 12:31:58 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Demir",
"Kubilay Can",
""
],
[
"Weise",
"Tobias",
""
],
[
"May",
"Matthias",
""
],
[
"Schmid",
"Axel",
""
],
[
"Maier",
"Andreas",
""
],
[
"Yang",
"Seung Hee",
""
]
] |
new_dataset
| 0.998855 |
2305.16008
|
Phuoc Nguyen
|
Phuoc Nguyen Thuan, Jorge Pe\~na Queralta, Tomi Westerlund
|
Vision-based Safe Autonomous UAV Docking with Panoramic Sensors
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The remarkable growth of unmanned aerial vehicles (UAVs) has also sparked
concerns about safety measures during their missions. To advance towards safer
autonomous aerial robots, this work presents a vision-based solution to
ensuring safe autonomous UAV landings with minimal infrastructure. During
docking maneuvers, UAVs pose a hazard to people in the vicinity. In this paper,
we propose the use of a single omnidirectional panoramic camera pointing
upwards from a landing pad to detect and estimate the position of people around
the landing area. The images are processed in real-time in an embedded
computer, which communicates with the onboard computer of approaching UAVs to
transition between landing, hovering or emergency landing states. While
landing, the ground camera also aids in finding an optimal position, which can
be required in case of low-battery or when hovering is no longer possible. We
use a YOLOv7-based object detection model and an XGBoost model for localizing
nearby people, and the open-source ROS and PX4 frameworks for communication,
interfacing, and control of the UAV. We present both simulation and real-world
indoor experimental results to show the efficiency of our methods.
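As an illustration of the ground-to-UAV state logic described above, the following
is a minimal sketch; the thresholds, state names, and battery rule are assumptions
made for the example, not the authors' implementation.

```python
# Minimal sketch (assumed thresholds, not the authors' code) of the kind of
# state logic described above: people detected by the panoramic camera are
# localized, and the approaching UAV is told to land, hover, or perform an
# emergency landing depending on how close they are and on battery level.
from dataclasses import dataclass
from enum import Enum, auto

class DockingState(Enum):
    LAND = auto()
    HOVER = auto()
    EMERGENCY_LAND = auto()

@dataclass
class Detection:
    distance_m: float          # estimated distance of a person from the pad centre

SAFE_RADIUS_M = 5.0            # assumption: no people inside this radius -> land
CRITICAL_BATTERY = 0.10        # assumption: below this, hovering is not possible

def next_state(detections: list[Detection], battery_level: float) -> DockingState:
    """Pick the commanded state from people positions and battery level."""
    closest = min((d.distance_m for d in detections), default=float("inf"))
    if battery_level < CRITICAL_BATTERY:
        # Low battery: land at the best available spot regardless of bystanders.
        return DockingState.EMERGENCY_LAND
    if closest < SAFE_RADIUS_M:
        # Someone is near the pad: keep hovering until the area clears.
        return DockingState.HOVER
    return DockingState.LAND

print(next_state([Detection(3.2), Detection(8.0)], battery_level=0.6))  # HOVER
```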
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 12:48:55 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Thuan",
"Phuoc Nguyen",
""
],
[
"Queralta",
"Jorge Peña",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.989702 |
2305.16023
|
Yue Zhang
|
Yue Zhang, Bo Zhang, Haochen Jiang, Zhenghua Li, Chen Li, Fei Huang,
Min Zhang
|
NaSGEC: a Multi-Domain Chinese Grammatical Error Correction Dataset from
Native Speaker Texts
|
Accepted by ACL 2023 (Findings, long paper)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce NaSGEC, a new dataset to facilitate research on Chinese
grammatical error correction (CGEC) for native speaker texts from multiple
domains. Previous CGEC research primarily focuses on correcting texts from a
single domain, especially learner essays. To broaden the target domain, we
annotate multiple references for 12,500 sentences from three native domains,
i.e., social media, scientific writing, and examination. We provide solid
benchmark results for NaSGEC by employing cutting-edge CGEC models and
different training data. We further perform detailed analyses of the
connections and gaps between our domains from both empirical and statistical
views. We hope this work can inspire future studies on an important but
under-explored direction--cross-domain GEC.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 13:05:52 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Zhang",
"Yue",
""
],
[
"Zhang",
"Bo",
""
],
[
"Jiang",
"Haochen",
""
],
[
"Li",
"Zhenghua",
""
],
[
"Li",
"Chen",
""
],
[
"Huang",
"Fei",
""
],
[
"Zhang",
"Min",
""
]
] |
new_dataset
| 0.999863 |
2305.16042
|
Patrick Ebel
|
Patrick Ebel, Christoph Lingenfelder, Andreas Vogelsang
|
Multitasking while Driving: How Drivers Self-Regulate their Interaction
with In-Vehicle Touchscreens in Automated Driving
|
Accepted for publication in the "International Journal of
Human-Computer Interaction". arXiv admin note: substantial text overlap with
arXiv:2207.04284
| null |
10.1080/10447318.2023.2215634
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Driver assistance systems are designed to increase comfort and safety by
automating parts of the driving task. At the same time, modern in-vehicle
information systems with large touchscreens provide the driver with numerous
options for entertainment, information, or communication, and are a potential
source of distraction. However, little is known about how driving automation
affects how drivers interact with the center stack touchscreen, i.e., how
drivers self-regulate their behavior in response to different levels of driving
automation. To investigate this, we apply multilevel models to a real-world
driving dataset consisting of 31,378 sequences. Our results show significant
differences in drivers' interaction and glance behavior in response to
different levels of driving automation, vehicle speed, and road curvature.
During automated driving, drivers perform more interactions per touchscreen
sequence and increase the time spent looking at the center stack touchscreen.
Specifically, at higher levels of driving automation (level 2), the mean glance
duration toward the center stack touchscreen increases by 36% and the mean
number of interactions per sequence increases by 17% compared to manual
driving. Furthermore, partially automated driving has a strong impact on the
use of more complex UI elements (e.g., maps) and touch gestures (e.g.,
multitouch). We also show that the effect of driving automation on drivers'
self-regulation is greater than that of vehicle speed and road curvature. The
derived knowledge can inform the design and evaluation of touch-based
infotainment systems and the development of context-aware driver monitoring
systems.
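As an illustration of the kind of multilevel analysis described above, the following
is a minimal sketch using statsmodels; the column names (glance_duration,
automation_level, speed, curvature, driver_id) and the file name are hypothetical
placeholders, not the authors' actual variables or data.

```python
# Minimal sketch: fit a multilevel (mixed-effects) model with a random intercept
# per driver and fixed effects for automation level, speed, and road curvature.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("touchscreen_sequences.csv")  # hypothetical file

model = smf.mixedlm(
    "glance_duration ~ C(automation_level) + speed + curvature",
    data=df,
    groups=df["driver_id"],
)
result = model.fit()
print(result.summary())
```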
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 13:19:16 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Ebel",
"Patrick",
""
],
[
"Lingenfelder",
"Christoph",
""
],
[
"Vogelsang",
"Andreas",
""
]
] |
new_dataset
| 0.966921 |
2305.16075
|
Gabriele Nava
|
Gabriele Nava and Daniele Pucci
|
Failure Detection and Fault Tolerant Control of a Jet-Powered Flying
Humanoid Robot
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Failure detection and fault tolerant control are fundamental safety features
of any aerial vehicle. With the emergence of complex, multi-body flying systems
such as jet-powered humanoid robots, it becomes of crucial importance to design
fault detection and control strategies for these systems, too. In this paper we
propose a fault detection and control framework for the flying humanoid robot
iRonCub in case of loss of one turbine. The framework is composed of a failure
detector based on turbines rotational speed, a momentum-based flight control
for fault response, and an offline reference generator that produces
far-from-singularities configurations and accounts for self and jet exhausts
collision avoidance. Simulation results with Gazebo and MATLAB prove the
effectiveness of the proposed control strategy.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 14:03:10 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Nava",
"Gabriele",
""
],
[
"Pucci",
"Daniele",
""
]
] |
new_dataset
| 0.997426 |
2305.16107
|
Long Zhou
|
Tianrui Wang, Long Zhou, Ziqiang Zhang, Yu Wu, Shujie Liu, Yashesh
Gaur, Zhuo Chen, Jinyu Li, Furu Wei
|
VioLA: Unified Codec Language Models for Speech Recognition, Synthesis,
and Translation
|
Working in progress
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research shows a big convergence in model architecture, training
objectives, and inference methods across various tasks for different
modalities. In this paper, we propose VioLA, a single auto-regressive
Transformer decoder-only network that unifies various cross-modal tasks
involving speech and text, such as speech-to-text, text-to-text,
text-to-speech, and speech-to-speech tasks, as a conditional codec language
model task via multi-task learning framework. To accomplish this, we first
convert all the speech utterances to discrete tokens (similar to the textual
data) using an offline neural codec encoder. In such a way, all these tasks are
converted to token-based sequence conversion problems, which can be naturally
handled with one conditional language model. We further integrate task IDs
(TID) and language IDs (LID) into the proposed model to enhance the modeling
capability of handling different languages and tasks. Experimental results
demonstrate that the proposed VioLA model can support both single-modal and
cross-modal tasks well, and the decoder-only model achieves a comparable and
even better performance than the strong baselines.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 14:39:47 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Wang",
"Tianrui",
""
],
[
"Zhou",
"Long",
""
],
[
"Zhang",
"Ziqiang",
""
],
[
"Wu",
"Yu",
""
],
[
"Liu",
"Shujie",
""
],
[
"Gaur",
"Yashesh",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Li",
"Jinyu",
""
],
[
"Wei",
"Furu",
""
]
] |
new_dataset
| 0.997575 |
2305.16158
|
Sohag Kumar Saha
|
S M Mostaq Hossain, Sohag Kumar Saha, Shampa Banik, Trapa Banik
|
A New Era of Mobility: Exploring Digital Twin Applications in Autonomous
Vehicular Systems
|
7 pages, conference paper, accepted for publication in IEEE AIIoT
2023 conference
|
IEEE AIIoT 2023 conference
| null |
paper #1570907205
|
cs.NI cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Digital Twins (DTs) are virtual representations of physical objects or
processes that can collect information from the real environment to represent,
validate, and replicate the physical twin's present and future behavior. The
DTs are becoming increasingly prevalent in a variety of fields, including
manufacturing, automobiles, medicine, smart cities, and other related areas. In
this paper, we present a systematic review of DTs in the autonomous vehicular
industry. We discuss DTs and their essential characteristics, emphasizing
accurate data collection, real-time analytics, and efficient simulation
capabilities, while highlighting their role in enhancing performance and
reliability. Next, we explore the technical challenges and central technologies
of DTs, and we present a comparative analysis of different methodologies that
have been used for autonomous vehicles in smart cities. Finally, we address the
application challenges and limitations of DTs in the autonomous vehicular
industry.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 06:39:57 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Hossain",
"S M Mostaq",
""
],
[
"Saha",
"Sohag Kumar",
""
],
[
"Banik",
"Shampa",
""
],
[
"Banik",
"Trapa",
""
]
] |
new_dataset
| 0.970642 |
2305.16163
|
Xinting Liao
|
Xinting Liao, Weiming Liu, Xiaolin Zheng, Binhui Yao, and Chaochao
Chen
|
PPGenCDR: A Stable and Robust Framework for Privacy-Preserving
Cross-Domain Recommendation
|
To appear in AAAI 2023
| null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Privacy-preserving cross-domain recommendation (PPCDR) refers to preserving
the privacy of users when transferring the knowledge from source domain to
target domain for better performance, which is vital for the long-term
development of recommender systems. Existing work on cross-domain
recommendation (CDR) reaches advanced and satisfying recommendation
performance, but mostly neglects preserving privacy. To fill this gap, we
propose a privacy-preserving generative cross-domain recommendation (PPGenCDR)
framework for PPCDR. PPGenCDR includes two main modules, i.e., stable
privacy-preserving generator module, and robust cross-domain recommendation
module. Specifically, the former isolates data from different domains with a
generative adversarial network (GAN) based model, which stably estimates the
distribution of private data in the source domain with Renyi differential
privacy (RDP) technique. Then the latter aims to robustly leverage the
perturbed but effective knowledge from the source domain with the raw data in
target domain to improve recommendation performance. Three key modules, i.e.,
(1) selective privacy preserver, (2) GAN stabilizer, and (3) robustness
conductor, guarantee the cost-effective trade-off between utility and privacy,
the stability of GAN when using RDP, and the robustness of leveraging
transferable knowledge accordingly. The extensive empirical studies on Douban
and Amazon datasets demonstrate that PPGenCDR significantly outperforms the
state-of-the-art recommendation models while preserving privacy.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 08:04:05 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Liao",
"Xinting",
""
],
[
"Liu",
"Weiming",
""
],
[
"Zheng",
"Xiaolin",
""
],
[
"Yao",
"Binhui",
""
],
[
"Chen",
"Chaochao",
""
]
] |
new_dataset
| 0.970815 |
2305.16171
|
Emmy Liu
|
Anubha Kabra, Emmy Liu, Simran Khanuja, Alham Fikri Aji, Genta Indra
Winata, Samuel Cahyawijaya, Anuoluwapo Aremu, Perez Ogayo, Graham Neubig
|
Multi-lingual and Multi-cultural Figurative Language Understanding
|
ACL 2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Figurative language permeates human communication, but at the same time is
relatively understudied in NLP. Datasets have been created in English to
accelerate progress towards measuring and improving figurative language
processing in language models (LMs). However, the use of figurative language is
an expression of our cultural and societal experiences, making it difficult for
these phrases to be universally applicable. In this work, we create a
figurative language inference dataset, \datasetname, for seven diverse
languages associated with a variety of cultures: Hindi, Indonesian, Javanese,
Kannada, Sundanese, Swahili and Yoruba. Our dataset reveals that each language
relies on cultural and regional concepts for figurative expressions, with the
highest overlap between languages originating from the same region. We assess
multilingual LMs' abilities to interpret figurative language in zero-shot and
few-shot settings. All languages exhibit a significant deficiency compared to
English, with variations in performance reflecting the availability of
pre-training and fine-tuning data, emphasizing the need for LMs to be exposed
to a broader range of linguistic and cultural variation during training.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 15:30:31 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Kabra",
"Anubha",
""
],
[
"Liu",
"Emmy",
""
],
[
"Khanuja",
"Simran",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Aremu",
"Anuoluwapo",
""
],
[
"Ogayo",
"Perez",
""
],
[
"Neubig",
"Graham",
""
]
] |
new_dataset
| 0.999866 |
2305.16220
|
Yihao Huang
|
Yihao Huang, Yue Cao, Tianlin Li, Felix Juefei-Xu, Di Lin, Ivor
W.Tsang, Yang Liu, Qing Guo
|
On the Robustness of Segment Anything
|
22 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segment anything model (SAM) has presented impressive objectness
identification capability with the idea of prompt learning and a new collected
large-scale dataset. Given a prompt (e.g., points, bounding boxes, or masks)
and an input image, SAM is able to generate valid segment masks for all objects
indicated by the prompts, presenting high generalization across diverse
scenarios and being a general method for zero-shot transfer to downstream
vision tasks. Nevertheless, it remains unclear whether SAM may introduce errors
in certain threatening scenarios. Clarifying this is of significant importance
for applications that require robustness, such as autonomous vehicles. In this
paper, we aim to study the testing-time robustness of SAM under adversarial
scenarios and common corruptions. To this end, we first build a testing-time
robustness evaluation benchmark for SAM by integrating existing public
datasets. Second, we extend representative adversarial attacks against SAM and
study the influence of different prompts on robustness. Third, we study the
robustness of SAM under diverse corruption types by evaluating SAM on corrupted
datasets with different prompts. With experiments conducted on SA-1B and KITTI
datasets, we find that SAM exhibits remarkable robustness against various
corruptions, except for blur-related corruption. Furthermore, SAM remains
susceptible to adversarial attacks, particularly when subjected to PGD and BIM
attacks. We think such a comprehensive study could highlight the importance of
the robustness issues of SAM and trigger a series of new tasks for SAM as well
as downstream vision tasks.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 16:28:30 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Huang",
"Yihao",
""
],
[
"Cao",
"Yue",
""
],
[
"Li",
"Tianlin",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Lin",
"Di",
""
],
[
"Tsang",
"Ivor W.",
""
],
[
"Liu",
"Yang",
""
],
[
"Guo",
"Qing",
""
]
] |
new_dataset
| 0.976734 |
2305.16246
|
Alexander Olshevsky
|
Rui Liu, Alex Olshevsky
|
Distributed TD(0) with Almost No Communication
|
This is a shortened version of arXiv:2104.07855
| null | null | null |
cs.LG cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide a new non-asymptotic analysis of distributed temporal difference
learning with linear function approximation. Our approach relies on ``one-shot
averaging,'' where $N$ agents run identical local copies of the TD(0) method
and average the outcomes only once at the very end. We demonstrate a version of
the linear time speedup phenomenon, where the convergence time of the
distributed process is a factor of $N$ faster than the convergence time of
TD(0). This is the first result proving benefits from parallelism for temporal
difference methods.
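As an illustration of the one-shot averaging scheme described above, the following
is a minimal sketch (assumed code, not from the paper) in which N agents run
identical local copies of TD(0) with linear function approximation on a toy random
MDP and the resulting weight vectors are averaged exactly once at the end; the MDP,
features, and step size are illustrative assumptions.

```python
# Minimal sketch of one-shot averaging: N independent TD(0) runs, one final mean.
import numpy as np

rng = np.random.default_rng(0)
n_states, d, gamma, alpha = 5, 3, 0.9, 0.05
P = rng.dirichlet(np.ones(n_states), size=n_states)   # random transition matrix
r = rng.normal(size=n_states)                          # state rewards
phi = rng.normal(size=(n_states, d))                   # fixed feature map

def td0(steps: int, seed: int) -> np.ndarray:
    """One agent's local TD(0) run; returns its final weight vector."""
    local_rng = np.random.default_rng(seed)
    w = np.zeros(d)
    s = 0
    for _ in range(steps):
        s_next = local_rng.choice(n_states, p=P[s])
        delta = r[s] + gamma * phi[s_next] @ w - phi[s] @ w
        w += alpha * delta * phi[s]
        s = s_next
    return w

N = 8
# Each agent runs independently; communication happens only in the final mean.
w_avg = np.mean([td0(steps=20_000, seed=i) for i in range(N)], axis=0)
print(w_avg)
```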
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:00:46 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Liu",
"Rui",
""
],
[
"Olshevsky",
"Alex",
""
]
] |
new_dataset
| 0.991625 |
2305.16275
|
Mark Clement
|
Chetan Joshi and Lawry Sorenson and Ammon Wolfert and Dr. Mark Clement
and Dr. Joseph Price and Dr. Kasey Buckles
|
CENSUS-HWR: a large training dataset for offline handwriting recognition
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Progress in Automated Handwriting Recognition has been hampered by the lack
of large training datasets. Nearly all research uses a set of small datasets
that often cause models to overfit. We present CENSUS-HWR, a new dataset
consisting of full English handwritten words in 1,812,014 gray scale images. A
total of 1,865,134 handwritten texts from a vocabulary of 10,711 words in the
English language are present in this collection. This dataset is intended to
serve as a benchmark for deep learning handwriting recognition models. This huge
English handwriting recognition dataset has been extracted from the US 1930 and
1940 censuses taken by approximately 70,000 enumerators each year. The dataset
and the trained model with their weights are freely available to download at
https://censustree.org/data.html.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:31:39 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Joshi",
"Chetan",
""
],
[
"Sorenson",
"Lawry",
""
],
[
"Wolfert",
"Ammon",
""
],
[
"Clement",
"Dr. Mark",
""
],
[
"Price",
"Dr. Joseph",
""
],
[
"Buckles",
"Dr. Kasey",
""
]
] |
new_dataset
| 0.999854 |
2305.16315
|
Jiahui Lei
|
Jiahui Lei and Congyue Deng and Bokui Shen and Leonidas Guibas and
Kostas Daniilidis
|
NAP: Neural 3D Articulation Prior
|
project page: https://www.cis.upenn.edu/~leijh/projects/nap
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative
model to synthesize 3D articulated object models. Despite the extensive
research on generating 3D objects, compositions, or scenes, there remains a
lack of focus on capturing the distribution of articulated objects, a common
object category for human and robot interaction. To generate articulated
objects, we first design a novel articulation tree/graph parameterization and
then apply a diffusion-denoising probabilistic model over this representation
where articulated objects can be generated via denoising from random complete
graphs. In order to capture both the geometry and the motion structure whose
distribution will affect each other, we design a graph-attention denoising
network for learning the reverse diffusion process. We propose a novel distance
that adapts widely used 3D generation metrics to our novel task to evaluate
generation quality, and experiments demonstrate our high performance in
articulated object generation. We also demonstrate several conditioned
generation applications, including Part2Motion, PartNet-Imagination,
Motion2Part, and GAPart2Object.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:59:35 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Lei",
"Jiahui",
""
],
[
"Deng",
"Congyue",
""
],
[
"Shen",
"Bokui",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Daniilidis",
"Kostas",
""
]
] |
new_dataset
| 0.984605 |
2305.16316
|
Raymond A. Yeh
|
Renan A. Rojas-Gomez, Teck-Yian Lim, Minh N. Do, Raymond A. Yeh
|
Making Vision Transformers Truly Shift-Equivariant
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For computer vision tasks, Vision Transformers (ViTs) have become one of the
go-to deep net architectures. Despite being inspired by Convolutional Neural
Networks (CNNs), ViTs remain sensitive to small shifts in the input image. To
address this, we introduce novel designs for each of the modules in ViTs, such
as tokenization, self-attention, patch merging, and positional encoding. With
our proposed modules, we achieve truly shift-equivariant ViTs on four
well-established models, namely, Swin, SwinV2, MViTv2, and CvT, both in theory
and practice. Empirically, we tested these models on image classification and
semantic segmentation, achieving competitive performance across three different
datasets while maintaining 100% shift consistency.
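As an illustration of the shift consistency measure referred to above, the following
is a minimal sketch that compares a model's predictions before and after a random
circular shift of the input batch; the helper name and shift range are assumptions,
not the authors' code.

```python
# Minimal sketch: fraction of images whose predicted class is unchanged under a
# random circular shift of the input (higher is more shift-consistent).
import torch

def shift_consistency(model, images: torch.Tensor, max_shift: int = 16) -> float:
    """images: (N, C, H, W) batch; model returns class logits."""
    model.eval()
    with torch.no_grad():
        base = model(images).argmax(dim=1)
        dh = int(torch.randint(0, max_shift, (1,)))
        dw = int(torch.randint(0, max_shift, (1,)))
        shifted = torch.roll(images, shifts=(dh, dw), dims=(2, 3))
        moved = model(shifted).argmax(dim=1)
    return (base == moved).float().mean().item()
```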
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:59:40 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Rojas-Gomez",
"Renan A.",
""
],
[
"Lim",
"Teck-Yian",
""
],
[
"Do",
"Minh N.",
""
],
[
"Yeh",
"Raymond A.",
""
]
] |
new_dataset
| 0.991605 |
2305.16318
|
Ziyu Guo
|
Shilin Yan, Renrui Zhang, Ziyu Guo, Wenchao Chen, Wei Zhang, Hongyang
Li, Yu Qiao, Zhongjiang He, Peng Gao
|
Referred by Multi-Modality: A Unified Temporal Transformer for Video
Object Segmentation
|
Code is released at https://github.com/OpenGVLab/MUTR
| null | null | null |
cs.CV cs.AI cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, video object segmentation (VOS) referred by multi-modal signals,
e.g., language and audio, has evoked increasing attention in both industry and
academia. It is challenging for exploring the semantic alignment within
modalities and the visual correspondence across frames. However, existing
methods adopt separate network architectures for different modalities, and
neglect the inter-frame temporal interaction with references. In this paper, we
propose MUTR, a Multi-modal Unified Temporal transformer for Referring video
object segmentation. With a unified framework for the first time, MUTR adopts a
DETR-style transformer and is capable of segmenting video objects designated by
either text or audio reference. Specifically, we introduce two strategies to
fully explore the temporal relations between videos and multi-modal signals.
Firstly, for low-level temporal aggregation before the transformer, we enable
the multi-modal references to capture multi-scale visual cues from consecutive
video frames. This effectively endows the text or audio signals with temporal
knowledge and boosts the semantic alignment between modalities. Secondly, for
high-level temporal interaction after the transformer, we conduct inter-frame
feature communication for different object embeddings, contributing to better
object-wise correspondence for tracking along the video. On Ref-YouTube-VOS and
AVSBench datasets with respective text and audio references, MUTR achieves
+4.2% and +4.2% J&F improvements to state-of-the-art methods, demonstrating our
significance for unified multi-modal VOS. Code is released at
https://github.com/OpenGVLab/MUTR.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:59:47 GMT"
}
] | 2023-05-26T00:00:00 |
[
[
"Yan",
"Shilin",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Guo",
"Ziyu",
""
],
[
"Chen",
"Wenchao",
""
],
[
"Zhang",
"Wei",
""
],
[
"Li",
"Hongyang",
""
],
[
"Qiao",
"Yu",
""
],
[
"He",
"Zhongjiang",
""
],
[
"Gao",
"Peng",
""
]
] |
new_dataset
| 0.996902 |
2202.07721
|
Cunxi Yu
|
Walter Lau Neto and Yingjie Li and Pierre-Emmanuel Gaillardon and
Cunxi Yu
|
FlowTune: End-to-end Automatic Logic Optimization Exploration via
Domain-specific Multi-armed Bandit
|
13 pages
|
IEEE Transactions on Computer-Aided Design of Integrated Circuits
and Systems (TCAD) 2023
|
10.1109/TCAD.2022.3213611
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have seen increasing employment of decision intelligence in
electronic design automation (EDA), which aims to reduce the manual efforts and
boost the design closure process in modern toolflows. However, existing
approaches either require a large number of labeled data and expensive training
efforts, or are limited in practical EDA toolflow integration due to
computation overhead. This paper presents a generic end-to-end sequential
decision making framework FlowTune for synthesis toolflow optimization, with a
novel high-performance domain-specific, multi-stage multi-armed bandit (MAB)
approach. This framework addresses Boolean optimization problems such as a)
And-Inverter Graph minimization (# nodes), b) Conjunctive Normal Form (CNF)
minimization (# clauses) for Boolean Satisfiability; and logic synthesis and
technology mapping problems such as c) post static timing analysis (STA) delay
and area optimization for standard-cell technology mapping, and d) FPGA
technology mapping for 6-input LUT architectures. Moreover, we demonstrate the
high extensibility and generalizability of the proposed
domain-specific MAB approach with end-to-end FPGA design flow, evaluated at
post-routing stage, with two different FPGA backend tools (OpenFPGA and VPR)
and two different logic synthesis representations (AIGs and MIGs). FlowTune is
fully integrated with ABC [1], Yosys [2], VTR [3], LSOracle [4], OpenFPGA [5],
and industrial tools, and is released publicly. The experimental results
conducted on various design stages in the flow all demonstrate that our
framework outperforms both hand-crafted flows [1] and ML explored flows [6],
[7] in quality of results, and is orders of magnitude faster compared to
ML-based approaches.
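To illustrate the kind of sequential decision making a bandit-driven synthesis flow
performs, the following is a minimal generic UCB1 sketch; it is not FlowTune's
domain-specific multi-stage algorithm, and the transformation names and reward
function are placeholders.

```python
# Minimal generic UCB1 bandit: repeatedly pick a synthesis transformation and
# update its empirical reward, balancing exploration and exploitation.
import math
import random

ARMS = ["rewrite", "refactor", "balance", "resub"]   # hypothetical transforms

def run_transform_and_measure(arm: str) -> float:
    # Placeholder for running a transformation and returning a normalized
    # quality-of-results improvement (e.g., node-count reduction).
    return random.betavariate(3, 4) if arm == "rewrite" else random.betavariate(2, 5)

counts = {a: 0 for a in ARMS}
totals = {a: 0.0 for a in ARMS}

for t in range(1, 201):
    untried = [a for a in ARMS if counts[a] == 0]
    if untried:
        arm = untried[0]                     # play every arm once first
    else:
        arm = max(ARMS, key=lambda a: totals[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    reward = run_transform_and_measure(arm)
    counts[arm] += 1
    totals[arm] += reward

print({a: round(totals[a] / counts[a], 3) for a in ARMS}, counts)
```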
|
[
{
"version": "v1",
"created": "Tue, 15 Feb 2022 20:44:57 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 21:11:08 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Neto",
"Walter Lau",
""
],
[
"Li",
"Yingjie",
""
],
[
"Gaillardon",
"Pierre-Emmanuel",
""
],
[
"Yu",
"Cunxi",
""
]
] |
new_dataset
| 0.997627 |
2204.00294
|
Jana Kierdorf
|
Jana Kierdorf, Laura Verena Junker-Frohn, Mike Delaney, Mariele Donoso
Olave, Andreas Burkart, Hannah Jaenicke, Onno Muller, Uwe Rascher and Ribana
Roscher
|
GrowliFlower: An image time series dataset for GROWth analysis of
cauLIFLOWER
|
23 pages, 21 figures, 5 tables
| null |
10.1002/rob.22122
| null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents GrowliFlower, a georeferenced, image-based UAV time
series dataset of two monitored cauliflower fields of size 0.39 and 0.60 ha
acquired in 2020 and 2021. The dataset contains RGB and multispectral
orthophotos from which about 14,000 individual plant coordinates are derived
and provided. The coordinates enable the dataset users the extraction of
complete and incomplete time series of image patches showing individual plants.
The dataset contains collected phenotypic traits of 740 plants, including the
developmental stage as well as plant and cauliflower size. As the harvestable
product is completely covered by leaves, plant IDs and coordinates are provided
to extract image pairs of plants pre and post defoliation, to facilitate
estimations of cauliflower head size. Moreover, the dataset contains
pixel-accurate leaf and plant instance segmentations, as well as stem
annotations to address tasks like classification, detection, segmentation,
instance segmentation, and similar computer vision tasks. The dataset aims to
foster the development and evaluation of machine learning approaches. It
specifically focuses on the analysis of growth and development of cauliflower
and the derivation of phenotypic traits to foster the development of automation
in agriculture. Two baseline results of instance segmentation at plant and leaf
level based on the labeled instance segmentation data are presented. The entire
data set is publicly available.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 08:56:59 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Kierdorf",
"Jana",
""
],
[
"Junker-Frohn",
"Laura Verena",
""
],
[
"Delaney",
"Mike",
""
],
[
"Olave",
"Mariele Donoso",
""
],
[
"Burkart",
"Andreas",
""
],
[
"Jaenicke",
"Hannah",
""
],
[
"Muller",
"Onno",
""
],
[
"Rascher",
"Uwe",
""
],
[
"Roscher",
"Ribana",
""
]
] |
new_dataset
| 0.999761 |
2206.07012
|
Chen Liu
|
Chen Liu, Abhishek Chakraborty, Nikhil Chawla, Neer Roggel
|
Frequency Throttling Side-Channel Attack
| null |
CCS '22: Proceedings of the 2022 ACM SIGSAC Conference on Computer
and Communications Security
|
10.1145/3548606.3560682
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Modern processors dynamically control their operating frequency to optimize
resource utilization, maximize energy savings, and conform to system-defined
constraints. If, during the execution of a software workload, the running
average of any electrical or thermal parameter exceeds its corresponding
predefined threshold value, the power management architecture will reactively
adjust CPU frequency to ensure safe operating conditions. In this paper, we
demonstrate how such power management-based frequency throttling activity forms
a source of timing side-channel information leakage, which can be exploited by
an attacker to infer secret data even from a constant-cycle victim workload.
The proposed frequency throttling side-channel attack can be launched by both
kernel-space and user-space attackers, thus compromising security guarantees
provided by isolation boundaries. We validate our attack methodology across
different systems and threat models by performing experiments on a
constant-cycle implementation of AES algorithm based on AES-NI instructions.
The results of our experimental evaluations demonstrate that the attacker can
successfully recover all bytes of an AES key by measuring encryption execution
times. Finally, we discuss different options to mitigate the threat posed by
frequency throttling side-channel attacks, as well as their advantages and
disadvantages.
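To illustrate the measurement side of such an attack, the following is a minimal
sketch that times repeated runs of a nominally constant-cycle AES encryption so that
shifts in the wall-clock distribution under throttling become visible; it does not
implement key recovery, and the data sizes and repeat counts are arbitrary
assumptions.

```python
# Minimal sketch: collect wall-clock timings of repeated AES encryptions.
import os
import time
import statistics
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = os.urandom(1 << 20)           # 1 MiB per measurement

def one_measurement() -> float:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    t0 = time.perf_counter()
    enc.update(plaintext)
    enc.finalize()
    return time.perf_counter() - t0

samples = [one_measurement() for _ in range(200)]
print(f"median={statistics.median(samples):.6f}s  "
      f"p95={sorted(samples)[int(0.95 * len(samples))]:.6f}s")
```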
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 17:23:18 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 01:30:03 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Liu",
"Chen",
""
],
[
"Chakraborty",
"Abhishek",
""
],
[
"Chawla",
"Nikhil",
""
],
[
"Roggel",
"Neer",
""
]
] |
new_dataset
| 0.964038 |
2208.00731
|
Stephan-Daniel Gravert
|
Stephan-Daniel Gravert, Mike Y. Michelis, Simon Rogler, Dario Tscholl,
Thomas Buchner, Robert K. Katzschmann
|
Planar Modeling and Sim-to-Real of a Tethered Multimaterial Soft Swimmer
Driven by Peano-HASELs
|
Published at IROS 2022. Stephan-Daniel Gravert and Mike Y. Michelis
contributed equally to this work
| null |
10.1109/IROS47612.2022.9981192
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Soft robotics has the potential to revolutionize robotic locomotion, in
particular, soft robotic swimmers offer a minimally invasive and adaptive
solution to explore and preserve our oceans. Unfortunately, current soft
robotic swimmers are vastly inferior to evolved biological swimmers, especially
in terms of controllability, efficiency, maneuverability, and longevity.
Additionally, the tedious iterative fabrication and empirical testing required
to design soft robots has hindered their optimization. In this work, we tackle
this challenge by providing an efficient and straightforward pipeline for
designing and fabricating soft robotic swimmers equipped with electrostatic
actuation. We streamline the process to allow for rapid additive manufacturing,
and show how a differentiable simulation can be used to match a simplified
model to the real deformation of a robotic swimmer. We perform several
experiments with the fabricated swimmer by varying the voltage and actuation
frequency of the swimmer's antagonistic muscles. We show how the voltage and
frequency vary the locomotion speed of the swimmer while moving in liquid oil
and observe a clear optimum in forward swimming speed. The differentiable
simulation model we propose has various downstream applications, such as
control and shape optimization of the swimmer; optimization results can be
directly mapped back to the real robot through our sim-to-real matching.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 10:33:45 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Aug 2022 19:08:47 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Gravert",
"Stephan-Daniel",
""
],
[
"Michelis",
"Mike Y.",
""
],
[
"Rogler",
"Simon",
""
],
[
"Tscholl",
"Dario",
""
],
[
"Buchner",
"Thomas",
""
],
[
"Katzschmann",
"Robert K.",
""
]
] |
new_dataset
| 0.957725 |
2208.01470
|
Junxue Zhang
|
Junxue Zhang
|
Extremal numbers of disjoint triangles in $r$-partite graphs
| null | null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
For two graphs $G$ and $F$, the extremal number of $F$ in $G$, denoted by
$\mathrm{ex}(G,F)$, is the maximum number of edges in a spanning subgraph of $G$
not containing $F$ as a subgraph. Determining $\mathrm{ex}(K_n,F)$ for a given
graph $F$ is a classical extremal problem in graph theory. In 1962, Erd\H{o}s
determined $\mathrm{ex}(K_n,kK_3)$, which generalized Mantel's Theorem. On the
other hand, in 1974, Bollob\'{a}s, Erd\H{o}s, and Straus determined
$\mathrm{ex}(K_{n_1,n_2,\dots,n_r},K_t)$, which extended Tur\'{a}n's Theorem to
complete multipartite graphs. In this paper, we determine
$\mathrm{ex}(K_{n_1,n_2,\dots,n_r},kK_3)$ for $r\ge 4$ and $10k-4\le n_1+4k\le
n_2\le n_3\le \cdots \le n_r$.
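As a concrete illustration of the quantity $\mathrm{ex}(G,kK_3)$ defined above, the
following is a minimal brute-force sketch (assumed code, not from the paper) that
computes it for a tiny complete multipartite graph by enumerating all spanning
subgraphs; it only illustrates the definition and is far too slow for the parameter
ranges treated in the paper.

```python
# Minimal brute force: ex(G, kK_3) = max edges of a spanning subgraph of G with
# no k pairwise vertex-disjoint triangles. Only feasible for tiny graphs.
from itertools import combinations

def complete_multipartite(parts):
    labels = []
    for i, size in enumerate(parts):
        labels += [i] * size
    verts = list(range(len(labels)))
    edges = [(u, v) for u, v in combinations(verts, 2) if labels[u] != labels[v]]
    return verts, edges

def has_k_disjoint_triangles(edge_set, verts, k):
    triangles = [t for t in combinations(verts, 3)
                 if all(e in edge_set for e in combinations(t, 2))]
    def extend(used, remaining, found):
        if found == k:
            return True
        for i, t in enumerate(remaining):
            if used.isdisjoint(t):
                if extend(used | set(t), remaining[i + 1:], found + 1):
                    return True
        return False
    return extend(set(), triangles, 0)

def ex(parts, k):
    verts, edges = complete_multipartite(parts)
    for m in range(len(edges), -1, -1):           # try edge counts from largest
        for subset in combinations(edges, m):
            if not has_k_disjoint_triangles(set(subset), verts, k):
                return m
    return 0

print(ex([2, 2, 2], k=1))  # max triangle-free spanning subgraph of K_{2,2,2}
```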
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 14:07:48 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 01:38:32 GMT"
},
{
"version": "v3",
"created": "Wed, 24 May 2023 07:47:20 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Zhang",
"Junxue",
""
]
] |
new_dataset
| 0.981819 |
2211.01427
|
Rao Fu
|
Aditya Sanghi, Rao Fu, Vivian Liu, Karl Willis, Hooman Shayani, Amir
Hosein Khasahmadi, Srinath Sridhar, Daniel Ritchie
|
CLIP-Sculptor: Zero-Shot Generation of High-Fidelity and Diverse Shapes
from Natural Language
|
Accepted at Conference on Computer Vision and Pattern Recognition
2023 (CVPR 2023)
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent works have demonstrated that natural language can be used to generate
and edit 3D shapes. However, these methods generate shapes with limited
fidelity and diversity. We introduce CLIP-Sculptor, a method to address these
constraints by producing high-fidelity and diverse 3D shapes without the need
for (text, shape) pairs during training. CLIP-Sculptor achieves this in a
multi-resolution approach that first generates in a low-dimensional latent
space and then upscales to a higher resolution for improved shape fidelity. For
improved shape diversity, we use a discrete latent space which is modeled using
a transformer conditioned on CLIP's image-text embedding space. We also present
a novel variant of classifier-free guidance, which improves the
accuracy-diversity trade-off. Finally, we perform extensive experiments
demonstrating that CLIP-Sculptor outperforms state-of-the-art baselines. The
code is available at https://ivl.cs.brown.edu/#/projects/clip-sculptor.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 18:50:25 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Nov 2022 17:25:45 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 20:37:07 GMT"
},
{
"version": "v4",
"created": "Wed, 24 May 2023 16:04:20 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Sanghi",
"Aditya",
""
],
[
"Fu",
"Rao",
""
],
[
"Liu",
"Vivian",
""
],
[
"Willis",
"Karl",
""
],
[
"Shayani",
"Hooman",
""
],
[
"Khasahmadi",
"Amir Hosein",
""
],
[
"Sridhar",
"Srinath",
""
],
[
"Ritchie",
"Daniel",
""
]
] |
new_dataset
| 0.998789 |
2211.13308
|
Amanpreet Singh
|
Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
|
SciRepEval: A Multi-Format Benchmark for Scientific Document
Representations
|
21 pages, 2 figures, 9 tables. For associated code, see
https://github.com/allenai/scirepeval
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Learned representations of scientific documents can serve as valuable input
features for downstream tasks, without the need for further fine-tuning.
However, existing benchmarks for evaluating these representations fail to
capture the diversity of relevant tasks. In response, we introduce SciRepEval,
the first comprehensive benchmark for training and evaluating scientific
document representations. It includes 25 challenging and realistic tasks, 11 of
which are new, across four formats: classification, regression, ranking and
search. We then use the benchmark to study and improve the generalization
ability of scientific document representation models. We show how
state-of-the-art models struggle to generalize across task formats, and that
simple multi-task training fails to improve them. However, a new approach that
learns multiple embeddings per document, each tailored to a different format,
can improve performance. We experiment with task-format-specific control codes
and adapters in a multi-task setting and find that they outperform the existing
single-embedding state-of-the-art by up to 1.5 points absolute.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 21:25:39 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 21:34:56 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Singh",
"Amanpreet",
""
],
[
"D'Arcy",
"Mike",
""
],
[
"Cohan",
"Arman",
""
],
[
"Downey",
"Doug",
""
],
[
"Feldman",
"Sergey",
""
]
] |
new_dataset
| 0.999045 |
2212.10465
|
Hyunwoo Kim
|
Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae
Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, Yejin
Choi
|
SODA: Million-scale Dialogue Distillation with Social Commonsense
Contextualization
|
Dataset, model, and code can be found at https://hyunw.kim/sodaverse
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SODA: the first publicly available, million-scale high-quality
social dialogue dataset. In contrast to most existing crowdsourced, small-scale
dialogue corpora, we distill 1.5M socially-grounded dialogues from a large
language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by
contextualizing social commonsense knowledge from a knowledge graph (Atomic10x;
West et al., 2022). Human evaluation shows that dialogues in SODA are more
consistent, specific, and (surprisingly) natural than those in prior
human-authored datasets.
Using SODA, we train COSMO: a generalizable conversation model that is
significantly more natural and consistent on unseen datasets than
best-performing conversation models (e.g., GODEL, BlenderBot-1, Koala, Vicuna).
Experiments reveal COSMO is sometimes even preferred to the original
human-written gold responses. Additionally, our results shed light on the
distinction between knowledge-enriched conversations and natural social
chitchats. We make our data, models, and code public.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 17:38:47 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 08:45:17 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Kim",
"Hyunwoo",
""
],
[
"Hessel",
"Jack",
""
],
[
"Jiang",
"Liwei",
""
],
[
"West",
"Peter",
""
],
[
"Lu",
"Ximing",
""
],
[
"Yu",
"Youngjae",
""
],
[
"Zhou",
"Pei",
""
],
[
"Bras",
"Ronan Le",
""
],
[
"Alikhani",
"Malihe",
""
],
[
"Kim",
"Gunhee",
""
],
[
"Sap",
"Maarten",
""
],
[
"Choi",
"Yejin",
""
]
] |
new_dataset
| 0.999563 |
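
SODA distills dialogues by contextualizing commonsense knowledge from a knowledge graph with a large language model. A minimal sketch of such a contextualize-then-dialogue prompt pipeline is given below; the prompt wording, the example triple, and the `call_llm` hook are illustrative assumptions, not the paper's actual prompts or data.

```python
def contextualize(triple, call_llm):
    """Two-stage distillation sketch: (1) expand a commonsense triple into a
    short narrative, (2) ask the LLM to continue the narrative as a dialogue.
    `call_llm` is any function mapping a prompt string to generated text."""
    head, relation, tail = triple["head"], triple["relation"], triple["tail"]
    narrative_prompt = (
        f"Commonsense fact: {head} ({relation}) {tail}.\n"
        "Write a short two-sentence story with named characters that illustrates this fact."
    )
    narrative = call_llm(narrative_prompt)
    dialogue_prompt = (
        f"{narrative}\n\n"
        "Continue the story as a natural conversation between the characters, "
        "one utterance per line, prefixed by the speaker's name."
    )
    return call_llm(dialogue_prompt)

# Illustrative ATOMIC-style triple and a dummy LLM hook for a dry run.
triple = {"head": "PersonX moves to a new city", "relation": "xWant",
          "tail": "to make new friends"}
print(contextualize(triple, lambda prompt: f"<LLM output for: {prompt[:40]}...>"))
```
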
2212.10505
|
Fangyu Liu
|
Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine
Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier,
Yasemin Altun
|
DePlot: One-shot visual language reasoning by plot-to-table translation
|
ACL 2023 (Findings)
| null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visual language such as charts and plots is ubiquitous in the human world.
Comprehending plots and charts requires strong reasoning skills. Prior
state-of-the-art (SOTA) models require at least tens of thousands of training
examples, and their reasoning capabilities remain quite limited, especially on
complex human-written queries. This paper presents the first one-shot solution
to visual language reasoning. We decompose the challenge of visual language
reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over
the translated text. The key to this method is a modality conversion module,
named DePlot, which translates the image of a plot or chart into a linearized
table. The output of DePlot can then be directly used to prompt a pretrained
large language model (LLM), exploiting the few-shot reasoning capabilities of
LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing
unified task formats and metrics, and train DePlot end-to-end on this task.
DePlot can then be used off-the-shelf together with LLMs in a plug-and-play
fashion. Compared with a SOTA model finetuned on more than 28k data points,
DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over
finetuned SOTA on human-written queries from the task of chart QA.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 18:20:50 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 18:28:39 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Liu",
"Fangyu",
""
],
[
"Eisenschlos",
"Julian Martin",
""
],
[
"Piccinno",
"Francesco",
""
],
[
"Krichene",
"Syrine",
""
],
[
"Pang",
"Chenxi",
""
],
[
"Lee",
"Kenton",
""
],
[
"Joshi",
"Mandar",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Collier",
"Nigel",
""
],
[
"Altun",
"Yasemin",
""
]
] |
new_dataset
| 0.999153 |
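
The DePlot pipeline described above decomposes reasoning into plot-to-table translation followed by LLM prompting. A sketch of this two-step usage follows; it assumes the publicly released `google/deplot` Pix2Struct checkpoint on Hugging Face and leaves the second-step LLM unspecified, so treat the checkpoint name, prompt strings, and file path as assumptions.

```python
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Step 1: plot-to-table translation with the (assumed) public DePlot checkpoint.
processor = Pix2StructProcessor.from_pretrained("google/deplot")
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")

image = Image.open("chart.png")  # placeholder path to a chart image
inputs = processor(images=image,
                   text="Generate underlying data table of the figure below:",
                   return_tensors="pt")
table_ids = model.generate(**inputs, max_new_tokens=512)
linearized_table = processor.decode(table_ids[0], skip_special_tokens=True)

# Step 2: one-shot reasoning by prompting any strong LLM over the table text.
question = "Which category has the highest value?"
prompt = f"Read the table and answer the question.\n\n{linearized_table}\n\nQ: {question}\nA:"
# answer = call_llm(prompt)  # `call_llm` is a placeholder for an LLM API of choice
```
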
2301.12652
|
Weijia Shi
|
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James,
Mike Lewis, Luke Zettlemoyer, Wen-tau Yih
|
REPLUG: Retrieval-Augmented Black-Box Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce REPLUG, a retrieval-augmented language modeling framework that
treats the language model (LM) as a black box and augments it with a tuneable
retrieval model. Unlike prior retrieval-augmented LMs that train language
models with special cross attention mechanisms to encode the retrieved text,
REPLUG simply prepends retrieved documents to the input for the frozen
black-box LM. This simple design can be easily applied to any existing
retrieval and language models. Furthermore, we show that the LM can be used to
supervise the retrieval model, which can then find documents that help the LM
make better predictions. Our experiments demonstrate that REPLUG with the tuned
retriever significantly improves the performance of GPT-3 (175B) on language
modeling by 6.3%, as well as the performance of Codex on five-shot MMLU by
5.1%.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 04:18:09 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2023 00:15:18 GMT"
},
{
"version": "v3",
"created": "Mon, 22 May 2023 23:26:11 GMT"
},
{
"version": "v4",
"created": "Wed, 24 May 2023 05:08:07 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Shi",
"Weijia",
""
],
[
"Min",
"Sewon",
""
],
[
"Yasunaga",
"Michihiro",
""
],
[
"Seo",
"Minjoon",
""
],
[
"James",
"Rich",
""
],
[
"Lewis",
"Mike",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Yih",
"Wen-tau",
""
]
] |
new_dataset
| 0.998579 |
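
REPLUG's interface is deliberately simple: retrieved documents are prepended to the input of a frozen, black-box LM. A minimal sketch of that interface follows; `retriever` and `lm_generate` are placeholders for whatever retrieval model and LM API a user has, and the LM-supervised retriever tuning described in the abstract is not shown.

```python
def replug_generate(question, retriever, lm_generate, k=5):
    """Prepend-and-prompt sketch: retrieve the top-k documents for the query and
    simply prepend each one to the input of a frozen, black-box LM. The paper
    additionally combines evidence across retrieved documents and tunes the
    retriever with LM feedback; neither step is shown here."""
    documents = retriever(question, k=k)        # list of retrieved text passages
    answers = []
    for doc in documents:
        prompt = f"{doc}\n\n{question}"         # retrieved context goes in front of the query
        answers.append(lm_generate(prompt))     # the LM itself is never modified
    return answers
```
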
2302.01973
|
Gabriel Orlanski
|
Gabriel Orlanski, Kefan Xiao, Xavier Garcia, Jeffrey Hui, Joshua
Howland, Jonathan Malmaud, Jacob Austin, Rishabh Singh, Michele Catasta
|
Measuring The Impact Of Programming Language Distribution
|
Accepted to ICML 2023, Code and data release:
https://github.com/google-research/babelcode
| null | null | null |
cs.LG cs.CL cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Current benchmarks for evaluating neural code models focus on only a small
subset of programming languages, excluding many popular languages such as Go or
Rust. To ameliorate this issue, we present the BabelCode framework for
execution-based evaluation of any benchmark in any language. BabelCode enables
new investigations into the qualitative performance of models' memory, runtime,
and individual test case results. Additionally, we present a new code
translation dataset called Translating Python Programming Puzzles (TP3) from
the Python Programming Puzzles (Schuster et al. 2021) benchmark that involves
translating expert-level python functions to any language. With both BabelCode
and the TP3 benchmark, we investigate if balancing the distributions of 14
languages in a training dataset improves a large language model's performance
on low-resource languages. Training a model on a balanced corpus results in, on
average, 12.34% higher $pass@k$ across all tasks and languages compared to the
baseline. We find that this strategy achieves 66.48% better $pass@k$ on
low-resource languages at the cost of only a 12.94% decrease to high-resource
languages. In our three translation tasks, this strategy yields, on average,
30.77% better low-resource $pass@k$ while having 19.58% worse high-resource
$pass@k$.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 19:47:22 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 14:36:49 GMT"
},
{
"version": "v3",
"created": "Wed, 24 May 2023 16:20:33 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Orlanski",
"Gabriel",
""
],
[
"Xiao",
"Kefan",
""
],
[
"Garcia",
"Xavier",
""
],
[
"Hui",
"Jeffrey",
""
],
[
"Howland",
"Joshua",
""
],
[
"Malmaud",
"Jonathan",
""
],
[
"Austin",
"Jacob",
""
],
[
"Singh",
"Rishabh",
""
],
[
"Catasta",
"Michele",
""
]
] |
new_dataset
| 0.986886 |
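
The BabelCode results above are reported in pass@k. For readers unfamiliar with the metric, the snippet below implements the standard unbiased pass@k estimator of Chen et al. (2021); the sample counts in the usage line are illustrative, and this is the conventional metric definition rather than code from the BabelCode release.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability that at
    least one of k samples, drawn without replacement from n generations of
    which c are correct, passes all test cases."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative usage: 200 samples per problem, 37 of which pass; estimate pass@10.
score = pass_at_k(n=200, c=37, k=10)
```
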
2303.00716
|
Brandon Smock
|
Brandon Smock and Rohith Pesala and Robin Abraham
|
Aligning benchmark datasets for table structure recognition
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Benchmark datasets for table structure recognition (TSR) must be carefully
processed to ensure they are annotated consistently. However, even if a
dataset's annotations are self-consistent, there may be significant
inconsistency across datasets, which can harm the performance of models trained
and evaluated on them. In this work, we show that aligning these
benchmarks (removing both errors and inconsistency between them) improves
model performance significantly. We demonstrate
this through a data-centric approach where we adopt one model architecture, the
Table Transformer (TATR), that we hold fixed throughout. Baseline exact match
accuracy for TATR evaluated on the ICDAR-2013 benchmark is 65% when trained on
PubTables-1M, 42% when trained on FinTabNet, and 69% combined. After reducing
annotation mistakes and inter-dataset inconsistency, performance of TATR
evaluated on ICDAR-2013 increases substantially to 75% when trained on
PubTables-1M, 65% when trained on FinTabNet, and 81% combined. We show through
ablations over the modification steps that canonicalization of the table
annotations has a significantly positive effect on performance, while other
choices balance necessary trade-offs that arise when deciding a benchmark
dataset's final composition. Overall we believe our work has significant
implications for benchmark design for TSR and potentially other tasks as well.
Dataset processing and training code will be released at
https://github.com/microsoft/table-transformer.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 18:20:24 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 18:57:24 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Smock",
"Brandon",
""
],
[
"Pesala",
"Rohith",
""
],
[
"Abraham",
"Robin",
""
]
] |
new_dataset
| 0.98269 |
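
The table-alignment record above reports exact match accuracy for table structure recognition. As a simplified illustration of what such a metric can look like, the sketch below treats a table as an order-independent set of cells and counts predictions that match the ground truth exactly; the cell schema is an assumption, and the benchmark's actual canonicalization and matching rules are more involved.

```python
def cell_set(cells):
    """Order-independent canonical form of a table: a frozenset of
    (start_row, start_col, row_span, col_span, text) tuples."""
    return frozenset((c["row"], c["col"], c.get("rowspan", 1),
                      c.get("colspan", 1), c["text"].strip())
                     for c in cells)

def exact_match_accuracy(predicted_tables, gold_tables):
    """Fraction of tables whose predicted cell set equals the gold cell set."""
    hits = sum(cell_set(p) == cell_set(g)
               for p, g in zip(predicted_tables, gold_tables))
    return hits / len(gold_tables)
```
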
2303.10974
|
Mikhail Pautov
|
Andrei Chertkov, Olga Tsymboi, Mikhail Pautov, Ivan Oseledets
|
Translate your gibberish: black-box adversarial attack on machine
translation systems
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Neural networks are widely deployed in industrial-scale natural language
processing tasks, perhaps most often as components of automatic machine
translation systems. In this work, we present a simple
approach to fool state-of-the-art machine translation tools in the task of
translation from Russian to English and vice versa. Using a novel black-box
gradient-free tensor-based optimizer, we show that many online translation
tools, such as Google, DeepL, and Yandex, may both produce wrong or offensive
translations for nonsensical adversarial input queries and refuse to translate
seemingly benign input phrases. This vulnerability may interfere with
understanding a new language and degrade the user's experience of machine
translation systems; hence, additional improvements to these tools are
required to make translation more robust.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 09:52:52 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 19:19:54 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Chertkov",
"Andrei",
""
],
[
"Tsymboi",
"Olga",
""
],
[
"Pautov",
"Mikhail",
""
],
[
"Oseledets",
"Ivan",
""
]
] |
new_dataset
| 0.994854 |
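
The attack above relies on a black-box, gradient-free optimizer over input text. The sketch below shows only a generic random-search loop with the same query-and-score interface, to make the black-box setting concrete; it is explicitly not the tensor-based optimizer from the paper, and `translate` and `score_fn` are user-supplied placeholders.

```python
import random
import string

def black_box_attack(translate, seed_text, score_fn, budget=200):
    """Generic gradient-free search over input text: apply a small random
    character edit, query the translation system, and keep the variant whose
    translation scores highest under score_fn (e.g. a degradation measure).
    A plain random-search baseline, not the paper's tensor-based optimizer."""
    best_text = seed_text
    best_score = score_fn(translate(best_text))
    for _ in range(budget):
        chars = list(best_text)
        position = random.randrange(len(chars))
        chars[position] = random.choice(string.ascii_lowercase + " ")
        candidate = "".join(chars)
        candidate_score = score_fn(translate(candidate))
        if candidate_score > best_score:
            best_text, best_score = candidate, candidate_score
    return best_text, best_score
```
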
2304.09842
|
Pan Lu
|
Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying
Nian Wu, Song-Chun Zhu, Jianfeng Gao
|
Chameleon: Plug-and-Play Compositional Reasoning with Large Language
Models
|
31 pages, 9 figures. Project page: https://chameleon-llm.github.io
| null | null | null |
cs.CL cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have achieved remarkable progress in solving
various natural language processing tasks due to emergent reasoning abilities.
However, LLMs have inherent limitations as they are incapable of accessing
up-to-date information (stored on the Web or in task-specific knowledge bases),
using external tools, and performing precise mathematical and logical
reasoning. In this paper, we present Chameleon, an AI system that mitigates
these limitations by augmenting LLMs with plug-and-play modules for
compositional reasoning. Chameleon synthesizes programs by composing various
tools (e.g., LLMs, off-the-shelf vision models, web search engines, Python
functions, and heuristic-based modules) for accomplishing complex reasoning
tasks. At the heart of Chameleon is an LLM-based planner that assembles a
sequence of tools to execute to generate the final response. We showcase the
effectiveness of Chameleon on two multi-modal knowledge-intensive reasoning
tasks: ScienceQA and TabMWP. Chameleon, powered by GPT-4, achieves an 86.54%
overall accuracy on ScienceQA, improving the best published few-shot result by
11.37%. On TabMWP, GPT-4-powered Chameleon improves the accuracy by 17.0%,
lifting the state of the art to 98.78%. Our analysis also shows that the
GPT-4-powered planner exhibits more consistent and rational tool selection via
inferring potential constraints from instructions, compared to a
ChatGPT-powered planner.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 17:47:47 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 17:52:19 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Lu",
"Pan",
""
],
[
"Peng",
"Baolin",
""
],
[
"Cheng",
"Hao",
""
],
[
"Galley",
"Michel",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Wu",
"Ying Nian",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Gao",
"Jianfeng",
""
]
] |
new_dataset
| 0.979032 |
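
Chameleon's core loop is an LLM-based planner that assembles a sequence of tools whose outputs feed the final response. A minimal sketch of that plan-then-execute loop is below; the planner interface, the shared-context convention, and the example tool names are illustrative assumptions rather than the released implementation.

```python
def chameleon_answer(query, plan_with_llm, tools):
    """Plan-then-execute sketch: an LLM-based planner maps the query to an
    ordered list of tool names; each tool then reads and augments a shared
    context dict, and the final context holds the answer."""
    plan = plan_with_llm(query, available_tools=list(tools))
    # e.g. plan == ["image_captioner", "knowledge_retrieval", "solution_generator"]
    context = {"query": query}
    for tool_name in plan:
        context = tools[tool_name](context)   # each module is a plug-and-play callable
    return context.get("answer")
```
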
2305.02559
|
Nils Loose
|
Nils Loose, Felix M\"achtle, Claudius Pott, Volodymyr Bezsmertnyi, and
Thomas Eisenbarth
|
Madvex: Instrumentation-based Adversarial Attacks on Machine Learning
Malware Detection
|
20 pages. To be published in The 20th Conference on Detection of
Intrusions and Malware & Vulnerability Assessment (DIMVA 2023)
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
WebAssembly (Wasm) is a low-level binary format for web applications, which
has found widespread adoption due to its improved performance and compatibility
with existing software. However, the popularity of Wasm has also led to its
exploitation for malicious purposes, such as cryptojacking, where malicious
actors use a victim's computing resources to mine cryptocurrencies without
their consent. To counteract this threat, machine learning-based detection
methods aiming to identify cryptojacking activities within Wasm code have
emerged. It is well-known that neural networks are susceptible to adversarial
attacks, where inputs to a classifier are perturbed with minimal changes that
result in a gross misclassification. While applying such changes to images is
easy, manipulating binaries in an automated fashion to evade malware
classification without changing their functionality is non-trivial. In this
work, we propose a new approach to include adversarial examples in the code
section of binaries via instrumentation. The introduced gadgets allow for the
inclusion of arbitrary bytes, enabling efficient adversarial attacks that
reliably bypass state-of-the-art machine learning classifiers such as the
CNN-based Minos recently proposed at NDSS 2021. We analyze the cost and
reliability of instrumentation-based adversarial example generation and show
that the approach works reliably at minimal size and performance overheads.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 05:25:33 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 09:28:54 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Loose",
"Nils",
""
],
[
"Mächtle",
"Felix",
""
],
[
"Pott",
"Claudius",
""
],
[
"Bezsmertnyi",
"Volodymyr",
""
],
[
"Eisenbarth",
"Thomas",
""
]
] |
new_dataset
| 0.999688 |