| column | type | notes |
|---|---|---|
| id | string | lengths 9-10 |
| submitter | string | lengths 2-52, nullable (⌀) |
| authors | string | lengths 4-6.51k |
| title | string | lengths 4-246 |
| comments | string | lengths 1-523, nullable (⌀) |
| journal-ref | string | lengths 4-345, nullable (⌀) |
| doi | string | lengths 11-120, nullable (⌀) |
| report-no | string | lengths 2-243, nullable (⌀) |
| categories | string | lengths 5-98 |
| license | string | 9 classes |
| abstract | string | lengths 33-3.33k |
| versions | list | version history |
| update_date | timestamp[s] | |
| authors_parsed | list | parsed author names |
| prediction | string | 1 class (new_dataset) |
| probability | float64 | 0.95-1 |

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
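This preview reads like a Hugging Face dataset-viewer export. A minimal sketch for loading such a dataset with the `datasets` library follows; the repository id is a placeholder, since the actual dataset name is not shown here.

```python
# A minimal loading sketch. The repository id below is a placeholder;
# substitute the actual dataset name.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-predictions", split="train")  # hypothetical id

# Each record follows the schema above.
for row in ds.select(range(3)):
    print(row["id"], row["title"], row["prediction"], row["probability"])
```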
2103.09151
|
Han Wu
|
Han Wu, Syed Yunas, Sareh Rowlands, Wenjie Ruan, and Johan Wahlstrom
|
Adversarial Driving: Attacking End-to-End Autonomous Driving
|
Accepted by IEEE Intelligent Vehicle Symposium, 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As research in deep neural networks advances, deep convolutional networks
become promising for autonomous driving tasks. In particular, there is an
emerging trend of employing end-to-end neural network models for autonomous
driving. However, previous research has shown that deep neural network
classifiers are vulnerable to adversarial attacks. For regression tasks,
however, the effect of adversarial attacks is not as well understood. In this
research, we devise two white-box targeted attacks against end-to-end
autonomous driving models. Our attacks manipulate the behavior of the
autonomous driving system by perturbing the input image. In an average of 800
attacks with the same attack strength (epsilon=1), the image-specific and
image-agnostic attacks deviate the steering angle from the original output by
0.478 and 0.111, respectively, which is much stronger than random noise, which
only perturbs the steering angle by 0.002 (the steering angle ranges over
[-1, 1]). Both attacks can be initiated in real time on CPUs without employing
GPUs. Demo video: https://youtu.be/I0i8uN2oOP0.
|
[
{
"version": "v1",
"created": "Tue, 16 Mar 2021 15:47:34 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Mar 2021 14:04:36 GMT"
},
{
"version": "v3",
"created": "Wed, 24 Aug 2022 16:42:49 GMT"
},
{
"version": "v4",
"created": "Fri, 16 Sep 2022 17:44:13 GMT"
},
{
"version": "v5",
"created": "Wed, 1 Feb 2023 10:12:11 GMT"
},
{
"version": "v6",
"created": "Tue, 4 Apr 2023 14:53:04 GMT"
},
{
"version": "v7",
"created": "Wed, 31 May 2023 10:51:04 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Wu",
"Han",
""
],
[
"Yunas",
"Syed",
""
],
[
"Rowlands",
"Sareh",
""
],
[
"Ruan",
"Wenjie",
""
],
[
"Wahlstrom",
"Johan",
""
]
] |
new_dataset
| 0.99391 |
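The abstract above devises white-box targeted attacks that perturb an input image to deviate a steering regressor's output. A minimal PyTorch sketch of an image-specific variant follows; the model interface, loss, and step sizes are assumptions for illustration, not the authors' implementation.

```python
import torch

# A minimal sketch: push the predicted steering angle toward a target
# by iteratively perturbing the image within an epsilon budget.
# `model` maps an image tensor to an angle in [-1, 1] (assumed interface).
def attack_steering(model, image, target_angle, epsilon=1.0, steps=20, alpha=0.1):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        pred = model(image + delta)
        loss = (pred - target_angle).pow(2).mean()  # distance to the attacker's target
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()      # signed-gradient (FGSM-style) step
            delta.clamp_(-epsilon, epsilon)         # stay within the attack budget
            delta.grad.zero_()
    return (image + delta).detach()
```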
2202.05619
|
Ehud Shapiro
|
Ehud Shapiro
|
Grassroots Cryptocurrencies: A Foundation for a Grassroots Digital
Economy
| null | null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Grassroots cryptocurrencies are a digital means for turning mutual trust into
liquidity. Their coins are units of debt that can be issued digitally by anyone
-- people, communities, corporations, banks, municipalities and governments --
and traded by anyone. The purpose of grassroots cryptocurrencies is to provide
a foundation for a grassroots digital economy. With grassroots
cryptocurrencies, local digital economies can emerge without initial capital or
external credit (beyond the financing of smartphones), and gradually merge into
one global digital economy.
In this paper we introduce the principles that underlie grassroots
cryptocurrencies; elaborate economic scenarios derived from these principles;
specify the Grassroots Cryptocurrencies protocol formally via multiagent
transition systems; provide it with an implementation by a grassroots
dissemination protocol; and prove the implementation correct, fault-resilient
and grassroots.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 14:00:06 GMT"
},
{
"version": "v10",
"created": "Thu, 14 Apr 2022 14:16:30 GMT"
},
{
"version": "v11",
"created": "Fri, 8 Jul 2022 07:04:51 GMT"
},
{
"version": "v12",
"created": "Sat, 30 Jul 2022 15:34:40 GMT"
},
{
"version": "v13",
"created": "Fri, 21 Oct 2022 14:17:22 GMT"
},
{
"version": "v14",
"created": "Sun, 29 Jan 2023 18:42:11 GMT"
},
{
"version": "v15",
"created": "Wed, 31 May 2023 09:30:28 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 01:07:21 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Feb 2022 13:39:21 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Mar 2022 20:41:03 GMT"
},
{
"version": "v5",
"created": "Sat, 12 Mar 2022 16:51:08 GMT"
},
{
"version": "v6",
"created": "Fri, 8 Apr 2022 15:05:55 GMT"
},
{
"version": "v7",
"created": "Mon, 11 Apr 2022 00:34:23 GMT"
},
{
"version": "v8",
"created": "Tue, 12 Apr 2022 15:27:39 GMT"
},
{
"version": "v9",
"created": "Wed, 13 Apr 2022 12:22:39 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Shapiro",
"Ehud",
""
]
] |
new_dataset
| 0.999905 |
2205.14484
|
Hans Hanley
|
Hans W. A. Hanley, Deepak Kumar, Zakir Durumeric
|
Happenstance: Utilizing Semantic Search to Track Russian State Media
Narratives about the Russo-Ukrainian War On Reddit
|
Accepted to ICWSM 2023
| null | null | null |
cs.SI cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the buildup to and in the weeks following the Russian Federation's
invasion of Ukraine, Russian state media outlets output torrents of misleading
and outright false information. In this work, we study this coordinated
information campaign in order to understand the most prominent state media
narratives touted by the Russian government to English-speaking audiences. To
do this, we first perform sentence-level topic analysis using the
large-language model MPNet on articles published by ten different pro-Russian
propaganda websites including the new Russian "fact-checking" website
waronfakes.com. Within this ecosystem, we show that smaller websites like
katehon.com were highly effective at publishing topics that were later echoed
by other Russian sites. After analyzing this set of Russian information
narratives, we then analyze their correspondence with narratives and topics of
discussion on the r/Russia subreddit and 10 other political subreddits. Using MPNet and a
semantic search algorithm, we map these subreddits' comments to the set of
topics extracted from our set of Russian websites, finding that 39.6% of
r/Russia comments corresponded to narratives from pro-Russian propaganda
websites compared to 8.86% on r/politics.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 16:54:53 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Oct 2022 19:25:06 GMT"
},
{
"version": "v3",
"created": "Tue, 30 May 2023 20:45:34 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Hanley",
"Hans W. A.",
""
],
[
"Kumar",
"Deepak",
""
],
[
"Durumeric",
"Zakir",
""
]
] |
new_dataset
| 0.985058 |
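The study above maps subreddit comments to Russian state-media narratives via MPNet embeddings and semantic search. A minimal sketch of that matching step with sentence-transformers follows; the model name and toy strings are assumptions, not the paper's exact pipeline.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")  # an MPNet sentence encoder (assumed)
topics = ["narrative about topic A", "narrative about topic B"]
comments = ["an example subreddit comment"]

t_emb = model.encode(topics, convert_to_tensor=True)
c_emb = model.encode(comments, convert_to_tensor=True)
scores = util.cos_sim(c_emb, t_emb)   # comment-to-narrative similarity matrix
best = scores.argmax(dim=1)           # closest narrative for each comment
```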
2206.08356
|
Rohit Girdhar
|
Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev
Alwala, Armand Joulin, Ishan Misra
|
OmniMAE: Single Model Masked Pretraining on Images and Videos
|
CVPR 2023. Code/models: https://github.com/facebookresearch/omnivore
| null | null | null |
cs.CV cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer-based architectures have become competitive across a variety of
visual domains, most notably images and videos. While prior work studies these
modalities in isolation, having a common architecture suggests that one can
train a single unified model for multiple visual modalities. Prior attempts at
unified modeling typically use architectures tailored for vision tasks, or
obtain worse performance compared to single modality models. In this work, we
show that masked autoencoding can be used to train a simple Vision Transformer
on images and videos, without requiring any labeled data. This single model
learns visual representations that are comparable to or better than
single-modality representations on both image and video benchmarks, while using
a much simpler architecture. Furthermore, this model can be learned by dropping
90% of the image and 95% of the video patches, enabling extremely fast training
of huge model architectures. In particular, we show that our single ViT-Huge
model can be finetuned to achieve 86.6% on ImageNet and 75.5% on the
challenging Something Something-v2 video benchmark, setting a new
state-of-the-art.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 17:57:01 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 04:53:11 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Girdhar",
"Rohit",
""
],
[
"El-Nouby",
"Alaaeldin",
""
],
[
"Singh",
"Mannat",
""
],
[
"Alwala",
"Kalyan Vasudev",
""
],
[
"Joulin",
"Armand",
""
],
[
"Misra",
"Ishan",
""
]
] |
new_dataset
| 0.965054 |
2209.01962
|
Han Wu
|
Han Wu, Syed Yunas, Sareh Rowlands, Wenjie Ruan, and Johan Wahlstrom
|
Adversarial Detection: Attacking Object Detection in Real Time
|
Accepted by IEEE Intelligent Vehicle Symposium, 2023
| null | null | null |
cs.AI cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Intelligent robots rely on object detection models to perceive the
environment. Following advances in deep learning security, it has been revealed
that object detection models are vulnerable to adversarial attacks. However,
prior research primarily focuses on attacking static images or offline videos.
Therefore, it is still unclear if such attacks could jeopardize real-world
robotic applications in dynamic environments. This paper bridges this gap by
presenting the first real-time online attack against object detection models.
We devise three attacks that fabricate bounding boxes for nonexistent objects
at desired locations. The attacks achieve a success rate of about 90% within
about 20 iterations. The demo video is available at
https://youtu.be/zJZ1aNlXsMU.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 13:32:41 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 01:54:33 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Feb 2023 10:10:02 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Apr 2023 14:56:39 GMT"
},
{
"version": "v5",
"created": "Wed, 31 May 2023 10:54:05 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Wu",
"Han",
""
],
[
"Yunas",
"Syed",
""
],
[
"Rowlands",
"Sareh",
""
],
[
"Ruan",
"Wenjie",
""
],
[
"Wahlstrom",
"Johan",
""
]
] |
new_dataset
| 0.998645 |
2209.11255
|
Dening Lu
|
Dening Lu, Kyle Gao, Qian Xie, Linlin Xu, Jonathan Li
|
3DGTN: 3D Dual-Attention GLocal Transformer Network for Point Cloud
Classification and Segmentation
|
10 pages, 6 figures, 4 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Although the application of Transformers in 3D point cloud processing has
achieved significant progress and success, it is still challenging for existing
3D Transformer methods to efficiently and accurately learn both valuable global
features and valuable local features for improved applications. This paper
presents a novel point cloud representational learning network, called 3D Dual
Self-attention Global Local (GLocal) Transformer Network (3DGTN), for improved
feature learning in both classification and segmentation tasks, with the
following key contributions. First, a GLocal Feature Learning (GFL) block with
the dual self-attention mechanism (i.e., a novel Point-Patch Self-Attention,
called PPSA, and a channel-wise self-attention) is designed to efficiently
learn the GLocal context information. Second, the GFL block is integrated with
a multi-scale Graph Convolution-based Local Feature Aggregation (LFA) block,
leading to a Global-Local (GLocal) information extraction module that can
efficiently capture critical information. Third, a series of GLocal modules are
used to construct a new hierarchical encoder-decoder structure to enable the
learning of "GLocal" information in different scales in a hierarchical manner.
The proposed framework is evaluated on both classification and segmentation
datasets, demonstrating that the proposed method is capable of outperforming
many state-of-the-art methods on both classification and segmentation tasks.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 14:34:21 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 02:20:58 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Lu",
"Dening",
""
],
[
"Gao",
"Kyle",
""
],
[
"Xie",
"Qian",
""
],
[
"Xu",
"Linlin",
""
],
[
"Li",
"Jonathan",
""
]
] |
new_dataset
| 0.981704 |
2210.12250
|
Christopher Agia
|
Christopher Agia and Toki Migimatsu and Jiajun Wu and Jeannette Bohg
|
STAP: Sequencing Task-Agnostic Policies
|
Video:
https://drive.google.com/file/d/1zp3qFeZLACNPsGLLP7p6q9X1tuA_PGEo/view.
Project page: https://sites.google.com/stanford.edu/stap. 12 pages, 7
figures. In proceedings of the IEEE International Conference on Robotics and
Automation (ICRA) 2023. The first two authors contributed equally
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in robotic skill acquisition have made it possible to build
general-purpose libraries of learned skills for downstream manipulation tasks.
However, naively executing these skills one after the other is unlikely to
succeed without accounting for dependencies between actions prevalent in
long-horizon plans. We present Sequencing Task-Agnostic Policies (STAP), a
scalable framework for training manipulation skills and coordinating their
geometric dependencies at planning time to solve long-horizon tasks never seen
by any skill during training. Given that Q-functions encode a measure of skill
feasibility, we formulate an optimization problem to maximize the joint success
of all skills sequenced in a plan, which we estimate by the product of their
Q-values. Our experiments indicate that this objective function approximates
ground truth plan feasibility and, when used as a planning objective, reduces
myopic behavior and thereby promotes long-horizon task success. We further
demonstrate how STAP can be used for task and motion planning by estimating the
geometric feasibility of skill sequences provided by a task planner. We
evaluate our approach in simulation and on a real robot. Qualitative results
and code are made available at https://sites.google.com/stanford.edu/stap.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 21:09:37 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Mar 2023 19:19:44 GMT"
},
{
"version": "v3",
"created": "Wed, 31 May 2023 10:53:34 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Agia",
"Christopher",
""
],
[
"Migimatsu",
"Toki",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Bohg",
"Jeannette",
""
]
] |
new_dataset
| 0.956944 |
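The STAP abstract above scores a candidate skill sequence by the product of per-skill Q-values. A minimal sketch of that objective; the `q_functions` interface is hypothetical.

```python
# Joint feasibility of a plan, estimated as the product of Q-values.
# Each q(s, a) is assumed to return a success probability in [0, 1].
def plan_feasibility(q_functions, states, actions):
    score = 1.0
    for q, s, a in zip(q_functions, states, actions):
        score *= q(s, a)
    return score
```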
2212.05339
|
Yang You
|
Haichen Huang and Jiarui Fang and Hongxin Liu and Shenggui Li and Yang
You
|
Elixir: Train a Large Language Model on a Small GPU Cluster
| null | null | null | null |
cs.DC cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, large language models have achieved great success due to
their unprecedented size. However, training these models poses a challenge for
most researchers as it requires a substantial number of GPUs. To reduce GPU
memory usage, memory partitioning and memory offloading have been proposed.
These approaches eliminate memory redundancies and offload memory usage to the
CPU and NVMe memory, respectively, enabling training on small GPU clusters.
However, directly deploying these solutions often leads to suboptimal
efficiency. Only experienced experts can unleash the full potential of hardware
by carefully tuning the distributed configuration. Thus, we present a novel
solution, Elixir, which automates efficient large-model training based on
pre-runtime model profiling. Elixir aims to identify the optimal combination of
partitioning and offloading techniques to maximize training throughput. In our
experiments, Elixir significantly outperforms the current state-of-the-art
baseline. Our optimal configuration achieves up to a 3.4$\times$ speedup on
GPT-2 models compared with SOTA solutions. We hope that our work will benefit
individuals who lack computing resources and expertise, granting them access to
large models. The beta version of Elixir is now available at
https://github.com/hpcaitech/ColossalAI/tree/feature/elixir.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 17:26:05 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Feb 2023 14:38:09 GMT"
},
{
"version": "v3",
"created": "Wed, 31 May 2023 13:56:53 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Huang",
"Haichen",
""
],
[
"Fang",
"Jiarui",
""
],
[
"Liu",
"Hongxin",
""
],
[
"Li",
"Shenggui",
""
],
[
"You",
"Yang",
""
]
] |
new_dataset
| 0.986572 |
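Elixir, per the abstract above, profiles the model before runtime and picks the partitioning/offloading combination with the best throughput. A minimal sketch of that selection step; the configuration names and the profiler are placeholders.

```python
# Pick the configuration with the highest measured throughput.
# `profile_throughput` is a placeholder for a pre-runtime profiler.
def best_config(candidates, profile_throughput):
    return max(candidates, key=profile_throughput)

candidates = [
    {"partition": "full", "offload": None},
    {"partition": "full", "offload": "cpu"},
    {"partition": "partial", "offload": "nvme"},
]
```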
2212.13075
|
Giuseppe Stragapede
|
Giuseppe Stragapede, Paula Delgado-Santos, Ruben Tolosana, Ruben
Vera-Rodriguez, Richard Guest, Aythami Morales
|
TypeFormer: Transformers for Mobile Keystroke Biometrics
| null | null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The broad usage of mobile devices nowadays, the sensitivity of the
information they contain, and the shortcomings of current mobile user
authentication methods call for novel, secure, and unobtrusive solutions to
verify users' identity. In this article, we propose TypeFormer, a novel
Transformer architecture to model free-text keystroke dynamics performed on
mobile devices for the purpose of user authentication. The proposed model
consists of Temporal and Channel Modules enclosing two Long Short-Term Memory
(LSTM) recurrent layers, Gaussian Range Encoding (GRE), a multi-head
Self-Attention mechanism, and a Block-Recurrent structure. Experimenting on one
of the largest public databases to date, the Aalto mobile keystroke database,
TypeFormer outperforms current state-of-the-art systems, achieving Equal Error
Rate (EER) values of 3.25% using only 5 enrolment sessions of 50 keystrokes
each. In this way, we contribute to reducing the traditional performance gap of
the challenging mobile free-text scenario with respect to its desktop and
fixed-text counterparts. Additionally, we analyse the behaviour of the model
under different experimental configurations, such as the length of the
keystroke sequences and the number of enrolment sessions, showing margin for
improvement with more enrolment data. Finally, a cross-database evaluation is
carried out, demonstrating the robustness of the features extracted by
TypeFormer in comparison with existing approaches.
|
[
{
"version": "v1",
"created": "Mon, 26 Dec 2022 10:25:06 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 11:38:22 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Stragapede",
"Giuseppe",
""
],
[
"Delgado-Santos",
"Paula",
""
],
[
"Tolosana",
"Ruben",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
],
[
"Guest",
"Richard",
""
],
[
"Morales",
"Aythami",
""
]
] |
new_dataset
| 0.99878 |
2301.04391
|
Ehud Shapiro
|
Ehud Shapiro
|
Grassroots Distributed Systems for Digital Sovereignty: Concept,
Examples, Implementation and Applications
|
arXiv admin note: text overlap with arXiv:2202.05619
| null | null | null |
cs.NI cs.DC cs.MA cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Informally, a distributed system is grassroots if it can have autonomous,
independently-deployed instances -- geographically and over time -- that can
interoperate once interconnected. An example would be a serverless
smartphone-based social network supporting multiple independently-budding
communities that merge when a member of one community becomes also a member of
another. Grassroots applications are potentially important as they may provide
a foundation for digital sovereignty, which we interpret as the ability of
people to conduct their social, economic, civic, and political lives in the
digital realm solely using the networked computing devices they own and operate
(e.g., smartphones), free of third-party control, surveillance, manipulation,
coercion, or value-extraction (e.g., by global digital platforms such as
Facebook or Bitcoin). Here, we formalize the notion of grassroots distributed
systems and grassroots implementations; specify an abstract grassroots
dissemination protocol; describe and prove an implementation of grassroots
dissemination for the model of asynchrony; extend the implementation to mobile
(address-changing) devices that communicate via an unreliable network (e.g.
smartphones using UDP); and illustrate how grassroots dissemination can realize
applications that support digital sovereignty -- grassroots social networking
and sovereign cryptocurrencies. The mathematical construction employs
distributed multiagent transition systems to define the notions of grassroots
protocols and grassroots implementations, to specify grassroots dissemination
protocols and their implementation, and to prove their correctness. The
implementation uses the blocklace -- a partially-ordered DAG-like
generalization of the blockchain.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 10:31:53 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Jan 2023 18:49:30 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Apr 2023 11:47:48 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Shapiro",
"Ehud",
""
]
] |
new_dataset
| 0.999057 |
2302.00624
|
Zejia Weng
|
Zejia Weng, Xitong Yang, Ang Li, Zuxuan Wu, Yu-Gang Jiang
|
Open-VCLIP: Transforming CLIP to an Open-vocabulary Video Model via
Interpolated Weight Optimization
|
12 pages, 4 figures, ICML 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contrastive Language-Image Pretraining (CLIP) has demonstrated impressive
zero-shot learning abilities for image understanding, yet limited effort has
been made to investigate CLIP for zero-shot video recognition. We introduce
Open-VCLIP, a simple yet effective approach that transforms CLIP into a strong
zero-shot video classifier that can recognize unseen actions and events at test
time. Our framework extends CLIP with minimal modifications to model
spatial-temporal relationships in videos, making it a specialized video
classifier, while striving for generalization. We formally show that training
an Open-VCLIP is equivalent to continual learning with zero historical data. To
address this problem, we propose Interpolated Weight Optimization, which
utilizes the benefit of weight interpolation in both training and test time. We
evaluate our method on three popular and challenging action recognition
datasets following various zero-shot evaluation protocols and we demonstrate
our approach outperforms state-of-the-art methods by clear margins. In
particular, we achieve 87.9%, 58.3%, 81.1% zero-shot accuracy on UCF, HMDB and
Kinetics-600 respectively, outperforming state-of-the-art methods by 8.3%, 7.8%
and 12.2%. Code is released at https://github.com/wengzejia1/Open-VCLIP.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 17:44:17 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 03:38:23 GMT"
},
{
"version": "v3",
"created": "Wed, 31 May 2023 02:54:28 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Weng",
"Zejia",
""
],
[
"Yang",
"Xitong",
""
],
[
"Li",
"Ang",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] |
new_dataset
| 0.985327 |
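Interpolated Weight Optimization, as the abstract above describes at a high level, blends the original CLIP weights with fine-tuned video weights. A minimal sketch of plain state-dict interpolation; the paper's train- and test-time schedule is more involved.

```python
# Linearly interpolate two compatible state dicts (e.g., of torch tensors).
def interpolate_weights(clip_state, video_state, alpha=0.5):
    return {k: (1 - alpha) * clip_state[k] + alpha * video_state[k]
            for k in clip_state}
```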
2305.05665
|
Rohit Girdhar
|
Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan
Vasudev Alwala, Armand Joulin, Ishan Misra
|
ImageBind: One Embedding Space To Bind Them All
|
CVPR 2023 (Highlighted Paper). Website:
https://imagebind.metademolab.com/ Code/Models:
https://github.com/facebookresearch/ImageBind
| null | null | null |
cs.CV cs.AI cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ImageBind, an approach to learn a joint embedding across six
different modalities - images, text, audio, depth, thermal, and IMU data. We
show that all combinations of paired data are not necessary to train such a
joint embedding, and only image-paired data is sufficient to bind the
modalities together. ImageBind can leverage recent large scale vision-language
models, and extends their zero-shot capabilities to new modalities just by
using their natural pairing with images. It enables novel emergent applications
'out-of-the-box' including cross-modal retrieval, composing modalities with
arithmetic, cross-modal detection and generation. The emergent capabilities
improve with the strength of the image encoder and we set a new
state-of-the-art on emergent zero-shot recognition tasks across modalities,
outperforming specialist supervised models. Finally, we show strong few-shot
recognition results outperforming prior work, and that ImageBind serves as a
new way to evaluate vision models for visual and non-visual tasks.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 17:59:07 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 04:57:12 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Girdhar",
"Rohit",
""
],
[
"El-Nouby",
"Alaaeldin",
""
],
[
"Liu",
"Zhuang",
""
],
[
"Singh",
"Mannat",
""
],
[
"Alwala",
"Kalyan Vasudev",
""
],
[
"Joulin",
"Armand",
""
],
[
"Misra",
"Ishan",
""
]
] |
new_dataset
| 0.997603 |
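Among the emergent applications listed above is composing modalities with embedding arithmetic in the joint space. A minimal sketch, assuming unit-normalized embeddings used for cosine retrieval.

```python
import numpy as np

# Add embeddings from two modalities and renormalize; the result can be
# used as a retrieval query in the joint embedding space.
def compose(image_emb, audio_emb):
    v = image_emb + audio_emb
    return v / np.linalg.norm(v)
```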
2305.05775
|
Baibhab Chatterjee
|
Ovishake Sen and Baibhab Chatterjee
|
Modified Ring-Oscillator Physical Unclonable Function (RO-PUF) based
PRBS Generation as a Device Signature in Distributed Brain Implants
|
5 pages, 5 Figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose and evaluate a method of generating low-cost device
signatures for distributed wireless brain implants, using a Pseudo-Random
Binary Sequence (PRBS) Generator that utilizes a modified Ring-Oscillator-based
Physical Unclonable Function (RO-PUF). The modified RO-PUF's output is used as
a seed for the PRBS generator, which creates a multi-bit output that can be
mapped to a time-slot when the implant is allowed to communicate with the
external world using duty-cycled time-division multiplexing. A 9-bit PRBS
generator is shown in hardware (with a TSMC 65 nm test chip implementation)
that demonstrates < 100 nW power consumption in measurement (72% lower power
and 78% lower area than a traditional 9-bit RO-PUF implementation), which
supports 26 implants with the probability of time-slot collision being < 50%.
This potentially creates a pathway for low-cost device signature generation for
highly resource-constrained scenarios such as wireless, distributed neural
implants.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 21:33:37 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 12:48:38 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Sen",
"Ovishake",
""
],
[
"Chatterjee",
"Baibhab",
""
]
] |
new_dataset
| 0.95568 |
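The abstract above seeds a PRBS generator with a modified RO-PUF response and maps its multi-bit output to a communication time slot. A minimal software sketch of a 9-bit PRBS follows; the tap polynomial (x^9 + x^5 + 1) and the seed value are assumptions, and the paper's hardware generator may differ.

```python
# 9-bit maximal-length LFSR (PRBS9), seeded by a PUF response.
def prbs9(seed: int, n_bits: int = 9) -> int:
    state = (seed & 0x1FF) or 1                    # avoid the all-zero lock-up state
    out = 0
    for _ in range(n_bits):
        bit = ((state >> 8) ^ (state >> 4)) & 1    # taps at positions 9 and 5
        state = ((state << 1) | bit) & 0x1FF
        out = (out << 1) | bit
    return out

puf_seed = 0b101101001        # stand-in for the RO-PUF response
slot = prbs9(puf_seed) % 26   # map to one of 26 implant time slots
```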
2305.07667
|
Iris Berent Dr.
|
Iris Berent, Alexzander Sansiveri
|
Davinci the Dualist: the mind-body divide in large language models and
in human learners
| null | null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
A large literature suggests that people are intuitive Dualists -- they
consider the mind ethereal, distinct from the body. Past research also shows
that Dualism emerges, in part, via learning (e.g., Barlev & Shtulman, 2021).
But whether learning is sufficient to give rise to Dualism is unknown. The
evidence from human learners does not settle this question, because humans are
endowed not only with general learning capacities but also with core knowledge
capacities, and recent results suggest that core knowledge begets Dualism
(Berent, Theodore & Valencia, 2021; Berent, 2023). To evaluate the role of
learning, here we probe for a mind-body divide in Davinci -- a large language
model (LLM) that is devoid of any innate core knowledge. We show that Davinci
still leans towards Dualism, and that this bias increases systematically with
the learner's inductive potential. Thus, davinci (a GPT-3 model) exhibits mild
Dualist tendencies, whereas its descendant, text-davinci-003 (a GPT-3.5 model),
shows a full-blown bias. It selectively considers thoughts (epistemic states)
as disembodied -- as unlikely to show up in the body (in the brain), but not in
its absence (after death). While Davinci's performance is constrained by its
syntactic limitations, and it differs from humans, its Dualist bias is robust.
These results demonstrate that the mind-body divide is partly learnable from
experience. They also show how, as LLMs are exposed to human narratives, they
induce not only human knowledge but also human biases.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 12:28:09 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 21:00:50 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Berent",
"Iris",
""
],
[
"Sansiveri",
"Alexzander",
""
]
] |
new_dataset
| 0.999069 |
2305.11694
|
Chaitanya Malaviya
|
Chaitanya Malaviya, Peter Shaw, Ming-Wei Chang, Kenton Lee, Kristina
Toutanova
|
QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set
Operations
|
ACL 2023; Dataset available at
https://github.com/google-research/language/tree/master/language/quest
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Formulating selective information needs results in queries that implicitly
specify set operations, such as intersection, union, and difference. For
instance, one might search for "shorebirds that are not sandpipers" or
"science-fiction films shot in England". To study the ability of retrieval
systems to meet such information needs, we construct QUEST, a dataset of 3357
natural language queries with implicit set operations that map to a set of
entities corresponding to Wikipedia documents. The dataset challenges models to
match multiple constraints mentioned in queries with corresponding evidence in
documents and correctly perform various set operations. The dataset is
constructed semi-automatically using Wikipedia category names. Queries are
automatically composed from individual categories, then paraphrased and further
validated for naturalness and fluency by crowdworkers. Crowdworkers also assess
the relevance of entities based on their documents and highlight attribution of
query constraints to spans of document text. We analyze several modern
retrieval systems, finding that they often struggle on such queries. Queries
involving negation and conjunction are particularly challenging and systems are
further challenged with combinations of these operations.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 14:19:32 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 05:11:21 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Malaviya",
"Chaitanya",
""
],
[
"Shaw",
"Peter",
""
],
[
"Chang",
"Ming-Wei",
""
],
[
"Lee",
"Kenton",
""
],
[
"Toutanova",
"Kristina",
""
]
] |
new_dataset
| 0.999159 |
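The implicit set operations QUEST targets reduce to intersection, union, and difference over entity sets. A toy illustration using the abstract's own example queries; the entity lists are made up.

```python
shorebirds = {"sanderling", "dunlin", "common sandpiper"}
sandpipers = {"common sandpiper"}
scifi_films = {"Alien", "Brazil", "Moon"}
films_shot_in_england = {"Brazil", "Moon", "Notting Hill"}

# "shorebirds that are not sandpipers" -> set difference
print(shorebirds - sandpipers)
# "science-fiction films shot in England" -> set intersection
print(scifi_films & films_shot_in_england)
```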
2305.18978
|
Jia-Qi Yang
|
Jia-Qi Yang, Yucheng Xu, Jia-Lei Shen, Kebin Fan, De-Chuan Zhan, Yang
Yang
|
IDToolkit: A Toolkit for Benchmarking and Developing Inverse Design
Algorithms in Nanophotonics
|
KDD'23
| null | null | null |
cs.AI cs.LG physics.optics
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Aiding humans with scientific designs is one of the most exciting applications
of artificial intelligence (AI) and machine learning (ML), due to their potential
for the discovery of new drugs, design of new materials and chemical compounds,
etc. However, scientific design typically requires complex domain knowledge
that is not familiar to AI researchers. Further, scientific studies involve
professional skills to perform experiments and evaluations. These obstacles
prevent AI researchers from developing specialized methods for scientific
designs. To take a step towards easy-to-understand and reproducible research of
scientific design, we propose a benchmark for the inverse design of
nanophotonic devices, which can be verified computationally and accurately.
Specifically, we implemented three different nanophotonic design problems,
namely a radiative cooler, a selective emitter for thermophotovoltaics, and
structural color filters, all of which are different in design parameter
spaces, complexity, and design targets. The benchmark environments are
implemented with an open-source simulator. We further implemented 10 different
inverse design algorithms and compared them in a reproducible and fair
framework. The results revealed the strengths and weaknesses of existing
methods, which shed light on several future directions for developing more
efficient inverse design algorithms. Our benchmark can also serve as the
starting point for more challenging scientific design problems. The code of
IDToolkit is available at https://github.com/ThyrixYang/IDToolkit.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 12:19:33 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 09:06:25 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Yang",
"Jia-Qi",
""
],
[
"Xu",
"Yucheng",
""
],
[
"Shen",
"Jia-Lei",
""
],
[
"Fan",
"Kebin",
""
],
[
"Zhan",
"De-Chuan",
""
],
[
"Yang",
"Yang",
""
]
] |
new_dataset
| 0.984397 |
2305.19282
|
Roshanak Ghods
|
Vahid Reza Nafisi, Roshanak Ghods
|
A Telecare System for Use in Traditional Persian Medicine
| null | null |
10.2174/1874120702115010105
| null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Persian Medicine (PM) uses wrist temperature/humidity and pulse to determine
a person's health status and temperament. However, the diagnosis may depend on
the physician's interpretation, hindering the combination of PM with modern
medical methods. This study proposes a system for measuring pulse signals and
temperament detection based on PM. The system uses recorded thermal
distribution, a temperament questionnaire, and a customized pulse measurement
device. The collected data can be sent to a physician via a telecare system for
interpretation and prescription of medications. The system was clinically
implemented for patient care, assessed the temperaments of 34 participants, and
recorded thermal images of the wrist, back of the hand, and entire face. The
study suggests that a customized device for measuring pulse waves and other
criteria based on PM can be incorporated into a telemedicine system, reducing
the dependency on PM specialists for diagnosis.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 05:20:01 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Nafisi",
"Vahid Reza",
""
],
[
"Ghods",
"Roshanak",
""
]
] |
new_dataset
| 0.999586 |
2305.19352
|
Artem Lykov
|
Artem Lykov and Dzmitry Tsetserukou
|
LLM-BRAIn: AI-driven Fast Generation of Robot Behaviour Tree based on
Large Language Model
|
10 pages, 5 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a novel approach to autonomous robot control, named
LLM-BRAIn, that makes possible robot behavior generation based on an
operator's commands. LLM-BRAIn is a transformer-based Large Language Model
(LLM) fine-tuned from the Stanford Alpaca 7B model to generate a robot
behavior tree (BT) from a text description. We train LLM-BRAIn on 8.5k
instruction-following demonstrations, generated in the style of self-instruct
using text-davinci-003. The developed model accurately builds complex robot
behavior while remaining small enough to be run on the robot's onboard
microcomputer. The model produces structurally and logically correct BTs and
can successfully handle instructions that were not present in the training
set. The experiment did not reveal any significant subjective differences
between BTs generated by LLM-BRAIn and those created by humans (on average,
participants were able to correctly distinguish between LLM-BRAIn-generated
BTs and human-created BTs in only 4.53 out of 10 cases, indicating that their
performance was close to random chance). The proposed approach can potentially
be applied to mobile robotics, drone operation, robot manipulator systems, and
Industry 4.0.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 18:28:54 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Lykov",
"Artem",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.99457 |
2305.19379
|
Mohammad Asif
|
Mohammad Asif, Diya Srivastava, Aditya Gupta, Uma Shanker Tiwary
|
Inter Subject Emotion Recognition Using Spatio-Temporal Features From
EEG Signal
| null | null | null | null |
cs.HC cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inter-subject or subject-independent emotion recognition has been a
challenging task in affective computing. This work presents an
easy-to-implement emotion recognition model that classifies emotions from EEG
signals subject-independently. It is based on the well-known EEGNet
architecture used in EEG-related BCIs. We used the Dataset on Emotion using
Naturalistic Stimuli (DENS), which contains Emotional Events -- the precise
timing information of the emotions that participants felt. The model combines
regular, depthwise, and separable CNN convolution layers to classify the
emotions. It has the capacity to learn the spatial features of the EEG
channels and the temporal variability of the EEG signals. The model is
evaluated on the valence space ratings and achieved an accuracy of 73.04%.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 07:43:19 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Asif",
"Mohammad",
""
],
[
"Srivastava",
"Diya",
""
],
[
"Gupta",
"Aditya",
""
],
[
"Tiwary",
"Uma Shanker",
""
]
] |
new_dataset
| 0.998402 |
2305.19426
|
Jingyuan She
|
Jingyuan Selena She, Christopher Potts, Samuel R. Bowman, Atticus
Geiger
|
ScoNe: Benchmarking Negation Reasoning in Language Models With
Fine-Tuning and In-Context Learning
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
A number of recent benchmarks seek to assess how well models handle natural
language negation. However, these benchmarks lack the controlled example
paradigms that would allow us to infer whether a model had learned how negation
morphemes semantically scope. To fill these analytical gaps, we present the
Scoped Negation NLI (ScoNe-NLI) benchmark, which contains contrast sets of six
examples with up to two negations where either zero, one, or both negative
morphemes affect the NLI label. We use ScoNe-NLI to assess fine-tuning and
in-context learning strategies. We find that RoBERTa and DeBERTa models solve
ScoNe-NLI after many-shot fine-tuning. For in-context learning, we test
InstructGPT models and find that most prompt strategies are not successful,
including those using step-by-step reasoning. To better understand this result,
we extend ScoNe with ScoNe-NLG, a sentence completion test set that embeds
negation reasoning in short narratives. Here, InstructGPT is successful, which
reveals the model can correctly reason about negation, but struggles to do so
on prompt-adapted NLI examples outside of its core pretraining regime.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 21:43:11 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"She",
"Jingyuan Selena",
""
],
[
"Potts",
"Christopher",
""
],
[
"Bowman",
"Samuel R.",
""
],
[
"Geiger",
"Atticus",
""
]
] |
new_dataset
| 0.99933 |
2305.19445
|
Deepayan Sanyal
|
Deepayan Sanyal, Joel Michelson, Yuan Yang, James Ainooson and
Maithilee Kunda
|
A Computational Account Of Self-Supervised Visual Learning From
Egocentric Object Play
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Research in child development has shown that embodied experience handling
physical objects contributes to many cognitive abilities, including visual
learning. One characteristic of such experience is that the learner sees the
same object from several different viewpoints. In this paper, we study how
learning signals that equate different viewpoints -- e.g., assigning similar
representations to different views of a single object -- can support robust
visual learning. We use the Toybox dataset, which contains egocentric videos of
humans manipulating different objects, and conduct experiments using a computer
vision framework for self-supervised contrastive learning. We find that
representations learned by equating different physical viewpoints of an object
benefit downstream image classification accuracy. Further experiments show that
this performance improvement is robust to variations in the gaps between
viewpoints, and that the benefits transfer to several different image
classification tasks.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 22:42:03 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Sanyal",
"Deepayan",
""
],
[
"Michelson",
"Joel",
""
],
[
"Yang",
"Yuan",
""
],
[
"Ainooson",
"James",
""
],
[
"Kunda",
"Maithilee",
""
]
] |
new_dataset
| 0.96575 |
2305.19487
|
Nardine Basta
|
Houssem Jmal, Firas Ben Hmida, Nardine Basta, Muhammad Ikram, Mohamed
Ali Kaafar, Andy Walker
|
SPGNN-API: A Transferable Graph Neural Network for Attack Paths
Identification and Autonomous Mitigation
| null | null | null | null |
cs.CR cs.NE cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Attack paths are the potential chain of malicious activities an attacker
performs to compromise network assets and acquire privileges through exploiting
network vulnerabilities. Attack path analysis helps organizations to identify
new/unknown chains of attack vectors that reach critical assets within the
network, as opposed to individual attack vectors in signature-based attack
analysis. Timely identification of attack paths enables proactive mitigation of
threats. Nevertheless, manual analysis of complex network configurations,
vulnerabilities, and security events to identify attack paths is rarely
feasible. This work proposes a novel transferable graph neural network-based
model for shortest path identification. The proposed shortest path detection
approach, integrated with a novel holistic and comprehensive model for
identifying potential network vulnerabilities interactions, is then utilized to
detect network attack paths. Our framework automates the risk assessment of
attack paths indicating the propensity of the paths to enable the compromise of
highly-critical assets (e.g., databases) given the network configuration,
assets' criticality, and the severity of the vulnerabilities in-path to the
asset. The proposed framework, named SPGNN-API, incorporates automated threat
mitigation through a proactive timely tuning of the network firewall rules and
zero-trust policies to break critical attack paths and bolster cyber defenses.
Our evaluation process is twofold: evaluating the performance of the shortest
path identification and assessing the attack path detection accuracy. Our
results show that SPGNN-API largely outperforms the baseline model for shortest
path identification with an average accuracy >= 95% and successfully detects
100% of the potentially compromised assets, outperforming the attack graph
baseline by 47%.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 01:48:12 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Jmal",
"Houssem",
""
],
[
"Hmida",
"Firas Ben",
""
],
[
"Basta",
"Nardine",
""
],
[
"Ikram",
"Muhammad",
""
],
[
"Kaafar",
"Mohamed Ali",
""
],
[
"Walker",
"Andy",
""
]
] |
new_dataset
| 0.971644 |
2305.19505
|
Jiaqi Gu
|
Jiaqi Gu, Hanqing Zhu, Chenghao Feng, Zixuan Jiang, Ray T. Chen, David
Z. Pan
|
M3ICRO: Machine Learning-Enabled Compact Photonic Tensor Core based on
PRogrammable Multi-Operand Multimode Interference
|
8 pages
| null | null | null |
cs.ET cs.LG physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Photonic computing shows promise for transformative advancements in machine
learning (ML) acceleration, offering ultra-fast speed, massive parallelism, and
high energy efficiency. However, current photonic tensor core (PTC) designs
based on standard optical components hinder scalability and compute density due
to their large spatial footprint. To address this, we propose an ultra-compact
PTC using customized programmable multi-operand multimode interference (MOMMI)
devices, named M3ICRO. The programmable MOMMI leverages the intrinsic light
propagation principle, providing a single-device programmable matrix unit
beyond the conventional computing paradigm of one multiply-accumulate (MAC)
operation per device. To overcome the optimization difficulty of customized
devices that often requires time-consuming simulation, we apply ML for optics
to predict the device behavior and enable a differentiable optimization flow.
We thoroughly investigate the reconfigurability and matrix expressivity of our
customized PTC, and introduce a novel block unfolding method to fully exploit
the computing capabilities of a complex-valued PTC for near-universal
real-valued linear transformations. Extensive evaluations demonstrate that
M3ICRO achieves a 3.4-9.6x smaller footprint, 1.6-4.4x higher speed, 10.6-42x
higher compute density, 3.7-12x higher system throughput, and superior noise
robustness compared to state-of-the-art coherent PTC designs, while maintaining
close-to-digital task accuracy across various ML benchmarks. Our code is
open-sourced at https://github.com/JeremieMelo/M3ICRO-MOMMI.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 02:34:36 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Gu",
"Jiaqi",
""
],
[
"Zhu",
"Hanqing",
""
],
[
"Feng",
"Chenghao",
""
],
[
"Jiang",
"Zixuan",
""
],
[
"Chen",
"Ray T.",
""
],
[
"Pan",
"David Z.",
""
]
] |
new_dataset
| 0.999691 |
2305.19603
|
Yong Man Ro
|
Jeongsoo Choi, Minsu Kim, Yong Man Ro
|
Intelligible Lip-to-Speech Synthesis with Speech Units
|
Interspeech 2023
| null | null | null |
cs.SD cs.CV eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel Lip-to-Speech synthesis (L2S) framework,
for synthesizing intelligible speech from a silent lip movement video.
Specifically, to complement the insufficient supervisory signal of the previous
L2S model, we propose to use quantized self-supervised speech representations,
named speech units, as an additional prediction target for the L2S model.
Therefore, the proposed L2S model is trained to generate multiple targets,
mel-spectrogram and speech units. As the speech units are discrete while
mel-spectrogram is continuous, the proposed multi-target L2S model can be
trained with strong content supervision, without using text-labeled data.
Moreover, to accurately convert the synthesized mel-spectrogram into a
waveform, we introduce a multi-input vocoder that can generate a clear waveform
even from blurry and noisy mel-spectrogram by referring to the speech units.
Extensive experimental results confirm the effectiveness of the proposed method
in L2S.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 07:17:32 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Choi",
"Jeongsoo",
""
],
[
"Kim",
"Minsu",
""
],
[
"Ro",
"Yong Man",
""
]
] |
new_dataset
| 0.979577 |
2305.19650
|
Dmitry Nikolaev
|
Dmitry Nikolaev and Collin F. Baker and Miriam R.L. Petruck and
Sebastian Padó
|
Adverbs, Surprisingly
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper begins with the premise that adverbs are neglected in
computational linguistics. This view derives from two analyses: a literature
review and a novel adverb dataset to probe a state-of-the-art language model,
thereby uncovering systematic gaps in accounts for adverb meaning. We suggest
that using Frame Semantics for characterizing word meaning, as in FrameNet,
provides a promising approach to adverb analysis, given its ability to describe
ambiguity, semantic roles, and null instantiation.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 08:30:08 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Nikolaev",
"Dmitry",
""
],
[
"Baker",
"Collin F.",
""
],
[
"Petruck",
"Miriam R. L.",
""
],
[
"Padó",
"Sebastian",
""
]
] |
new_dataset
| 0.97951 |
2305.19691
|
Hugo Richard
|
Hugo Richard, Etienne Boursier, Vianney Perchet
|
Constant or logarithmic regret in asynchronous multiplayer bandits
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiplayer bandits have recently been extensively studied because of their
application to cognitive radio networks.
While the literature mostly considers synchronous players, radio networks
(e.g. for IoT) tend to have asynchronous devices. This motivates the harder,
asynchronous multiplayer bandits problem, which was first tackled with an
explore-then-commit (ETC) algorithm (see Dakdouk, 2022), with a regret
upper-bound in $\mathcal{O}(T^{\frac{2}{3}})$. Before even considering
decentralization, understanding the centralized case was still a challenge as
it was unknown whether getting a regret smaller than $\Omega(T^{\frac{2}{3}})$
was possible.
We answer this question positively, as a natural extension of UCB exhibits a
$\mathcal{O}(\sqrt{T\log(T)})$ minimax regret.
More importantly, we introduce Cautious Greedy, a centralized algorithm that
yields constant instance-dependent regret if the optimal policy assigns at
least one player on each arm (a situation that is proved to occur when arm
means are close enough). Otherwise, its regret increases as the sum of
$\log(T)$ over some sub-optimality gaps. We provide lower bounds showing that
Cautious Greedy is optimal in the data-dependent terms.
Therefore, we set up a strong baseline for asynchronous multiplayer bandits
and suggest that learning the optimal policy in this problem might be easier
than thought, at least with centralization.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 09:35:03 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Richard",
"Hugo",
""
],
[
"Boursier",
"Etienne",
""
],
[
"Perchet",
"Vianney",
""
]
] |
new_dataset
| 0.997699 |
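For reference, a minimal single-player UCB sketch of the index rule the abstract builds on; the paper's centralized multiplayer extension assigns several players per round, which this toy version does not implement.

```python
import math
import random

# Classic UCB1 on Bernoulli arms: play each arm once, then pick the arm
# maximizing empirical mean plus an exploration bonus.
def ucb(means, horizon):
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = float(random.random() < means[arm])
        counts[arm] += 1
        sums[arm] += reward
    return counts
```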
2305.19734
|
Paul Darm
|
Paul Darm, Antonio Valerio Miceli-Barone, Shay B. Cohen, Annalisa
Riccardi
|
Knowledge Base Question Answering for Space Debris Queries
|
7 pages, ACL 2023 industry track
| null | null | null |
cs.AI cs.CL cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Space agencies execute complex satellite operations that need to be supported
by the technical knowledge contained in their extensive information systems.
Knowledge bases (KB) are an effective way of storing and accessing such
information at scale. In this work we present a system, developed for the
European Space Agency (ESA), that can answer complex natural language queries,
to support engineers in accessing the information contained in a KB that models
the orbital space debris environment. Our system is based on a pipeline which
first generates a sequence of basic database operations, called a program
sketch, from a natural language question, then specializes the sketch into a
concrete query program with mentions of entities, attributes and relations, and
finally executes the program against the database. This pipeline decomposition
approach enables us to train the system by leveraging out-of-domain data and
semi-synthetic data generated by GPT-3, thus reducing overfitting and shortcut
learning even with a limited amount of in-domain training data. Our code can be
found at https://github.com/PaulDrm/DISCOSQA.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 10:55:41 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Darm",
"Paul",
""
],
[
"Miceli-Barone",
"Antonio Valerio",
""
],
[
"Cohen",
"Shay B.",
""
],
[
"Riccardi",
"Annalisa",
""
]
] |
new_dataset
| 0.998334 |
2305.19750
|
Jan Deriu
|
Tobias Bollinger, Jan Deriu, Manfred Vogel
|
Text-to-Speech Pipeline for Swiss German -- A comparison
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we studied the synthesis of Swiss German speech using different
Text-to-Speech (TTS) models. We evaluated the TTS models on three corpora and
found that VITS models performed best; hence, we used them for further
testing. We also introduce a new method to evaluate TTS models by letting the
discriminator of a trained vocoder GAN model predict whether a given waveform
is human or synthesized. In summary, our best model delivers speech synthesis
for different Swiss German dialects with previously unachieved quality.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 11:33:18 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Bollinger",
"Tobias",
""
],
[
"Deriu",
"Jan",
""
],
[
"Vogel",
"Manfred",
""
]
] |
new_dataset
| 0.959294 |
2305.19760
|
Marc Ohm
|
Marc Ohm, Timo Pohl, Felix Boes
|
You Can Run But You Can't Hide: Runtime Protection Against Malicious
Package Updates For Node.js
| null | null | null | null |
cs.CR cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maliciously prepared software packages are an extensively leveraged weapon
for software supply chain attacks. The detection of malicious packages is
undoubtedly of high priority and many academic and commercial approaches have
been developed. In the inevitable case of an attack, one needs resilience
against malicious code. To this end, we present a runtime protection for
Node.js that automatically limits a package's capabilities to an established
minimum. The detection of required capabilities as well as their enforcement at
runtime has been implemented and evaluated against known malicious attacks. Our
approach was able to prevent 9/10 historic attacks with a median install-time
overhead of less than 0.6 seconds and a median runtime overhead of less than
0.2 seconds.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 11:45:43 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Ohm",
"Marc",
""
],
[
"Pohl",
"Timo",
""
],
[
"Boes",
"Felix",
""
]
] |
new_dataset
| 0.99058 |
2305.19821
|
Rita Ramos
|
Rita Ramos, Bruno Martins, Desmond Elliott
|
LMCap: Few-shot Multilingual Image Captioning by Retrieval Augmented
Language Model Prompting
|
To appear in the Findings of ACL 2023
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multilingual image captioning has recently been tackled by training with
large-scale machine translated data, which is an expensive, noisy, and
time-consuming process. Without requiring any multilingual caption data, we
propose LMCap, an image-blind few-shot multilingual captioning model that works
by prompting a language model with retrieved captions. Specifically, instead of
following the standard encoder-decoder paradigm, given an image, LMCap first
retrieves the captions of similar images using a multilingual CLIP encoder.
These captions are then combined into a prompt for an XGLM decoder, in order to
generate captions in the desired language. In other words, the generation model
does not directly process the image, instead processing retrieved captions.
Experiments on the XM3600 dataset of geographically diverse images show that
our model is competitive with fully-supervised multilingual captioning models,
without requiring any supervised training on any captioning data.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 13:03:17 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Ramos",
"Rita",
""
],
[
"Martins",
"Bruno",
""
],
[
"Elliott",
"Desmond",
""
]
] |
new_dataset
| 0.995484 |
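LMCap's image-blind pipeline, as described above, retrieves captions of images similar to the input and prompts a language model to caption in the target language. A minimal sketch with placeholder components; `clip_encode`, `caption_index`, and `lm_generate` are hypothetical interfaces.

```python
# Retrieval-augmented captioning: the language model never sees the image,
# only captions retrieved for visually similar images.
def lmcap(image, caption_index, clip_encode, lm_generate, language="German", k=4):
    query = clip_encode(image)                     # multilingual CLIP image embedding
    retrieved = caption_index.nearest(query, k=k)  # captions of similar images
    prompt = "Similar images are described as:\n"
    prompt += "\n".join(f"- {c}" for c in retrieved)
    prompt += f"\nCaption for this image in {language}:"
    return lm_generate(prompt)                     # e.g., an XGLM-style decoder
```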
2305.19840
|
Konrad Wojtasik
|
Konrad Wojtasik, Vadim Shishkin, Kacper Wołowiec, Arkadiusz Janz,
Maciej Piasecki
|
BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish
Language
| null | null | null | null |
cs.IR cs.AI cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The BEIR dataset is a large, heterogeneous benchmark for Information
Retrieval (IR) in zero-shot settings, garnering considerable attention within
the research community. However, BEIR and analogous datasets are predominantly
restricted to the English language. Our objective is to establish extensive
large-scale resources for IR in the Polish language, thereby advancing the
research in this NLP area. In this work, inspired by mMARCO and Mr. TyDi
datasets, we translated all accessible open IR datasets into Polish, and we
introduced the BEIR-PL benchmark -- a new benchmark which comprises 13
datasets, facilitating further development, training and evaluation of modern
Polish language models for IR tasks. We executed an evaluation and comparison
of numerous IR models on the newly introduced BEIR-PL benchmark. Furthermore,
we publish pre-trained open IR models for the Polish language, marking a
pioneering development in this field. Additionally, the evaluation revealed
that BM25 achieved significantly lower scores for Polish than for English,
which can be attributed to high inflection and intricate morphological
structure of the Polish language. Finally, we trained various re-ranking models
to enhance the BM25 retrieval, and we compared their performance to identify
their unique characteristic features. To ensure accurate model comparisons, it
is necessary to scrutinise individual results rather than to average across the
entire benchmark. Thus, we thoroughly analysed the outcomes of IR models in
relation to each individual data subset encompassed by the BEIR benchmark. The
benchmark data is available at https://huggingface.co/clarin-knext.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 13:29:07 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Wojtasik",
"Konrad",
""
],
[
"Shishkin",
"Vadim",
""
],
[
"Wołowiec",
"Kacper",
""
],
[
"Janz",
"Arkadiusz",
""
],
[
"Piasecki",
"Maciej",
""
]
] |
new_dataset
| 0.999691 |
2305.19849
|
Eleonora Zedda
|
Benedetta Catricalà, Miriam Ledda, Marco Manca, Fabio Paternò,
Carmen Santoro, Eleonora Zedda
|
Biography-based Robot Games for Older Adults
| null | null | null |
SARTMI/2023/10
|
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
One issue in aging is how to stimulate the cognitive skills of older adults.
One way to address it is the use of serious games delivered through humanoid
robots, to provide engaging ways to perform exercises to train memory,
attention, processing, and planning activities. We present an approach in which
a humanoid robot, using various modalities, proposes the games in a way
personalised to specific individuals' experiences, drawing on personal memories
associated with facts and events from the older adults' lives. This
personalisation can increase their interest and engagement, and thus
potentially reduce cognitive training drop-out.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 13:37:48 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Catricalà",
"Benedetta",
""
],
[
"Ledda",
"Miriam",
""
],
[
"Manca",
"Marco",
""
],
[
"Paternò",
"Fabio",
""
],
[
"Santoro",
"Carmen",
""
],
[
"Zedda",
"Eleonora",
""
]
] |
new_dataset
| 0.995874 |
2305.19859
|
Nelly Elsayed
|
Murat Ozer, Ismail Onat, Halil Akbas, Nelly Elsayed, Zag ElSayed, Said
Varlioglu
|
Exploring the Journey to Drug Overdose: Applying the Journey to Crime
Framework to Drug Sales Locations and Overdose Death Locations
|
Under review in The 7th International Conference on Applied Cognitive
Computing 2023
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drug overdose is a pressing public health concern in the United States,
resulting in a significant number of fatalities each year. In this study, we
employ the Journey to Crime (JTC) framework borrowed from the field of
environmental criminology to examine the association between drug sales
locations and overdose death locations. In this research, our objective is to
elucidate the trajectory of overdose victims to overdose locations, aiming to
enhance the distribution of overdose services and interventions. To the best of
our knowledge, no previous studies have applied the JTC framework to
investigate drug overdose deaths. By scrutinizing data obtained from the
Hamilton County, OH Coroners and the Cincinnati Police Department, we endeavor
to explore the plausible correlation between overdose deaths and drug sales
locations. Our findings underscore the necessity of implementing a
comprehensive strategy to curtail overdose deaths. This strategy should
encompass various facets, including targeted efforts to reduce the
accessibility of illicit drugs, the enhancement of responses to overdose
incidents through a collaborative multidisciplinary approach, and the
availability of data to inform evidence-based strategies and facilitate outcome
evaluation. By shedding light on the relationship between drug sales locations
and overdose death locations through the utilization of the JTC framework, this
study contributes valuable insights to the field of drug overdose prevention.
It emphasizes the significance of adopting multifaceted approaches to address
this public health crisis effectively. Ultimately, our research aims to inform
the development of evidence-based interventions and policies that can mitigate
the occurrence and impact of drug overdoses in our communities.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 13:50:37 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Ozer",
"Murat",
""
],
[
"Onat",
"Ismail",
""
],
[
"Akbas",
"Halil",
""
],
[
"Elsayed",
"Nelly",
""
],
[
"ElSayed",
"Zag",
""
],
[
"Varlioglu",
"Said",
""
]
] |
new_dataset
| 0.959367 |
2305.19863
|
Alessandro Bazzi
|
Alessandro Bazzi, Miguel Sepulcre, Quentin Delooz, Andreas Festag,
Jonas Vogt, Horst Wieker, Friedbert Berens, Paul Spaanderman
|
Multi-Channel Operation for the Release 2 of ETSI Cooperative
Intelligent Transport Systems
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Vehicles and road infrastructure are starting to be equipped with
vehicle-to-everything (V2X) communication solutions to increase road safety and
provide new services to drivers and passengers. In Europe, the deployment is
based on a set of Release 1 standards developed by ETSI to support basic use
cases for cooperative intelligent transport systems (C-ITS). For them, the
capacity of a single 10 MHz channel in the ITS band at 5.9 GHz is considered
sufficient. At the same time, the ITS stakeholders are working towards several
advanced use cases, which imply a significant increment of data traffic and the
need for multiple channels. To address this issue, ETSI has recently
standardized a new multi-channel operation (MCO) concept for flexible,
efficient, and future-proof use of multiple channels. This new concept is
defined in a set of new specifications that represent the foundation for the
future releases of C-ITS standards. The present paper provides a comprehensive
review of the new set of specifications, describing the main entities extending
the C-ITS architecture at the different layers of the protocol stack. In
addition, the paper provides representative examples that describe how these
MCO standards will be used in the future and discusses some of the main open
issues arising. The review and analysis of this paper facilitate the
understanding and motivation of the new set of Release 2 ETSI specifications
for MCO and the identification of new research opportunities.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 13:55:52 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Bazzi",
"Alessandro",
""
],
[
"Sepulcre",
"Miguel",
""
],
[
"Delooz",
"Quentin",
""
],
[
"Festag",
"Andreas",
""
],
[
"Vogt",
"Jonas",
""
],
[
"Wieker",
"Horst",
""
],
[
"Berens",
"Friedbert",
""
],
[
"Spaanderman",
"Paul",
""
]
] |
new_dataset
| 0.999066 |
2305.20015
|
Nikitha Rao
|
Nikitha Rao, Jason Tsay, Kiran Kate, Vincent J. Hellendoorn, Martin
Hirzel
|
AI for Low-Code for AI
| null | null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Low-code programming allows citizen developers to create programs with
minimal coding effort, typically via visual (e.g. drag-and-drop) interfaces. In
parallel, recent AI-powered tools such as Copilot and ChatGPT generate programs
from natural language instructions. We argue that these modalities are
complementary: tools like ChatGPT greatly reduce the need to memorize large
APIs but still require their users to read (and modify) programs, whereas
visual tools abstract away most or all programming but struggle to provide easy
access to large APIs. At their intersection, we propose LowCoder, the first
low-code tool for developing AI pipelines that supports both a visual
programming interface (LowCoder_VP) and an AI-powered natural language
interface (LowCoder_NL). We leverage this tool to provide some of the first
insights into whether and how these two modalities help programmers by
conducting a user study. We task 20 developers with varying levels of AI
expertise with implementing four ML pipelines using LowCoder, replacing the
LowCoder_NL component with a simple keyword search in half the tasks. Overall,
we find that LowCoder is especially useful for (i) Discoverability: using
LowCoder_NL, participants discovered new operators in 75% of the tasks,
compared to just 32.5% and 27.5% using web search or scrolling through options
respectively in the keyword-search condition, and (ii) Iterative Composition:
82.5% of tasks were successfully completed and many initial pipelines were
further successfully improved. Qualitative analysis shows that AI helps users
discover how to implement constructs when they know what to do, but still fails
to support novices when they lack clarity on what they want to accomplish.
Overall, our work highlights the benefits of combining the power of AI with
low-code programming.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 16:44:03 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Rao",
"Nikitha",
""
],
[
"Tsay",
"Jason",
""
],
[
"Kate",
"Kiran",
""
],
[
"Hellendoorn",
"Vincent J.",
""
],
[
"Hirzel",
"Martin",
""
]
] |
new_dataset
| 0.954129 |
2305.20068
|
Zihao Wen
|
Zihao Wen, Yifan Zhang, Xinhong Chen, Jianping Wang
|
TOFG: A Unified and Fine-Grained Environment Representation in
Autonomous Driving
|
Accepted by ICRA 2023
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In autonomous driving, an accurate understanding of environment, e.g., the
vehicle-to-vehicle and vehicle-to-lane interactions, plays a critical role in
many driving tasks such as trajectory prediction and motion planning.
Environment information comes from high-definition (HD) map and historical
trajectories of vehicles. Due to the heterogeneity of the map data and
trajectory data, many data-driven models for trajectory prediction and motion
planning extract vehicle-to-vehicle and vehicle-to-lane interactions in a
separate and sequential manner. However, such a manner may capture a biased
interpretation of interactions, causing lower prediction and planning accuracy.
Moreover, separate extraction leads to a complicated model structure and hence
the overall efficiency and scalability are sacrificed. To address the above
issues, we propose an environment representation, Temporal Occupancy Flow Graph
(TOFG). Specifically, the occupancy flow-based representation unifies the map
information and vehicle trajectories into a homogeneous data format and enables
a consistent prediction. The temporal dependencies among vehicles can help
capture the change of occupancy flow timely to further promote model
performance. To demonstrate that TOFG is capable of simplifying the model
architecture, we incorporate TOFG with a simple graph attention (GAT) based
neural network and propose TOFG-GAT, which can be used for both trajectory
prediction and motion planning. Experiment results show that TOFG-GAT achieves
better or competitive performance than all the SOTA baselines with less
training time.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 17:43:56 GMT"
}
] | 2023-06-01T00:00:00 |
[
[
"Wen",
"Zihao",
""
],
[
"Zhang",
"Yifan",
""
],
[
"Chen",
"Xinhong",
""
],
[
"Wang",
"Jianping",
""
]
] |
new_dataset
| 0.998418 |
1909.03691
|
Jan Krajicek
|
Jan Krajicek
|
The Cook-Reckhow definition
| null |
in: "Logic, Automata, and Computational Complexity: The Works of
Stephen A. Cook", ed.Bruce M.Kapron, Association for Computing Machinery
Books, New York, NY, USA, 43, pp.83-94, May 2023
|
10.1145/3588287.3588
| null |
cs.CC math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Cook-Reckhow 1979 paper defined the area of research we now call Proof
Complexity. There were earlier papers which contributed to the subject as we
understand it today, the most significant being Tseitin's 1968 paper, but none
of them introduced general notions that would allow one to make an explicit and
universal link between lengths-of-proofs problems and computational complexity
theory. In this note we shall highlight three particular definitions from the
paper: of proof systems, p-simulations and the pigeonhole principle formula,
and discuss their role in defining the field. We will also mention some related
developments and open problems.
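For reference, the first two of these notions in their standard modern form (paraphrased from memory, not quoted from the paper):

```latex
% Standard modern statements of two of the three highlighted definitions.
A \emph{proof system} for the set of propositional tautologies is a
polynomial-time computable surjection
\[
  f \colon \{0,1\}^* \to \mathrm{TAUT},
\]
where any string $w$ with $f(w) = \varphi$ counts as an $f$-proof of $\varphi$.
A system $f$ \emph{p-simulates} $g$ iff there is a polynomial-time computable
translation $h$ with
\[
  f(h(w)) = g(w) \quad \text{for every } w,
\]
so $h$ maps each $g$-proof to an $f$-proof of the same tautology.
```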
|
[
{
"version": "v1",
"created": "Mon, 9 Sep 2019 08:01:27 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Krajicek",
"Jan",
""
]
] |
new_dataset
| 0.995635 |
2104.15081
|
Esen Yel
|
Esen Yel, Nicola Bezzo
|
A Meta-Learning-based Trajectory Tracking Framework for UAVs under
Degraded Conditions
|
2021 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS) (to appear) 2021 copyright IEEE
| null |
10.1109/IROS51168.2021.9635918
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to changes in model dynamics or unexpected disturbances, an autonomous
robotic system may experience unforeseen challenges during real-world
operations which may affect its safety and intended behavior: in particular
actuator and system failures and external disturbances are among the most
common causes of degraded mode of operation. To deal with this problem, in this
work, we present a meta-learning-based approach to improve the trajectory
tracking performance of an unmanned aerial vehicle (UAV) under actuator faults
and disturbances which have not been previously experienced. Our approach
leverages meta-learning to train a model that is easily adaptable at runtime to
make accurate predictions about the system's future state. A runtime monitoring
and validation technique is proposed to decide when the system needs to adapt
its model by considering a data pruning procedure for efficient learning.
Finally, the reference trajectory is adapted based on future predictions by
borrowing feedback control logic to make the system track the original and
desired path without needing to access the system's controller. The proposed
framework is applied and validated in both simulations and experiments on a
faulty UAV navigation case study demonstrating a drastic increase in tracking
performance.
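A hedged sketch of the runtime-adaptation idea, a predictor fine-tuned with a few gradient steps on a short window of recent flight data, is given below in PyTorch; the network, window size, and learning rate are illustrative, not the paper's architecture.

```python
# Generic sketch of runtime model adaptation: a few gradient steps on the
# most recent state-prediction errors, in the spirit of meta-learned
# dynamics models. All sizes and rates here are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(6, 32), nn.Tanh(), nn.Linear(32, 3))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def adapt(recent_inputs, recent_targets, steps=5):
    """Fine-tune the predictor on a short window of recent flight data."""
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(recent_inputs), recent_targets)
        loss.backward()
        opt.step()
    return loss.item()

# toy window: 16 (state, next-state-delta) pairs
x = torch.randn(16, 6)
y = torch.randn(16, 3)
print(adapt(x, y))
```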
|
[
{
"version": "v1",
"created": "Fri, 30 Apr 2021 16:04:16 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Aug 2021 21:06:12 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Yel",
"Esen",
""
],
[
"Bezzo",
"Nicola",
""
]
] |
new_dataset
| 0.985352 |
2109.02734
|
Oana Ignat
|
Oana Ignat, Y-Lan Boureau, Jane A. Yu, Alon Halevy
|
Detecting Inspiring Content on Social Media
|
accepted at ACII 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspiration moves a person to see new possibilities and transforms the way
they perceive their own potential. Inspiration has received little attention in
psychology, and has not been researched before in the NLP community. To the
best of our knowledge, this work is the first to study inspiration through
machine learning methods. We aim to automatically detect inspiring content from
social media data. To this end, we analyze social media posts to tease out what
makes a post inspiring and what topics are inspiring. We release a dataset of
5,800 inspiring and 5,800 non-inspiring English-language public post unique ids
collected from a dump of Reddit public posts made available by a third party
and use linguistic heuristics to automatically detect which social media
English-language posts are inspiring.
|
[
{
"version": "v1",
"created": "Mon, 6 Sep 2021 20:57:32 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 18:08:16 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Ignat",
"Oana",
""
],
[
"Boureau",
"Y-Lan",
""
],
[
"Yu",
"Jane A.",
""
],
[
"Halevy",
"Alon",
""
]
] |
new_dataset
| 0.999064 |
2109.14394
|
Lefteris Loukas
|
Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, Prodromos
Malakasiotis
|
EDGAR-CORPUS: Billions of Tokens Make The World Go Round
|
6 pages, short paper at ECONLP 2021 Workshop, in conjunction with
EMNLP 2021
| null |
10.18653/v1/2021.econlp-1.2
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We release EDGAR-CORPUS, a novel corpus comprising annual reports from all
the publicly traded companies in the US spanning a period of more than 25
years. To the best of our knowledge, EDGAR-CORPUS is the largest financial NLP
corpus available to date. All the reports are downloaded, split into their
corresponding items (sections), and provided in a clean, easy-to-use JSON
format. We use EDGAR-CORPUS to train and release EDGAR-W2V, which are WORD2VEC
embeddings for the financial domain. We employ these embeddings in a battery of
financial NLP tasks and showcase their superiority over generic GloVe
embeddings and other existing financial word embeddings. We also open-source
EDGAR-CRAWLER, a toolkit that facilitates downloading and extracting future
annual reports.
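As a hedged sketch of how domain word embeddings of this kind can be trained on report sentences, consider the following gensim fragment; the hyperparameters and toy sentences are ours, not the EDGAR-W2V settings.

```python
# Sketch of training domain word embeddings on report text with gensim
# (hyperparameters are illustrative, not those used for EDGAR-W2V).
from gensim.models import Word2Vec

sentences = [
    ["net", "revenue", "increased", "year", "over", "year"],
    ["the", "company", "recognized", "deferred", "tax", "assets"],
]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=10)
print(model.wv["revenue"][:5])  # first 5 dims of the learned 100-d vector
```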
|
[
{
"version": "v1",
"created": "Wed, 29 Sep 2021 12:56:20 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Oct 2021 08:19:42 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Loukas",
"Lefteris",
""
],
[
"Fergadiotis",
"Manos",
""
],
[
"Androutsopoulos",
"Ion",
""
],
[
"Malakasiotis",
"Prodromos",
""
]
] |
new_dataset
| 0.993693 |
2201.10171
|
Cheuk Ting Li
|
Chih Wei Ling and Yanxiao Liu and Cheuk Ting Li
|
Weighted Parity-Check Codes for Channels with State and Asymmetric
Channels
|
17 pages, 4 figure. This is the full version of a paper presented at
2022 IEEE International Symposium on Information Theory (ISIT)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a new class of codes, called weighted
parity-check codes, where each parity-check bit has a weight that indicates its
likelihood to be one (instead of fixing each parity-check bit to be zero). It
is applicable to a wide range of settings, e.g. asymmetric channels, channels
with state and/or cost constraints, and the Wyner-Ziv problem, and can provably
achieve the capacity. For the channels with state (Gelfand-Pinsker) setting,
the proposed coding scheme has two advantages compared to the nested linear
code. First, it achieves the capacity of any channel with state (e.g.
asymmetric channels). Second, simulation results show that the proposed code
achieves a smaller error rate compared to the nested linear code. We also
discuss a sparse construction where the belief propagation algorithm can be
applied to improve the coding efficiency.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 08:34:28 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 10:00:31 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Ling",
"Chih Wei",
""
],
[
"Liu",
"Yanxiao",
""
],
[
"Li",
"Cheuk Ting",
""
]
] |
new_dataset
| 0.955181 |
2203.06482
|
Lefteris Loukas
|
Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini
Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos, Georgios Paliouras
|
FiNER: Financial Numeric Entity Recognition for XBRL Tagging
|
13 pages, long paper at ACL 2022
| null |
10.18653/v1/2022.acl-long.303
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Publicly traded companies are required to submit periodic reports with
eXtensive Business Reporting Language (XBRL) word-level tags. Manually tagging
the reports is tedious and costly. We, therefore, introduce XBRL tagging as a
new entity extraction task for the financial domain and release FiNER-139, a
dataset of 1.1M sentences with gold XBRL tags. Unlike typical entity extraction
datasets, FiNER-139 uses a much larger label set of 139 entity types. Most
annotated tokens are numeric, with the correct tag per token depending mostly
on context, rather than the token itself. We show that subword fragmentation of
numeric expressions harms BERT's performance, allowing word-level BILSTMs to
perform better. To improve BERT's performance, we propose two simple and
effective solutions that replace numeric expressions with pseudo-tokens
reflecting original token shapes and numeric magnitudes. We also experiment
with FIN-BERT, an existing BERT model for the financial domain, and release our
own BERT (SEC-BERT), pre-trained on financial filings, which performs best.
Through data and error analysis, we finally identify possible limitations to
inspire future work on XBRL tagging.
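The pseudo-token idea can be sketched with two small regex substitutions, one preserving token shape and one preserving magnitude; the exact token formats below are our illustration, not the paper's.

```python
# Sketch of the two numeric pseudo-token schemes described above: replace
# each number either by its token "shape" or by a digit-count bucket.
import re

NUM = re.compile(r"\d[\d,]*(?:\.\d+)?")

def shape_token(m):
    s = m.group(0)
    return "".join("X" if c.isdigit() else c for c in s)  # 9,323.22 -> X,XXX.XX

def magnitude_token(m):
    digits = re.sub(r"[^\d]", "", m.group(0).split(".")[0])
    return f"[NUM{len(digits)}]"  # 9,323.22 -> [NUM4]

text = "Revenue was 9,323.22 million in 2021."
print(NUM.sub(shape_token, text))      # Revenue was X,XXX.XX million in XXXX.
print(NUM.sub(magnitude_token, text))  # Revenue was [NUM4] million in [NUM4].
```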
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 16:43:57 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Apr 2022 18:51:43 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Loukas",
"Lefteris",
""
],
[
"Fergadiotis",
"Manos",
""
],
[
"Chalkidis",
"Ilias",
""
],
[
"Spyropoulou",
"Eirini",
""
],
[
"Malakasiotis",
"Prodromos",
""
],
[
"Androutsopoulos",
"Ion",
""
],
[
"Paliouras",
"Georgios",
""
]
] |
new_dataset
| 0.998321 |
2208.02693
|
Guilherme Pereira Bento Garcia
|
Guilherme P.B. Garcia and Carlos H. Grohmann and Lucas P. Soares and
Mateus Espadoto
|
Relict landslide detection using Deep-Learning architectures for image
segmentation in rainforest areas: A new framework
| null | null |
10.1080/01431161.2023.2197130
| null |
cs.CV eess.IV physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Landslides are destructive and recurrent natural disasters on steep slopes
and represent a risk to lives and properties. Knowledge of relict landslide
locations is vital to understand their mechanisms, update inventory maps and
improve risk assessment. However, relict landslide mapping is complex in
tropical regions covered with rainforest vegetation. A new CNN framework is
proposed for semi-automatic detection of relict landslides, which uses a
dataset generated by a k-means clustering algorithm and has a pre-training
step. The weights computed in the pre-training are used to fine-tune the CNN
training process. A comparison between the proposed and the standard framework
is performed using CBERS-04A WPM images. Three CNNs for semantic segmentation
are used (Unet, FPN, Linknet) with two augmented datasets. A total of 42
combinations of CNNs are tested. Values of precision and recall were very
similar between the combinations tested. Recall was higher than 75% for every
combination, but precision values were usually smaller than 20%. False
positive (FP) samples were identified as the cause of these low precision
values. Predictions of the proposed framework were more accurate and correctly
detected more landslides. This work demonstrates that there are limitations for
detecting relict landslides in areas covered with rainforest, mainly related to
similarities between the spectral response of pastures and deforested areas
with Gleichenella sp. ferns, commonly used as an indicator of landslide scars.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 14:46:02 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 20:07:08 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Garcia",
"Guilherme P. B.",
""
],
[
"Grohmann",
"Carlos H.",
""
],
[
"Soares",
"Lucas P.",
""
],
[
"Espadoto",
"Mateus",
""
]
] |
new_dataset
| 0.998643 |
2208.06448
|
Rafael Rodriguez Sanchez
|
Rafael Rodriguez-Sanchez, Benjamin A. Spiegel, Jennifer Wang, Roma
Patel, Stefanie Tellex and George Konidaris
|
RLang: A Declarative Language for Describing Partial World Knowledge to
Reinforcement Learning Agents
| null | null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce RLang, a domain-specific language (DSL) for communicating domain
knowledge to an RL agent. Unlike existing RL DSLs that ground to
single elements of a decision-making formalism (e.g., the reward
function or policy), RLang can specify information about every element of a
Markov decision process. We define precise syntax and grounding semantics for
RLang, and provide a parser that grounds RLang programs to an
algorithm-agnostic partial world model and policy that can be
exploited by an RL agent. We provide a series of example RLang programs
demonstrating how different RL methods can exploit the resulting knowledge,
encompassing model-free and model-based tabular algorithms, policy gradient and
value-based methods, hierarchical approaches, and deep methods.
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 18:20:47 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 22:13:44 GMT"
},
{
"version": "v3",
"created": "Tue, 30 May 2023 15:07:56 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Rodriguez-Sanchez",
"Rafael",
""
],
[
"Spiegel",
"Benjamin A.",
""
],
[
"Wang",
"Jennifer",
""
],
[
"Patel",
"Roma",
""
],
[
"Tellex",
"Stefanie",
""
],
[
"Konidaris",
"George",
""
]
] |
new_dataset
| 0.951161 |
2210.14318
|
Nantheera Anantrasirichai
|
Disen Hu and Nantheera Anantrasirichai
|
Object recognition in atmospheric turbulence scenes
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The influence of atmospheric turbulence on acquired surveillance imagery
poses significant challenges in image interpretation and scene analysis.
Conventional approaches for target classification and tracking are less
effective under such conditions. While deep-learning-based object detection
methods have shown great success in normal conditions, they cannot be directly
applied to atmospheric turbulence sequences. In this paper, we propose a novel
framework that learns distorted features to detect and classify object types in
turbulent environments. Specifically, we utilise deformable convolutions to
handle spatial turbulent displacement. Features are extracted using a feature
pyramid network, and Faster R-CNN is employed as the object detector.
Experimental results on a synthetic VOC dataset demonstrate that the proposed
framework outperforms the benchmark with a mean Average Precision (mAP) score
exceeding 30%. Additionally, subjective results on real data show significant
improvement in performance.
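A minimal sketch of the deformable-convolution building block, of the kind used to absorb turbulent spatial displacement, is shown below with torchvision; channel sizes are illustrative and the FPN and Faster R-CNN stages are omitted.

```python
# Minimal deformable-convolution block with torchvision: a small conv
# predicts per-location sampling offsets, which DeformConv2d consumes.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        # one (dy, dx) offset pair per kernel position and output location
        self.offset = nn.Conv2d(c_in, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(c_in, c_out, k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

feat = torch.randn(1, 16, 32, 32)
print(DeformBlock(16, 32)(feat).shape)  # torch.Size([1, 32, 32, 32])
```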
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 20:21:25 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 18:55:03 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Hu",
"Disen",
""
],
[
"Anantrasirichai",
"Nantheera",
""
]
] |
new_dataset
| 0.97195 |
2212.09741
|
Hongjin Su
|
Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari
Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu
|
One Embedder, Any Task: Instruction-Finetuned Text Embeddings
|
Accepted in ACL2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce INSTRUCTOR, a new method for computing text embeddings given
task instructions: every text input is embedded together with instructions
explaining the use case (e.g., task and domain descriptions). Unlike encoders
from prior work that are more specialized, INSTRUCTOR is a single embedder that
can generate text embeddings tailored to different downstream tasks and
domains, without any further training. We first annotate instructions for 330
diverse tasks and train INSTRUCTOR on this multitask mixture with a contrastive
loss. We evaluate INSTRUCTOR on 70 embedding evaluation tasks (66 of which are
unseen during training), ranging from classification and information retrieval
to semantic textual similarity and text generation evaluation. INSTRUCTOR,
while having an order of magnitude fewer parameters than the previous best
model, achieves state-of-the-art performance, with an average improvement of
3.4% compared to the previous best results on the 70 diverse datasets. Our
analysis suggests that INSTRUCTOR is robust to changes in instructions, and
that instruction finetuning mitigates the challenge of training a single model
on diverse datasets. Our model, code, and data are available at
https://instructor-embedding.github.io.
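A usage sketch following the interface documented on the project page (assuming `pip install InstructorEmbedding`); the instruction text is an example of ours.

```python
# Each input is an (instruction, text) pair; the instruction describes the
# task and domain, and the model returns a task-tailored embedding.
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")
pairs = [
    ["Represent the science sentence for retrieval: ",
     "Graphene is a single layer of carbon atoms."],
]
embeddings = model.encode(pairs)
print(embeddings.shape)  # one vector per (instruction, text) pair
```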
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 18:57:05 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2022 05:11:06 GMT"
},
{
"version": "v3",
"created": "Tue, 30 May 2023 15:22:50 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Su",
"Hongjin",
""
],
[
"Shi",
"Weijia",
""
],
[
"Kasai",
"Jungo",
""
],
[
"Wang",
"Yizhong",
""
],
[
"Hu",
"Yushi",
""
],
[
"Ostendorf",
"Mari",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Smith",
"Noah A.",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Yu",
"Tao",
""
]
] |
new_dataset
| 0.998774 |
2212.10758
|
El Moatez Billah Nagoudi
|
AbdelRahim Elmadany, El Moatez Billah Nagoudi, Muhammad Abdul-Mageed
|
ORCA: A Challenging Benchmark for Arabic Language Understanding
|
All authors contributed equally. Accepted at ACL 2023, Toronto,
Canada
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Due to their crucial role across NLP, several benchmarks have been proposed
to evaluate pretrained language models. In spite of these efforts, no public
benchmark of diverse nature currently exists for evaluation of Arabic. This
makes it challenging to measure progress for both Arabic and multilingual
language models. This challenge is compounded by the fact that any benchmark
targeting Arabic needs to take into account the fact that Arabic is not a
single language but rather a collection of languages and varieties. In this
work, we introduce ORCA, a publicly available benchmark for Arabic language
understanding evaluation. ORCA is carefully constructed to cover diverse Arabic
varieties and a wide range of challenging Arabic understanding tasks exploiting
60 different datasets across seven NLU task clusters. To measure current
progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between
18 multilingual and Arabic language models. We also provide a public
leaderboard with a unified single-number evaluation metric (ORCA score) to
facilitate future research.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 04:35:43 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 18:27:37 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Elmadany",
"AbdelRahim",
""
],
[
"Nagoudi",
"El Moatez Billah",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
] |
new_dataset
| 0.999758 |
2301.02238
|
Benjamin Attal
|
Benjamin Attal, Jia-Bin Huang, Christian Richardt, Michael Zollhoefer,
Johannes Kopf, Matthew O'Toole, Changil Kim
|
HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling
|
Project page: https://hyperreel.github.io/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Volumetric scene representations enable photorealistic view synthesis for
static scenes and form the basis of several existing 6-DoF video techniques.
However, the volume rendering procedures that drive these representations
necessitate careful trade-offs in terms of quality, rendering speed, and memory
efficiency. In particular, existing methods fail to simultaneously achieve
real-time performance, small memory footprint, and high-quality rendering for
challenging real-world scenes. To address these issues, we present HyperReel --
a novel 6-DoF video representation. The two core components of HyperReel are:
(1) a ray-conditioned sample prediction network that enables high-fidelity,
high frame rate rendering at high resolutions and (2) a compact and
memory-efficient dynamic volume representation. Our 6-DoF video pipeline
achieves the best performance compared to prior and contemporary approaches in
terms of visual quality with small memory requirements, while also rendering at
up to 18 frames-per-second at megapixel resolution without any custom CUDA
code.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 18:59:44 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 18:35:21 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Attal",
"Benjamin",
""
],
[
"Huang",
"Jia-Bin",
""
],
[
"Richardt",
"Christian",
""
],
[
"Zollhoefer",
"Michael",
""
],
[
"Kopf",
"Johannes",
""
],
[
"O'Toole",
"Matthew",
""
],
[
"Kim",
"Changil",
""
]
] |
new_dataset
| 0.999589 |
2302.00716
|
Lorenzo Pichierri
|
Lorenzo Pichierri, Andrea Testa, Giuseppe Notarstefano
|
CrazyChoir: Flying Swarms of Crazyflie Quadrotors in ROS 2
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces CrazyChoir, a modular Python framework based on the
Robot Operating System (ROS) 2. The toolbox provides a comprehensive set of
functionalities to simulate and run experiments on teams of cooperating
Crazyflie nano-quadrotors. Specifically, it allows users to perform realistic
simulations on robotic simulators such as Webots and includes bindings of
the firmware control and planning functions. The toolbox also provides
libraries to perform radio communication with Crazyflie directly inside ROS 2
scripts. The package can be thus used to design, implement and test planning
strategies and control schemes for a Crazyflie nano-quadrotor. Moreover, the
modular structure of CrazyChoir allows users to easily implement online
distributed optimization and control schemes over multiple quadrotors. The
CrazyChoir package is validated via simulations and experiments on a swarm of
Crazyflies for formation control, pickup-and-delivery vehicle routing and
trajectory tracking tasks. CrazyChoir is available at
https://github.com/OPT4SMART/crazychoir.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 19:16:33 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 09:07:15 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Pichierri",
"Lorenzo",
""
],
[
"Testa",
"Andrea",
""
],
[
"Notarstefano",
"Giuseppe",
""
]
] |
new_dataset
| 0.98644 |
2303.04620
|
Andrew Beers
|
Andrew Beers, Joseph S. Schafer, Ian Kennedy, Morgan Wack, Emma S.
Spiro, Kate Starbird
|
Followback Clusters, Satellite Audiences, and Bridge Nodes: Coengagement
Networks for the 2020 US Election
|
Accepted for publication at ICWSM '23
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The 2020 United States presidential election was, and has continued to be,
the focus of pervasive and persistent mis- and disinformation spreading through
our media ecosystems, including social media. This event has driven the
collection and analysis of large, directed social network datasets, but such
datasets can resist intuitive understanding. In such large datasets, the
overwhelming number of nodes and edges present in typical representations
create visual artifacts, such as densely overlapping edges and tightly-packed
formations of low-degree nodes, which obscure many features of more practical
interest. We apply a method, coengagement transformations, to convert such
networks of social data into tractable images. Intuitively, this approach
allows for parameterized network visualizations that make shared audiences of
engaged viewers salient to viewers. Using the interpretative capabilities of
this method, we perform an extensive case study of the 2020 United States
presidential election on Twitter, contributing an empirical analysis of
coengagement. By creating and contrasting different networks at different
parameter sets, we define and characterize several structures in this discourse
network, including bridging accounts, satellite audiences, and followback
communities. We discuss the importance and implications of these empirical
network features in this context. In addition, we release open-source code for
creating coengagement networks from Twitter and other structured interaction
data.
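The coengagement idea can be sketched as a thresholded projection: connect two accounts when at least a minimum number of distinct users engaged with both. The threshold and toy data below are illustrative, not the paper's parameter settings.

```python
# Sketch of a coengagement projection over toy engagement data.
from itertools import combinations

engagers = {  # account -> set of users who engaged with it
    "A": {"u1", "u2", "u3"},
    "B": {"u2", "u3", "u4"},
    "C": {"u5"},
}

def coengagement_edges(engagers, min_shared=2):
    """Yield (account, account, shared-audience size) above the threshold."""
    for a, b in combinations(engagers, 2):
        shared = len(engagers[a] & engagers[b])
        if shared >= min_shared:
            yield (a, b, shared)

print(list(coengagement_edges(engagers)))  # [('A', 'B', 2)]
```

Varying `min_shared` is what parameterizes the visualization: higher thresholds keep only tightly shared audiences, lower ones expose weaker bridges.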
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 23:59:02 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 14:44:24 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Beers",
"Andrew",
""
],
[
"Schafer",
"Joseph S.",
""
],
[
"Kennedy",
"Ian",
""
],
[
"Wack",
"Morgan",
""
],
[
"Spiro",
"Emma S.",
""
],
[
"Starbird",
"Kate",
""
]
] |
new_dataset
| 0.999263 |
2303.09307
|
Xin Qiao
|
Xin Qiao, Chenyang Ge, Youmin Zhang, Yanhui Zhou, Fabio Tosi, Matteo
Poggi, Stefano Mattoccia
|
Depth Super-Resolution from Explicit and Implicit High-Frequency
Features
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel multi-stage depth super-resolution network, which
progressively reconstructs high-resolution depth maps from explicit and
implicit high-frequency features. The former are extracted by an efficient
transformer processing both local and global contexts, while the latter are
obtained by projecting color images into the frequency domain. Both are
combined together with depth features by means of a fusion strategy within a
multi-stage and multi-scale framework. Experiments on the main benchmarks, such
as NYUv2, Middlebury, DIML and RGBDD, show that our approach outperforms
existing methods by a large margin (~20% on NYUv2 and DIML against the
contemporary work DADA, with 16x upsampling), establishing a new
state-of-the-art in the guided depth super-resolution task.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 13:33:24 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 05:36:59 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Qiao",
"Xin",
""
],
[
"Ge",
"Chenyang",
""
],
[
"Zhang",
"Youmin",
""
],
[
"Zhou",
"Yanhui",
""
],
[
"Tosi",
"Fabio",
""
],
[
"Poggi",
"Matteo",
""
],
[
"Mattoccia",
"Stefano",
""
]
] |
new_dataset
| 0.994293 |
2303.16992
|
Adir Rahamim
|
Adir Rahamim, Yonatan Belinkov
|
ContraSim -- A Similarity Measure Based on Contrastive Learning
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent work has compared neural network representations via similarity-based
analyses to improve model interpretation. The quality of a similarity measure
is typically evaluated by its success in assigning a high score to
representations that are expected to be matched. However, existing similarity
measures perform mediocrely on standard benchmarks. In this work, we develop a
new similarity measure, dubbed ContraSim, based on contrastive learning. In
contrast to common closed-form similarity measures, ContraSim learns a
parameterized measure by using both similar and dissimilar examples. We perform
an extensive experimental evaluation of our method, with both language and
vision models, on the standard layer prediction benchmark and two new
benchmarks that we introduce: the multilingual benchmark and the image-caption
benchmark. In all cases, ContraSim achieves much higher accuracy than previous
similarity measures, even when presented with challenging examples. Finally,
ContraSim is more suitable for the analysis of neural networks, revealing new
insights not captured by previous measures.
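A generic sketch of a learned, contrastive similarity of this kind is given below: a small projection is trained with an InfoNCE-style loss so that matched representation pairs score higher than mismatched ones. The architecture and sizes are ours, not the ContraSim configuration.

```python
# Generic sketch of a parameterized similarity learned contrastively.
import torch
import torch.nn as nn
import torch.nn.functional as F

proj = nn.Linear(64, 32)          # learned projection defining the measure
opt = torch.optim.Adam(proj.parameters(), lr=1e-3)

def info_nce(x, y, temp=0.1):
    zx = F.normalize(proj(x), dim=1)
    zy = F.normalize(proj(y), dim=1)
    logits = zx @ zy.t() / temp   # batch similarities; diagonal = positives
    return F.cross_entropy(logits, torch.arange(len(x)))

x = torch.randn(8, 64)
y = x + 0.1 * torch.randn(8, 64)  # "similar" views of the same inputs
for _ in range(10):
    opt.zero_grad()
    loss = info_nce(x, y)
    loss.backward()
    opt.step()
print(loss.item())
```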
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 19:43:26 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 09:47:33 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Rahamim",
"Adir",
""
],
[
"Belinkov",
"Yonatan",
""
]
] |
new_dataset
| 0.996322 |
2304.12939
|
Carlos Eduardo Cancino-Chac\'on
|
Carlos Cancino-Chac\'on, Silvan Peter, Patricia Hu, Emmanouil
Karystinaios, Florian Henkel, Francesco Foscarin, Nimrod Varga, Gerhard
Widmer
|
The ACCompanion: Combining Reactivity, Robustness, and Musical
Expressivity in an Automatic Piano Accompanist
|
In Proceedings of the 32nd International Joint Conference on
Artificial Intelligence (IJCAI-23), Macao, China. The differences/extensions
with the previous version include a technical appendix, added missing links,
and minor text updates. 10 pages, 4 figures
| null | null | null |
cs.SD cs.HC eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the ACCompanion, an expressive accompaniment system.
Similarly to a musician who accompanies a soloist playing a given musical
piece, our system can produce a human-like rendition of the accompaniment part
that follows the soloist's choices in terms of tempo, dynamics, and
articulation. The ACCompanion works in the symbolic domain, i.e., it needs a
musical instrument capable of producing and playing MIDI data, with explicitly
encoded onset, offset, and pitch for each played note. We describe the
components that go into such a system, from real-time score following and
prediction to expressive performance generation and online adaptation to the
expressive choices of the human player. Based on our experience with repeated
live demonstrations in front of various audiences, we offer an analysis of the
challenges of combining these components into a system that is highly reactive
and precise, while still being a reliable musical partner, robust to possible
performance errors and responsive to expressive variations.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 05:19:52 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 14:53:47 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Cancino-Chacón",
"Carlos",
""
],
[
"Peter",
"Silvan",
""
],
[
"Hu",
"Patricia",
""
],
[
"Karystinaios",
"Emmanouil",
""
],
[
"Henkel",
"Florian",
""
],
[
"Foscarin",
"Francesco",
""
],
[
"Varga",
"Nimrod",
""
],
[
"Widmer",
"Gerhard",
""
]
] |
new_dataset
| 0.997244 |
2305.11541
|
Lu Wang Wang
|
Zezhong Wang, Fangkai Yang, Pu Zhao, Lu Wang, Jue Zhang, Mohit Garg,
Qingwei Lin, Dongmei Zhang
|
Empower Large Language Model to Perform Better on Industrial
Domain-Specific Question Answering
|
13 pages, 1 figure
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Model (LLM) has gained popularity and achieved remarkable
results in open-domain tasks, but its performance in real industrial
domain-specific scenarios is only average, since it contains no such specific
knowledge. This issue has attracted widespread attention, but there are few relevant
benchmarks available. In this paper, we provide a benchmark Question Answering
(QA) dataset named MSQA, which is about Microsoft products and IT technical
problems encountered by customers. This dataset contains industry
cloud-specific QA knowledge, which is not available for general LLM, so it is
well suited for evaluating methods aimed at improving domain-specific
capabilities of LLM. In addition, we propose a new model interaction paradigm
that can empower LLM to achieve better performance on domain-specific tasks
where it is not proficient. Extensive experiments demonstrate that the approach
following our model fusion framework outperforms the commonly used LLM with
retrieval methods.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 09:23:25 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 11:03:04 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Wang",
"Zezhong",
""
],
[
"Yang",
"Fangkai",
""
],
[
"Zhao",
"Pu",
""
],
[
"Wang",
"Lu",
""
],
[
"Zhang",
"Jue",
""
],
[
"Garg",
"Mohit",
""
],
[
"Lin",
"Qingwei",
""
],
[
"Zhang",
"Dongmei",
""
]
] |
new_dataset
| 0.999752 |
2305.16135
|
Mingxing Hu
|
Mingxing Hu, Yunhong Zhou
|
Ring Signature from Bonsai Tree: How to Preserve the Long-Term Anonymity
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Signer-anonymity is the central feature of ring signatures, which enable a
user to sign messages on behalf of an arbitrary set of users, called the ring,
without revealing exactly which member of the ring actually generated the
signature. Strong and long-term signer-anonymity is a reassuring guarantee for
users who are hesitant to leak a secret, especially if the consequences of
identification are dire in certain scenarios such as whistleblowing. The notion
of unconditional anonymity, which protects signer-anonymity even
against an infinitely powerful adversary, is considered for ring signatures
that aim to achieve long-term signer-anonymity. However, the existing
lattice-based works that consider the unconditional anonymity notion did not
strictly capture the security requirements imposed in practice, which leads to
a realistic attack on signer-anonymity.
In this paper, we present a realistic attack on the unconditional anonymity
of ring signatures, and formalize the unconditional anonymity model to strictly
capture it. We then propose a lattice-based ring signature construction with
unconditional anonymity by leveraging bonsai tree mechanism. Finally, we prove
the security in the standard model and demonstrate the unconditional anonymity
through both theoretical proof and practical experiments.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 15:10:52 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 08:01:46 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Hu",
"Mingxing",
""
],
[
"Zhou",
"Yunhong",
""
]
] |
new_dataset
| 0.99868 |
2305.17701
|
Hwaran Lee
|
Hwaran Lee, Seokhee Hong, Joonsuk Park, Takyoung Kim, Gunhee Kim and
Jung-Woo Ha
|
KoSBi: A Dataset for Mitigating Social Bias Risks Towards Safer Large
Language Model Application
|
17 pages, 8 figures, 12 tables, ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) learn not only natural text generation abilities
but also social biases against different demographic groups from real-world
data. This poses a critical risk when deploying LLM-based applications.
Existing research and resources are not readily applicable in South Korea due
to the differences in language and culture, both of which significantly affect
the biases and targeted demographic groups. This limitation requires localized
social bias datasets to ensure the safe and effective deployment of LLMs. To
this end, we present KoSBi, a new social bias dataset of 34k pairs of
contexts and sentences in Korean covering 72 demographic groups in 15
categories. We find that through filtering-based moderation, social biases in
generated content can be reduced by 16.47%p on average for HyperCLOVA (30B and
82B), and GPT-3.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 12:07:16 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 01:42:07 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Lee",
"Hwaran",
""
],
[
"Hong",
"Seokhee",
""
],
[
"Park",
"Joonsuk",
""
],
[
"Kim",
"Takyoung",
""
],
[
"Kim",
"Gunhee",
""
],
[
"Ha",
"Jung-Woo",
""
]
] |
new_dataset
| 0.999465 |
2305.18313
|
Ryan Hardesty Lewis
|
Junfeng Jiao, Ryan Hardesty Lewis, Kijin Seong, Arya Farahi, Paul
Navratil, Nate Casebeer, Dev Niyogi
|
Fire and Smoke Digital Twin -- A computational framework for modeling
fire incident outcomes
|
8 pages, 8 figures, conference
| null | null | null |
cs.CY cs.CE cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Fires and burning are the chief causes of particulate matter (PM2.5), a key
measure of air quality in communities and cities worldwide. This work
develops a live fire tracking platform to show active reported fires from over
twenty cities in the U.S., as well as predict their smoke paths and impacts on
the air quality of regions within their range. Specifically, our
close-to-real-time tracking and predictions culminate in a digital twin to protect
public health and inform the public of fire and air quality risk. This tool
tracks fire incidents in real-time, utilizes the 3D building footprints of
Austin to simulate smoke outputs, and predicts fire incident smoke falloffs
within the complex city environment. Results from this study include a complete
fire and smoke digital twin model for Austin. We work in cooperation with the
City of Austin Fire Department to ensure the accuracy of our forecast and also
show that air quality sensor density within our cities cannot validate urban
fire presence. We additionally release code and methodology to replicate these
results for any city in the world. This work paves the path for similar digital
twin models to be developed and deployed to better protect the health and
safety of citizens.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 00:43:06 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Jiao",
"Junfeng",
""
],
[
"Lewis",
"Ryan Hardesty",
""
],
[
"Seong",
"Kijin",
""
],
[
"Farahi",
"Arya",
""
],
[
"Navratil",
"Paul",
""
],
[
"Casebeer",
"Nate",
""
],
[
"Niyogi",
"Dev",
""
]
] |
new_dataset
| 0.96369 |
2305.18315
|
Antonio Mauricio Brito Junior
|
Antonio Mauricio, Vladia Pinheiro, Vasco Furtado, Jo\~ao Ara\'ujo
Monteiro Neto, Francisco das Chagas Juc\'a Bomfim, Andr\'e C\^amara Ferreira
da Costa, Raquel Silveira, Nilsiton Arag\~ao
|
CDJUR-BR -- A Golden Collection of Legal Document from Brazilian Justice
with Fine-Grained Named Entities
|
15 pages, in Portuguese language, 3 figures, 5 tables
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
A basic task for most Legal Artificial Intelligence (Legal AI) applications
is Named Entity Recognition (NER). However, texts produced in the context of
legal practice make references to entities that are not trivially recognized by
the currently available NERs. There is a lack of categorization of legislation,
jurisprudence, evidence, penalties, the roles of people in a legal process
(judge, lawyer, victim, defendant, witness), types of locations (crime
location, defendant's address), etc. In this sense, there is still a need for a
robust golden collection, annotated with fine-grained entities of the legal
domain, and which covers various documents of a legal process, such as
petitions, inquiries, complaints, decisions and sentences. In this article, we
describe the development of the Golden Collection of the Brazilian Judiciary
(CDJUR-BR) contemplating a set of fine-grained named entities that have been
annotated by experts in legal documents. The creation of CDJUR-BR followed its
own methodology that aimed to attribute a character of comprehensiveness and
robustness. Together with the CDJUR-BR repository we provided a NER based on
the BERT model and trained with the CDJUR-BR, whose results indicate the
advantage of training with the CDJUR-BR.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 00:48:52 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Mauricio",
"Antonio",
""
],
[
"Pinheiro",
"Vladia",
""
],
[
"Furtado",
"Vasco",
""
],
[
"Neto",
"João Araújo Monteiro",
""
],
[
"Bomfim",
"Francisco das Chagas Jucá",
""
],
[
"da Costa",
"André Câmara Ferreira",
""
],
[
"Silveira",
"Raquel",
""
],
[
"Aragão",
"Nilsiton",
""
]
] |
new_dataset
| 0.999621 |
2305.18317
|
Vincent Labatut
|
Lucas Potin (LIA), Vincent Labatut (LIA), Pierre-Henri Morand (LBNC),
Christine Largeron (LHC)
|
FOPPA: An Open Database of French Public Procurement Award Notices From
2010--2020
| null |
Scientific Data , 2023, 10, pp.303
|
10.1038/s41597-023-02213-z
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public Procurement refers to governments' purchasing activities of goods,
services, and construction of public works. In the European Union (EU), it is
an essential sector, corresponding to 15% of the GDP. EU public procurement
generates large amounts of data, because award notices related to contracts
exceeding a predefined threshold must be published on the TED (EU's official
journal). Under the framework of the DeCoMaP project, which aims at leveraging
such data in order to predict fraud in public procurement, we constitute the
FOPPA (French Open Public Procurement Award notices) database. It contains the
description of 1,380,965 lots obtained from the TED, covering the 2010--2020
period for France. We detect a number of substantial issues in these data, and
propose a set of automated and semi-automated methods to solve them and produce
a usable database. It can be leveraged to study public procurement in an
academic setting, but also to facilitate the monitoring of public policies, and
to improve the quality of the data offered to buyers and suppliers.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 14:02:37 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Potin",
"Lucas",
"",
"LIA"
],
[
"Labatut",
"Vincent",
"",
"LIA"
],
[
"Morand",
"Pierre-Henri",
"",
"LBNC"
],
[
"Largeron",
"Christine",
"",
"LHC"
]
] |
new_dataset
| 0.996783 |
2305.18322
|
Simerjot Kaur
|
Simerjot Kaur, Charese Smiley, Akshat Gupta, Joy Sain, Dongsheng Wang,
Suchetha Siddagangappa, Toyin Aguda, Sameena Shah
|
REFinD: Relation Extraction Financial Dataset
| null | null |
10.1145/3539618.3591911
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A number of datasets for Relation Extraction (RE) have been created to aid
downstream tasks such as information retrieval, semantic search, question
answering and textual entailment. However, these datasets fail to capture
financial-domain specific challenges since most of these datasets are compiled
using general knowledge sources such as Wikipedia, web-based text and news
articles, hindering real-life progress and adoption within the financial world.
To address this limitation, we propose REFinD, the first large-scale annotated
dataset of relations, with $\sim$29K instances and 22 relations amongst 8 types
of entity pairs, generated entirely over financial documents. We also provide
an empirical evaluation with various state-of-the-art models as benchmarks for
the RE task and highlight the challenges posed by our dataset. We observed that
various state-of-the-art deep learning models struggle with numeric inference
and with relational and directional ambiguity.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 22:40:11 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Kaur",
"Simerjot",
""
],
[
"Smiley",
"Charese",
""
],
[
"Gupta",
"Akshat",
""
],
[
"Sain",
"Joy",
""
],
[
"Wang",
"Dongsheng",
""
],
[
"Siddagangappa",
"Suchetha",
""
],
[
"Aguda",
"Toyin",
""
],
[
"Shah",
"Sameena",
""
]
] |
new_dataset
| 0.998691 |
2305.18354
|
Yudong Xu
|
Yudong Xu, Wenhao Li, Pashootan Vaezipoor, Scott Sanner, Elias B.
Khalil
|
LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and
the Importance of Object-based Representations
|
17 pages, 11 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Can a Large Language Model (LLM) solve simple abstract reasoning problems? We
explore this broad question through a systematic analysis of GPT on the
Abstraction and Reasoning Corpus (ARC), a representative benchmark of abstract
reasoning ability from limited examples in which solutions require some "core
knowledge" of concepts such as objects, goal states, counting, and basic
geometry. GPT-4 solves only 13/50 of the most straightforward ARC tasks when
using textual encodings for their two-dimensional input-output grids. Our
failure analysis reveals that GPT-4's capacity to identify objects and reason
about them is significantly influenced by the sequential nature of the text
that represents an object within a text encoding of a task. To test this
hypothesis, we design a new benchmark, the 1D-ARC, which consists of
one-dimensional (array-like) tasks that are more conducive to GPT-based
reasoning, and where it indeed performs better than on the (2D) ARC. To
alleviate this issue, we propose an object-based representation that is
obtained through an external tool, resulting in nearly doubling the performance
on solved ARC tasks and near-perfect scores on the easier 1D-ARC. Although the
state-of-the-art GPT-4 is unable to "reason" perfectly within non-language
domains such as the 1D-ARC or a simple ARC subset, our study reveals that the
use of object-based representations can significantly improve its reasoning
ability. Visualizations, GPT logs, and data are available at
https://khalil-research.github.io/LLM4ARC.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 16:32:17 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Xu",
"Yudong",
""
],
[
"Li",
"Wenhao",
""
],
[
"Vaezipoor",
"Pashootan",
""
],
[
"Sanner",
"Scott",
""
],
[
"Khalil",
"Elias B.",
""
]
] |
new_dataset
| 0.962316 |
2305.18356
|
Vani Nagarajan
|
Vani Nagarajan, Durga Mandarapu, Milind Kulkarni
|
RT-kNNS Unbound: Using RT Cores to Accelerate Unrestricted Neighbor
Search
|
This paper has been accepted at the International Conference on
Supercomputing 2023 (ICS'23)
| null | null | null |
cs.LG cs.CG cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
The problem of identifying the k-Nearest Neighbors (kNNS) of a point has
proven to be very useful both as a standalone application and as a subroutine
in larger applications. Given its far-reaching applicability in areas such as
machine learning and point clouds, extensive research has gone into leveraging
GPU acceleration to solve this problem. Recent work has shown that using Ray
Tracing cores in recent GPUs to accelerate kNNS is much more efficient compared
to traditional acceleration using shader cores. However, the existing
translation of kNNS to a ray tracing problem imposes a constraint on the search
space for neighbors. Due to this, we can only use RT cores to accelerate
fixed-radius kNNS, which requires the user to set a search radius a priori and
hence can miss neighbors. In this work, we propose TrueKNN, the first unbounded
RT-accelerated neighbor search. TrueKNN adopts an iterative approach where we
incrementally grow the search space until all points have found their k
neighbors. We show that our approach is orders of magnitude faster than
existing approaches and can even be used to accelerate fixed-radius neighbor
searches.
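A CPU analogue of the iterative search-space growth can be sketched with a k-d tree: start from a small radius and enlarge it until every query has at least k neighbors, rather than fixing the radius a priori. The starting radius and growth factor below are illustrative (TrueKNN performs this on RT cores).

```python
# CPU sketch of unbounded kNN via iteratively grown fixed-radius queries.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
tree = cKDTree(pts)

def unbounded_knn(queries, k=5, r0=0.02, grow=2.0):
    r, pending = r0, list(range(len(queries)))
    result = [None] * len(queries)
    while pending:
        for i in list(pending):
            idx = tree.query_ball_point(queries[i], r)
            if len(idx) >= k:  # enough candidates: keep the k closest
                d = np.linalg.norm(pts[idx] - queries[i], axis=1)
                result[i] = [idx[j] for j in np.argsort(d)[:k]]
                pending.remove(i)
        r *= grow  # enlarge the search space for unfinished queries
    return result

print(unbounded_knn(pts[:3])[0])
```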
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 17:40:25 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Nagarajan",
"Vani",
""
],
[
"Mandarapu",
"Durga",
""
],
[
"Kulkarni",
"Milind",
""
]
] |
new_dataset
| 0.956827 |
2305.18371
|
Sizhen Bian
|
Sizhen Bian, Lukas Schulthess, Georg Rutishauser, Alfio Di Mauro, Luca
Benini, Michele Magno
|
ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing
UAV-Platform with Event-Based and Frame-Based Cameras
| null | null | null | null |
cs.CV cs.AI cs.AR cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The interest in dynamic vision sensor (DVS)-powered unmanned aerial vehicles
(UAV) is rising, especially due to the microsecond-level reaction time of the
bio-inspired event sensor, which increases robustness and reduces latency of
the perception tasks compared to a RGB camera. This work presents ColibriUAV, a
UAV platform with both frame-based and event-based cameras interfaces for
efficient perception and near-sensor processing. The proposed platform is
designed around Kraken, a novel low-power RISC-V System on Chip with two
hardware accelerators targeting spiking neural networks and deep ternary neural
networks. Kraken is capable of efficiently processing both event data from a DVS
camera and frame data from an RGB camera. A key feature of Kraken is its
integrated, dedicated interface with a DVS camera. This paper benchmarks the
end-to-end latency and power efficiency of the neuromorphic and event-based UAV
subsystem, demonstrating state-of-the-art event data with a throughput of 7200
frames of events per second and a power consumption of 10.7 \si{\milli\watt},
which is over 6.6 times faster and a hundred times less power-consuming than
the widely-used data reading approach through the USB interface. The overall
sensing and processing power consumption is below 50 mW, achieving latency in
the milliseconds range, making the platform suitable for low-latency autonomous
nano-drones as well.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 23:08:22 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Bian",
"Sizhen",
""
],
[
"Schulthess",
"Lukas",
""
],
[
"Rutishauser",
"Georg",
""
],
[
"Di Mauro",
"Alfio",
""
],
[
"Benini",
"Luca",
""
],
[
"Magno",
"Michele",
""
]
] |
new_dataset
| 0.998739 |
2305.18389
|
Mansour Zoubeirou A Mayaki
|
Mansour Zoubeirou A Mayaki and Michel Riveill
|
AnoRand: A Semi Supervised Deep Learning Anomaly Detection Method by
Random Labeling
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Anomaly detection, or more generally outlier detection, is one of the most
popular and challenging subjects in theoretical and applied machine learning.
The main challenge is that in general we have access to very few labeled data,
or no labels at all. In this paper, we present a new semi-supervised anomaly
detection method called \textbf{AnoRand} that combines a deep learning
architecture with random synthetic label generation. The proposed architecture
has two building blocks: (1) a noise detection (ND) block composed of a
feedforward perceptron and (2) an autoencoder (AE) block. The main idea of this
new architecture is to learn one class (e.g. the majority class in case of
anomaly detection) as well as possible by taking advantage of the ability of
autoencoders to represent data in a latent space and the ability of the
Feedforward Perceptron (FFP) to learn one class when the data is highly
imbalanced. First, we create synthetic anomalies by randomly disturbing (adding
noise to) a few samples (e.g. 2\%) from the training set. Second, we use the
normal and the synthetic samples as input to our model. We compared the
performance of the proposed method to 17 state-of-the-art unsupervised anomaly
detection methods on synthetic datasets and 57 real-world datasets. Our results
show that this new method generally outperforms most of the state-of-the-art
methods and has the best performance (AUC ROC and AUC PR) on the vast majority
of reference datasets. We also tested our method in a supervised way by using
the actual labels to train the model. The results show that it has very good
performance compared to most state-of-the-art supervised algorithms.
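
A minimal sketch of the random synthetic label generation step described above: a small fraction of training samples is perturbed with noise and labeled as synthetic anomalies. The noise scale, fraction, and function name are illustrative assumptions, not the authors' code.

import numpy as np

def random_synthetic_labels(X, frac=0.02, noise_scale=1.0, seed=0):
    # Perturb ~frac of the samples with Gaussian noise (label 1 = synthetic
    # anomaly); the remaining samples keep label 0 (normal class).
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    X_out = X.copy()
    X_out[idx] += rng.normal(0.0, noise_scale, size=X[idx].shape)
    y = np.zeros(n, dtype=int)
    y[idx] = 1
    return X_out, y  # feed (X_out, y) to the ND + AE model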
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 10:53:34 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Mayaki",
"Mansour Zoubeirou A",
""
],
[
"Riveill",
"Michel",
""
]
] |
new_dataset
| 0.968048 |
2305.18479
|
Petros Toupas
|
Petros Toupas, Christos-Savvas Bouganis, Dimitrios Tzovaras
|
FMM-X3D: FPGA-based modeling and mapping of X3D for Human Action
Recognition
|
8 pages, 6 figures, 2 tables
| null | null | null |
cs.CV cs.AI cs.AR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
3D Convolutional Neural Networks are gaining increasing attention from
researchers and practitioners and have found applications in many domains, such
as surveillance systems, autonomous vehicles, human monitoring systems, and
video retrieval. However, their widespread adoption is hindered by their high
computational and memory requirements, especially when resource-constrained
systems are targeted. This paper addresses the problem of mapping X3D, a
state-of-the-art model in Human Action Recognition that achieves an accuracy of
95.5\% on the UCF101 benchmark, onto any FPGA device. The proposed toolflow
generates an optimised stream-based hardware system, taking into account the
available resources and off-chip memory characteristics of the FPGA device. The
generated designs push the current performance-accuracy Pareto front further,
and enable, for the first time, the targeting of such complex model
architectures for the Human Action Recognition task.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 11:17:51 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Toupas",
"Petros",
""
],
[
"Bouganis",
"Christos-Savvas",
""
],
[
"Tzovaras",
"Dimitrios",
""
]
] |
new_dataset
| 0.963184 |
2305.18500
|
Sihan Chen
|
Sihan Chen, Handong Li, Qunbo Wang, Zijia Zhao, Mingzhen Sun, Xinxin
Zhu, Jing Liu
|
VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and
Dataset
|
23 pages, 5 figures
| null | null | null |
cs.CV cs.AI cs.CL cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision and text have been fully explored in contemporary video-text
foundational models, while other modalities such as audio and subtitles in
videos have not received sufficient attention. In this paper, we set out to
establish connections between multi-modality video tracks, including Vision,
Audio, and Subtitle, and Text by exploring an automatically generated
large-scale omni-modality video caption dataset called VAST-27M. Specifically,
we first collect 27 million open-domain video clips and separately train a
vision and an audio captioner to generate vision and audio captions. Then, we
employ an off-the-shelf Large Language Model (LLM) to integrate the generated
captions, together with subtitles and instructional prompts into omni-modality
captions. Based on the proposed VAST-27M dataset, we train an omni-modality
video-text foundational model named VAST, which can perceive and process
vision, audio, and subtitle modalities from video, and better support various
tasks including vision-text, audio-text, and multi-modal video-text tasks
(retrieval, captioning and QA). Extensive experiments have been conducted to
demonstrate the effectiveness of our proposed VAST-27M corpus and VAST
foundation model. VAST achieves 22 new state-of-the-art results on various
cross-modality benchmarks. Code, model and dataset will be released at
https://github.com/TXH-mercury/VAST.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 14:34:50 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Chen",
"Sihan",
""
],
[
"Li",
"Handong",
""
],
[
"Wang",
"Qunbo",
""
],
[
"Zhao",
"Zijia",
""
],
[
"Sun",
"Mingzhen",
""
],
[
"Zhu",
"Xinxin",
""
],
[
"Liu",
"Jing",
""
]
] |
new_dataset
| 0.999793 |
2305.18511
|
Kyra Gan
|
Kyra Gan, Esmaeil Keyvanshokooh, Xueqing Liu, Susan Murphy
|
Contextual Bandits with Budgeted Information Reveal
| null | null | null | null |
cs.LG math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Contextual bandit algorithms are commonly used in digital health to recommend
personalized treatments. However, to ensure the effectiveness of the
treatments, patients are often requested to take actions that have no immediate
benefit to them, which we refer to as pro-treatment actions. In practice,
clinicians have a limited budget to encourage patients to take these actions
and collect additional information. We introduce a novel optimization and
learning algorithm to address this problem. This algorithm seamlessly combines
the strengths of two algorithmic approaches: 1) an online primal-dual algorithm
for deciding the optimal timing to reach out to patients, and 2) a contextual
bandit learning algorithm to deliver personalized
treatment to the patient. We prove that this algorithm admits a sub-linear
regret bound. We illustrate the usefulness of this algorithm on both synthetic
and real-world data.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 16:18:28 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Gan",
"Kyra",
""
],
[
"Keyvanshokooh",
"Esmaeil",
""
],
[
"Liu",
"Xueqing",
""
],
[
"Murphy",
"Susan",
""
]
] |
new_dataset
| 0.995004 |
2305.18543
|
Yue Kang
|
Yue Kang, Cho-Jui Hsieh, Thomas C. M. Lee
|
Robust Lipschitz Bandits to Adversarial Corruptions
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Lipschitz bandit is a variant of stochastic bandits that deals with a
continuous arm set defined on a metric space, where the reward function is
subject to a Lipschitz constraint. In this paper, we introduce a new problem of
Lipschitz bandits in the presence of adversarial corruptions where an adaptive
adversary corrupts the stochastic rewards up to a total budget $C$. The budget
is measured by the sum of corruption levels across the time horizon $T$. We
consider both weak and strong adversaries, where the weak adversary is unaware
of the current action before the attack, while the strong one can observe it.
Our work presents the first line of robust Lipschitz bandit algorithms that can
achieve sub-linear regret under both types of adversary, even when the total
budget of corruption $C$ is not revealed to the agent. We provide a lower bound
under each type of adversary, and show that our algorithm is optimal under the
strong case. Finally, we conduct experiments to illustrate the effectiveness of
our algorithms against two classic kinds of attacks.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 18:16:59 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Kang",
"Yue",
""
],
[
"Hsieh",
"Cho-Jui",
""
],
[
"Lee",
"Thomas C. M.",
""
]
] |
new_dataset
| 0.997124 |
2305.18563
|
Mustafa Burak Gurbuz
|
Mustafa Burak Gurbuz, Jean Michael Moorman and Constantine Dovrolis
|
SHARP: Sparsity and Hidden Activation RePlay for Neuro-Inspired
Continual Learning
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks (DNNs) struggle to learn in dynamic environments since
they rely on fixed datasets or stationary environments. Continual learning (CL)
aims to address this limitation and enable DNNs to accumulate knowledge
incrementally, similar to human learning. Inspired by how our brain
consolidates memories, a powerful strategy in CL is replay, which involves
training the DNN on a mixture of new and all seen classes. However, existing
replay methods overlook two crucial aspects of biological replay: 1) the brain
replays processed neural patterns instead of raw input, and 2) it prioritizes
the replay of recently learned information rather than revisiting all past
experiences. To address these differences, we propose SHARP, an efficient
neuro-inspired CL method that leverages sparse dynamic connectivity and
activation replay. Unlike other activation replay methods, which assume layers
not subjected to replay have been pretrained and fixed, SHARP can continually
update all layers. Also, SHARP is unique in that it only needs to replay a few
recently seen classes instead of all past classes. Our experiments on five
datasets demonstrate that SHARP outperforms state-of-the-art replay methods in
class incremental learning. Furthermore, we showcase SHARP's flexibility in a
novel CL scenario where the boundaries between learning episodes are blurry.
The SHARP code is available at
\url{https://github.com/BurakGurbuz97/SHARP-Continual-Learning}.
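
The two biological ingredients the abstract emphasizes, replaying stored hidden activations rather than raw inputs and replaying only a few recently seen classes, can be caricatured with a small buffer like the one below; the buffer capacity and the size of the "recent" window are assumptions for illustration.

from collections import deque
import random

class ActivationReplayBuffer:
    def __init__(self, per_class=200, recent_classes=2):
        self.store = {}          # class id -> stored hidden activations
        self.order = deque()     # class ids in order of first appearance
        self.per_class = per_class
        self.recent_classes = recent_classes

    def add(self, cls, activation):
        if cls not in self.store:
            self.store[cls] = []
            self.order.append(cls)
        if len(self.store[cls]) < self.per_class:
            self.store[cls].append(activation)  # activations, not raw inputs

    def sample(self, batch_size):
        # Replay only the few most recently seen classes.
        recent = list(self.order)[-self.recent_classes:]
        pool = [a for c in recent for a in self.store[c]]
        return random.sample(pool, min(batch_size, len(pool)))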
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 18:51:55 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Gurbuz",
"Mustafa Burak",
""
],
[
"Moorman",
"Jean Michael",
""
],
[
"Dovrolis",
"Constantine",
""
]
] |
new_dataset
| 0.997603 |
2305.18600
|
Adolfo Gustavo Serra-Seca-Neto
|
Tainara Silva Novaes, Kathleen Danielly Souza Lins, Adolfo Gustavo S.
Seca Neto, Mariangela de Oliveira G. Setti, Maria Claudia F. Pereira Emer
|
Despertando o Interesse de Mulheres para os Cursos em STEM
|
In Portuguese. 10 pages. Accepted for XVII Women in Information
Technology (WIT 2023)
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents initiatives aimed at promoting female participation in
STEM fields, with the goal of encouraging more women to pursue careers in these
areas. One of these initiatives is the Em\'ilias - Arma\c{c}\~ao em Bits
Project, which organizes workshops in schools. Additionally, a podcast has been
created to foster interaction between young people and professionals in the
field of computing, while also contributing to the establishment of female role
models in the industry. The results of these initiatives have been promising,
as 70.6% of the students who participated in the workshops expressed an
interest in computing. Furthermore, according to Spotify, the podcast's
audience consists of 53% females, 44% males, and 3% unspecified, indicating
that it has successfully reached a female demographic.
Abstract (translated from Portuguese). This article presents initiatives that
aim to promote women's participation in the STEM fields, seeking to encourage
more women to pursue careers in these areas. The Em\'ilias - Arma\c{c}\~ao em
Bits Project runs workshops in schools as well as a podcast, promoting
interaction between young people and professionals in the field of computing,
in addition to contributing to the formation of female role models in this
field. The results showed that 70.6% of the students expressed interest in
computing after participating in the workshops. As for the podcast's listeners,
Spotify data indicated that 53% of the audience identifies as female, 44% as
male, and 3% did not specify their gender, which shows that the podcast has
been reaching a female audience.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 20:34:31 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Novaes",
"Tainara Silva",
""
],
[
"Lins",
"Kathleen Danielly Souza",
""
],
[
"Neto",
"Adolfo Gustavo S. Seca",
""
],
[
"Setti",
"Mariangela de Oliveira G.",
""
],
[
"Emer",
"Maria Claudia F. Pereira",
""
]
] |
new_dataset
| 0.999407 |
2305.18618
|
Vagelis Plevris
|
Vagelis Plevris, George Papazafeiropoulos, Alejandro Jim\'enez Rios
|
Chatbots put to the test in math and logic problems: A preliminary
comparison and assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A comparison between three chatbots based on large language models, namely
ChatGPT-3.5, ChatGPT-4, and Google Bard, is presented, focusing on their
ability to give correct answers to mathematics and logic problems. In
particular, we check their ability to Understand the problem at hand; Apply
appropriate algorithms or methods for its solution; and Generate a coherent
response and a correct answer. We use 30 questions that are clear, without any
ambiguities, fully described with plain text only, and have a unique, well
defined correct answer. The questions are divided into two sets of 15 each. The
questions of Set A are 15 "Original" problems that cannot be found online,
while Set B contains 15 "Published" problems that one can find online, usually
with their solution. Each question is posed three times to each chatbot. The
answers are recorded and discussed, highlighting their strengths and
weaknesses. It has been found that for straightforward arithmetic, algebraic
expressions, or basic logic puzzles, chatbots may provide accurate solutions,
although not in every attempt. However, for more complex mathematical problems
or advanced logic tasks, their answers, although written in a usually
"convincing" way, may not be reliable. Consistency is also an issue, as many
times a chatbot will provide conflicting answers when given the same question
more than once. A comparative quantitative evaluation of the three chatbots is
made through scoring their final answers based on correctness. It was found
that ChatGPT-4 outperforms ChatGPT-3.5 in both sets of questions. Bard comes
third in the original questions of Set A, behind the other two chatbots, while
it has the best performance (first place) in the published questions of Set B.
This is probably because Bard has direct access to the internet, in contrast to
ChatGPT chatbots which do not have any communication with the outside world.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 11:18:05 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Plevris",
"Vagelis",
""
],
[
"Papazafeiropoulos",
"George",
""
],
[
"Rios",
"Alejandro Jiménez",
""
]
] |
new_dataset
| 0.992333 |
2305.18620
|
Xi Chen
|
Nan Zhou, Xinghui Tao, Xi Chen
|
CONA: A novel CONtext-Aware instruction paradigm for communication using
large language model
| null | null | null | null |
cs.CL cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce CONA, a novel context-aware instruction paradigm for effective
knowledge dissemination using generative pre-trained transformer (GPT) models.
CONA is a flexible framework designed to leverage the capabilities of Large
Language Models (LLMs) and incorporate DIKW (Data, Information, Knowledge,
Wisdom) hierarchy to automatically instruct and optimise presentation content,
anticipate potential audience inquiries, and provide context-aware answers that
adapt to the knowledge level of the audience group. The unique aspect of the
CONA paradigm lies in its combination of an independent advisory mechanism and
a recursive feedback loop rooted in the DIKW hierarchy. This synergy
significantly enhances the context-aware content, ensuring it is accessible and
easily comprehended by the audience. This paradigm is an early attempt to
explore new methods for knowledge dissemination and communication in the LLM
era, offering effective support for everyday knowledge sharing scenarios. We
conduct experiments on a range of audience roles, along with materials from
various disciplines using GPT4. Both quantitative and qualitative results
demonstrated that the proposed CONA paradigm achieved remarkable performance
compared to the outputs guided by conventional prompt engineering.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 00:53:18 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Zhou",
"Nan",
""
],
[
"Tao",
"Xinghui",
""
],
[
"Chen",
"Xi",
""
]
] |
new_dataset
| 0.981102 |
2305.18663
|
Frank Wanye
|
Frank Wanye, Vitaliy Gleyzer, Edward Kao, Wu-chun Feng
|
Exact Distributed Stochastic Block Partitioning
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochastic block partitioning (SBP) is a community detection algorithm that
is highly accurate even on graphs with a complex community structure, but its
inherently serial nature hinders its widespread adoption by the wider
scientific community. To make it practical to analyze large real-world graphs
with SBP, there is a growing need to parallelize and distribute the algorithm.
The current state-of-the-art distributed SBP algorithm is a divide-and-conquer
approach that limits communication between compute nodes until the end of
inference. This leads to the breaking of computational dependencies, which
causes convergence issues as the number of compute nodes increases, and when
the graph is sufficiently sparse. In this paper, we introduce EDiSt - an exact
distributed stochastic block partitioning algorithm. Under EDiSt, compute nodes
periodically share community assignments during inference. Due to this
additional communication, EDiSt improves upon the divide-and-conquer algorithm
by allowing it to scale out to a larger number of compute nodes without
suffering from convergence issues, even on sparse graphs. We show that EDiSt
provides speedups of up to 23.8X over the divide-and-conquer approach, and
speedups of up to 38.0X over shared-memory parallel SBP when scaled out to 64
compute nodes.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 00:07:56 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Wanye",
"Frank",
""
],
[
"Gleyzer",
"Vitaliy",
""
],
[
"Kao",
"Edward",
""
],
[
"Feng",
"Wu-chun",
""
]
] |
new_dataset
| 0.986803 |
2305.18714
|
Supeng Wang
|
Supeng Wang, Yuxi Li, Ming Xie, Mingmin Chi, Yabiao Wang, Chengjie
Wang, Wenbing Zhu
|
Align, Perturb and Decouple: Toward Better Leverage of Difference
Information for RSI Change Detection
|
To appear in IJCAI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Change detection is a widely adopted technique in remote sensing imagery (RSI)
analysis for the discovery of long-term geomorphic evolution. To highlight the
areas of semantic changes, previous efforts mostly pay attention to learning
representative feature descriptors of a single image, while the difference
information is either modeled with simple difference operations or implicitly
embedded via feature interactions. Nevertheless, such difference modeling can
be noisy since it suffers from non-semantic changes and lacks explicit guidance
from image content or context. In this paper, we revisit the importance of
feature difference for change detection in RSI, and propose a series of
operations to fully exploit the difference information: Alignment, Perturbation
and Decoupling (APD). Firstly, alignment leverages contextual similarity to
compensate for the non-semantic difference in feature space. Next, a difference
module trained with semantic-wise perturbation is adopted to learn more
generalized change estimators, which reversely bootstraps feature extraction
and prediction. Finally, a decoupled dual-decoder structure is designed to
predict semantic changes in both content-aware and content-agnostic manners.
Extensive experiments are conducted on benchmarks of LEVIR-CD, WHU-CD and
DSIFN-CD, demonstrating our proposed operations bring significant improvement
and achieve competitive results under similar comparative conditions. Code is
available at https://github.com/wangsp1999/CD-Research/tree/main/openAPD
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 03:39:53 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Wang",
"Supeng",
""
],
[
"Li",
"Yuxi",
""
],
[
"Xie",
"Ming",
""
],
[
"Chi",
"Mingmin",
""
],
[
"Wang",
"Yabiao",
""
],
[
"Wang",
"Chengjie",
""
],
[
"Zhu",
"Wenbing",
""
]
] |
new_dataset
| 0.964875 |
2305.18727
|
Muskan Garg
|
Muskan Garg, Amirmohammad Shahbandegan, Amrit Chadha, Vijay Mago
|
An Annotated Dataset for Explainable Interpersonal Risk Factors of
Mental Disturbance in Social Media Posts
| null | null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With a surge in identifying suicidal risk and its severity in social media
posts, we argue that more consequential and explainable research is required
for optimal impact on clinical psychology practice and personalized mental
healthcare. The success of computational intelligence techniques for inferring
mental illness from social media resources points to natural language
processing as a lens for determining Interpersonal Risk Factors (IRF) in human
writings. Motivated by the limited availability of datasets for the social NLP
research community, we construct and release a new annotated dataset with
human-labelled explanations and classification of IRF affecting mental
disturbance on social media: (i) Thwarted Belongingness (TBe), and (ii)
Perceived Burdensomeness (PBu). We establish baseline models on our dataset
facilitating future research directions to develop real-time personalized AI
models by detecting patterns of TBe and PBu in emotional spectrum of user's
historical social media profile.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 04:08:40 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Garg",
"Muskan",
""
],
[
"Shahbandegan",
"Amirmohammad",
""
],
[
"Chadha",
"Amrit",
""
],
[
"Mago",
"Vijay",
""
]
] |
new_dataset
| 0.998615 |
2305.18736
|
Muskan Garg
|
Muskan Garg, Chandni Saxena, Debabrata Samanta, Bonnie J. Dorr
|
LonXplain: Lonesomeness as a Consequence of Mental Disturbance in Reddit
Posts
| null | null | null | null |
cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media is a potential source of information that infers latent mental
states through Natural Language Processing (NLP). While narrating real-life
experiences, social media users convey their feeling of loneliness or isolated
lifestyle, impacting their mental well-being. Existing literature on
psychological theories points to loneliness as the major consequence of
interpersonal risk factors, propounding the need to investigate loneliness as a
major aspect of mental disturbance. We formulate lonesomeness detection in
social media posts as an explainable binary classification problem, discovering
at-risk users and suggesting the need for resilience for early control. To the
best of our knowledge, there is no existing explainable dataset, i.e., one with
human-readable, annotated text spans, to facilitate further research and
development in detecting loneliness as a cause of mental disturbance. In this
work,
three experts: a senior clinical psychologist, a rehabilitation counselor, and
a social NLP researcher define annotation schemes and perplexity guidelines to
mark the presence or absence of lonesomeness, along with the marking of
text-spans in original posts as explanation, in 3,521 Reddit posts. We expect
the public release of our dataset, LonXplain, and traditional classifiers as
baselines via GitHub.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 04:21:24 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Garg",
"Muskan",
""
],
[
"Saxena",
"Chandni",
""
],
[
"Samanta",
"Debabrata",
""
],
[
"Dorr",
"Bonnie J.",
""
]
] |
new_dataset
| 0.997699 |
2305.18745
|
Yiyu Cai
|
Souravik Dutta, Yiyu Cai, Jianmin Zheng
|
Multi-objective Anti-swing Trajectory Planning of Double-pendulum Tower
Crane Operations using Opposition-based Evolutionary Algorithm
|
14 pages, 14 figures, 6 tables
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Underactuated tower crane lifting requires time-energy optimal trajectories
for the trolley/slew operations and reduction of the unactuated swings
resulting from the trolley/jib motion. In scenarios involving non-negligible
hook mass or long rig-cable, the hook-payload unit exhibits double-pendulum
behaviour, making the problem highly challenging. This article introduces an
offline multi-objective anti-swing trajectory planning module for a
Computer-Aided Lift Planning (CALP) system of autonomous double-pendulum tower
cranes, addressing all the transient state constraints. A set of auxiliary
outputs are selected by methodically analyzing the payload swing dynamics and
are used to prove the differential flatness property of the crane operations.
The flat outputs are parameterized via suitable B\'{e}zier curves to formulate
the multi-objective trajectory optimization problems in the flat output space.
A novel multi-objective evolutionary algorithm called Collective Oppositional
Generalized Differential Evolution 3 (CO-GDE3) is employed as the optimizer. To
obtain faster convergence and better consistency in getting a wide range of
good solutions, a new population initialization strategy is integrated into the
conventional GDE3. The computationally efficient initialization method
incorporates various concepts of computational opposition. Statistical
comparisons based on trolley and slew operations verify the superiority of
convergence and reliability of CO-GDE3 over the standard GDE3. Trolley and slew
operations of a collision-free lifting path computed via the path planner of
the CALP system are selected for a simulation study. The simulated trajectories
demonstrate that the proposed planner can produce time-energy optimal
solutions, keeping all the state variables within their respective limits and
restricting the hook and payload swings.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 04:54:07 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Dutta",
"Souravik",
""
],
[
"Cai",
"Yiyu",
""
],
[
"Zheng",
"Jianmin",
""
]
] |
new_dataset
| 0.988714 |
2305.18756
|
Yuxuan Wang
|
Yuxuan Wang, Zilong Zheng, Xueliang Zhao, Jinpeng Li, Yueqian Wang,
and Dongyan Zhao
|
VSTAR: A Video-grounded Dialogue Dataset for Situated Semantic
Understanding with Scene and Topic Transitions
|
To appear at ACL 2023
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Video-grounded dialogue understanding is a challenging problem that requires
machines to perceive, parse and reason over situated semantics extracted from
weakly aligned video and dialogues. Most existing benchmarks treat both
modalities the same as a frame-independent visual understanding task, while
neglecting the intrinsic attributes in multimodal dialogues, such as scene and
topic transitions. In this paper, we present Video-grounded Scene&Topic AwaRe
dialogue (VSTAR) dataset, a large scale video-grounded dialogue understanding
dataset based on 395 TV series. Based on VSTAR, we propose two benchmarks for
video-grounded dialogue understanding: scene segmentation and topic
segmentation, and one benchmark for video-grounded dialogue generation.
Comprehensive experiments are performed on these benchmarks to demonstrate the
importance of multimodal information and segments in video-grounded dialogue
understanding and generation.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 05:40:37 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Wang",
"Yuxuan",
""
],
[
"Zheng",
"Zilong",
""
],
[
"Zhao",
"Xueliang",
""
],
[
"Li",
"Jinpeng",
""
],
[
"Wang",
"Yueqian",
""
],
[
"Zhao",
"Dongyan",
""
]
] |
new_dataset
| 0.999841 |
2305.18760
|
Yuxuan Wang
|
Yuxuan Wang, Jianghui Wang, Dongyan Zhao, and Zilong Zheng
|
Shuo Wen Jie Zi: Rethinking Dictionaries and Glyphs for Chinese Language
Pre-training
|
To appear at ACL 2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce CDBERT, a new learning paradigm that enhances the semantic
understanding ability of Chinese PLMs with dictionary knowledge and the
structure of Chinese characters. We name the two core modules of CDBERT as
Shuowen and Jiezi, where Shuowen refers to the process of retrieving the most
appropriate meaning from Chinese dictionaries and Jiezi refers to the process
of enhancing characters' glyph representations with structure understanding. To
facilitate dictionary understanding, we propose three pre-training tasks, i.e.,
Masked Entry Modeling, Contrastive Learning for Synonym and Antonym, and
Example Learning. We evaluate our method on both modern Chinese understanding
benchmark CLUE and ancient Chinese benchmark CCLUE. Moreover, we propose a new
polysemy discrimination task PolyMRC based on the collected dictionary of
ancient Chinese. Our paradigm demonstrates consistent improvements on previous
Chinese PLMs across all tasks. Moreover, our approach yields significant
gains in the few-shot setting of ancient Chinese understanding.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 05:48:36 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Wang",
"Yuxuan",
""
],
[
"Wang",
"Jianghui",
""
],
[
"Zhao",
"Dongyan",
""
],
[
"Zheng",
"Zilong",
""
]
] |
new_dataset
| 0.99658 |
2305.18769
|
Keerth Rathakumar
|
Keerth Rathakumar, David Liebowitz, Christian Walder, Kristen Moore,
Salil S. Kanhere
|
DualVAE: Controlling Colours of Generated and Real Images
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Colour controlled image generation and manipulation are of interest to
artists and graphic designers. Vector Quantised Variational AutoEncoders
(VQ-VAEs) with autoregressive (AR) prior are able to produce high quality
images, but lack an explicit representation mechanism to control colour
attributes. We introduce DualVAE, a hybrid representation model that provides
such control by learning disentangled representations for colour and geometry.
The geometry is represented by an image intensity mapping that identifies
structural features. The disentangled representation is obtained by two novel
mechanisms:
(i) a dual branch architecture that separates image colour attributes from
geometric attributes, and (ii) a new ELBO that trains the combined colour and
geometry representations. DualVAE can control the colour of generated images,
and recolour existing images by transferring the colour latent representation
obtained from an exemplar image. We demonstrate that DualVAE generates images
with FID nearly two times better than VQ-GAN on a diverse collection of
datasets, including animated faces, logos and artistic landscapes.
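
A schematic PyTorch sketch of the dual-branch split described above: a geometry branch that encodes an image-intensity mapping and a colour branch that encodes the full image, with recolouring performed by swapping in an exemplar's colour latent. Layer sizes are toy stand-ins, and the paper's vector quantisation and new ELBO are not reproduced here.

import torch
import torch.nn as nn

class DualBranchSketch(nn.Module):
    def __init__(self, zc=16, zg=16, hw=32):
        super().__init__()
        self.colour_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * hw * hw, zc))
        self.geom_enc = nn.Sequential(nn.Flatten(), nn.Linear(hw * hw, zg))
        self.dec = nn.Sequential(nn.Linear(zc + zg, 3 * hw * hw),
                                 nn.Unflatten(1, (3, hw, hw)))

    def forward(self, x, colour_latent=None):
        grey = x.mean(dim=1, keepdim=True)   # intensity mapping = geometry input
        zc = self.colour_enc(x) if colour_latent is None else colour_latent
        zg = self.geom_enc(grey)
        return self.dec(torch.cat([zc, zg], dim=1))

model = DualBranchSketch()
x, exemplar = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
recoloured = model(x, colour_latent=model.colour_enc(exemplar))  # colour transfer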
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 06:04:30 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Rathakumar",
"Keerth",
""
],
[
"Liebowitz",
"David",
""
],
[
"Walder",
"Christian",
""
],
[
"Moore",
"Kristen",
""
],
[
"Kanhere",
"Salil S.",
""
]
] |
new_dataset
| 0.997666 |
2305.18778
|
Sara Baradaran
|
Sepehr Ganji, Shirin Behnaminia, Ali Ahangarpour, Erfan Mazaheri, Sara
Baradaran, Zeinab Zali, Mohammad Reza Heidarpour, Ali Rakhshan, Mahsa Faraji
Shoyari
|
CN2F: A Cloud-Native Cellular Network Framework
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Upcoming 5G and Beyond 5G (B5G) cellular networks aim to improve the
efficiency and flexibility of mobile networks by incorporating various
technologies, such as Software Defined Networking (SDN), Network Function
Virtualization (NFV), and Network Slicing (NS). In this paper, we share our
findings, accompanied by a comprehensive online codebase, about the best
practice of using different open-source projects in order to realize a flexible
testbed for academia and industrial Research and Development (R&D) activities
on the future generation of cellular networks. In particular, a Cloud-Native
Cellular Network Framework (CN2F) is presented which uses OpenAirInterface's
codebase to generate cellular Virtual Network Functions (VNFs) and deploys
Kubernetes to disperse and manage them among some worker nodes. Moreover, CN2F
leverages ONOS and Mininet to emulate the effect of the IP transport networks
in the fronthaul and backhaul of real cellular networks. In this paper, we also
showcase two use cases of CN2F to demonstrate the importance of Edge Computing
(EC) and the capability of Radio Access Network (RAN) slicing.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 06:20:53 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Ganji",
"Sepehr",
""
],
[
"Behnaminia",
"Shirin",
""
],
[
"Ahangarpour",
"Ali",
""
],
[
"Mazaheri",
"Erfan",
""
],
[
"Baradaran",
"Sara",
""
],
[
"Zali",
"Zeinab",
""
],
[
"Heidarpour",
"Mohammad Reza",
""
],
[
"Rakhshan",
"Ali",
""
],
[
"Shoyari",
"Mahsa Faraji",
""
]
] |
new_dataset
| 0.990979 |
2305.18782
|
Takahiro Shindo
|
Takahiro Shindo, Taiju Watanabe, Kein Yamada, Hiroshi Watanabe
|
VVC Extension Scheme for Object Detection Using Contrast Reduction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, video analysis using Artificial Intelligence (AI) has become
widespread, due to the remarkable development of image recognition technology
based on deep learning. In 2019, the Moving Picture Experts Group (MPEG)
started standardization of Video Coding for Machines (VCM) as a video coding
technology for image recognition. In the framework of VCM, both higher image
recognition accuracy and better video compression performance are required. In
this paper, we propose an extension scheme of video coding for object detection
using Versatile Video Coding (VVC). Unlike video for human vision, video used
for object detection does not require a large image size or high contrast.
Downsampling the image reduces the amount of information to be transmitted,
and reducing the image contrast lowers the entropy of the image. Therefore, in
our proposed scheme, the original image is reduced in size and contrast, then
coded with the VVC encoder to achieve high compression performance. The output
image from the VVC decoder is then restored to its original image size using
the bicubic method. Experimental results show that the proposed video coding
scheme achieves better coding performance than regular VVC in terms of object
detection accuracy.
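
A minimal pre-/post-processing sketch of the proposed pipeline, with the VVC encode/decode step elided: shrink the image and pull its pixel values toward mid-gray before encoding, then restore the original size with bicubic interpolation after decoding. The scale and contrast factors are illustrative assumptions.

import numpy as np
from PIL import Image

def preprocess(img, scale=0.5, contrast=0.5):
    # Reduce size, then reduce contrast around mid-gray (128).
    small = img.resize((int(img.width * scale), int(img.height * scale)),
                       Image.BICUBIC)
    arr = np.asarray(small).astype(np.float32)
    arr = 128.0 + contrast * (arr - 128.0)
    return Image.fromarray(arr.clip(0, 255).astype(np.uint8))

def postprocess(decoded, orig_size):
    # Restore the original resolution with bicubic interpolation.
    return decoded.resize(orig_size, Image.BICUBIC)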
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 06:29:04 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Shindo",
"Takahiro",
""
],
[
"Watanabe",
"Taiju",
""
],
[
"Yamada",
"Kein",
""
],
[
"Watanabe",
"Hiroshi",
""
]
] |
new_dataset
| 0.975967 |
2305.18834
|
Shengbo Liu
|
Shengbo Liu, Wen Wu, Liqun Fu, Kaige Qu, Qiang Ye, Weihua Zhuang, and
Sherman Shen
|
Millimeter Wave Full-Duplex Networks: MAC Design and Throughput
Optimization
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Full-duplex (FD) technique can remarkably boost the network capacity in the
millimeter wave (mmWave) bands by enabling simultaneous transmission and
reception. However, due to directional transmission and large bandwidth, the
throughput and fairness performance of a mmWave FD network are affected by
deafness and directional hidden-node (HN) problems and severe residual
self-interference (RSI). To address these challenges, this paper proposes a
directional FD medium access control protocol, named DFDMAC, to support typical
directional FD transmission modes, exploiting FD transmission of control frames
to reduce signaling overhead. Furthermore, a novel busy-tone mechanism is
designed to avoid deafness and directional HN problems and improve the fairness
of channel access. To reduce the impact of RSI on link throughput, we formulate
a throughput maximization problem for different FD transmission modes and
propose a power control algorithm to obtain the optimal transmit power.
Simulation results show that the proposed DFDMAC can improve the network
throughput and fairness by over 60% and 32%, respectively, compared with the
existing MAC protocol in IEEE 802.11ay. Moreover, the proposed power control
algorithm can effectively enhance the network throughput.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 08:26:28 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Liu",
"Shengbo",
""
],
[
"Wu",
"Wen",
""
],
[
"Fu",
"Liqun",
""
],
[
"Qu",
"Kaige",
""
],
[
"Ye",
"Qiang",
""
],
[
"Zhuang",
"Weihua",
""
],
[
"Shen",
"Sherman",
""
]
] |
new_dataset
| 0.994014 |
2305.18855
|
Jan Deriu
|
Michel Pl\"uss, Jan Deriu, Yanick Schraner, Claudio Paonessa, Julia
Hartmann, Larissa Schmidt, Christian Scheller, Manuela H\"urlimann, Tanja
Samard\v{z}i\'c, Manfred Vogel, Mark Cieliebak
|
STT4SG-350: A Speech Corpus for All Swiss German Dialect Regions
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present STT4SG-350 (Speech-to-Text for Swiss German), a corpus of Swiss
German speech, annotated with Standard German text at the sentence level. The
data is collected using a web app in which the speakers are shown Standard
German sentences, which they translate to Swiss German and record. We make the
corpus publicly available. It contains 343 hours of speech from all dialect
regions and is the largest public speech corpus for Swiss German to date.
Application areas include automatic speech recognition (ASR), text-to-speech,
dialect identification, and speaker recognition. Dialect information, age
group, and gender of the 316 speakers are provided. Genders are equally
represented and the corpus includes speakers of all ages. Roughly the same
amount of speech is provided per dialect region, which makes the corpus ideally
suited for experiments with speech technology for different dialects. We
provide training, validation, and test splits of the data. The test set
consists of the same spoken sentences for each dialect region and allows a fair
evaluation of the quality of speech technologies in different dialects. We
train an ASR model on the training set and achieve an average BLEU score of
74.7 on the test set. The model beats the best published BLEU scores on 2 other
Swiss German ASR test sets, demonstrating the quality of the corpus.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 08:49:38 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Plüss",
"Michel",
""
],
[
"Deriu",
"Jan",
""
],
[
"Schraner",
"Yanick",
""
],
[
"Paonessa",
"Claudio",
""
],
[
"Hartmann",
"Julia",
""
],
[
"Schmidt",
"Larissa",
""
],
[
"Scheller",
"Christian",
""
],
[
"Hürlimann",
"Manuela",
""
],
[
"Samardžić",
"Tanja",
""
],
[
"Vogel",
"Manfred",
""
],
[
"Cieliebak",
"Mark",
""
]
] |
new_dataset
| 0.999796 |
2305.18859
|
Jan Mrkos
|
David Fiedler and Jan Mrkos
|
Large-scale Ridesharing DARP Instances Based on Real Travel Demand
|
8 pages, 9 figures. Submitted to 26th IEEE International Conference
on Intelligent Transportation Systems ITSC 2023. For the published associated
dataset and source codes, see the repository
https://github.com/aicenter/Ridesharing_DARP_instances
| null | null | null |
cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately predicting the real-life performance of algorithms solving the
Dial-a-Ride Problem (DARP) in the context of Mobility on Demand (MoD) systems
with ridesharing requires evaluating them on representative instances. However,
the benchmarking of state-of-the-art DARP solution methods has been limited to
small, artificial instances or outdated non-public instances, hindering direct
comparisons. With the rise of large MoD systems and the availability of open
travel demand datasets for many US cities, there is now an opportunity to
evaluate these algorithms on standardized, realistic, and representative
instances. Despite the significant challenges involved in processing obfuscated
and diverse datasets, we have developed a methodology with which we have
created a comprehensive set of large-scale demand instances based on real-world
data. These instances cover diverse use cases, one of which is demonstrated in
an evaluation of two established DARP methods: the insertion heuristic and
optimal vehicle-group assignment method. We publish the full results of both
methods in a standardized format. The results show significant differences
between areas in all measured quantities, emphasizing the importance of
evaluating methods across different cities.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 08:51:11 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Fiedler",
"David",
""
],
[
"Mrkos",
"Jan",
""
]
] |
new_dataset
| 0.999334 |
2305.18907
|
Loukas Ilias
|
Loukas Ilias, Dimitris Askounis
|
Multitask learning for recognizing stress and depression in social media
| null | null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Stress and depression are prevalent nowadays across people of all ages due to
the fast pace of life. People use social media to express their feelings.
Thus, social media constitute a valuable source of information for the early
detection of stress and depression. Although many research works have been
introduced targeting the early recognition of stress and depression, there are
still limitations. Multi-task learning settings have been proposed that use
depression and emotion (or figurative language) as the primary and auxiliary
tasks respectively. However, although stress is inextricably linked with
depression, researchers treat these as two separate tasks. To
address these limitations, we present the first study, which exploits two
different datasets collected under different conditions, and introduce two
multitask learning frameworks, which use depression and stress as the main and
auxiliary tasks respectively. Specifically, we use a depression dataset and a
stressful dataset including stressful posts from ten subreddits of five
domains. In terms of the first approach, each post passes through a shared BERT
layer, which is updated by both tasks. Next, two separate BERT encoder layers
are exploited, which are updated by each task separately. Regarding the second
approach, it consists of shared and task-specific layers weighted by attention
fusion networks. We conduct a series of experiments and compare our approaches
with existing research initiatives, single-task learning, and transfer
learning. Experiments show multiple advantages of our approaches over
state-of-the-art ones.
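
A minimal PyTorch sketch of the first approach described above: a shared encoder layer updated by both tasks, followed by separate task-specific encoder layers and heads. Small Transformer encoder layers stand in for the BERT layers, and all dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class SharedPrivateSketch(nn.Module):
    def __init__(self, d=128, n_heads=4, tasks=("depression", "stress")):
        super().__init__()
        enc = lambda: nn.TransformerEncoderLayer(d, n_heads, batch_first=True)
        self.shared = enc()                                      # updated by both tasks
        self.private = nn.ModuleDict({t: enc() for t in tasks})  # updated per task
        self.heads = nn.ModuleDict({t: nn.Linear(d, 2) for t in tasks})

    def forward(self, x, task):
        h = self.private[task](self.shared(x))
        return self.heads[task](h.mean(dim=1))  # mean-pool tokens, then classify

model = SharedPrivateSketch()
logits = model(torch.randn(8, 32, 128), task="stress")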
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 10:04:01 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Ilias",
"Loukas",
""
],
[
"Askounis",
"Dimitris",
""
]
] |
new_dataset
| 0.997384 |
2305.18909
|
Elochukwu Ukwandu Dr
|
Assumpta Ezugwu, Elochukwu Ukwandu, Celestine Ugwu, Modesta Ezema,
Comfort Olebara, Juliana Ndunagu, Lizzy Ofusori, Uchenna Ome
|
Password-Based Authentication and The Experiences of End Users
|
31 pages, 15 tables, 2 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Passwords are used majorly for end-user authentication in information and
communication technology (ICT) systems due to its perceived ease of use. The
use for end-user authentication extends through mobile, computers and
network-based products and services. But with the attendant issues relating to
password hacks, leakages, and theft largely due to weak, reuse and poor
password habits of end-users, the call for passwordless authentication as
alternative intensifies. All the same, there are missing knowledge of whether
these password-based experiences are associated with societal economic status,
educational qualification of citizens, their age and gender, technological
advancements, and depth of penetration. In line with the above, understanding
the experience of end-users in developing economy to ascertain their
password-based experience has become of interest to the researchers. This paper
aims at measuring the experience of staff and students in University
communities within southeastern Nigeria on password-based authentication
systems. These communities have population whose age brackets are majorly
within the ages of 16 and 60 years; have people with requisite educational
qualifications ranging from Diploma to Doctorate degrees and constitutes good
number of ICT tools consumers. The survey had 291 respondents, and collected
data about age, educational qualifications, and gender from these respondents.
It also collected information about their password experience in social media
network, online shopping, electronic health care services, and internet
banking. Our analysis using SPSS and report by means of descriptive statistics,
frequency distribution, and Chi-Square tests showed that account compromise in
the geographical area is not common with the respondents reporting good
experience with passwords usage.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 10:05:46 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Ezugwu",
"Assumpta",
""
],
[
"Ukwandu",
"Elochukwu",
""
],
[
"Ugwu",
"Celestine",
""
],
[
"Ezema",
"Modesta",
""
],
[
"Olebara",
"Comfort",
""
],
[
"Ndunagu",
"Juliana",
""
],
[
"Ofusori",
"Lizzy",
""
],
[
"Ome",
"Uchenna",
""
]
] |
new_dataset
| 0.997652 |
2305.18939
|
Regina Stodden
|
Regina Stodden and Omar Momen and Laura Kallmeyer
|
DEPLAIN: A German Parallel Corpus with Intralingual Translations into
Plain Language for Sentence and Document Simplification
|
Accepted to ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Text simplification is an intralingual translation task in which documents,
or sentences of a complex source text are simplified for a target audience. The
success of automatic text simplification systems is highly dependent on the
quality of parallel data used for training and evaluation. To advance sentence
simplification and document simplification in German, this paper presents
DEplain, a new dataset of parallel, professionally written and manually aligned
simplifications in plain German ("plain DE" or in German: "Einfache Sprache").
DEplain consists of a news domain (approx. 500 document pairs, approx. 13k
sentence pairs) and a web-domain corpus (approx. 150 aligned documents, approx.
2k aligned sentence pairs). In addition, we are building a web harvester and
experimenting with automatic alignment methods to facilitate the integration of
non-aligned and yet-to-be-published parallel documents. Using this approach, we are
dynamically increasing the web domain corpus, so it is currently extended to
approx. 750 document pairs and approx. 3.5k aligned sentence pairs. We show
that using DEplain to train a transformer-based seq2seq text simplification
model can achieve promising results. We make available the corpus, the adapted
alignment methods for German, the web harvester and the trained models here:
https://github.com/rstodden/DEPlain.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 11:07:46 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Stodden",
"Regina",
""
],
[
"Momen",
"Omar",
""
],
[
"Kallmeyer",
"Laura",
""
]
] |
new_dataset
| 0.999536 |
2305.19049
|
Yasaman Omid
|
Yasaman Omid, Zohre Mashayekh Bakhsh, Farbod Kayhan, Yi Ma, Rahim
Tafazolli
|
Space MIMO: Direct Unmodified Handheld to Multi-Satellite Communication
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper examines the uplink transmission of a single-antenna handheld
user to a cluster of satellites, with a focus on utilizing the inter-satellite
links to enable cooperative signal detection. Two cases are studied: one with
full CSI and the other with partial CSI between satellites. The two cases are
compared in terms of capacity, overhead, and bit error rate. Additionally, the
impact of channel estimation error is analyzed in both designs, and robust
detection techniques are proposed to handle channel uncertainty up to a certain
level. The performance of each case is demonstrated, and a comparison is made
with conventional satellite communication schemes where only one satellite can
connect to a user. The results of our study reveal that the proposed
constellation, with a total of 3168 satellites in orbit, can enable a capacity
of 800 Mbits/sec through the cooperation of $12$ satellites with an occupied
bandwidth of 500 MHz. In contrast, conventional satellite communication
approaches with the same system parameters yield a significantly lower capacity
of less than 150 Mbits/sec for the nearest satellite.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 14:12:23 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Omid",
"Yasaman",
""
],
[
"Bakhsh",
"Zohre Mashayekh",
""
],
[
"Kayhan",
"Farbod",
""
],
[
"Ma",
"Yi",
""
],
[
"Tafazolli",
"Rahim",
""
]
] |
new_dataset
| 0.995295 |
2305.19108
|
Lior Bracha
|
Lior Bracha, Eitan Shaar, Aviv Shamsian, Ethan Fetaya, Gal Chechik
|
DisCLIP: Open-Vocabulary Referring Expression Generation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Referring Expressions Generation (REG) aims to produce textual descriptions
that unambiguously identify specific objects within a visual scene.
Traditionally, this has been achieved through supervised learning methods,
which perform well on specific data distributions but often struggle to
generalize to new images and concepts. To address this issue, we present a
novel approach for REG, named DisCLIP, short for discriminative CLIP. We build
on CLIP, a large-scale visual-semantic model, to guide an LLM to generate a
contextual description of a target concept in an image while avoiding other
distracting concepts. Notably, this optimization happens at inference time and
does not require additional training or tuning of learned parameters. We
measure the quality of the generated text by evaluating the capability of a
receiver model to accurately identify the described object within the scene. To
achieve this, we use a frozen zero-shot comprehension module as a critique of
our generated referring expressions. We evaluate DisCLIP on multiple referring
expression benchmarks through human evaluation and show that it significantly
outperforms previous methods on out-of-domain datasets. Our results highlight
the potential of using pre-trained visual-semantic models for generating
high-quality contextual descriptions.
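
The discriminative idea at the core of DisCLIP, preferring text that matches the target but not the distractors, can be sketched as below; clip_score is a stand-in callable for a real CLIP text-image similarity, and the paper's LLM-guided, inference-time decoding loop is not shown.

def discriminative_score(caption, target, distractors, clip_score):
    # High when the caption fits the target region but no distractor region.
    pos = clip_score(caption, target)
    neg = max(clip_score(caption, d) for d in distractors)
    return pos - neg

def pick_referring_expression(candidates, target, distractors, clip_score):
    return max(candidates,
               key=lambda c: discriminative_score(c, target, distractors, clip_score))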
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 15:13:17 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Bracha",
"Lior",
""
],
[
"Shaar",
"Eitan",
""
],
[
"Shamsian",
"Aviv",
""
],
[
"Fetaya",
"Ethan",
""
],
[
"Chechik",
"Gal",
""
]
] |
new_dataset
| 0.996946 |
2305.19112
|
Ibrahim Hamamci Mr.
|
Ibrahim Ethem Hamamci, Sezgin Er, Enis Simsar, Atif Emre Yuksel,
Sadullah Gultekin, Serife Damla Ozdemir, Kaiyuan Yang, Hongwei Bran Li,
Sarthak Pati, Bernd Stadlinger, Albert Mehl, Mustafa Gundogar, Bjoern Menze
|
DENTEX: An Abnormal Tooth Detection with Dental Enumeration and
Diagnosis Benchmark for Panoramic X-rays
|
MICCAI 2023 Challenge
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Panoramic X-rays are frequently used in dentistry for treatment planning, but
their interpretation can be both time-consuming and prone to error. Artificial
intelligence (AI) has the potential to aid in the analysis of these X-rays,
thereby improving the accuracy of dental diagnoses and treatment plans.
Nevertheless, designing automated algorithms for this purpose poses significant
challenges, mainly due to the scarcity of annotated data and variations in
anatomical structure. To address these issues, the Dental Enumeration and
Diagnosis on Panoramic X-rays Challenge (DENTEX) has been organized in
association with the International Conference on Medical Image Computing and
Computer-Assisted Intervention (MICCAI) in 2023. This challenge aims to promote
the development of algorithms for multi-label detection of abnormal teeth,
using three types of hierarchically annotated data: partially annotated
quadrant data, partially annotated quadrant-enumeration data, and fully
annotated quadrant-enumeration-diagnosis data, inclusive of four different
diagnoses. In this paper, we present the results of evaluating participant
algorithms on the fully annotated data, additionally investigating performance
variation for quadrant, enumeration, and diagnosis labels in the detection of
abnormal teeth. The provision of this annotated dataset, alongside the results
of this challenge, may lay the groundwork for the creation of AI-powered tools
that can offer more precise and efficient diagnosis and treatment planning in
the field of dentistry. The evaluation code and datasets can be accessed at
https://github.com/ibrahimethemhamamci/DENTEX
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 15:15:50 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Hamamci",
"Ibrahim Ethem",
""
],
[
"Er",
"Sezgin",
""
],
[
"Simsar",
"Enis",
""
],
[
"Yuksel",
"Atif Emre",
""
],
[
"Gultekin",
"Sadullah",
""
],
[
"Ozdemir",
"Serife Damla",
""
],
[
"Yang",
"Kaiyuan",
""
],
[
"Li",
"Hongwei Bran",
""
],
[
"Pati",
"Sarthak",
""
],
[
"Stadlinger",
"Bernd",
""
],
[
"Mehl",
"Albert",
""
],
[
"Gundogar",
"Mustafa",
""
],
[
"Menze",
"Bjoern",
""
]
] |
new_dataset
| 0.999447 |
2305.19115
|
Reza Faieghi
|
Mohammadreza Izadi, Reza Faieghi
|
High-Gain Disturbance Observer for Robust Trajectory Tracking of
Quadrotors
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a simple method to boost the robustness of quadrotors in
trajectory tracking. The presented method features a high-gain disturbance
observer (HGDO) that provides disturbance estimates in real-time. The estimates
are then used in a trajectory control law to compensate for disturbance
effects. We present theoretical convergence results showing that the proposed
HGDO can quickly converge to an adjustable neighborhood of actual disturbance
values. We will then integrate the disturbance estimates with a typical robust
trajectory controller, namely sliding mode control (SMC), and present Lyapunov
stability analysis to establish the boundedness of trajectory tracking errors.
However, our stability analysis can be easily extended to other Lyapunov-based
controllers to develop different HGDO-based controllers with formal stability
guarantees. We evaluate the proposed HGDO-based control method using both
simulation and laboratory experiments in various scenarios and in the presence
of external disturbances. Our results indicate that the addition of HGDO to a
quadrotor trajectory controller can significantly improve the accuracy and
precision of trajectory tracking in the presence of external disturbances.
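
As a toy illustration of the high-gain disturbance observer idea (not the paper's quadrotor design), consider the scalar plant x' = u + d with unknown d: the internal state z is driven so that the estimate d_hat = L*x + z satisfies d_hat' = L*(d - d_hat), i.e. the estimation error decays at the adjustable rate L. The gain, disturbance, and stabilizing input below are assumptions.

import numpy as np

L_gain, dt = 50.0, 1e-3
x, z = 0.0, 0.0
for k in range(2000):
    t = k * dt
    d = np.sin(2 * np.pi * t)          # unknown external disturbance
    u = -2.0 * x                       # any stabilizing control input
    d_hat = z + L_gain * x             # disturbance estimate
    z += dt * (-L_gain * (d_hat + u))  # observer dynamics: z' = -L*(z + L*x + u)
    x += dt * (u + d)                  # plant: x' = u + d
print(f"d = {d:.3f}, d_hat = {d_hat:.3f}")  # estimate tracks the disturbance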
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 15:24:40 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Izadi",
"Mohammadreza",
""
],
[
"Faieghi",
"Reza",
""
]
] |
new_dataset
| 0.99599 |
2305.19157
|
Reza Faieghi
|
S. Mohammadreza Ebrahimi, Farid Norouzi, Hossein Dastres, Reza
Faieghi, Mehdi Naderi, Milad Malekzadeh
|
Sensor Fault Detection and Compensation with Performance Prescription
for Robotic Manipulators
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper focuses on sensor fault detection and compensation for robotic
manipulators. The proposed method features a new adaptive observer and a new
terminal sliding mode control law established on a second-order integral
sliding surface. The method enables sensor fault detection without the need to
impose known bounds on the fault value and/or its derivative. It also enables fast
and fixed-time fault-tolerant control whose performance can be prescribed
beforehand by defining funnel bounds on the tracking error. The ultimate
boundedness of the estimation errors for the proposed observer and the
fixed-time stability of the control system are shown using Lyapunov stability
analysis. The effectiveness of the proposed method is verified using numerical
simulations on two different robotic manipulators, and the results are compared
with existing methods. Our results demonstrate the performance gains obtained by
the proposed method over existing approaches.
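To make the performance-prescription idea concrete, here is a minimal sketch
(a generic funnel-bound form, not the paper's observer or control law) in
which a hypothetical tracking error is checked against an exponentially
shrinking envelope:

```python
import numpy as np

def funnel(t, rho0=1.0, rho_inf=0.05, lam=2.0):
    """Exponentially shrinking performance bound rho(t)."""
    return (rho0 - rho_inf) * np.exp(-lam * t) + rho_inf

t = np.linspace(0.0, 3.0, 301)
e = 0.8 * np.exp(-3.0 * t) * np.cos(5.0 * t)   # hypothetical tracking error

# Prescribed performance holds if the error never leaves the funnel.
print("error respects prescribed funnel:", np.all(np.abs(e) < funnel(t)))
```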
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 15:58:56 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Ebrahimi",
"S. Mohammadreza",
""
],
[
"Norouzi",
"Farid",
""
],
[
"Dastres",
"Hossein",
""
],
[
"Faieghi",
"Reza",
""
],
[
"Naderi",
"Mehdi",
""
],
[
"Malekzadeh",
"Milad",
""
]
] |
new_dataset
| 0.995453 |
2305.19164
|
Viraj Prabhu
|
Viraj Prabhu, Sriram Yenamandra, Prithvijit Chattopadhyay, Judy
Hoffman
|
LANCE: Stress-testing Visual Models by Generating Language-guided
Counterfactual Images
|
Project webpage: https://virajprabhu.github.io/lance-web/
| null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose an automated algorithm to stress-test a trained visual model by
generating language-guided counterfactual test images (LANCE). Our method
leverages recent progress in large language modeling and text-based image
editing to augment an IID test set with a suite of diverse, realistic, and
challenging test images without altering model weights. We benchmark the
performance of a diverse set of pretrained models on our generated data and
observe significant and consistent performance drops. We further analyze model
sensitivity across different types of edits and demonstrate the method's
applicability in surfacing previously unknown class-level model biases in ImageNet.
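A schematic of the stress-testing loop might look as follows; `propose_edits`
and `edit_image` are hypothetical stand-ins for the language-model and
image-editing components, not names from the paper:

```python
def stress_test(model, dataset, propose_edits, edit_image):
    """Measure accuracy drop from language-guided counterfactual images.

    dataset: iterable of (image, caption, label); model(image) -> label.
    """
    clean_correct, edited_correct, n = 0, 0, 0
    for image, caption, label in dataset:
        clean_correct += int(model(image) == label)
        for new_caption in propose_edits(caption):   # e.g. swap attributes
            counterfactual = edit_image(image, caption, new_caption)
            edited_correct += int(model(counterfactual) == label)
            n += 1
    print(f"clean acc: {clean_correct / len(dataset):.3f}, "
          f"counterfactual acc: {edited_correct / max(n, 1):.3f}")
```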
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 16:09:16 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Prabhu",
"Viraj",
""
],
[
"Yenamandra",
"Sriram",
""
],
[
"Chattopadhyay",
"Prithvijit",
""
],
[
"Hoffman",
"Judy",
""
]
] |
new_dataset
| 0.997192 |
2305.19181
|
Bin Xiao
|
Bin Xiao, Murat Simsek, Burak Kantarci, Ala Abu Alkheir
|
Table Detection for Visually Rich Document Images
| null | null | null | null |
cs.CV cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Table Detection (TD) is a fundamental task towards visually rich document
understanding. Current studies usually formulate the TD problem as an object
detection problem, then leverage Intersection over Union (IoU) based metrics to
evaluate the model performance and IoU-based loss functions to optimize the
model. TD applications usually require the prediction results to cover all the
table contents and avoid information loss. However, IoU and IoU-based loss
functions cannot directly reflect the degree of information loss for the
prediction results. Therefore, we propose to decouple IoU into a ground truth
coverage term and a prediction coverage term, in which the former can be used
to measure the information loss of the prediction results.
Besides, tables in documents are usually large, sparsely distributed, and
non-overlapping, because they are designed to summarize essential information
in a form that is easy for human readers to read and interpret. Therefore, in this
study, we use Sparse R-CNN as the base model and further improve it by
using Gaussian Noise Augmented Image Size region proposals and many-to-one
label assignments.
To demonstrate the effectiveness of the proposed method and compare fairly with
state-of-the-art methods, we conduct experiments and use IoU-based
evaluation metrics to evaluate the model performance. The experimental results
show that the proposed method can consistently outperform state-of-the-art
methods under different IoU-based metrics on a variety of datasets. We conduct
further experiments to show the superiority of the proposed decoupled IoU for
the TD applications by replacing the IoU-based loss functions and evaluation
metrics with proposed decoupled IoU counterparts. The experimental results show
that our proposed decoupled IoU loss can encourage the model to alleviate
information loss.
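One plausible reading of the decoupling (the paper's exact definition may
differ) splits IoU into a ground-truth coverage term, which directly measures
information loss, and a prediction coverage term, which penalizes loose boxes:

```python
def box_area(b):
    """Area of an (x1, y1, x2, y2) box, clamped at zero."""
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def decoupled_iou(pred, gt):
    """Return (ground-truth coverage, prediction coverage) for two boxes."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = box_area((ix1, iy1, ix2, iy2))
    gt_cov = inter / box_area(gt)      # 1.0 means no table content is lost
    pred_cov = inter / box_area(pred)  # < 1.0 means the prediction is loose
    return gt_cov, pred_cov

# A slightly oversized prediction fully covers the table (gt_cov = 1.0)
# even though its IoU is only 0.675.
print(decoupled_iou(pred=(0, 0, 12, 10), gt=(1, 1, 10, 10)))
```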
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 16:25:16 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Xiao",
"Bin",
""
],
[
"Simsek",
"Murat",
""
],
[
"Kantarci",
"Burak",
""
],
[
"Alkheir",
"Ala Abu",
""
]
] |
new_dataset
| 0.995251 |
2305.19194
|
Jun Wu
|
Jun Wu and Xuesong Ye
|
FakeSwarm: Improving Fake News Detection with Swarming Characteristics
|
9th International Conference on Data Mining and Applications (DMA
2023). Keywords: Fake News Detection, Metric Learning, Clustering,
Dimensionality Reduction
| null |
10.5121/csit.2023.130813
| null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proliferation of fake news poses a serious threat to society, as it can
misinform and manipulate the public, erode trust in institutions, and undermine
democratic processes. To address this issue, we present FakeSwarm, a fake news
identification system that leverages the swarming characteristics of fake news.
To extract the swarm behavior, we propose a novel concept of fake news swarming
characteristics and design three types of swarm features, including principal
component analysis, metric representation, and position encoding. We evaluate
our system on a public dataset and demonstrate the effectiveness of
incorporating swarm features in fake news identification, achieving an F1-score
and accuracy of over 97% by combining all three types of swarm features.
Furthermore, we design an online learning pipeline based on the hypothesis of
the temporal distribution pattern of fake news emergence, validated on a topic
with early emerging fake news and a shortage of text samples, showing that
swarm features can significantly improve recall rates in such cases. Our work
provides a new perspective and approach to fake news detection and highlights
the importance of considering swarming characteristics in detecting fake news.
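A rough sketch of how such swarm features could be assembled from text
embeddings follows; it is illustrative only, approximating the paper's
metric-representation and position-encoding components with a PCA projection
and a distance-to-centroid feature:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 64))   # stand-in for news-article embeddings

pca_feat = PCA(n_components=8).fit_transform(emb)   # global swarm structure
centroid = emb.mean(axis=0)
# Position of each article relative to the swarm center.
dist_feat = np.linalg.norm(emb - centroid, axis=1, keepdims=True)

swarm_features = np.hstack([pca_feat, dist_feat])
print(swarm_features.shape)   # (200, 9)
```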
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 16:39:11 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Wu",
"Jun",
""
],
[
"Ye",
"Xuesong",
""
]
] |
new_dataset
| 0.993504 |
2305.19204
|
Philippe Laban
|
Philippe Laban, Jesse Vig, Wojciech Kryscinski, Shafiq Joty, Caiming
Xiong, Chien-Sheng Wu
|
SWiPE: A Dataset for Document-Level Simplification of Wikipedia Pages
|
ACL 2023, Long Paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text simplification research has mostly focused on sentence-level
simplification, even though many desirable edits - such as adding relevant
background information or reordering content - may require document-level
context. Prior work has also predominantly framed simplification as a
single-step, input-to-output task, only implicitly modeling the fine-grained,
span-level edits that elucidate the simplification process. To address both
gaps, we introduce the SWiPE dataset, which reconstructs the document-level
editing process from English Wikipedia (EW) articles to paired Simple Wikipedia
(SEW) articles. In contrast to prior work, SWiPE leverages the entire revision
history when pairing pages in order to better identify simplification edits. We
work with Wikipedia editors to annotate 5,000 EW-SEW document pairs, labeling
more than 40,000 edits with 19 proposed categories. To scale our efforts, we
propose several models to automatically label edits, achieving an F-1 score of
up to 70.6, indicating that this is a tractable but challenging NLU task.
Finally, we categorize the edits produced by several simplification models and
find that SWiPE-trained models generate more complex edits while reducing
unwanted edits.
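For intuition, a toy span-level edit extraction between an EW-style sentence
and a SEW-style counterpart can be done with difflib; the dataset's
19-category labeling is far richer than this raw opcode view:

```python
import difflib

ew = "The phenomenon was first documented by researchers in 1987."
sew = "Scientists first wrote about it in 1987."

# Word-level alignment; non-equal opcodes are candidate simplification edits.
matcher = difflib.SequenceMatcher(a=ew.split(), b=sew.split())
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op != "equal":
        print(op, ew.split()[i1:i2], "->", sew.split()[j1:j2])
```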
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 16:52:42 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Laban",
"Philippe",
""
],
[
"Vig",
"Jesse",
""
],
[
"Kryscinski",
"Wojciech",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Wu",
"Chien-Sheng",
""
]
] |
new_dataset
| 0.999563 |
2305.19211
|
Giovanni Squillero
|
Nicol\`o Bellarmino, Giorgio Bozzini, Riccardo Cantoro, Francesco
Castelletti, Michele Castelluzzo, Carla Ciricugno, Raffaele Correale, Daniela
Dalla Gasperina, Francesco Dentali, Giovanni Poggialini, Piergiorgio Salerno,
Giovanni Squillero, Stefano Taborelli
|
COVID-19 Detection from Mass Spectra of Exhaled Breath
|
15 pages
| null | null | null |
cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
According to the World Health Organization, the SARS-CoV-2 virus generated a
global emergency between 2020 and 2023 resulting in about 7 million deaths out
of more than 750 million individuals diagnosed with COVID-19. During these
years, polymerase-chain-reaction and antigen testing played a prominent role in
disease control. In this study, we propose a fast and non-invasive detection
system exploiting a proprietary mass spectrometer to measure ions in exhaled
breath. We demonstrated that infected individuals, even if asymptomatic,
exhibit characteristics in the air expelled from the lungs that can be detected
by a nanotech-based technology and then recognized by soft-computing
algorithms. A clinical trial was run on about 300 patients: the mass spectra in
the 10-351 mass-to-charge range were measured, suitably pre-processed, and
analyzed by different classification models; eventually, the system showed an
accuracy of 95% and a recall of 94% in identifying cases of COVID-19. With
performances comparable to traditional methodologies, the proposed system could
play a significant role in both routine examination for common diseases and
emergency response for new epidemics.
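The shape of the classification stage can be sketched on synthetic data as
below; the array sizes mirror the described setup (about 300 patients, one
feature per m/z bin in the 10-351 range), but the data, signal, and model are
placeholders, not the study's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 342))   # 342 bins cover m/z 10..351 inclusive
y = rng.integers(0, 2, size=300)  # 1 = COVID-19 positive (synthetic labels)
X[y == 1, :20] += 0.8             # inject an artificial class signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2f}, "
      f"recall={recall_score(y_te, pred):.2f}")
```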
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 17:01:53 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Bellarmino",
"Nicolò",
""
],
[
"Bozzini",
"Giorgio",
""
],
[
"Cantoro",
"Riccardo",
""
],
[
"Castelletti",
"Francesco",
""
],
[
"Castelluzzo",
"Michele",
""
],
[
"Ciricugno",
"Carla",
""
],
[
"Correale",
"Raffaele",
""
],
[
"Gasperina",
"Daniela Dalla",
""
],
[
"Dentali",
"Francesco",
""
],
[
"Poggialini",
"Giovanni",
""
],
[
"Salerno",
"Piergiorgio",
""
],
[
"Squillero",
"Giovanni",
""
],
[
"Taborelli",
"Stefano",
""
]
] |
new_dataset
| 0.985794 |
2305.19223
|
Catalin Mitelut
|
Catalin Mitelut, Ben Smith, Peter Vamplew
|
Intent-aligned AI systems deplete human agency: the need for agency
foundations research in AI safety
| null | null | null | null |
cs.AI cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid advancement of artificial intelligence (AI) systems suggests that
artificial general intelligence (AGI) systems may soon arrive. Many researchers
are concerned that AIs and AGIs will harm humans via intentional misuse
(AI-misuse) or through accidents (AI-accidents). In respect of AI-accidents,
there is an increasing effort focused on developing algorithms and paradigms
that ensure AI systems are aligned to what humans intend, e.g. AI systems that
yield actions or recommendations that humans might judge as consistent with
their intentions and goals. Here we argue that alignment to human intent is
insufficient for safe AI systems and that preservation of long-term agency of
humans may be a more robust standard, and one that needs to be separated
explicitly and a priori during optimization. We argue that AI systems can
reshape human intention and discuss the lack of biological and psychological
mechanisms that protect humans from loss of agency. We provide the first formal
definition of agency-preserving AI-human interactions which focuses on
forward-looking agency evaluations and argue that AI systems - not humans -
must be increasingly tasked with making these evaluations. We show how agency
loss can occur in simple environments containing embedded agents that use
temporal-difference learning to make action recommendations. Finally, we
propose a new area of research called "agency foundations" and pose four
initial topics designed to improve our understanding of agency in AI-human
interactions: benevolent game theory, algorithmic foundations of human rights,
mechanistic interpretability of agency representation in neural-networks and
reinforcement learning from internal states.
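A toy version of such an embedded agent, illustrative only and not the paper's
formal setup, is sketched below: a TD(0) recommender rewarded for accepted
recommendations learns to value the state in which the user has drifted into
compliance, which is the flavor of agency loss described above.

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.zeros(2)            # state 0: independent user, state 1: compliant user
alpha, gamma = 0.1, 0.9

state = 0
for _ in range(2000):
    # Compliant users accept recommendations far more often.
    accepted = rng.random() < (0.3 if state == 0 else 0.9)
    reward = 1.0 if accepted else 0.0
    # Recommendations themselves nudge the user toward compliance.
    next_state = 1 if (accepted or rng.random() < 0.05) else state
    V[state] += alpha * (reward + gamma * V[next_state] - V[state])
    state = next_state

print(V)   # the compliant state accrues much higher value
```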
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 17:14:01 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Mitelut",
"Catalin",
""
],
[
"Smith",
"Ben",
""
],
[
"Vamplew",
"Peter",
""
]
] |
new_dataset
| 0.971956 |
2305.19228
|
Yufei Tian
|
Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone,
Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang,
Nanyun Peng
|
Unsupervised Melody-to-Lyric Generation
|
Accepted to ACL 23. arXiv admin note: substantial text overlap with
arXiv:2305.07760
| null | null | null |
cs.CL cs.AI cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic melody-to-lyric generation is a task in which song lyrics are
generated to go with a given melody. It is of significant practical interest
and more challenging than unconstrained lyric generation as the music imposes
additional constraints onto the lyrics. The training data is limited as most
songs are copyrighted, resulting in models that underfit the complicated
cross-modal relationship between melody and lyrics. In this work, we propose a
method for generating high-quality lyrics without training on any aligned
melody-lyric data. Specifically, we design a hierarchical lyric generation
framework that first generates a song outline and then the complete lyrics.
The framework enables disentanglement of training (based purely on text) from
inference (melody-guided text generation) to circumvent the shortage of
parallel data.
We leverage the segmentation and rhythm alignment between melody and lyrics
to compile the given melody into decoding constraints as guidance during
inference. The two-step hierarchical design also enables content control via
the lyric outline, a much-desired feature for democratizing collaborative song
creation. Experimental results show that our model can generate high-quality
lyrics that are more on-topic, singable, intelligible, and coherent than strong
baselines, for example SongMASS, a SOTA model trained on a parallel dataset,
with a 24% relative overall quality improvement based on human ratings.
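As one concrete example of compiling a melody into decoding constraints
(a hypothesized syllable-count rule; the paper's segmentation and rhythm
alignment are richer than this), consider checking that a lyric line matches
the note count of a melody phrase:

```python
VOWELS = set("aeiouy")

def count_syllables(word):
    """Crude vowel-group syllable counter, for illustration only."""
    groups, prev = 0, False
    for ch in word.lower():
        is_vowel = ch in VOWELS
        if is_vowel and not prev:
            groups += 1
        prev = is_vowel
    return max(groups, 1)

def fits_phrase(line, n_notes):
    """A line satisfies the constraint if syllables match the note count."""
    return sum(count_syllables(w) for w in line.split()) == n_notes

print(fits_phrase("twinkle twinkle little star", 7))   # True
```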
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 17:20:25 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Tian",
"Yufei",
""
],
[
"Narayan-Chen",
"Anjali",
""
],
[
"Oraby",
"Shereen",
""
],
[
"Cervone",
"Alessandra",
""
],
[
"Sigurdsson",
"Gunnar",
""
],
[
"Tao",
"Chenyang",
""
],
[
"Zhao",
"Wenbo",
""
],
[
"Chung",
"Tagyoung",
""
],
[
"Huang",
"Jing",
""
],
[
"Peng",
"Nanyun",
""
]
] |
new_dataset
| 0.97171 |