id stringlengths 9-10 | submitter stringlengths 2-52 ⌀ | authors stringlengths 4-6.51k | title stringlengths 4-246 | comments stringlengths 1-523 ⌀ | journal-ref stringlengths 4-345 ⌀ | doi stringlengths 11-120 ⌀ | report-no stringlengths 2-243 ⌀ | categories stringlengths 5-98 | license stringclasses 9 values | abstract stringlengths 33-3.33k | versions list | update_date timestamp[s] | authors_parsed list | prediction stringclasses 1 value | probability float64 0.95-1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.00107
|
Yizhi Li
|
Yizhi Li, Ruibin Yuan, Ge Zhang, Yinghao Ma, Xingran Chen, Hanzhi Yin,
Chenghua Lin, Anton Ragni, Emmanouil Benetos, Norbert Gyenge, Roger
Dannenberg, Ruibo Liu, Wenhu Chen, Gus Xia, Yemin Shi, Wenhao Huang, Yike
Guo, Jie Fu
|
MERT: Acoustic Music Understanding Model with Large-Scale
Self-supervised Training
| null | null | null | null |
cs.SD cs.AI cs.CL cs.LG eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Self-supervised learning (SSL) has recently emerged as a promising paradigm
for training generalisable models on large-scale data in the fields of vision,
text, and speech. Although SSL has been proven effective in speech and audio,
its application to music audio has yet to be thoroughly explored. This is
primarily due to the distinctive challenges associated with modelling musical
knowledge, particularly the tonal and pitched characteristics of music. To
address this research gap, we propose an acoustic Music undERstanding model
with large-scale self-supervised Training (MERT), which incorporates teacher
models to provide pseudo labels in the masked language modelling (MLM) style
acoustic pre-training. In our exploration, we identified a superior combination
of teacher models that outperforms conventional speech and audio approaches.
This combination includes an acoustic teacher based on
Residual Vector Quantization - Variational AutoEncoder (RVQ-VAE) and a musical
teacher based on the Constant-Q Transform (CQT). These teachers effectively
guide our student model, a BERT-style transformer encoder, to better model
music audio. In addition, we introduce an in-batch noise mixture augmentation
to enhance the representation robustness. Furthermore, we explore a wide range
of settings to overcome the instability in acoustic language model
pre-training, which allows our designed paradigm to scale from 95M to 330M
parameters. Experimental results indicate that our model can generalise and
perform well on 14 music understanding tasks and attains state-of-the-art
(SOTA) overall scores. The code and models are online:
https://github.com/yizhilll/MERT.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 18:27:43 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 14:06:02 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Li",
"Yizhi",
""
],
[
"Yuan",
"Ruibin",
""
],
[
"Zhang",
"Ge",
""
],
[
"Ma",
"Yinghao",
""
],
[
"Chen",
"Xingran",
""
],
[
"Yin",
"Hanzhi",
""
],
[
"Lin",
"Chenghua",
""
],
[
"Ragni",
"Anton",
""
],
[
"Benetos",
"Emmanouil",
""
],
[
"Gyenge",
"Norbert",
""
],
[
"Dannenberg",
"Roger",
""
],
[
"Liu",
"Ruibo",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Xia",
"Gus",
""
],
[
"Shi",
"Yemin",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Guo",
"Yike",
""
],
[
"Fu",
"Jie",
""
]
] |
new_dataset
| 0.997335 |
2306.00301
|
Sagnik Anupam
|
Shinjini Ghosh, Sagnik Anupam
|
CapText: Large Language Model-based Caption Generation From Image
Context and Description
|
Update 6/6/23: Fixed typographic error in abstract
| null | null | null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While deep-learning models have been shown to perform well on image-to-text
datasets, it is difficult to use them in practice for captioning images. This
is because captions traditionally tend to be context-dependent and offer
complementary information about an image, while models tend to produce
descriptions of the image's visual features. Prior research in
caption generation has explored the use of models that generate captions when
provided with the images alongside their respective descriptions or contexts.
We propose and evaluate a new approach, which leverages existing large language
models to generate captions from textual descriptions and context alone,
without ever processing the image directly. We demonstrate that after
fine-tuning, our approach outperforms current state-of-the-art image-text
alignment models like OSCAR-VinVL on this task on the CIDEr metric.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 02:40:44 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 03:41:05 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Ghosh",
"Shinjini",
""
],
[
"Anupam",
"Sagnik",
""
]
] |
new_dataset
| 0.998883 |
2306.02254
|
Kichang Yang
|
Hyunwoong Ko, Kichang Yang, Minho Ryu, Taekyoon Choi, Seungmu Yang,
Jiwung Hyun, Sungho Park, Kyubyong Park
|
A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean
Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Polyglot is a pioneering project aimed at enhancing the non-English language
performance of multilingual language models. Despite the availability of
various multilingual models such as mBERT (Devlin et al., 2019), XGLM (Lin et
al., 2022), and BLOOM (Scao et al., 2022), researchers and developers often
resort to building monolingual models in their respective languages due to the
dissatisfaction with the current multilingual models' non-English language
capabilities. Addressing this gap, we seek to develop advanced multilingual
language models that offer improved performance in non-English languages. In
this paper, we introduce the Polyglot Korean models, which represent a specific
focus rather than being multilingual in nature. In collaboration with TUNiB,
our team collected 1.2TB of Korean data meticulously curated for our research
journey. We made a deliberate decision to prioritize the development of Korean
models before venturing into multilingual models. This choice was motivated by
multiple factors: firstly, the Korean models facilitated performance
comparisons with existing multilingual models; and secondly, they catered to the
specific needs of Korean companies and researchers. This paper presents our
work in developing the Polyglot Korean models, which represent steps towards
addressing the non-English language performance gap in multilingual language
models.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 04:04:04 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 03:27:33 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Ko",
"Hyunwoong",
""
],
[
"Yang",
"Kichang",
""
],
[
"Ryu",
"Minho",
""
],
[
"Choi",
"Taekyoon",
""
],
[
"Yang",
"Seungmu",
""
],
[
"Hyun",
"Jiwung",
""
],
[
"Park",
"Sungho",
""
],
[
"Park",
"Kyubyong",
""
]
] |
new_dataset
| 0.968736 |
2306.03102
|
Shulamit Reches
|
Amos Azaria, Rina Azoulay, Shulamit Reches
|
ChatGPT is a Remarkable Tool -- For Experts
| null | null | null | null |
cs.HC cs.AI cs.CL cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper investigates the capabilities of ChatGPT as an automated assistant
in diverse domains, including scientific writing, mathematics, education,
programming, and healthcare. We explore the potential of ChatGPT to enhance
productivity, streamline problem-solving processes, and improve writing style.
Furthermore, we highlight the potential risks associated with excessive
reliance on ChatGPT in these fields. These limitations encompass factors like
incorrect and fictitious responses, inaccuracies in code, limited logical
reasoning abilities, overconfidence, and critical ethical concerns regarding
copyright and privacy violations. We outline areas and objectives where ChatGPT
proves beneficial, applications where it should be used judiciously, and
scenarios where its reliability may be limited. In light of observed
limitations, and given that the tool's fundamental errors may pose a special
challenge for non-experts, ChatGPT should be used with a strategic methodology.
By drawing from comprehensive experimental studies, we offer methods and flow
charts for effectively using ChatGPT. Our recommendations emphasize iterative
interaction with ChatGPT and independent verification of its outputs.
Considering the importance of utilizing ChatGPT judiciously and with expertise,
we recommend its usage for experts who are well-versed in the respective
domains.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 06:28:21 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Azaria",
"Amos",
""
],
[
"Azoulay",
"Rina",
""
],
[
"Reches",
"Shulamit",
""
]
] |
new_dataset
| 0.983895 |
2306.03110
|
Chen Lei
|
Lei Chen, Fei Du, Yuan Hu, Fan Wang, Zhibin Wang
|
SwinRDM: Integrate SwinRNN with Diffusion Model towards High-Resolution
and High-Quality Weather Forecasting
| null | null |
10.48448/zn7f-fc64
| null |
cs.AI cs.CV physics.ao-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Data-driven medium-range weather forecasting has attracted much attention in
recent years. However, forecasting accuracy at high resolution remains
unsatisfactory. Pursuing high-resolution and high-quality weather
forecasting, we develop a data-driven model, SwinRDM, which integrates an
improved version of SwinRNN with a diffusion model. SwinRDM performs
predictions at 0.25-degree resolution and achieves superior forecasting
accuracy to IFS (Integrated Forecast System), the state-of-the-art operational
NWP model, on representative atmospheric variables including 500 hPa
geopotential (Z500), 850 hPa temperature (T850), 2-m temperature (T2M), and
total precipitation (TP), at lead times of up to 5 days. We propose to leverage
a two-step strategy to achieve high-resolution predictions at 0.25-degree resolution,
considering the trade-off between computational memory and forecasting accuracy.
Recurrent predictions for future atmospheric fields are firstly performed at
1.40625-degree resolution, and then a diffusion-based super-resolution model is
leveraged to recover the high spatial resolution and finer-scale atmospheric
details. SwinRDM pushes forward the performance and potential of data-driven
models by a large margin towards operational applications.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 05:11:03 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Chen",
"Lei",
""
],
[
"Du",
"Fei",
""
],
[
"Hu",
"Yuan",
""
],
[
"Wang",
"Fan",
""
],
[
"Wang",
"Zhibin",
""
]
] |
new_dataset
| 0.993259 |
2306.03115
|
Carlos Crispim-Junior
|
Carlos Crispim-Junior, Romain Guesdon, Christophe Jallais, Florent
Laroche, Stephanie Souche-Le Corvec, Laure Tougne Rodet
|
AutoExp: A multidisciplinary, multi-sensor framework to evaluate human
activities in self-driving cars
|
This paper is currently under review by the 26th IEEE International
Conference on Intelligent Transportation Systems (ITSC 2023)
| null | null | null |
cs.HC cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The adoption of self-driving cars will certainly revolutionize our lives,
even though they may take more time to become fully autonomous than initially
predicted. The first vehicles are already present in certain cities of the
world, as part of experimental robot-taxi services. However, most existing
studies focus on the navigation part of such vehicles. We currently lack
methods, datasets, and studies to assess the in-cabin human component of the
adoption of such technology in real-world conditions. This paper proposes an
experimental framework to study the activities of occupants of self-driving
cars using a multidisciplinary approach (computer vision associated with human
and social sciences), particularly non-driving related activities. The
framework is composed of an experimentation scenario and a data acquisition
module. We seek firstly to capture data about the usage of the vehicle in
conditions as close as possible to real-world use, and secondly to create
a dataset containing in-cabin human activities to foster the development and
evaluation of computer vision algorithms. The acquisition module records
multiple views of the front seats of the vehicle (Intel RGB-D and GoPro
cameras); in addition to survey data about the internal states and attitudes of
participants towards this type of vehicle before, during, and after the
experimentation. We evaluated the proposed framework through a real-world
experiment with 30 participants (1 hour each) to study the acceptance of SAE
level 4 self-driving cars (SDCs).
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 13:13:19 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Crispim-Junior",
"Carlos",
""
],
[
"Guesdon",
"Romain",
""
],
[
"Jallais",
"Christophe",
""
],
[
"Laroche",
"Florent",
""
],
[
"Corvec",
"Stephanie Souche-Le",
""
],
[
"Rodet",
"Laure Tougne",
""
]
] |
new_dataset
| 0.997057 |
2306.03195
|
Shreya Ghosh
|
Jakob Hederich, Shreya Ghosh, Zeyu He and Prasenjit Mitra
|
Lumos in the Night Sky: AI-enabled Visual Tool for Exploring Night-Time
Light Patterns
|
5 pages, 3 figures. Accepted in ECML PKDD Demo track
| null | null | null |
cs.HC cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce NightPulse, an interactive tool for Night-time light (NTL) data
visualization and analytics, which enables researchers and stakeholders to
explore and analyze NTL data with a user-friendly platform. Powered by an
efficient system architecture, NightPulse supports image segmentation,
clustering, and change pattern detection to identify urban development and
sprawl patterns. It captures temporal trends of NTL and semantics of cities,
answering questions about demographic factors, city boundaries, and unusual
differences.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 19:13:44 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Hederich",
"Jakob",
""
],
[
"Ghosh",
"Shreya",
""
],
[
"He",
"Zeyu",
""
],
[
"Mitra",
"Prasenjit",
""
]
] |
new_dataset
| 0.996467 |
2306.03206
|
Yingwei Li
|
Yingwei Li, Charles R. Qi, Yin Zhou, Chenxi Liu, Dragomir Anguelov
|
MoDAR: Using Motion Forecasting for 3D Object Detection in Point Cloud
Sequences
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Occluded and long-range objects are ubiquitous and challenging for 3D object
detection. Point cloud sequence data provide unique opportunities to improve
such cases, as an occluded or distant object can be observed from different
viewpoints or gain better visibility over time. However, the efficiency and
effectiveness in encoding long-term sequence data can still be improved. In
this work, we propose MoDAR, using motion forecasting outputs as a type of
virtual modality, to augment LiDAR point clouds. The MoDAR modality propagates
object information from temporal contexts to a target frame, represented as a
set of virtual points, one for each object from a waypoint on a forecasted
trajectory. A fused point cloud of both raw sensor points and the virtual
points can then be fed to any off-the-shelf point-cloud based 3D object
detector. Evaluated on the Waymo Open Dataset, our method significantly
improves prior art detectors by using motion forecasting from extra-long
sequences (e.g., 18 seconds), achieving a new state of the art while adding
little computational overhead.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 19:28:19 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Li",
"Yingwei",
""
],
[
"Qi",
"Charles R.",
""
],
[
"Zhou",
"Yin",
""
],
[
"Liu",
"Chenxi",
""
],
[
"Anguelov",
"Dragomir",
""
]
] |
new_dataset
| 0.999695 |
2306.03252
|
Amar Kulkarni
|
Amar Kulkarni, John Chrosniak, Emory Ducote, Florian Sauerbeck, Andrew
Saba, Utkarsh Chirimar, John Link, Marcello Cellina, Madhur Behl
|
RACECAR -- The Dataset for High-Speed Autonomous Racing
|
9 pages, 10 figures. For links to data and reference material go to
https://github.com/linklab-uva/RACECAR_DATA
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper describes the first open dataset for full-scale and high-speed
autonomous racing. Multi-modal sensor data has been collected from fully
autonomous Indy race cars operating at speeds of up to 170 mph (273 kph). Six
teams who raced in the Indy Autonomous Challenge have contributed to this
dataset. The dataset spans 11 interesting racing scenarios across two race
tracks which include solo laps, multi-agent laps, overtaking situations,
high accelerations, banked tracks, obstacle avoidance, and pit entry and exit at
different speeds. The dataset contains data from 27 racing sessions across the
11 scenarios with over 6.5 hours of sensor data recorded from the track. The
data is organized and released in both ROS2 and nuScenes format. We have also
developed the ROS2-to-nuScenes conversion library to achieve this. The RACECAR
data is unique because of the high-speed environment of autonomous racing. We
present several benchmark problems on localization, object detection and
tracking (LiDAR, Radar, and Camera), and mapping using the RACECAR data to
explore issues that arise at the limits of operation of the vehicle.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 21:13:46 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Kulkarni",
"Amar",
""
],
[
"Chrosniak",
"John",
""
],
[
"Ducote",
"Emory",
""
],
[
"Sauerbeck",
"Florian",
""
],
[
"Saba",
"Andrew",
""
],
[
"Chirimar",
"Utkarsh",
""
],
[
"Link",
"John",
""
],
[
"Cellina",
"Marcello",
""
],
[
"Behl",
"Madhur",
""
]
] |
new_dataset
| 0.999796 |
2306.03264
|
Sanjeev Kumar Karn
|
Sanjeev Kumar Karn, Rikhiya Ghosh, Kusuma P and Oladimeji Farri
|
shs-nlp at RadSum23: Domain-Adaptive Pre-training of Instruction-tuned
LLMs for Radiology Report Impression Generation
|
1st Place in Task 1B: Radiology Report Summarization at BioNLP 2023
|
BioNLP 2023, Co-located with ACL 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Instruction-tuned generative Large language models (LLMs) like ChatGPT and
Bloomz possess excellent generalization abilities, but they face limitations in
understanding radiology reports, particularly in the task of generating the
IMPRESSIONS section from the FINDINGS section. They tend to generate either
verbose or incomplete IMPRESSIONS, mainly due to insufficient exposure to
medical text data during training. We present a system which leverages
large-scale medical text data for domain-adaptive pre-training of
instruction-tuned LLMs to enhance their medical knowledge and performance on
specific medical tasks. We show that this system performs better in a zero-shot
setting than a number of pretrain-and-finetune adaptation methods on the
IMPRESSIONS generation task, and ranks 1st among participating systems in Task
1B: Radiology Report Summarization at the BioNLP 2023 workshop.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 21:33:04 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Karn",
"Sanjeev Kumar",
""
],
[
"Ghosh",
"Rikhiya",
""
],
[
"P",
"Kusuma",
""
],
[
"Farri",
"Oladimeji",
""
]
] |
new_dataset
| 0.985226 |
2306.03310
|
Bo Liu
|
Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu,
Peter Stone
|
LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Lifelong learning offers a promising paradigm of building a generalist agent
that learns and adapts over its lifespan. Unlike traditional lifelong learning
problems in image and text domains, which primarily involve the transfer of
declarative knowledge of entities and concepts, lifelong learning in
decision-making (LLDM) also necessitates the transfer of procedural knowledge,
such as actions and behaviors. To advance research in LLDM, we introduce
LIBERO, a novel benchmark of lifelong learning for robot manipulation.
Specifically, LIBERO highlights five key research topics in LLDM: 1) how to
efficiently transfer declarative knowledge, procedural knowledge, or the
mixture of both; 2) how to design effective policy architectures and 3)
effective algorithms for LLDM; 4) the robustness of a lifelong learner with
respect to task ordering; and 5) the effect of model pretraining for LLDM. We
develop an extendible procedural generation pipeline that can in principle
generate infinitely many tasks. For benchmarking purposes, we create four task
suites (130 tasks in total) that we use to investigate the above-mentioned
research topics. To support sample-efficient learning, we provide high-quality
human-teleoperated demonstration data for all tasks. Our extensive experiments
present several insightful or even unexpected discoveries: sequential
finetuning outperforms existing lifelong learning methods in forward transfer,
no single visual encoder architecture excels at all types of knowledge
transfer, and naive supervised pretraining can hinder agents' performance in
the subsequent LLDM. Check the website at https://libero-project.github.io for
the code and the datasets.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 23:32:26 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Liu",
"Bo",
""
],
[
"Zhu",
"Yifeng",
""
],
[
"Gao",
"Chongkai",
""
],
[
"Feng",
"Yihao",
""
],
[
"Liu",
"Qiang",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Stone",
"Peter",
""
]
] |
new_dataset
| 0.96946 |
2306.03329
|
Hirofumi Tsuruta
|
Hirofumi Tsuruta, Hiroyuki Yamazaki, Ryota Maeda, Ryotaro Tamura,
Jennifer N. Wei, Zelda Mariet, Poomarin Phloyphisut, Hidetoshi Shimokawa,
Joseph R. Ledsam, Lucy Colwell, Akihiro Imura
|
AVIDa-hIL6: A Large-Scale VHH Dataset Produced from an Immunized Alpaca
for Predicting Antigen-Antibody Interactions
| null | null | null | null |
cs.LG q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Antibodies have become an important class of therapeutic agents to treat
human diseases. To accelerate therapeutic antibody discovery, computational
methods, especially machine learning, have attracted considerable interest for
predicting specific interactions between antibody candidates and target
antigens such as viruses and bacteria. However, the publicly available datasets
in existing works have notable limitations, such as small sizes and the lack of
non-binding samples and exact amino acid sequences. To overcome these
limitations, we have developed AVIDa-hIL6, a large-scale dataset for predicting
antigen-antibody interactions in the variable domain of heavy chain of heavy
chain antibodies (VHHs), produced from an alpaca immunized with the human
interleukin-6 (IL-6) protein as the antigen. By leveraging the simple structure
of VHHs, which facilitates identification of full-length amino acid sequences
by DNA sequencing technology, AVIDa-hIL6 contains 573,891 antigen-VHH pairs
with amino acid sequences. All the antigen-VHH pairs have reliable labels for
binding or non-binding, as generated by a novel labeling method. Furthermore,
via introduction of artificial mutations, AVIDa-hIL6 contains 30 different
mutants in addition to wild-type IL-6 protein. This characteristic provides
opportunities to develop machine learning models for predicting changes in
antibody binding by antigen mutations. We report experimental benchmark results
on AVIDa-hIL6 by using neural network-based baseline models. The results
indicate that the existing models have potential, but further research is
needed to generalize them to predict effective antibodies against unknown
mutants. The dataset is available at https://avida-hil6.cognanous.com.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 00:42:36 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Tsuruta",
"Hirofumi",
""
],
[
"Yamazaki",
"Hiroyuki",
""
],
[
"Maeda",
"Ryota",
""
],
[
"Tamura",
"Ryotaro",
""
],
[
"Wei",
"Jennifer N.",
""
],
[
"Mariet",
"Zelda",
""
],
[
"Phloyphisut",
"Poomarin",
""
],
[
"Shimokawa",
"Hidetoshi",
""
],
[
"Ledsam",
"Joseph R.",
""
],
[
"Colwell",
"Lucy",
""
],
[
"Imura",
"Akihiro",
""
]
] |
new_dataset
| 0.999843 |
2306.03381
|
Elliott Wen
|
Elliott Wen, Chitralekha Gupta, Prasanth Sasikumar, Mark Billinghurst,
James Wilmott, Emily Skow, Arindam Dey, Suranga Nanayakkara
|
VR.net: A Real-world Dataset for Virtual Reality Motion Sickness
Research
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Researchers have used machine learning approaches to identify motion sickness
in VR experiences. These approaches demand an accurately-labeled, real-world,
and diverse dataset for high accuracy and generalizability. As a starting point
to address this need, we introduce `VR.net', a dataset offering approximately
12 hours of gameplay videos from ten real-world games in ten diverse genres. For
each video frame, a rich set of motion sickness-related labels, such as
camera/object movement, depth field, and motion flow, are accurately assigned.
Building such a dataset is challenging since manual labeling would require an
infeasible amount of time. Instead, we utilize a tool to automatically and
precisely extract ground truth data from 3D engines' rendering pipelines
without accessing VR games' source code. We illustrate the utility of VR.net
through several applications, such as risk factor detection and sickness level
prediction. We continuously expand VR.net and envision its next version
offering 10X more data than the current one. We believe that the scale,
accuracy, and diversity of VR.net can offer unparalleled opportunities for VR
motion sickness research and beyond.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 03:43:11 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Wen",
"Elliott",
""
],
[
"Gupta",
"Chitralekha",
""
],
[
"Sasikumar",
"Prasanth",
""
],
[
"Billinghurst",
"Mark",
""
],
[
"Wilmott",
"James",
""
],
[
"Skow",
"Emily",
""
],
[
"Dey",
"Arindam",
""
],
[
"Nanayakkara",
"Suranga",
""
]
] |
new_dataset
| 0.999824 |
2306.03502
|
Despoina Antonakaki
|
Alexander Shevtsov, Despoina Antonakaki, Ioannis Lamprou, Ioannis
Kontogiorgakis, Polyvios Pratikakis, Sotiris Ioannidis
|
Russo-Ukrainian War: Prediction and explanation of Twitter suspension
| null | null | null | null |
cs.SI cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
On 24 February 2022, Russia invaded Ukraine, starting what is now known as
the Russo-Ukrainian War, initiating an online discourse on social media.
Twitter, as one of the most popular social networks (SNs), with an open and
democratic character, enables transparent discussion among its large user base.
Unfortunately, this often leads to violations of Twitter's policies, propaganda,
abusive actions, civil integrity violations, and consequently to user accounts' suspension and
deletion. This study focuses on the Twitter suspension mechanism and the
analysis of shared content and features of the user accounts that may lead to
this. Toward this goal, we have obtained a dataset containing 107.7M tweets,
originating from 9.8 million users, using the Twitter API. We extract the
categories of shared content of the suspended accounts and explain their
characteristics, through the extraction of text embeddings in conjunction with
cosine similarity clustering. Our results reveal scam campaigns taking
advantage of trending topics regarding the Russia-Ukrainian conflict for
Bitcoin and Ethereum fraud, spam, and advertisement campaigns. Additionally, we
apply a machine learning methodology including a SHapley Additive exPlanations
(SHAP) model to understand and explain how user accounts get suspended.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 08:41:02 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Shevtsov",
"Alexander",
""
],
[
"Antonakaki",
"Despoina",
""
],
[
"Lamprou",
"Ioannis",
""
],
[
"Kontogiorgakis",
"Ioannis",
""
],
[
"Pratikakis",
"Polyvios",
""
],
[
"Ioannidis",
"Sotiris",
""
]
] |
new_dataset
| 0.998022 |
2306.03577
|
Anuj Rai
|
Anuj Rai, Ashutosh Anshul, Ashwini Jha, Prayag Jain, Ramprakash
Sharma, Somnath Dey
|
An Open Patch Generator based Fingerprint Presentation Attack Detection
using Generative Adversarial Network
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The low-cost, user-friendly, and convenient nature of Automatic Fingerprint
Recognition Systems (AFRS) makes them suitable for a wide range of
applications. This widespread use of AFRS also makes them vulnerable to various
security threats. Presentation Attack (PA) or spoofing is one of the threats
which is caused by presenting a spoof of a genuine fingerprint to the sensor of
AFRS. Fingerprint Presentation Attack Detection (FPAD) is a countermeasure
intended to protect AFRS against fake or spoof fingerprints created using
various fabrication materials. In this paper, we have proposed a Convolutional
Neural Network (CNN) based technique that uses a Generative Adversarial Network
(GAN) to augment the dataset with spoof samples generated from the proposed
Open Patch Generator (OPG). This OPG is capable of generating realistic
fingerprint samples which have no resemblance to the existing spoof fingerprint
samples generated with other materials. The augmented dataset is fed to the
DenseNet classifier which helps in increasing the performance of the
Presentation Attack Detection (PAD) module for the various real-world attacks
possible with unknown spoof materials. Experimental evaluations of the proposed
approach are carried out on the Liveness Detection (LivDet) 2015, 2017, and
2019 competition databases. An overall accuracy of 96.20\%, 94.97\%, and
92.90\% has been achieved on the LivDet 2015, 2017, and 2019 databases,
respectively under the LivDet protocol scenarios. The performance of the
proposed PAD model is also validated in the cross-material and cross-sensor
attack paradigm which further exhibits its capability to be used under
real-world attack scenarios.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 10:52:06 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Rai",
"Anuj",
""
],
[
"Anshul",
"Ashutosh",
""
],
[
"Jha",
"Ashwini",
""
],
[
"Jain",
"Prayag",
""
],
[
"Sharma",
"Ramprakash",
""
],
[
"Dey",
"Somnath",
""
]
] |
new_dataset
| 0.988113 |
2306.03642
|
Maria Korosteleva
|
Maria Korosteleva, Olga Sorkine-Hornung
|
GarmentCode: Programming Parametric Sewing Patterns
|
Supplementary video: https://youtu.be/16Yyr2G9_6E/
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Garment modeling is an essential task of the global apparel industry and a
core part of digital human modeling. Realistic representation of garments with
valid sewing patterns is key to their accurate digital simulation and eventual
fabrication. However, few computational tools provide support for
bridging the gap between high-level construction goals and low-level editing of
pattern geometry, e.g., combining or switching garment elements, semantic
editing, or design exploration that maintains the validity of a sewing pattern.
We suggest the first domain-specific language (DSL) for garment modeling -- GarmentCode -- that applies
principles of object-oriented programming to garment construction and allows
designing sewing patterns in a hierarchical, component-oriented manner. The
programming-based paradigm naturally provides unique advantages of component
abstraction, algorithmic manipulation, and free-form design parametrization. We
additionally support the construction process by automating typical low-level
tasks like placing a dart at a desired location. In our prototype garment
configurator, users can manipulate meaningful design parameters and body
measurements, while the construction of pattern geometry is handled by garment
programs implemented with GarmentCode. Our configurator enables the free
exploration of rich design spaces and the creation of garments using
interchangeable, parameterized components. We showcase our approach by
producing a variety of garment designs and retargeting them to different body
shapes using our configurator.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 12:54:23 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Korosteleva",
"Maria",
""
],
[
"Sorkine-Hornung",
"Olga",
""
]
] |
new_dataset
| 0.99691 |
2306.03723
|
Soumya Sharma
|
Soumya Sharma, Subhendu Khatuya, Manjunath Hegde, Afreen Shaikh,
Koustuv Dasgupta, Pawan Goyal, Niloy Ganguly
|
Financial Numeric Extreme Labelling: A Dataset and Benchmarking for XBRL
Tagging
|
Accepted to ACL'23 Findings Paper
| null | null | null |
cs.CL cs.AI cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
The U.S. Securities and Exchange Commission (SEC) mandates all public
companies to file periodic financial statements that should contain numerals
annotated with a particular label from a taxonomy. In this paper, we formulate
the task of automating the assignment of a label to a particular numeral span
in a sentence from an extremely large label set. Towards this task, we release
a dataset, Financial Numeric Extreme Labelling (FNXL), annotated with 2,794
labels. We benchmark the performance of the FNXL dataset by formulating the
task as (a) a sequence labelling problem and (b) a pipeline with span
extraction followed by Extreme Classification. Although the two approaches
perform comparably, the pipeline solution provides a slight edge for the least
frequent labels.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 14:41:30 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Sharma",
"Soumya",
""
],
[
"Khatuya",
"Subhendu",
""
],
[
"Hegde",
"Manjunath",
""
],
[
"Shaikh",
"Afreen",
""
],
[
"Dasgupta",
"Koustuv",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Ganguly",
"Niloy",
""
]
] |
new_dataset
| 0.999752 |
2306.03736
|
Soumya Sharma
|
Soumya Sharma, Tapas Nayak, Arusarka Bose, Ajay Kumar Meena, Koustuv
Dasgupta, Niloy Ganguly, Pawan Goyal
|
FinRED: A Dataset for Relation Extraction in Financial Domain
|
Accepted at FinWeb at WWW'22
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Relation extraction models trained on a source domain cannot be applied on a
different target domain due to the mismatch between relation sets. In the
current literature, there is no extensive open-source relation extraction
dataset specific to the finance domain. In this paper, we release FinRED, a
relation extraction dataset curated from financial news and earning call
transcripts containing relations from the finance domain. FinRED has been
created by mapping Wikidata triplets using the distant supervision method. We
manually annotate the test data to ensure proper evaluation. We also experiment
with various state-of-the-art relation extraction models on this dataset to
create the benchmark. We see a significant drop in their performance on FinRED
compared to general relation extraction datasets, which indicates that better
models are needed for financial relation extraction.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 14:52:47 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Sharma",
"Soumya",
""
],
[
"Nayak",
"Tapas",
""
],
[
"Bose",
"Arusarka",
""
],
[
"Meena",
"Ajay Kumar",
""
],
[
"Dasgupta",
"Koustuv",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Goyal",
"Pawan",
""
]
] |
new_dataset
| 0.999531 |
2306.03795
|
Julius Sch\"oning
|
Julius Sch\"oning and Niklas Kruse
|
AI-Supported Assessment of Load Safety
|
9 pages, 4 figures, 2 tables
| null | null | null |
cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Load safety assessment and compliance is an essential step in the corporate
process of every logistics service provider. In 2020, a total of 11,371 police
checks of trucks were carried out, during which violations of the load safety
regulations were detected in 9.6% (1,091) of cases. For a logistics service provider,
every load safety violation results in high fines and damage to reputation.
An assessment of load safety supported by artificial intelligence (AI) will
reduce the risk of accidents caused by unsecured loads and of fines during safety
assessments. This work shows how photos of the load, taken by the truck driver
or the loadmaster after the loading process, can be used to assess load safety.
Using a trained two-stage artificial neural network (ANN), these photos are
classified into three different classes: I) cargo loaded safely, II) cargo
loaded unsafely, and III) unusable image. By applying several convolutional
neural network (CNN) architectures, it can be shown that it is possible to
distinguish between unusable and usable images for cargo safety assessment.
This distinction is quite crucial since the truck driver and the loadmaster
sometimes provide photos without the essential image features like the case
structure of the truck and the whole cargo. A human operator or another ANN
will then assess the load safety within the second stage.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 15:40:27 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Schöning",
"Julius",
""
],
[
"Kruse",
"Niklas",
""
]
] |
new_dataset
| 0.998034 |
2306.03907
|
Janis Goldzycher
|
Janis Goldzycher
|
CL-UZH at SemEval-2023 Task 10: Sexism Detection through Incremental
Fine-Tuning and Multi-Task Learning with Label Descriptions
|
11 pages, 4 figures, Accepted at The 17th International Workshop on
Semantic Evaluation, ACL 2023
| null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The widespread popularity of social media has led to an increase in hateful,
abusive, and sexist language, motivating methods for the automatic detection of
such phenomena. The goal of the SemEval shared task \textit{Towards Explainable
Detection of Online Sexism} (EDOS 2023) is to detect sexism in English social
media posts (subtask A), and to categorize such posts into four coarse-grained
sexism categories (subtask B), and eleven fine-grained subcategories (subtask
C). In this paper, we present our submitted systems for all three subtasks,
based on a multi-task model that has been fine-tuned on a range of related
tasks and datasets before being fine-tuned on the specific EDOS subtasks. We
implement multi-task learning by formulating each task as binary pairwise text
classification, where the dataset and label descriptions are given along with
the input text. The results show clear improvements over a fine-tuned
DeBERTa-V3 serving as a baseline leading to $F_1$-scores of 85.9\% in subtask A
(rank 13/84), 64.8\% in subtask B (rank 19/69), and 44.9\% in subtask C
(26/63).
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 17:59:49 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Goldzycher",
"Janis",
""
]
] |
new_dataset
| 0.99943 |
2306.03908
|
Yunhan Yang
|
Yunhan Yang, Xiaoyang Wu, Tong He, Hengshuang Zhao, Xihui Liu
|
SAM3D: Segment Anything in 3D Scenes
|
Technical Report. The code is released at
https://github.com/Pointcept/SegmentAnything3D
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose SAM3D, a novel framework that is able to predict
masks in 3D point clouds by leveraging the Segment-Anything Model (SAM) in RGB
images without further training or finetuning. For a point cloud of a 3D scene
with posed RGB images, we first predict segmentation masks of RGB images with
SAM, and then project the 2D masks into the 3D points. Later, we merge the 3D
masks iteratively with a bottom-up merging approach. At each step, we merge the
point cloud masks of two adjacent frames with the bidirectional merging
approach. In this way, the 3D masks predicted from different frames are
gradually merged into the 3D masks of the whole 3D scene. Finally, we can
optionally ensemble the result from our SAM3D with the over-segmentation
results based on the geometric information of the 3D scenes. We evaluate our
approach on the ScanNet dataset, and qualitative results demonstrate that our
SAM3D achieves reasonable and fine-grained 3D segmentation results without any
training or finetuning of SAM.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 17:59:51 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Yang",
"Yunhan",
""
],
[
"Wu",
"Xiaoyang",
""
],
[
"He",
"Tong",
""
],
[
"Zhao",
"Hengshuang",
""
],
[
"Liu",
"Xihui",
""
]
] |
new_dataset
| 0.999282 |
2104.14103
|
Jacob Hartzer
|
Jacob Hartzer and Srikanth Saripalli
|
AutoCone: An OmniDirectional Robot for Lane-Level Cone Placement
| null | null |
10.1109/IV47402.2020.9304683
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper summarizes the progress in developing a rugged, low-cost,
automated ground cone robot network capable of traffic delineation at
lane-level precision. A holonomic omnidirectional base with a traffic
delineator was developed to allow flexibility in initialization. RTK GPS was
utilized to reduce minimum position error to 2 centimeters. Due to recent
developments, the cost of the platform is now less than $1,600. To minimize the
effects of GPS-denied environments, wheel encoders and an Extended Kalman
Filter were implemented to maintain lane-level accuracy during operation,
achieving a maximum error of 1.97 meters over 50 meters of travel with little to no GPS signal.
Future work includes increasing the operational speed of the platforms,
incorporating lanelet information for path planning, and cross-platform
estimation.
|
[
{
"version": "v1",
"created": "Thu, 29 Apr 2021 04:50:30 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Hartzer",
"Jacob",
""
],
[
"Saripalli",
"Srikanth",
""
]
] |
new_dataset
| 0.995505 |
2104.15114
|
John Wieting
|
John Wieting, Kevin Gimpel, Graham Neubig, Taylor Berg-Kirkpatrick
|
Paraphrastic Representations at Scale
|
Published as a demo paper at EMNLP 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a system that allows users to train their own state-of-the-art
paraphrastic sentence representations in a variety of languages. We also
release trained models for English, Arabic, German, French, Spanish, Russian,
Turkish, and Chinese. We train these models on large amounts of data, achieving
significantly improved performance over the original papers proposing the
methods on a suite of monolingual semantic similarity, cross-lingual semantic
similarity, and bitext mining tasks. Moreover, the resulting models surpass all
prior work on unsupervised semantic textual similarity, significantly
outperforming even BERT-based models like Sentence-BERT (Reimers and Gurevych,
2019). Additionally, our models are orders of magnitude faster than prior work
and can be used on CPU with little difference in inference speed (even improved
speed over GPU when using more CPU cores), making these models an attractive
choice for users without access to GPUs or for use on embedded devices.
Finally, we add significantly increased functionality to the code bases for
training paraphrastic sentence models, easing their use for both inference and
for training them for any desired language with parallel data. We also include
code to automatically download and preprocess training data.
|
[
{
"version": "v1",
"created": "Fri, 30 Apr 2021 16:55:28 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Jun 2023 22:43:14 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Wieting",
"John",
""
],
[
"Gimpel",
"Kevin",
""
],
[
"Neubig",
"Graham",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
]
] |
new_dataset
| 0.955556 |
2110.00460
|
Xuan Thang Duong
|
Thang Xuan Duong, Mikhail Itskov, and Roger Andrew Sauer
|
A general isogeometric finite element formulation for rotation-free
shells with in-plane bending of embedded fibers
|
This version changes the title for better clarity. It also updates
the reference list and includes minor text edits. Results unchanged
| null |
10.1002/nme.6937
| null |
cs.CE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a general, nonlinear isogeometric finite element
formulation for rotation-free shells with embedded fibers that captures
anisotropy in stretching, shearing, twisting and bending -- both in-plane and
out-of-plane. These capabilities allow for the simulation of large sheets of
heterogeneous and fibrous materials either with or without matrix, such as
textiles, composites, and pantographic structures. The work is a computational
extension of our earlier theoretical work [1] that extends existing
Kirchhoff-Love shell theory to incorporate the in-plane bending resistance of
initially straight or curved fibers. The formulation requires only displacement
degrees-of-freedom to capture all mentioned modes of deformation. To this end,
isogeometric shape functions are used in order to satisfy the required
$C^1$-continuity for bending across element boundaries. The proposed
formulation can admit a wide range of material models, such as surface
hyperelasticity that does not require any explicit thickness integration. To
deal with possible material instability due to fiber compression, a
stabilization scheme is added. Several benchmark examples are used to
demonstrate the robustness and accuracy of the proposed computational
formulation.
|
[
{
"version": "v1",
"created": "Fri, 1 Oct 2021 14:49:48 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Oct 2021 15:39:19 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jun 2023 13:50:27 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Duong",
"Thang Xuan",
""
],
[
"Itskov",
"Mikhail",
""
],
[
"Sauer",
"Roger Andrew",
""
]
] |
new_dataset
| 0.998505 |
2112.06164
|
Kazuma Tateiri
|
Kazuma Tateiri, Toru Ohmoto
|
An extended MMP algorithm: wavefront and cut-locus on a convex
polyhedron
|
To appear in International Journal of Computational Geometry &
Applications
| null |
10.1142/S0218195922500029
| null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
In the present paper, we propose a novel generalization of the celebrated MMP
algorithm in order to find the wavefront propagation and the cut-locus on a
convex polyhedron with an emphasis on actual implementation for instantaneous
visualization and numerical computation.
|
[
{
"version": "v1",
"created": "Sun, 12 Dec 2021 06:12:34 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2022 07:10:06 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Jun 2022 07:53:02 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Tateiri",
"Kazuma",
""
],
[
"Ohmoto",
"Toru",
""
]
] |
new_dataset
| 0.992177 |
2202.04801
|
Shubhayu Bhattacharyay
|
Shubhayu Bhattacharyay, Ioan Milosevic, Lindsay Wilson, David K.
Menon, Robert D. Stevens, Ewout W. Steyerberg, David W. Nelson, Ari Ercole
and the CENTER-TBI investigators/participants
|
The leap to ordinal: detailed functional prognosis after traumatic brain
injury with a flexible modelling approach
|
68 pages, 4 figures, 4 tables, 1 appendix, 6 supplementary figures, 4
supplementary tables, 3 supplementary methods, 1 supplementary result
|
PLOS ONE 17:7 (2022) e0270973
|
10.1371/journal.pone.0270973
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
When a patient is admitted to the intensive care unit (ICU) after a traumatic
brain injury (TBI), an early prognosis is essential for baseline risk
adjustment and shared decision making. TBI outcomes are commonly categorised by
the Glasgow Outcome Scale-Extended (GOSE) into 8 ordered levels of functional
recovery at 6 months after injury. Existing ICU prognostic models predict
binary outcomes at a certain threshold of GOSE (e.g., prediction of survival
[GOSE>1] or functional independence [GOSE>4]). We aimed to develop ordinal
prediction models that concurrently predict probabilities of each GOSE score.
From a prospective cohort (n=1,550, 65 centres) in the ICU stratum of the
Collaborative European NeuroTrauma Effectiveness Research in TBI (CENTER-TBI)
patient dataset, we extracted all clinical information within 24 hours of ICU
admission (1,151 predictors) and 6-month GOSE scores. We analysed the effect of
2 design elements on ordinal model performance: (1) the baseline predictor set,
ranging from a concise set of 10 validated predictors to a token-embedded
representation of all possible predictors, and (2) the modelling strategy, from
ordinal logistic regression to multinomial deep learning. With repeated k-fold
cross-validation, we found that expanding the baseline predictor set
significantly improved ordinal prediction performance while increasing
analytical complexity did not. Half of these gains could be achieved with the
addition of 8 high-impact predictors (2 demographic variables, 4 protein
biomarkers, and 2 severity assessments) to the concise set. At best, ordinal
models achieved 0.76 (95% CI: 0.74-0.77) ordinal discrimination ability
(ordinal c-index) and 57% (95% CI: 54%-60%) explanation of ordinal variation in
6-month GOSE (Somers' D). Our results motivate the search for informative
predictors for higher GOSE and the development of ordinal dynamic prediction
models.
|
[
{
"version": "v1",
"created": "Thu, 10 Feb 2022 02:29:19 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 15:49:10 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Bhattacharyay",
"Shubhayu",
""
],
[
"Milosevic",
"Ioan",
""
],
[
"Wilson",
"Lindsay",
""
],
[
"Menon",
"David K.",
""
],
[
"Stevens",
"Robert D.",
""
],
[
"Steyerberg",
"Ewout W.",
""
],
[
"Nelson",
"David W.",
""
],
[
"Ercole",
"Ari",
""
],
[
"investigators/participants",
"the CENTER-TBI",
""
]
] |
new_dataset
| 0.996006 |
2203.16794
|
Sreyan Ghosh
|
Sreyan Ghosh and Utkarsh Tyagi and S Ramaneswaran and Harshvardhan
Srivastava and Dinesh Manocha
|
MMER: Multimodal Multi-task Learning for Speech Emotion Recognition
|
InterSpeech 2023 Main Conference
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose MMER, a novel Multimodal Multi-task learning
approach for Speech Emotion Recognition. MMER leverages a novel multimodal
network based on early-fusion and cross-modal self-attention between text and
acoustic modalities and solves three novel auxiliary tasks for learning emotion
recognition from spoken utterances. In practice, MMER outperforms all our
baselines and achieves state-of-the-art performance on the IEMOCAP benchmark.
Additionally, we conduct extensive ablation studies and results analysis to
prove the effectiveness of our proposed approach.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 04:51:32 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2022 04:39:53 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Aug 2022 15:12:39 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Oct 2022 21:51:49 GMT"
},
{
"version": "v5",
"created": "Sat, 3 Jun 2023 21:55:28 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ghosh",
"Sreyan",
""
],
[
"Tyagi",
"Utkarsh",
""
],
[
"Ramaneswaran",
"S",
""
],
[
"Srivastava",
"Harshvardhan",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.951586 |
2204.02545
|
Jinsheng Ba
|
Jinsheng Ba, Marcel B\"ohme, Zahra Mirzamomen, Abhik Roychoudhury
|
Stateful Greybox Fuzzing
| null |
31st USENIX Security Symposium (USENIX Security 2022)
| null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many protocol implementations are reactive systems, where the protocol
process is in continuous interaction with other processes and the environment.
If a bug can be exposed only in a certain state, a fuzzer needs to provide a
specific sequence of events as inputs that would take the protocol into this state
before the bug is manifested. We call these bugs "stateful" bugs. Usually,
when we are testing a protocol implementation, we do not have a detailed formal
specification of the protocol to rely upon. Without knowledge of the protocol,
it is inherently difficult for a fuzzer to discover such stateful bugs. A key
challenge then is to cover the state space without an explicit specification of
the protocol.
In this work, we posit that manual annotations for state identification can
be avoided for stateful protocol fuzzing. Specifically, we rely on a
programmatic intuition that the state variables used in protocol
implementations often appear in enum type variables whose values (the state
names) come from named constants. In our analysis of the Top-50 most widely
used open-source protocol implementations, we found that every implementation
uses state variables that are assigned named constants (with easy-to-comprehend
names such as INIT, READY) to represent the current state. In this work, we
propose to automatically identify such state variables and track the sequence
of values assigned to them during fuzzing to produce a "map" of the explored
state space.
Our experiments confirm that our stateful fuzzer discovers stateful bugs
twice as fast as the baseline greybox fuzzer that we extended. Starting from
the initial state, our fuzzer exercises one order of magnitude more
state/transition sequences and covers code two times faster than the baseline
fuzzer. Several zero-day bugs in prominent protocol implementations were found
by our fuzzer, and 8 CVEs have been assigned.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 02:26:34 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 13:30:03 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 11:10:07 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ba",
"Jinsheng",
""
],
[
"Böhme",
"Marcel",
""
],
[
"Mirzamomen",
"Zahra",
""
],
[
"Roychoudhury",
"Abhik",
""
]
] |
new_dataset
| 0.982891 |
2206.02831
|
Jason Z.S. Hu
|
Jason Z.S. Hu, Brigitte Pientka, Ulrich Sch\"opp
|
A Category Theoretic View of Contextual Types: from Simple Types to
Dependent Types
| null | null |
10.1145/3545115
| null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We describe the categorical semantics for a simply typed variant and a
simplified dependently typed variant of Cocon, a contextual modal type theory
where the box modality mediates between the weak function space that is used to
represent higher-order abstract syntax (HOAS) trees and the strong function
space that describes (recursive) computations about them. What makes Cocon
different from standard type theories is the presence of first-class contexts
and contextual objects to describe syntax trees that are closed with respect to
a given context of assumptions. Following M. Hofmann's work, we use a presheaf
model to characterise HOAS trees. Surprisingly, this model already provides the
necessary structure to also model Cocon. In particular, we can capture the
contextual objects of Cocon using a comonad $\flat$ that restricts presheaves
to their closed elements. This gives a simple semantic characterisation of the
invariants of contextual types (e.g. substitution invariance) and identifies
Cocon as a type-theoretic syntax of presheaf models. We further extend this
characterisation to dependent types using categories with families and show
that we can model a fragment of Cocon without recursor in the Fitch-style
dependent modal type theory presented by Birkedal et al.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 18:11:52 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 02:21:41 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Hu",
"Jason Z. S.",
""
],
[
"Pientka",
"Brigitte",
""
],
[
"Schöpp",
"Ulrich",
""
]
] |
new_dataset
| 0.99858 |
2207.05623
|
Giacomo Longo
|
G. Longo, E. Russo, A. Armando, A. Merlo
|
Attacking (and defending) the Maritime Radar System
| null | null |
10.1109/TIFS.2023.3282132
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radar equipment is one of the key facilities used by navigators
to gather situational awareness about their surroundings. With an
ever-increasing need for always-running logistics and tighter shipping schedules,
operators are relying more and more on computerized instruments and their
indications. As a result, modern ships have become complex cyber-physical
systems in which sensors and computers constantly communicate and coordinate. In
this work, we discuss novel threats related to the radar system, which is one
of the most security-sensitive components on a ship. In detail, we first discuss
some new attacks capable of compromising the integrity of data displayed on a
radar system, with potentially catastrophic impacts on the crew's situational
awareness or even safety itself. Then, we present a detection system aimed at
highlighting anomalies in the radar video feed, requiring no modifications to
the target ship configuration. Finally, we stimulate our detection system by
performing the attacks inside a simulated environment. The experimental
results clearly indicate that the attacks are feasible, rather easy to carry
out, and hard-to-detect. Moreover, they prove that the proposed detection
technique is effective.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 15:45:39 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Longo",
"G.",
""
],
[
"Russo",
"E.",
""
],
[
"Armando",
"A.",
""
],
[
"Merlo",
"A.",
""
]
] |
new_dataset
| 0.995595 |
2209.05135
|
Federico Tavella
|
Federico Tavella and Aphrodite Galata and Angelo Cangelosi
|
Signs of Language: Embodied Sign Language Fingerspelling Acquisition
from Demonstrations for Human-Robot Interaction
| null | null | null | null |
cs.RO cs.CL cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Learning fine-grained movements is a challenging topic in robotics,
particularly in the context of robotic hands. One specific instance of this
challenge is the acquisition of fingerspelling sign language in robots. In this
paper, we propose an approach for learning dexterous motor imitation from video
examples without additional information. To achieve this, we first build a URDF
model of a robotic hand with a single actuator for each joint. We then leverage
pre-trained deep vision models to extract the 3D pose of the hand from RGB
videos. Next, using state-of-the-art reinforcement learning algorithms for
motion imitation (namely, proximal policy optimization and soft actor-critic),
we train a policy to reproduce the movement extracted from the demonstrations.
We identify the optimal set of hyperparameters for imitation based on a
reference motion. Finally, we demonstrate the generalizability of our approach
by testing it on six different tasks, corresponding to fingerspelled letters.
Our results show that our approach is able to successfully imitate these
fine-grained movements without additional information, highlighting its
potential for real-world applications in robotics.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 10:42:26 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 16:30:00 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jun 2023 12:56:14 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Tavella",
"Federico",
""
],
[
"Galata",
"Aphrodite",
""
],
[
"Cangelosi",
"Angelo",
""
]
] |
new_dataset
| 0.995253 |
2209.06794
|
Xi Chen
|
Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr
Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas
Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan
Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury,
Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos
Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu
Soricut
|
PaLI: A Jointly-Scaled Multilingual Language-Image Model
|
ICLR 2023 (Notable-top-5%)
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Effective scaling and a flexible task interface enable large language models
to excel at many tasks. We present PaLI (Pathways Language and Image model), a
model that extends this approach to the joint modeling of language and vision.
PaLI generates text based on visual and textual inputs, and with this interface
performs many vision, language, and multimodal tasks, in many languages. To
train PaLI, we make use of large pre-trained encoder-decoder language models
and Vision Transformers (ViTs). This allows us to capitalize on their existing
capabilities and leverage the substantial cost of training them. We find that
joint scaling of the vision and language components is important. Since
existing Transformers for language are much larger than their vision
counterparts, we train a large, 4-billion parameter ViT (ViT-e) to quantify the
benefits from even larger-capacity vision models. To train PaLI, we create a
large multilingual mix of pretraining tasks, based on a new image-text training
set containing 10B images and texts in over 100 languages. PaLI achieves
state-of-the-art in multiple vision and language tasks (such as captioning,
visual question-answering, scene-text understanding), while retaining a simple,
modular, and scalable design.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 17:24:07 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 17:44:29 GMT"
},
{
"version": "v3",
"created": "Sun, 28 May 2023 23:46:10 GMT"
},
{
"version": "v4",
"created": "Mon, 5 Jun 2023 17:55:12 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Chen",
"Xi",
""
],
[
"Wang",
"Xiao",
""
],
[
"Changpinyo",
"Soravit",
""
],
[
"Piergiovanni",
"AJ",
""
],
[
"Padlewski",
"Piotr",
""
],
[
"Salz",
"Daniel",
""
],
[
"Goodman",
"Sebastian",
""
],
[
"Grycner",
"Adam",
""
],
[
"Mustafa",
"Basil",
""
],
[
"Beyer",
"Lucas",
""
],
[
"Kolesnikov",
"Alexander",
""
],
[
"Puigcerver",
"Joan",
""
],
[
"Ding",
"Nan",
""
],
[
"Rong",
"Keran",
""
],
[
"Akbari",
"Hassan",
""
],
[
"Mishra",
"Gaurav",
""
],
[
"Xue",
"Linting",
""
],
[
"Thapliyal",
"Ashish",
""
],
[
"Bradbury",
"James",
""
],
[
"Kuo",
"Weicheng",
""
],
[
"Seyedhosseini",
"Mojtaba",
""
],
[
"Jia",
"Chao",
""
],
[
"Ayan",
"Burcu Karagol",
""
],
[
"Riquelme",
"Carlos",
""
],
[
"Steiner",
"Andreas",
""
],
[
"Angelova",
"Anelia",
""
],
[
"Zhai",
"Xiaohua",
""
],
[
"Houlsby",
"Neil",
""
],
[
"Soricut",
"Radu",
""
]
] |
new_dataset
| 0.977748 |
2209.15266
|
Ziqing Yang
|
Ziqing Yang and Xinlei He and Zheng Li and Michael Backes and Mathias
Humbert and Pascal Berrang and Yang Zhang
|
Data Poisoning Attacks Against Multimodal Encoders
|
To Appear in the 40th International Conference on Machine Learning,
July 2023
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the newly emerged multimodal models, which leverage both visual and
linguistic modalities to train powerful encoders, have gained increasing
attention. However, learning from a large-scale unlabeled dataset also exposes
the model to the risk of potential poisoning attacks, whereby the adversary
aims to perturb the model's training data to trigger malicious behaviors in it.
In contrast to previous work, which poisons only the visual modality, we
take the first step toward studying poisoning attacks against multimodal models
in both the visual and linguistic modalities. Specifically, we focus on answering two
questions: (1) Is the linguistic modality also vulnerable to poisoning attacks?
and (2) Which modality is most vulnerable? To answer the two questions, we
propose three types of poisoning attacks against multimodal models. Extensive
evaluations on different datasets and model architectures show that all three
attacks can achieve significant attack performance while maintaining model
utility in both visual and linguistic modalities. Furthermore, we observe that
the poisoning effect differs between different modalities. To mitigate the
attacks, we propose both pre-training and post-training defenses. We
empirically show that both defenses can significantly reduce the attack
performance while preserving the model's utility.
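To make a linguistic-modality attack concrete, here is a toy Python sketch in
which a small fraction of captions for a source concept are rewritten to a
target concept; the concept pair, poisoning rate, and caption format are
hypothetical, and the paper's three attacks are more involved:

import random

def poison_captions(pairs, src="dog", tgt="cat", rate=0.05, seed=0):
    # Flip the concept word in a random subset of matching captions.
    rng = random.Random(seed)
    poisoned = []
    for image, caption in pairs:
        if src in caption and rng.random() < rate:
            caption = caption.replace(src, tgt)
        poisoned.append((image, caption))
    return poisoned

data = [("img0.jpg", "a dog on grass"), ("img1.jpg", "a red car")]
print(poison_captions(data, rate=1.0))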
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 06:50:08 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 13:52:24 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Yang",
"Ziqing",
""
],
[
"He",
"Xinlei",
""
],
[
"Li",
"Zheng",
""
],
[
"Backes",
"Michael",
""
],
[
"Humbert",
"Mathias",
""
],
[
"Berrang",
"Pascal",
""
],
[
"Zhang",
"Yang",
""
]
] |
new_dataset
| 0.997074 |
2210.10629
|
Guanghu Yuan
|
Guanghu Yuan, Fajie Yuan, Yudong Li, Beibei Kong, Shujie Li, Lei Chen,
Min Yang, Chenyun Yu, Bo Hu, Zang Li, Yu Xu, Xiaohu Qie
|
Tenrec: A Large-scale Multipurpose Benchmark Dataset for Recommender
Systems
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Existing benchmark datasets for recommender systems (RS) are either created
at a small scale or involve very limited forms of user feedback. RS models
evaluated on such datasets often lack practical value for large-scale
real-world applications. In this paper, we describe Tenrec, a novel and
publicly available data collection for RS that records various user feedback
from four different recommendation scenarios. To be specific, Tenrec has the
following five characteristics: (1) it is large-scale, containing around 5
million users and 140 million interactions; (2) it has not only positive user
feedback, but also true negative feedback (vs. one-class recommendation); (3)
it contains overlapped users and items across four different scenarios; (4) it
contains various types of positive user feedback, in the form of clicks, likes,
shares, and follows, etc; (5) it contains additional features beyond the user
IDs and item IDs. We verify Tenrec on ten diverse recommendation tasks by
running several classical baseline models per task. Tenrec has the potential to
become a useful benchmark dataset for a majority of popular recommendation
tasks.
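A hypothetical loading sketch for such an interaction log; the schema below
(user_id, item_id, click, like) is an assumption for illustration, not the
released file format:

import pandas as pd

# Stand-in rows with an assumed schema; the released columns may differ.
df = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "item_id": [10, 11, 10, 12],
    "click":   [1, 0, 1, 0],     # true negatives are recorded, not implied
    "like":    [0, 0, 1, 0],
})
print(df["user_id"].nunique(), "users,", len(df), "interactions")
print(df.groupby("click").size())  # positive vs. explicit negative feedback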
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 15:57:40 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Oct 2022 12:19:36 GMT"
},
{
"version": "v3",
"created": "Sun, 4 Jun 2023 04:00:05 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Yuan",
"Guanghu",
""
],
[
"Yuan",
"Fajie",
""
],
[
"Li",
"Yudong",
""
],
[
"Kong",
"Beibei",
""
],
[
"Li",
"Shujie",
""
],
[
"Chen",
"Lei",
""
],
[
"Yang",
"Min",
""
],
[
"Yu",
"Chenyun",
""
],
[
"Hu",
"Bo",
""
],
[
"Li",
"Zang",
""
],
[
"Xu",
"Yu",
""
],
[
"Qie",
"Xiaohu",
""
]
] |
new_dataset
| 0.999856 |
2211.11965
|
Juan Quiroz
|
Juan C. Quiroz, David Brieger, Louisa Jorm, Raymond W Sy, Benjumin
Hsu, Blanca Gallego
|
Predicting adverse outcomes following catheter ablation treatment for
atrial fibrillation
|
Under journal review; updated in response to reviewer comments
| null | null | null |
cs.LG q-bio.QM stat.OT
|
http://creativecommons.org/licenses/by/4.0/
|
Objective: To develop prognostic survival models for predicting adverse
outcomes after catheter ablation treatment for non-valvular atrial fibrillation
(AF).
Methods: We used a linked dataset including hospital administrative data,
prescription medicine claims, emergency department presentations, and death
registrations of patients in New South Wales, Australia. The cohort included
patients who received catheter ablation for AF. Traditional and deep survival
models were trained to predict major bleeding events and a composite of heart
failure, stroke, cardiac arrest, and death.
Results: Out of a total of 3285 patients in the cohort, 177 (5.3%)
experienced the composite outcome (heart failure, stroke, cardiac arrest,
death) and 167 (5.1%) experienced major bleeding events after catheter ablation
treatment. Models predicting the composite outcome had high risk discrimination
accuracy, with the best model having a concordance index > 0.79 at the
evaluated time horizons. Models for predicting major bleeding events had poor
risk discrimination performance, with all models having a concordance index <
0.66. The most impactful features for the models predicting higher risk were
comorbidities indicative of poor health, older age, and therapies commonly used
in sicker patients to treat heart failure and AF.
Conclusions: Diagnosis and medication history did not contain sufficient
information for precise risk prediction of experiencing major bleeding events.
The models for predicting the composite outcome have the potential to enable
clinicians to identify and manage high-risk patients following catheter
ablation proactively. Future research is needed to validate the usefulness of
these models in clinical practice.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 02:55:51 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 02:57:41 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Quiroz",
"Juan C.",
""
],
[
"Brieger",
"David",
""
],
[
"Jorm",
"Louisa",
""
],
[
"Sy",
"Raymond W",
""
],
[
"Hsu",
"Benjumin",
""
],
[
"Gallego",
"Blanca",
""
]
] |
new_dataset
| 0.9994 |
2212.06644
|
Daniel Lemire
|
Noble Mushtak, Daniel Lemire
|
Fast Number Parsing Without Fallback
| null |
Software: Practice and Experience 53 (6), 2023
|
10.1002/spe.3198
| null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In recent work, Lemire (2021) presented a fast algorithm to convert number
strings into binary floating-point numbers. The algorithm has been adopted by
several important systems: e.g., it is part of the runtime libraries of GCC 12,
Rust 1.55, and Go 1.16. The algorithm parses any number string with a
significand containing no more than 19 digits into an IEEE floating-point
number. However, there is a check leading to a fallback function to ensure
correctness. This fallback function is never called in practice. We prove that
the fallback is unnecessary. Thus we can slightly simplify the algorithm and
its implementation.
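As an illustration of the decomposition such parsers work with, here is a
Python sketch that splits a number string into an integer significand m (at
most 19 digits) and a decimal exponent q with value m * 10**q; correct
rounding is delegated to the decimal module here, whereas the paper's
algorithm obtains the same result with fast integer arithmetic:

from decimal import Decimal
import re

def parse(s):
    # sign, whole digits, optional fraction, optional exponent
    match = re.fullmatch(r"(-?)(\d+)(?:\.(\d*))?(?:[eE]([+-]?\d+))?", s)
    sign, whole, frac, exp = match.groups()
    frac = frac or ""
    digits = (whole + frac).lstrip("0") or "0"
    assert len(digits) <= 19  # the regime covered without fallback
    q = (int(exp) if exp else 0) - len(frac)
    m = int(sign + whole + frac)
    return float(Decimal(m).scaleb(q))  # exact value m * 10**q, then round

print(parse("3.1415e2"))  # 314.15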
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 15:26:46 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2023 03:33:06 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Mushtak",
"Noble",
""
],
[
"Lemire",
"Daniel",
""
]
] |
new_dataset
| 0.986965 |
2212.09865
|
Xinxi Lyu
|
Xinxi Lyu, Sewon Min, Iz Beltagy, Luke Zettlemoyer, Hannaneh
Hajishirzi
|
Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations
|
11 pages; 9 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Although large language models can be prompted for both zero- and few-shot
learning, performance drops significantly when no demonstrations are available.
In this paper, we introduce Z-ICL, a new zero-shot method that closes the gap
by constructing pseudo-demonstrations for a given test input using a raw text
corpus. Concretely, pseudo-demonstrations are constructed by (1) finding the
nearest neighbors to the test input from the corpus and pairing them with
random task labels, and (2) applying a set of techniques to reduce the amount
of direct copying the model does from the resulting demonstrations. Evaluation
on nine classification datasets shows that Z-ICL outperforms previous zero-shot
methods by a significant margin, and is on par with in-context learning with
labeled training data in the few-shot setting. Overall, Z-ICL provides a
significantly higher estimate of the zero-shot performance levels of a model,
and supports future efforts to develop better pseudo-demonstrations that
further improve zero-shot results.
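A minimal sketch of the pseudo-demonstration construction; the word-overlap
retrieval and prompt format below are simplifications of the paper's method:

import random

def nearest(corpus, query, k):
    # Crude nearest-neighbour retrieval by word overlap.
    q = set(query.lower().split())
    return sorted(corpus, key=lambda s: -len(q & set(s.lower().split())))[:k]

def build_prompt(corpus, test_input, labels, k=4, seed=0):
    # Pair retrieved sentences with RANDOM task labels, then append the input.
    rng = random.Random(seed)
    demos = [f"{s}\nLabel: {rng.choice(labels)}"
             for s in nearest(corpus, test_input, k)]
    return "\n\n".join(demos + [f"{test_input}\nLabel:"])

corpus = ["the movie was thrilling", "service was slow", "a dull plot"]
print(build_prompt(corpus, "the film felt dull", ["positive", "negative"]))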
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 21:34:26 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Jun 2023 22:51:39 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Lyu",
"Xinxi",
""
],
[
"Min",
"Sewon",
""
],
[
"Beltagy",
"Iz",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
new_dataset
| 0.999079 |
2301.02364
|
Zitian Wang
|
Zitian Wang, Zehao Huang, Jiahui Fu, Naiyan Wang, Si Liu
|
Object as Query: Lifting any 2D Object Detector to 3D Detection
|
technical report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D object detection from multi-view images has drawn much attention over the
past few years. Existing methods mainly establish 3D representations from
multi-view images and adopt a dense detection head for object detection, or
employ object queries distributed in 3D space to localize objects. In this
paper, we design Multi-View 2D Objects guided 3D Object Detector (MV2D), which
can lift any 2D object detector to multi-view 3D object detection. Since 2D
detections can provide valuable priors for object existence, MV2D exploits 2D
detectors to generate object queries conditioned on the rich image semantics.
These dynamically generated queries help MV2D to recall objects in the field of
view and show a strong capability of localizing 3D objects. For the generated
queries, we design a sparse cross attention module to force them to focus on
the features of specific objects, which suppresses interference from noises.
The evaluation results on the nuScenes dataset demonstrate that the dynamic
object queries and sparse feature aggregation promote 3D detection capability.
MV2D also exhibits state-of-the-art performance among existing methods. We
hope MV2D can serve as a new baseline for future research.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 04:08:20 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 05:40:56 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Wang",
"Zitian",
""
],
[
"Huang",
"Zehao",
""
],
[
"Fu",
"Jiahui",
""
],
[
"Wang",
"Naiyan",
""
],
[
"Liu",
"Si",
""
]
] |
new_dataset
| 0.988001 |
2301.05412
|
Ling Cheng
|
Ling Cheng, Feida Zhu, Yong Wang, Ruicheng Liang, Huiwen Liu
|
Evolve Path Tracer: Early Detection of Malicious Addresses in
Cryptocurrency
|
In Proceedings of the 29th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining (KDD23)
| null |
10.1145/3580305.3599817
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the ever-increasing boom of cryptocurrency, detecting fraudulent
behaviors and associated malicious addresses has drawn significant research
effort. However, most existing studies still rely on full-history features or
full-fledged address transaction networks, and thus cannot meet the requirements
of early malicious address detection, which is urgent but seldom discussed in
existing studies. To detect fraudulent behaviors of malicious addresses at an early
stage, we present Evolve Path Tracer, which consists of Evolve Path Encoder
LSTM, Evolve Path Graph GCN, and Hierarchical Survival Predictor. Specifically,
in addition to the general address features, we propose asset transfer paths
and corresponding path graphs to characterize early transaction patterns.
Further, since the transaction patterns are changing rapidly during the early
stage, we propose Evolve Path Encoder LSTM and Evolve Path Graph GCN to encode
asset transfer path and path graph under an evolving structure setting.
The Hierarchical Survival Predictor then predicts address labels with good
scalability and fast prediction speed. We investigate the effectiveness and
versatility of Evolve Path Tracer on three real-world illicit bitcoin datasets.
Our experimental results demonstrate that Evolve Path Tracer outperforms the
state-of-the-art methods. Extensive scalability experiments demonstrate the
model's adaptivity under a dynamic prediction setting.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 06:59:52 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2023 12:11:55 GMT"
},
{
"version": "v3",
"created": "Sat, 3 Jun 2023 05:59:42 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Cheng",
"Ling",
""
],
[
"Zhu",
"Feida",
""
],
[
"Wang",
"Yong",
""
],
[
"Liang",
"Ruicheng",
""
],
[
"Liu",
"Huiwen",
""
]
] |
new_dataset
| 0.995906 |
2302.09048
|
Cl\'ement Vignac
|
Clement Vignac, Nagham Osman, Laura Toni, Pascal Frossard
|
MiDi: Mixed Graph and 3D Denoising Diffusion for Molecule Generation
|
22 pages. Under review
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This work introduces MiDi, a novel diffusion model for jointly generating
molecular graphs and their corresponding 3D arrangement of atoms. Unlike
existing methods that rely on predefined rules to determine molecular bonds
based on the 3D conformation, MiDi offers an end-to-end differentiable approach
that streamlines the molecule generation process. Our experimental results
demonstrate the effectiveness of this approach. On the challenging GEOM-DRUGS
dataset, MiDi generates 92% stable molecules, compared to 6% for the previous
EDM model, which uses interatomic distances for bond prediction, and 40% for
EDM followed by an algorithm that directly optimizes bond orders for validity.
Our code is available at github.com/cvignac/MiDi.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 18:27:14 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 15:26:26 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Vignac",
"Clement",
""
],
[
"Osman",
"Nagham",
""
],
[
"Toni",
"Laura",
""
],
[
"Frossard",
"Pascal",
""
]
] |
new_dataset
| 0.998093 |
2303.06034
|
Kei Ota
|
Kei Ota, Devesh K. Jha, Hsiao-Yu Tung, Joshua B. Tenenbaum
|
Tactile-Filter: Interactive Tactile Perception for Part Mating
|
Accepted at RSS2023
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Humans rely on touch and tactile sensing for many dexterous manipulation
tasks. Our tactile sensing provides a wealth of information regarding
contact formations as well as geometric information about objects during any
interaction. With this motivation, vision-based tactile sensors are being
widely used for various robotic perception and control tasks. In this paper, we
present a method for interactive perception using vision-based tactile sensors
for a part mating task, where a robot can use tactile sensors and a feedback
mechanism using a particle filter to incrementally improve its estimate of
objects (pegs and holes) that fit together. To do this, we first train a deep
neural network that makes use of tactile images to predict the probabilistic
correspondence between arbitrarily shaped objects that fit together. The
trained model is used to design a particle filter which is used twofold. First,
given one partial (or non-unique) observation of the hole, it incrementally
improves the estimate of the correct peg by sampling more tactile observations.
Second, it selects the next action for the robot to sample the next touch (and
thus image) which results in maximum uncertainty reduction to minimize the
number of interactions during the perception task. We evaluate our method on
several part-mating tasks with novel objects using a robot equipped with a
vision-based tactile sensor. We also show the efficiency of the proposed action
selection method against a naive method. See supplementary video at
https://www.youtube.com/watch?v=jMVBg_e3gLw .
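A generic particle-filter update over candidate pegs, sketched in Python; the
likelihood vector stands in for the learned tactile-correspondence network,
and the resampling policy is illustrative:

import numpy as np

def update(weights, likelihoods):
    w = weights * likelihoods          # Bayes rule: prior x likelihood
    w /= w.sum()                       # normalise
    n_eff = 1.0 / np.sum(w ** 2)       # effective sample size
    if n_eff < 0.5 * len(w):           # resample when weights degenerate
        idx = np.random.choice(len(w), size=len(w), p=w)
        return np.ones(len(w)) / len(w), idx
    return w, np.arange(len(w))

weights = np.ones(4) / 4               # uniform prior over 4 candidate pegs
weights, _ = update(weights, np.array([0.9, 0.1, 0.3, 0.2]))
print(weights)                         # belief sharpens toward peg 0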
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 16:27:37 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 13:44:02 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ota",
"Kei",
""
],
[
"Jha",
"Devesh K.",
""
],
[
"Tung",
"Hsiao-Yu",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] |
new_dataset
| 0.997368 |
2303.14302
|
Junjie Ke
|
Junjie Ke, Keren Ye, Jiahui Yu, Yonghui Wu, Peyman Milanfar, Feng Yang
|
VILA: Learning Image Aesthetics from User Comments with Vision-Language
Pretraining
|
CVPR 2023,
https://github.com/google-research/google-research/tree/master/vila
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assessing the aesthetics of an image is challenging, as it is influenced by
multiple factors including composition, color, style, and high-level semantics.
Existing image aesthetic assessment (IAA) methods primarily rely on
human-labeled rating scores, which oversimplify the visual aesthetic
information that humans perceive. Conversely, user comments offer more
comprehensive information and are a more natural way to express human opinions
and preferences regarding image aesthetics. In light of this, we propose
learning image aesthetics from user comments, and exploring vision-language
pretraining methods to learn multimodal aesthetic representations.
Specifically, we pretrain an image-text encoder-decoder model with
image-comment pairs, using contrastive and generative objectives to learn rich
and generic aesthetic semantics without human labels. To efficiently adapt the
pretrained model for downstream IAA tasks, we further propose a lightweight
rank-based adapter that employs text as an anchor to learn the aesthetic
ranking concept. Our results show that our pretrained aesthetic vision-language
model outperforms prior works on image aesthetic captioning over the
AVA-Captions dataset, and it has powerful zero-shot capability for aesthetic
tasks such as zero-shot style classification and zero-shot IAA, surpassing many
supervised baselines. With only minimal finetuning parameters using the
proposed adapter module, our model achieves state-of-the-art IAA performance
over the AVA dataset.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 23:57:28 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 18:57:30 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ke",
"Junjie",
""
],
[
"Ye",
"Keren",
""
],
[
"Yu",
"Jiahui",
""
],
[
"Wu",
"Yonghui",
""
],
[
"Milanfar",
"Peyman",
""
],
[
"Yang",
"Feng",
""
]
] |
new_dataset
| 0.968864 |
2304.06129
|
Tuomas Oikarinen
|
Tuomas Oikarinen, Subhro Das, Lam M. Nguyen, Tsui-Wei Weng
|
Label-Free Concept Bottleneck Models
|
Published at ICLR 2023. New v2(5 June 2023): added crowdsourced human
study in Appendix B
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concept bottleneck models (CBM) are a popular way of creating more
interpretable neural networks by having hidden layer neurons correspond to
human-understandable concepts. However, existing CBMs and their variants have
two crucial limitations: first, they need to collect labeled data for each of
the predefined concepts, which is time-consuming and labor-intensive; second,
the accuracy of a CBM is often significantly lower than that of a standard
neural network, especially on more complex datasets. This poor performance
creates a barrier for adopting CBMs in practical real world applications.
Motivated by these challenges, we propose Label-free CBM which is a novel
framework to transform any neural network into an interpretable CBM without
labeled concept data, while retaining high accuracy. Our Label-free CBM has
many advantages: it is scalable - we present the first CBM scaled to ImageNet,
efficient - creating a CBM takes only a few hours even for very large datasets,
and automated - training it for a new dataset requires minimal human effort.
Our code is available at https://github.com/Trustworthy-ML-Lab/Label-free-CBM.
Finally, in Appendix B we conduct a large scale user evaluation of the
interpretability of our method.
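A shape-level sketch of a concept-bottleneck head; the dimensions are
illustrative, real concept vectors would come from a text encoder such as
CLIP, and the paper trains the final layer to be sparse:

import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 512))    # backbone features (batch, d)
concepts = rng.normal(size=(20, 512))   # 20 concept embeddings
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)

activations = features @ concepts.T     # interpretable bottleneck (8, 20)
W = rng.normal(size=(20, 10))           # final linear layer (sparse in paper)
logits = activations @ W                # class scores (8, 10)
print(logits.shape)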
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 19:27:09 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 17:33:43 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Oikarinen",
"Tuomas",
""
],
[
"Das",
"Subhro",
""
],
[
"Nguyen",
"Lam M.",
""
],
[
"Weng",
"Tsui-Wei",
""
]
] |
new_dataset
| 0.973416 |
2304.11379
|
Song Wang
|
Song Wang and Wentong Li and Wenyu Liu and Xiaolu Liu and Jianke Zhu
|
LiDAR2Map: In Defense of LiDAR-Based Semantic Map Construction Using
Online Camera Distillation
|
Accepted by CVPR2023
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic map construction under bird's-eye view (BEV) plays an essential role
in autonomous driving. In contrast to camera images, LiDAR inherently provides
accurate 3D observations for projecting the captured 3D features onto BEV space.
However, the vanilla LiDAR-based BEV feature often contains considerable
indefinite noise, as the spatial features have little texture and few semantic
cues. In this paper, we propose an effective LiDAR-based method to build semantic maps.
Specifically, we introduce a BEV feature pyramid decoder that learns the robust
multi-scale BEV features for semantic map construction, which greatly boosts
the accuracy of the LiDAR-based method. To mitigate the defects caused by
lacking semantic cues in LiDAR data, we present an online Camera-to-LiDAR
distillation scheme to facilitate the semantic learning from image to point
cloud. Our distillation scheme consists of feature-level and logit-level
distillation to absorb the semantic information from camera in BEV. The
experimental results on challenging nuScenes dataset demonstrate the efficacy
of our proposed LiDAR2Map on semantic map construction, which significantly
outperforms previous LiDAR-based methods by over 27.9% mIoU and even performs
better than the state-of-the-art camera-based approaches. Source code is
available at: https://github.com/songw-zju/LiDAR2Map.
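A sketch of the two distillation terms, assuming PyTorch tensors for the
camera (teacher) and LiDAR (student) BEV outputs; shapes, temperature, and
weighting are illustrative, not the paper's exact losses:

import torch
import torch.nn.functional as F

def distill_loss(s_feat, t_feat, s_logit, t_logit, tau=2.0, alpha=1.0):
    # Feature-level distillation: match BEV feature maps.
    feat = F.mse_loss(s_feat, t_feat)
    # Logit-level distillation: KL between softened class distributions.
    kl = F.kl_div(F.log_softmax(s_logit / tau, dim=1),
                  F.softmax(t_logit / tau, dim=1),
                  reduction="batchmean") * tau * tau
    return feat + alpha * kl

s_feat, t_feat = torch.randn(2, 64, 50, 50), torch.randn(2, 64, 50, 50)
s_log, t_log = torch.randn(2, 6, 50, 50), torch.randn(2, 6, 50, 50)
print(distill_loss(s_feat, t_feat, s_log, t_log))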
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 12:05:29 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 03:56:19 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Wang",
"Song",
""
],
[
"Li",
"Wentong",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Liu",
"Xiaolu",
""
],
[
"Zhu",
"Jianke",
""
]
] |
new_dataset
| 0.998997 |
2305.03716
|
Xu Xiuwei
|
Xiuwei Xu, Zhihao Sun, Ziwei Wang, Hongmin Liu, Jie Zhou, Jiwen Lu
|
DSPDet3D: Dynamic Spatial Pruning for 3D Small Object Detection
|
Code is available at: https://github.com/xuxw98/DSPDet3D
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine-grained 3D object detection is a core ability for agents to understand
their 3D environment and interact with surrounding objects. However, current
methods and benchmarks mainly focus on relatively large objects. 3D object
detectors still struggle with small objects due to weak geometric information.
Through in-depth study, we find that increasing the spatial resolution of the
feature maps significantly boosts the performance of 3D small object detection. And
more interestingly, though the computational overhead increases dramatically
with resolution, the growth mainly comes from the upsampling operation of the
decoder. Inspired by this, we present a high-resolution multi-level detector
with dynamic spatial pruning named DSPDet3D, which detects objects from large
to small by iterative upsampling and meanwhile prunes the spatial
representation of the scene at regions where there is no smaller object to be
detected in higher resolution. We organize two benchmarks on ScanNet and
TO-SCENE dataset to evaluate the ability of fine-grained 3D object detection,
where our DSPDet3D improves the detection performance of small objects to a new
level while achieving leading inference speed compared with existing 3D object
detection methods. Moreover, DSPDet3D trained with only ScanNet rooms can
generalize well to scenes of much larger scale. It takes less than 2s for DSPDet3D
to directly process a whole house or building consisting of dozens of rooms
while detecting almost all objects, ranging from bottles to beds, on a
single RTX 3090 GPU. Project page: https://xuxw98.github.io/DSPDet3D/.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 17:57:04 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 17:35:33 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Xu",
"Xiuwei",
""
],
[
"Sun",
"Zhihao",
""
],
[
"Wang",
"Ziwei",
""
],
[
"Liu",
"Hongmin",
""
],
[
"Zhou",
"Jie",
""
],
[
"Lu",
"Jiwen",
""
]
] |
new_dataset
| 0.999604 |
2305.10133
|
Zaiyun Lin
|
Lvwei Wang (1), Zaiyun Lin (1), Yanhao Zhu (1), Rong Bai (1), Wei Feng
(1), Huting Wang (1), Jielong Zhou (1), Wei Peng (2), Bo Huang (1), Wenbiao
Zhou (1) ((1) Beijing StoneWise Technology Co Ltd (2) Innovation Center for
Pathogen Research Guangzhou Laboratory)
|
Lingo3DMol: Generation of a Pocket-based 3D Molecule using a Language
Model
| null | null | null | null |
cs.LG q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Structure-based drug design powered by deep generative models has attracted
increasing research interest in recent years. Language models have demonstrated
a robust capacity for generating valid molecules in 2D structures, while
methods based on geometric deep learning can directly produce molecules with
accurate 3D coordinates. Inspired by both methods, this article proposes a
pocket-based 3D molecule generation method that leverages the language model
with the ability to generate 3D coordinates. High-quality protein-ligand
complex data are insufficient; hence, a perturbation-and-restoration
pre-training task is designed that can utilize vast amounts of small-molecule
data. A new molecular representation, a fragment-based SMILES with local and
global coordinates, is also presented, enabling the language model to learn
molecular topological structures and spatial position information effectively.
Ultimately, the CrossDocked and DUD-E datasets are employed for evaluation, and
additional metrics are introduced. This method achieves state-of-the-art
performance in nearly all metrics, notably in terms of binding patterns,
drug-like properties, rational conformations, and inference speed. Our model is
available as an online service to academic users via sw3dmg.stonewise.cn
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 11:31:06 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 05:32:25 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Wang",
"Lvwei",
""
],
[
"Lin",
"Zaiyun",
""
],
[
"Zhu",
"Yanhao",
""
],
[
"Bai",
"Rong",
""
],
[
"Feng",
"Wei",
""
],
[
"Wang",
"Huting",
""
],
[
"Zhou",
"Jielong",
""
],
[
"Peng",
"Wei",
""
],
[
"Huang",
"Bo",
""
],
[
"Zhou",
"Wenbiao",
""
]
] |
new_dataset
| 0.970454 |
2305.10838
|
Yunsheng Bai
|
Yunsheng Bai, Atefeh Sohrabizadeh, Zongyue Qin, Ziniu Hu, Yizhou Sun,
Jason Cong
|
ProgSG: Cross-Modality Representation Learning for Programs in
Electronic Design Automation
|
Requires further polishing
| null | null | null |
cs.LG cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent years have witnessed the growing popularity of domain-specific
accelerators (DSAs), such as Google's TPUs, for accelerating various
applications such as deep learning, search, autonomous driving, etc. To
facilitate DSA designs, high-level synthesis (HLS) is used, which allows a
developer to compile a high-level description in the form of software code in C
and C++ into a design in low-level hardware description languages (such as VHDL
or Verilog) and eventually synthesized into a DSA on an ASIC
(application-specific integrated circuit) or FPGA (field-programmable gate
arrays). However, existing HLS tools still require microarchitecture decisions,
expressed in terms of pragmas (such as directives for parallelization and
pipelining). To enable more people to design DSAs, it is desirable to automate
such decisions with the help of deep learning for predicting the quality of HLS
designs. This requires a deeper understanding of the program, which is a
combination of the original code and pragmas. Naturally, these programs can be
considered sequence data, for which large language models (LLMs) can help. In
addition, these programs can be compiled and converted into a control data flow
graph (CDFG), and the compiler also provides fine-grained alignment between the
code tokens and the CDFG nodes. However, existing works either fail to leverage
both modalities or combine the two in shallow or coarse ways. We propose
ProgSG, which allows the source-code sequence modality and the graph modality
to interact with each other in a deep and fine-grained way. To alleviate the
scarcity of labeled designs, a pre-training method is proposed based on a suite
of compiler data-flow analysis tasks. Experimental results on two benchmark
datasets show the superiority of ProgSG over baseline methods that either only
consider one modality or combine the two without utilizing the alignment
information.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 09:44:18 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 22:27:27 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Bai",
"Yunsheng",
""
],
[
"Sohrabizadeh",
"Atefeh",
""
],
[
"Qin",
"Zongyue",
""
],
[
"Hu",
"Ziniu",
""
],
[
"Sun",
"Yizhou",
""
],
[
"Cong",
"Jason",
""
]
] |
new_dataset
| 0.989665 |
2305.12711
|
Lingfeng He
|
De Cheng, Xiaojian Huang, Nannan Wang, Lingfeng He, Zhihui Li and
Xinbo Gao
|
Unsupervised Visible-Infrared Person ReID by Collaborative Learning with
Neighbor-Guided Label Refinement
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised learning visible-infrared person re-identification (USL-VI-ReID)
aims at learning modality-invariant features from unlabeled cross-modality
datasets, which is crucial for practical applications in video surveillance
systems. The key to essentially addressing the USL-VI-ReID task is to solve the
cross-modality data association problem for further heterogeneous joint
learning. To address this issue, we propose a Dual Optimal Transport Label
Assignment (DOTLA) framework to simultaneously assign the generated labels from
one modality to its counterpart modality. The proposed DOTLA mechanism
formulates a mutual reinforcement and efficient solution to cross-modality data
association, which could effectively reduce the side-effects of some
insufficient and noisy label associations. Besides, we further propose a
cross-modality neighbor consistency guided label refinement and regularization
module, to eliminate the negative effects brought by the inaccurate supervised
signals, under the assumption that the prediction or label distribution of each
example should be similar to its nearest neighbors. Extensive experimental
results on the public SYSU-MM01 and RegDB datasets demonstrate the
effectiveness of the proposed method, surpassing the existing state-of-the-art
approach by a large margin of 7.76% mAP on average, which even surpasses some
supervised VI-ReID methods.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 04:40:30 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Jun 2023 03:30:46 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Cheng",
"De",
""
],
[
"Huang",
"Xiaojian",
""
],
[
"Wang",
"Nannan",
""
],
[
"He",
"Lingfeng",
""
],
[
"Li",
"Zhihui",
""
],
[
"Gao",
"Xinbo",
""
]
] |
new_dataset
| 0.999176 |
2305.13823
|
Zhanwen Zhou
|
Zhanwen Zhou, Hankz Hankui Zhuo, Xiaowu Zhang, Qiyuan Deng
|
XRoute Environment: A Novel Reinforcement Learning Environment for
Routing
|
arXiv admin note: text overlap with arXiv:1907.11180 by other authors
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Routing is a crucial and time-consuming stage in modern design automation
flow for advanced technology nodes. Great progress in the field of
reinforcement learning makes it possible to use those approaches to improve the
routing quality and efficiency. However, the scale of the routing problems
solved by reinforcement learning-based methods in recent studies is too small
for these methods to be used in commercial EDA tools. We introduce the XRoute
Environment, a new reinforcement learning environment where agents are trained
to select and route nets in an advanced, end-to-end routing framework. Novel
algorithms and ideas can be quickly tested in it in a safe and reproducible
manner. The resulting environment is challenging, easy to use and customize,
supports adding additional scenarios, and is available under a permissive open-source
license. In addition, it provides support for distributed deployment and
multi-instance experiments. We propose two tasks for learning and build a
full-chip test bed with routing benchmarks of various region sizes. We also
pre-define several static routing regions with different pin density and number
of nets for easier learning and testing. For the net ordering task, we report
baseline results for two widely used reinforcement learning algorithms (PPO and
DQN) and one searching-based algorithm (TritonRoute). The XRoute Environment
will be available at https://github.com/xplanlab/xroute_env.
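A toy stand-in for the net-ordering interaction loop; the real environment's
API is defined in the repository above and may differ, and the reward here is
a placeholder:

import random

class ToyRoutingEnv:
    """An 'episode' is choosing an order over a fixed set of nets."""
    def __init__(self, nets):
        self.pending = list(nets)
    def reset(self):
        random.shuffle(self.pending)
        return list(self.pending)
    def step(self, net):
        self.pending.remove(net)      # route the selected net
        reward = -1.0                 # stand-in for negative wirelength cost
        return list(self.pending), reward, not self.pending, {}

env = ToyRoutingEnv(["n1", "n2", "n3"])
obs, done = env.reset(), False
while not done:
    obs, reward, done, info = env.step(random.choice(obs))
print("all nets routed")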
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 08:46:25 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 07:53:23 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Zhou",
"Zhanwen",
""
],
[
"Zhuo",
"Hankz Hankui",
""
],
[
"Zhang",
"Xiaowu",
""
],
[
"Deng",
"Qiyuan",
""
]
] |
new_dataset
| 0.999687 |
2305.17626
|
Xiaoyang Hu
|
Xiaoyang Hu, Shane Storks, Richard L. Lewis, Joyce Chai
|
In-Context Analogical Reasoning with Pre-Trained Language Models
| null | null | null | null |
cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Analogical reasoning is a fundamental capacity of human cognition that allows
us to reason abstractly about novel situations by relating them to past
experiences. While it is thought to be essential for robust reasoning in AI
systems, conventional approaches require significant training and/or
hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by
cognitive science research that has found connections between human language
and analogy-making, we explore the use of intuitive language-based abstractions
to support analogy in AI systems. Specifically, we apply large pre-trained
language models (PLMs) to visual Raven's Progressive Matrices (RPM), a common
relational reasoning test. By simply encoding the perceptual features of the
problem into language form, we find that PLMs exhibit a striking capacity for
zero-shot relational reasoning, exceeding human performance and nearing
supervised vision-based methods. We explore different encodings that vary the
level of abstraction over task features, finding that higher-level abstractions
further strengthen PLMs' analogical reasoning. Our detailed analysis reveals
insights on the role of model complexity, in-context learning, and prior
knowledge in solving RPM tasks.
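A sketch of encoding one Raven's Progressive Matrices row as text for a
language model; the attribute names and prompt format are illustrative, not
the paper's exact encoding:

def encode_row(row):
    # One cell per entry: "<shape> <size> <color>".
    return "; ".join(f"{c['shape']} {c['size']} {c['color']}" for c in row)

matrix = [
    [{"shape": "circle", "size": "small", "color": "black"},
     {"shape": "circle", "size": "medium", "color": "black"},
     {"shape": "circle", "size": "large", "color": "black"}],
]
prompt = ("Complete the pattern.\n"
          + "\n".join(encode_row(r) for r in matrix)
          + "\nsquare small black; square medium black; square")
print(prompt)  # a PLM would be asked to continue with "large"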
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 04:22:26 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 06:57:29 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Hu",
"Xiaoyang",
""
],
[
"Storks",
"Shane",
""
],
[
"Lewis",
"Richard L.",
""
],
[
"Chai",
"Joyce",
""
]
] |
new_dataset
| 0.999248 |
2305.17914
|
Jingyi Shi
|
Jingyi Shi, Yang Xiao, Yuekang Li, Yeting Li, Dongsong Yu, Chendong
Yu, Hui Su, Yufeng Chen, Wei Huo
|
ACETest: Automated Constraint Extraction for Testing Deep Learning
Operators
|
Accepted by ISSTA 2023
| null |
10.1145/3597926.3598088
| null |
cs.SE cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning (DL) applications are prevalent nowadays as they can help with
multiple tasks. DL libraries are essential for building DL applications.
Furthermore, DL operators are important building blocks of DL
libraries, computing multi-dimensional data (tensors). Therefore, bugs
in DL operators can have great impact. Testing is a practical approach for
detecting bugs in DL operators. In order to test DL operators effectively, it
is essential that the test cases pass the input validity check and are able to
reach the core function logic of the operators. Hence, extracting the input
validation constraints is required for generating high-quality test cases.
Existing techniques rely on either human effort or documentation of DL library
APIs to extract the constraints. They cannot extract complex constraints and
the extracted constraints may differ from the actual code implementation.
To address the challenge, we propose ACETest, a technique to automatically
extract input validation constraints from the code to build valid yet diverse
test cases which can effectively unveil bugs in the core function logic of DL
operators. For this purpose, ACETest can automatically identify the input
validation code in DL operators, extract the related constraints and generate
test cases according to the constraints. The experimental results on popular DL
libraries, TensorFlow and PyTorch, demonstrate that ACETest can extract
constraints with higher quality than state-of-the-art (SOTA) techniques.
Moreover, ACETest is capable of extracting 96.4% more constraints and detecting
1.95 to 55 times more bugs than SOTA techniques. In total, we have used ACETest
to detect 108 previously unknown bugs on TensorFlow and PyTorch, with 87 of
them confirmed by the developers. Lastly, five of the bugs were assigned with
CVE IDs due to their security impacts.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 06:49:40 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Jun 2023 04:01:26 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Shi",
"Jingyi",
""
],
[
"Xiao",
"Yang",
""
],
[
"Li",
"Yuekang",
""
],
[
"Li",
"Yeting",
""
],
[
"Yu",
"Dongsong",
""
],
[
"Yu",
"Chendong",
""
],
[
"Su",
"Hui",
""
],
[
"Chen",
"Yufeng",
""
],
[
"Huo",
"Wei",
""
]
] |
new_dataset
| 0.994457 |
2305.19533
|
Hanqing Zhu
|
Hanqing Zhu, Jiaqi Gu, Hanrui Wang, Zixuan Jiang, Zhekai Zhang,
Rongxing Tang, Chenghao Feng, Song Han, Ray T. Chen, David Z. Pan
|
DOTA: A Dynamically-Operated Photonic Tensor Core for Energy-Efficient
Transformer Accelerator
|
The short version is accepted by Next-Gen AI System Workshop at MLSys
2023
| null | null | null |
cs.ET cs.AR physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The wide adoption and significant computing resource consumption of
attention-based Transformers, e.g., Vision Transformer and large language
models, have driven the demand for efficient hardware accelerators. While
electronic accelerators have been commonly used, there is a growing interest in
exploring photonics as an alternative technology due to its high energy
efficiency and ultra-fast processing speed. Optical neural networks (ONNs) have
demonstrated promising results for convolutional neural network (CNN) workloads
that only require weight-static linear operations. However, they fail to
efficiently support Transformer architectures with attention operations due to
the lack of ability to process dynamic full-range tensor multiplication. In
this work, we propose a customized high-performance and energy-efficient
photonic Transformer accelerator, DOTA. To overcome the fundamental limitation
of existing ONNs, we introduce a novel photonic tensor core, consisting of a
crossbar array of interference-based optical vector dot-product engines, that
supports highly-parallel, dynamic, and full-range matrix-matrix multiplication.
Our comprehensive evaluation demonstrates that DOTA achieves a >4x energy and a
>10x latency reduction compared to prior photonic accelerators, and delivers
over 20x energy reduction and 2 to 3 orders of magnitude lower latency compared
to the electronic Transformer accelerator. Our work highlights the immense
potential of photonic computing for efficient hardware accelerators,
particularly for advanced machine learning workloads.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 03:37:11 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Jun 2023 20:08:50 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Zhu",
"Hanqing",
""
],
[
"Gu",
"Jiaqi",
""
],
[
"Wang",
"Hanrui",
""
],
[
"Jiang",
"Zixuan",
""
],
[
"Zhang",
"Zhekai",
""
],
[
"Tang",
"Rongxing",
""
],
[
"Feng",
"Chenghao",
""
],
[
"Han",
"Song",
""
],
[
"Chen",
"Ray T.",
""
],
[
"Pan",
"David Z.",
""
]
] |
new_dataset
| 0.999289 |
2306.00114
|
Amanda Ashley Boatswain Jacques
|
Amanda A. Boatswain Jacques and Abdoulaye Banir\'e Diallo and Etienne
Lord
|
The Canadian Cropland Dataset: A New Land Cover Dataset for
Multitemporal Deep Learning Classification in Agriculture
|
24 pages, 5 figures, dataset descriptor
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Monitoring land cover using remote sensing is vital for studying
environmental changes and ensuring global food security through crop yield
forecasting. Specifically, multitemporal remote sensing imagery provides
relevant information about the dynamics of a scene, which has proven to lead to
better land cover classification results. Nevertheless, few studies have
benefited from high spatial and temporal resolution data due to the difficulty
of accessing reliable, fine-grained and high-quality annotated samples to
support their hypotheses. Therefore, we introduce a temporal patch-based
dataset of Canadian croplands, enriched with labels retrieved from the Canadian
Annual Crop Inventory. The dataset contains 78,536 manually verified
high-resolution (10 m/pixel, 640 x 640 m) geo-referenced images from 10 crop
classes collected over four crop production years (2017-2020) and five months
(June-October). Each instance contains 12 spectral bands, an RGB image, and
additional vegetation index bands. Individually, each category contains at
least 4,800 images. Moreover, as a benchmark, we provide models and source code
that allow a user to predict the crop class using a single image (ResNet,
DenseNet, EfficientNet) or a sequence of images (LRCN, 3D-CNN) from the same
location. In perspective, we expect this evolving dataset to propel the
creation of robust agro-environmental models that can accelerate the
comprehension of complex agricultural regions by providing accurate and
continuous monitoring of land cover.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 18:40:15 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Jun 2023 23:54:02 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Jacques",
"Amanda A. Boatswain",
""
],
[
"Diallo",
"Abdoulaye Baniré",
""
],
[
"Lord",
"Etienne",
""
]
] |
new_dataset
| 0.999747 |
2306.00937
|
Shalev Lifshitz
|
Shalev Lifshitz, Keiran Paster, Harris Chan, Jimmy Ba, Sheila
McIlraith
|
STEVE-1: A Generative Model for Text-to-Behavior in Minecraft
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constructing AI models that respond to text instructions is challenging,
especially for sequential decision-making tasks. This work introduces an
instruction-tuned Video Pretraining (VPT) model for Minecraft called STEVE-1,
demonstrating that the unCLIP approach, utilized in DALL-E 2, is also effective
for creating instruction-following sequential decision-making agents. STEVE-1
is trained in two steps: adapting the pretrained VPT model to follow commands
in MineCLIP's latent space, then training a prior to predict latent codes from
text. This allows us to finetune VPT through self-supervised behavioral cloning
and hindsight relabeling, bypassing the need for costly human text annotations.
By leveraging pretrained models like VPT and MineCLIP and employing best
practices from text-conditioned image generation, STEVE-1 costs just $60 to
train and can follow a wide range of short-horizon open-ended text and visual
instructions in Minecraft. STEVE-1 sets a new bar for open-ended instruction
following in Minecraft with low-level controls (mouse and keyboard) and raw
pixel inputs, far outperforming previous baselines. We provide experimental
evidence highlighting key factors for downstream performance, including
pretraining, classifier-free guidance, and data scaling. All resources,
including our model weights, training scripts, and evaluation tools are made
available for further research.
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:39:41 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 17:58:30 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Lifshitz",
"Shalev",
""
],
[
"Paster",
"Keiran",
""
],
[
"Chan",
"Harris",
""
],
[
"Ba",
"Jimmy",
""
],
[
"McIlraith",
"Sheila",
""
]
] |
new_dataset
| 0.975544 |
2306.01743
|
Quazi Adibur Rahman Adib
|
Nazmuddoha Ansary, Quazi Adibur Rahman Adib, Tahsin Reasat, Sazia
Mehnaz, Asif Shahriyar Sushmit, Ahmed Imtiaz Humayun, Mohammad Mamun Or
Rashid, Farig Sadeque
|
Abugida Normalizer and Parser for Unicode texts
|
3 pages, 1 figure
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper proposes two libraries to address common and uncommon issues with
Unicode-based writing schemes for Indic languages. The first is a normalizer
that corrects inconsistencies caused by the encoding scheme
https://pypi.org/project/bnunicodenormalizer/ . The second is a grapheme parser
for Abugida text https://pypi.org/project/indicparser/ . Both tools are more
efficient and effective than previously used tools. We report a 400% increase
in speed and significantly better performance on different language-model-based
downstream tasks.
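A usage sketch for the normalizer, following its PyPI page at the time of
writing; treat the exact names as assumptions, and the grapheme parser linked
above is used analogously:

from bnunicodenormalizer import Normalizer

bnorm = Normalizer()
result = bnorm("আমি")             # returns a dict describing the fix-ups
print(result["normalized"])       # the normalized form of the word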
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 14:34:08 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ansary",
"Nazmuddoha",
""
],
[
"Adib",
"Quazi Adibur Rahman",
""
],
[
"Reasat",
"Tahsin",
""
],
[
"Mehnaz",
"Sazia",
""
],
[
"Sushmit",
"Asif Shahriyar",
""
],
[
"Humayun",
"Ahmed Imtiaz",
""
],
[
"Rashid",
"Mohammad Mamun Or",
""
],
[
"Sadeque",
"Farig",
""
]
] |
new_dataset
| 0.996358 |
2306.01748
|
Md Ragib Shaharear
|
Md Ragib Shaharear
|
Bio-inspired Dual-auger Self-burrowing Robots in Granular Media
|
Master's thesis, 62 pages, 40 figures, ProQuest
|
Order No. 30485358 Arizona State University, 2023 United States --
ArizonaProQuest. 17 May 2023
| null | null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
It has been found that certain biological organisms, such as Erodium seeds
and Scincus scincus, are capable of effectively and efficiently burying
themselves in soil. Biological Organisms employ various locomotion modes,
including coiling and uncoiling motions, asymmetric body twisting, and
undulating movements that generate motion waves. The coiling-uncoiling motion
drives a seed awn to bury itself like a corkscrew, while sandfish skinks use
undulatory swimming, which can be thought of as a 2D version of helical motion.
Studying burrowing behavior aims to understand how animals navigate
underground, whether in their natural burrows or underground habitats, and to
implement this knowledge in solving geotechnical penetration problems.
Horizontal underground burrowing is challenging because the burrower must
overcome the resistive interaction forces of granular media to move forward. Inspired by
the burrowing behavior of seed-awn and sandfish skink, a horizontal
self-burrowing robot is developed. The robot is driven by two augers and
stabilized by a fin structure. The robot's burrowing behavior is studied in a
laboratory setting. It is found that rotation and propulsive motion along the
axis of the auger's helical shape significantly reduce granular media's
resistance against horizontal penetration by breaking kinematic symmetry or
granular media boundary. Additional thrusting and dragging tests were performed
to examine the propulsive and resistive forces and unify the observed burrowing
behaviors. The tests revealed that the rotation of an auger both reduces
the resistive force and generates a propulsive force, which is influenced by
the auger geometry, rotational speed, and direction. As a result, the burrowing
behavior of the robot can be predicted using the geometry-rotation-force
relations.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 06:09:28 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Shaharear",
"Md Ragib",
""
]
] |
new_dataset
| 0.980202 |
2306.01754
|
Roshanak Zilouchian Moghaddam
|
Aaron Chan, Anant Kharkar, Roshanak Zilouchian Moghaddam, Yevhen
Mohylevskyy, Alec Helyar, Eslam Kamal, Mohamed Elkamhawy, Neel Sundaresan
|
Transformer-based Vulnerability Detection in Code at EditTime:
Zero-shot, Few-shot, or Fine-tuning?
| null | null | null | null |
cs.CR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software vulnerabilities impose significant costs on enterprises. Despite
extensive efforts in research and development of software vulnerability
detection methods, uncaught vulnerabilities continue to put software owners and
users at risk. Many current vulnerability detection methods require that code
snippets can compile and build before attempting detection. This,
unfortunately, introduces a long latency between the time a vulnerability is
injected and the time it is removed, which can substantially increase the cost
of fixing a vulnerability. We recognize that current advances in machine
learning can be used to detect vulnerable code patterns on syntactically
incomplete code snippets as the developer is writing the code at EditTime. In
this paper we present a practical system that leverages deep learning on a
large-scale data set of vulnerable code patterns to learn complex
manifestations of more than 250 vulnerability types and detect vulnerable code
patterns at EditTime. We discuss zero-shot, few-shot, and fine-tuning
approaches on state-of-the-art pre-trained Large Language Models (LLMs). We
show that, in comparison with state-of-the-art vulnerability detection models,
our approach improves the state of the art by 10%. We also evaluate our
approach to detect vulnerability in auto-generated code by code LLMs.
Evaluation on a benchmark of high-risk code scenarios shows a vulnerability
reduction of up to 90%.
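A sketch of a few-shot prompt for EditTime vulnerability detection; the
examples and template are illustrative, and the actual LLM call is omitted:

# Few-shot demonstrations pairing code with a verdict.
EXAMPLES = [
    ("strcpy(buf, user_input);", "vulnerable: CWE-120 buffer copy"),
    ("strncpy(buf, user_input, sizeof(buf) - 1);", "not vulnerable"),
]

def build_prompt(snippet):
    shots = "\n".join(f"Code: {c}\nVerdict: {v}" for c, v in EXAMPLES)
    return f"{shots}\nCode: {snippet}\nVerdict:"

# Note: the query can be syntactically incomplete, as at EditTime.
print(build_prompt("sprintf(query, \"SELECT * FROM t WHERE id=%s\", uid"))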
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 01:21:55 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Chan",
"Aaron",
""
],
[
"Kharkar",
"Anant",
""
],
[
"Moghaddam",
"Roshanak Zilouchian",
""
],
[
"Mohylevskyy",
"Yevhen",
""
],
[
"Helyar",
"Alec",
""
],
[
"Kamal",
"Eslam",
""
],
[
"Elkamhawy",
"Mohamed",
""
],
[
"Sundaresan",
"Neel",
""
]
] |
new_dataset
| 0.985463 |
2306.01857
|
Aida Ramezani
|
Aida Ramezani, Yang Xu
|
Knowledge of cultural moral norms in large language models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Moral norms vary across cultures. A recent line of work suggests that English
large language models contain human-like moral biases, but these studies
typically do not examine moral variation in a diverse cultural setting. We
investigate the extent to which monolingual English language models contain
knowledge about moral norms in different countries. We consider two levels of
analysis: 1) whether language models capture fine-grained moral variation
across countries over a variety of topics such as ``homosexuality'' and
``divorce''; 2) whether language models capture cultural diversity and shared
tendencies in which topics people around the globe tend to diverge or agree on
in their moral judgment. We perform our analyses with two public datasets from
the World Values Survey (across 55 countries) and PEW global surveys (across 40
countries) on morality. We find that pre-trained English language models
predict empirical moral norms across countries worse than the English moral
norms reported previously. However, fine-tuning language models on the survey
data improves inference across countries at the expense of a less accurate
estimate of the English moral norms. We discuss the relevance and challenges of
incorporating cultural knowledge into the automated inference of moral norms.
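A sketch of the evaluation idea: correlate model-derived moral scores with
country-level survey means; model_score below is a hypothetical stand-in for
probing a language model, and the survey numbers are invented for shape only:

import numpy as np

def model_score(country, topic):
    # Placeholder in [-1, 1); not deterministic across runs (string hashing).
    return hash((country, topic)) % 100 / 50 - 1

countries = ["CA", "JP", "BR"]
survey = {("CA", "divorce"): 0.6, ("JP", "divorce"): 0.1, ("BR", "divorce"): 0.3}
pred = [model_score(c, "divorce") for c in countries]
gold = [survey[(c, "divorce")] for c in countries]
print(np.corrcoef(pred, gold)[0, 1])   # alignment with empirical norms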
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 18:23:35 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ramezani",
"Aida",
""
],
[
"Xu",
"Yang",
""
]
] |
new_dataset
| 0.99645 |
2306.01863
|
Yixin Xu
|
Yixin Xu, Yi Xiao, Zijian Zhao, Franz M\"uller, Alptekin Vardar, Xiao
Gong, Sumitha George, Thomas K\"ampfe, Vijaykrishnan Narayanan, Kai Ni
|
Embedding Security into Ferroelectric FET Array via In-Situ Memory
Operation
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-volatile memories (NVMs) have the potential to reshape next-generation
memory systems because of their promising properties of near-zero leakage power
consumption, high density and non-volatility. However, NVMs also face critical
security threats that exploit the non-volatile property. Compared to volatile
memory, the capability of retaining data even after power down makes NVM more
vulnerable. Existing solutions to address the security issues of NVMs are
mainly based on Advanced Encryption Standard (AES), which incurs significant
performance and power overhead. In this paper, we propose a lightweight memory
encryption/decryption scheme by exploiting in-situ memory operations with
negligible overhead. To validate the feasibility of the encryption/decryption
scheme, device-level and array-level experiments are performed using
ferroelectric field effect transistor (FeFET) as an example NVM without loss of
generality. Besides, a comprehensive evaluation is performed on a 128x128 FeFET
AND-type memory array in terms of area, latency, power and throughput. Compared
with the AES-based scheme, our scheme shows around 22.6x/14.1x increase in
encryption/decryption throughput with negligible power penalty. Furthermore, we
evaluate the performance of our scheme over the AES-based scheme when deploying
different neural network workloads. Our scheme yields significant latency
reduction by 90% on average for encryption and decryption processes.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 18:35:29 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Xu",
"Yixin",
""
],
[
"Xiao",
"Yi",
""
],
[
"Zhao",
"Zijian",
""
],
[
"Müller",
"Franz",
""
],
[
"Vardar",
"Alptekin",
""
],
[
"Gong",
"Xiao",
""
],
[
"George",
"Sumitha",
""
],
[
"Kämpfe",
"Thomas",
""
],
[
"Narayanan",
"Vijaykrishnan",
""
],
[
"Ni",
"Kai",
""
]
] |
new_dataset
| 0.953891 |
2306.01885
|
Jacob Morra
|
Jacob Morra, Andrew Flynn, Andreas Amann, Mark Daley
|
Multifunctionality in a Connectome-Based Reservoir Computer
|
6 pages, 6 figures
| null | null | null |
cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Multifunctionality describes the capacity for a neural network to perform
multiple mutually exclusive tasks without altering its network connections; and
is an emerging area of interest in the reservoir computing machine learning
paradigm. Multifunctionality has been observed in the brains of humans and
other animals: particularly in the lateral horn of the fruit fly. In this
work, we transplant the connectome of the fruit fly lateral horn to a reservoir
computer (RC), and investigate the extent to which this 'fruit fly RC' (FFRC)
exhibits multifunctionality using the 'seeing double' problem as a benchmark
test. We furthermore explore the dynamics of how this FFRC achieves
multifunctionality while varying the network's spectral radius. Compared to the
widely-used Erd\"os-Renyi Reservoir Computer (ERRC), we report that the FFRC
exhibits a greater capacity for multifunctionality; is multifunctional across a
broader hyperparameter range; and solves the seeing double problem far beyond
the previously observed spectral radius limit, wherein the ERRC's dynamics
become chaotic.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 19:37:38 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Morra",
"Jacob",
""
],
[
"Flynn",
"Andrew",
""
],
[
"Amann",
"Andreas",
""
],
[
"Daley",
"Mark",
""
]
] |
new_dataset
| 0.997809 |
2306.01899
|
Levent Guvenc
|
Haoan Wang, Levent Guvenc
|
Discrete-time Robust PD Controlled System with DOB/CDOB Compensation for
High Speed Autonomous Vehicle Path Following
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Autonomous vehicle path following performance is a significant consideration.
This paper presents the discrete-time design of a robust PD controlled system
with disturbance observer (DOB) and communication disturbance observer (CDOB)
compensation to enhance autonomous vehicle path following performance.
Although always implemented on digital devices, DOB and CDOB structures are
usually designed in continuous time, both in the literature and in our previous
work. However, a high sampling rate is then required for the continuous-time
design block diagram to be automatically converted to the corresponding
discrete-time controller using rapid controller prototyping systems. In this
paper, a direct discrete-time design is carried out. The digital PD feedback
controller is designed based on the nominal plant using the proposed parameter
space approach. The zero order hold
method is applied to discretize the nominal plant, DOB and CDOB structure in
continuous domain. Discrete time DOB is embedded into the steering to path
following error loop for model regulation in the presence of uncertainty in
vehicle parameters such as vehicle mass, vehicle speed and road-tire friction
coefficient and rejecting external disturbance like crosswind force. On the
other hand, time delay from CAN bus based sensor and actuator command
interfaces results in degradation of system performance since large negative
phase angles are added to the plant frequency response. The discrete-time CDOB
compensated control system can be used for time delay compensation where
accurate knowledge of the delay time value is not necessary. A validated model of
our lab Ford Fusion hybrid automated driving research vehicle is used for the
simulation analysis while the vehicle is driving at high speed. Simulation
results successfully demonstrate the improvement of autonomous vehicle path
following performance with the proposed discrete time DOB and CDOB structure.
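To make the DOB mechanism concrete, below is a minimal discrete-time sketch
that wraps a digital PD law and a first-order low-pass Q filter around a toy
first-order nominal plant; the plant, gains, and filter pole are illustrative
assumptions and are far simpler than the validated vehicle model used in the
paper.

```python
import numpy as np

# Toy first-order nominal plant: y[k+1] = a*y[k] + b*(u[k] + d[k]).
a, b = 0.95, 0.1      # illustrative nominal parameters
alpha = 0.8           # Q-filter pole (first-order low-pass for the DOB)
kp, kd = 2.0, 0.5     # digital PD gains

y = np.zeros(201)
d_hat, e_prev = 0.0, 0.0
for k in range(200):
    e = 1.0 - y[k]                                # unit step reference
    u = kp * e + kd * (e - e_prev) - d_hat        # PD law minus estimated disturbance
    d = 0.3 if k > 100 else 0.0                   # step disturbance (e.g. crosswind)
    y[k + 1] = a * y[k] + b * (u + d)             # plant update
    d_raw = (y[k + 1] - a * y[k]) / b - u         # invert the nominal model
    d_hat = alpha * d_hat + (1 - alpha) * d_raw   # low-pass filtering = Q filter
    e_prev = e

print(round(y[-1], 3), round(d_hat, 3))           # d_hat should settle near 0.3
```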
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 20:09:55 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Wang",
"Haoan",
""
],
[
"Guvenc",
"Levent",
""
]
] |
new_dataset
| 0.999217 |
2306.01903
|
Emilio Mart\'inez-Pa\~neda
|
E. Korec, M. Jirasek, H.S. Wong, E. Mart\'inez-Pa\~neda
|
A phase-field chemo-mechanical model for corrosion-induced cracking in
reinforced concrete
| null | null | null | null |
cs.CE cond-mat.other physics.app-ph physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new mechanistic framework for corrosion-induced cracking in
reinforced concrete that resolves the underlying chemo-mechanical processes.
The framework combines, for the first time, (i) a model for reactive transport
and precipitation of dissolved Fe2+ and Fe3+ ions in the concrete pore space,
(ii) a precipitation eigenstrain model for the pressure caused by the
accumulation of precipitates (rusts) under pore confinement conditions, (iii) a
phase-field model calibrated for the quasi-brittle fracture behaviour of
concrete, and (iv) a damage-dependent diffusivity tensor. Finite element model
predictions show good agreement with experimental data from impressed current
tests under natural-like corrosion current densities.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 20:20:14 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Korec",
"E.",
""
],
[
"Jirasek",
"M.",
""
],
[
"Wong",
"H. S.",
""
],
[
"Martínez-Pañeda",
"E.",
""
]
] |
new_dataset
| 0.997853 |
2306.01944
|
Ayan Banerjee
|
Sameena Hossain, Payal Kamboj, Aranyak Maity, Tamiko Azuma, Ayan
Banerjee, Sandeep K. S. Gupta
|
EdGCon: Auto-assigner of Iconicity Ratings Grounded by Lexical
Properties to Aid in Generation of Technical Gestures
|
Accepted for publication in ACM SAC 2023
| null | null |
ILTR-2023-1
|
cs.HC cs.AI cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Gestures that share similarities in their forms and are related in their
meanings should be easier for learners to recognize and incorporate into their
existing lexicon. In that regard, to be more readily accepted as standard by
the Deaf and Hard of Hearing community, technical gestures in American Sign
Language (ASL) should optimally be similar in form to their lexical neighbors.
We utilize a lexical database of ASL, ASL-LEX, to identify lexical relations
within a set of technical gestures. We use automated identification for 3
unique sub-lexical properties in ASL: location, handshape, and movement.
EdGCon assigned an iconicity rating based on the lexical property similarities
of the new gesture with an existing set of technical gestures and the
relatedness of the meaning of the new technical word to that of the existing
set of technical words. We collected 30 ad hoc crowdsourced technical gestures
from different internet websites and tested them against 31 gestures from the
DeafTEC technical corpus. We found that EdGCon was able to correctly
auto-assign the iconicity ratings 80.76% of the time.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 23:04:01 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Hossain",
"Sameena",
""
],
[
"Kamboj",
"Payal",
""
],
[
"Maity",
"Aranyak",
""
],
[
"Azuma",
"Tamiko",
""
],
[
"Banerjee",
"Ayan",
""
],
[
"Gupta",
"Sandeep K. S.",
""
]
] |
new_dataset
| 0.993621 |
2306.02022
|
Wen-Wai Yim
|
Wen-wai Yim, Yujuan Fu, Asma Ben Abacha, Neal Snider, Thomas Lin, and
Meliha Yetisgen
|
ACI-BENCH: a Novel Ambient Clinical Intelligence Dataset for
Benchmarking Automatic Visit Note Generation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent immense breakthroughs in generative models such as GPT-4 have
precipitated re-imagined, ubiquitous usage of these models in all applications.
One area that can benefit from improvements in artificial intelligence (AI) is
healthcare. The note generation task from doctor-patient encounters, and its
associated electronic medical record documentation, is one of the most arduous
and time-consuming tasks for physicians. It is also a natural prime potential
beneficiary of advances in generative models. However, with such advances,
benchmarking is more critical than ever. Whether studying model weaknesses or
developing new evaluation metrics, shared open datasets are an imperative part
of understanding the current state of the art. Unfortunately, as clinic
encounter conversations are not routinely recorded and are difficult to
ethically share due to patient confidentiality, there are no sufficiently large
clinic dialogue-note datasets to benchmark this task. Here we present the
Ambient Clinical Intelligence Benchmark (ACI-BENCH) corpus, the largest dataset
to date tackling the problem of AI-assisted note generation from visit
dialogue. We also present the benchmark performances of several common
state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Sat, 3 Jun 2023 06:42:17 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Yim",
"Wen-wai",
""
],
[
"Fu",
"Yujuan",
""
],
[
"Abacha",
"Asma Ben",
""
],
[
"Snider",
"Neal",
""
],
[
"Lin",
"Thomas",
""
],
[
"Yetisgen",
"Meliha",
""
]
] |
new_dataset
| 0.999689 |
2306.02032
|
Kuntal Deka
|
Vinjamoori Vikas, Kuntal Deka, Sanjeev Sharma, and A. Rajesh
|
ADMM-based Detector for Large-scale MIMO Code-domain NOMA Systems
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale multi-input multi-output (MIMO) code domain non-orthogonal
multiple access (CD-NOMA) techniques are one of the potential candidates to
address the next-generation wireless needs such as massive connectivity, and
high reliability. This work focuses on two primary CD-NOMA techniques:
sparse-code multiple access (SCMA) and dense-code multiple access (DCMA). One
of the primary challenges in implementing MIMO-CD-NOMA systems is designing the
optimal detector with affordable computation cost and complexity. This paper
proposes an iterative linear detector based on the alternating direction method
of multipliers (ADMM). First, the maximum likelihood (ML) detection problem is
converted into a sharing optimization problem. The set constraint in the ML
detection problem is relaxed into a box constraint in the sharing problem. An
auxiliary variable is introduced via the penalty term, which compensates for
the loss incurred by the constraint relaxation. The system models, i.e., the
relation between the input signal and the received signal, are reformulated so
that the proposed sharing optimization problem can be readily applied.
The ADMM is a robust algorithm to solve the sharing problem in a distributed
manner. The proposed detector leverages the distributive nature to reduce
per-iteration cost and time. An ADMM-based linear detector is designed for
three MIMO-CD-NOMA systems: single input multi output CD-NOMA (SIMO-CD-NOMA),
spatial multiplexing CD-NOMA (SMX-CD-NOMA), and spatial modulated CD-NOMA
(SM-CD-NOMA). The impact of various system parameters and ADMM parameters on
computational complexity and symbol error rate (SER) has been thoroughly
examined through extensive Monte Carlo simulations.
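As a concrete illustration of the box-relaxed formulation, the sketch below
runs ADMM for min ||y - Hx||^2 subject to a box constraint on a small
real-valued linear model; the dimensions, penalty rho, and BPSK-style box are
illustrative assumptions, not the paper's complex-valued CD-NOMA setup.

```python
import numpy as np

def admm_box_detector(H, y, lo=-1.0, hi=1.0, rho=1.0, iters=50):
    """ADMM for min ||y - Hx||^2 s.t. lo <= x <= hi (box-relaxed ML detection)."""
    n = H.shape[1]
    x = np.zeros(n)          # primal variable
    z = np.zeros(n)          # auxiliary (box-constrained) copy of x
    u = np.zeros(n)          # scaled dual variable
    A = H.T @ H + rho * np.eye(n)   # factor reused in every x-update
    Hty = H.T @ y
    for _ in range(iters):
        x = np.linalg.solve(A, Hty + rho * (z - u))  # unconstrained LS step
        z = np.clip(x + u, lo, hi)                   # projection onto the box
        u = u + x - z                                # dual ascent
    return z

# Toy usage: 4x4 real-valued channel, BPSK-like symbols.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.05 * rng.standard_normal(4)
print(np.sign(admm_box_detector(H, y)))  # recovers x_true at this noise level
```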
|
[
{
"version": "v1",
"created": "Sat, 3 Jun 2023 07:22:35 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Vikas",
"Vinjamoori",
""
],
[
"Deka",
"Kuntal",
""
],
[
"Sharma",
"Sanjeev",
""
],
[
"Rajesh",
"A.",
""
]
] |
new_dataset
| 0.998549 |
2306.02142
|
Sagar Chakraborty
|
Sagar Chakraborty, Gaurav Harit and Saptarshi Ghosh
|
TransDocAnalyser: A Framework for Offline Semi-structured Handwritten
Document Analysis in the Legal Domain
|
This paper has been accepted in 17th International Conference on
Document Analysis and Recognition(ICDAR) as an Oral presentation
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art offline Optical Character Recognition (OCR) frameworks
perform poorly on semi-structured handwritten domain-specific documents due to
their inability to localize and label form fields with domain-specific
semantics. Existing techniques for semi-structured document analysis have
primarily used datasets comprising invoices, purchase orders, receipts, and
identity-card documents for benchmarking. In this work, we build the first
semi-structured document analysis dataset in the legal domain by collecting a
large number of First Information Report (FIR) documents from several police
stations in India. This dataset, which we call the FIR dataset, is more
challenging than most existing document analysis datasets, since it combines a
wide variety of handwritten text with printed text. We also propose an
end-to-end framework for offline processing of handwritten semi-structured
documents, and benchmark it on our novel FIR dataset. Our framework uses an
encoder-decoder architecture for localizing and labelling the form fields and
for recognizing the handwritten content. The encoder consists of Faster-RCNN
and Vision Transformers. Further, the Transformer-based decoder architecture is
trained with a domain-specific tokenizer. We also propose a post-correction
method to handle recognition errors pertaining to domain-specific terms.
Our proposed framework achieves state-of-the-art results on the FIR dataset,
outperforming several existing models.
|
[
{
"version": "v1",
"created": "Sat, 3 Jun 2023 15:56:30 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Chakraborty",
"Sagar",
""
],
[
"Harit",
"Gaurav",
""
],
[
"Ghosh",
"Saptarshi",
""
]
] |
new_dataset
| 0.998549 |
2306.02182
|
Vinay Nagalapura Ramesh
|
Vinay N Ramesh, Rohan Eswara
|
FlairNLP at SemEval-2023 Task 6b: Extraction of Legal Named Entities
from Legal Texts using Contextual String Embeddings
|
5 pages, 4 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Indian court legal texts and processes are essential to the integrity of
the judicial system and to maintaining the social and political order of
the nation. Due to the increase in the number of pending court cases, there is an
urgent need to develop tools to automate many of the legal processes with the
knowledge of artificial intelligence. In this paper, we employ knowledge
extraction techniques, especially the named entity extraction of legal entities
within court case judgements. We evaluate several state-of-the-art
architectures in the realm of sequence labeling using models trained on a
curated dataset of legal texts. We observe that a Bi-LSTM model trained on
Flair Embeddings achieves the best results, and we also publish the BIO
formatted dataset as part of this paper.
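A minimal training sketch in the spirit of the described system, using the
Flair library's standard BiLSTM-CRF sequence-tagging API; the corpus
directory, file names, and hyperparameters are hypothetical placeholders for
the curated legal dataset.

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import FlairEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Hypothetical BIO-formatted legal NER corpus: token in column 0, tag in column 1.
corpus = ColumnCorpus("data/legal_ner", {0: "text", 1: "ner"},
                      train_file="train.bio", dev_file="dev.bio", test_file="test.bio")
label_dict = corpus.make_label_dictionary(label_type="ner")

# Contextual string embeddings: forward + backward character-level LMs.
embeddings = StackedEmbeddings([FlairEmbeddings("news-forward"),
                                FlairEmbeddings("news-backward")])

tagger = SequenceTagger(hidden_size=256, embeddings=embeddings,
                        tag_dictionary=label_dict, tag_type="ner", use_crf=True)

ModelTrainer(tagger, corpus).train("models/legal-ner",
                                   learning_rate=0.1, mini_batch_size=32,
                                   max_epochs=20)
```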
|
[
{
"version": "v1",
"created": "Sat, 3 Jun 2023 19:38:04 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ramesh",
"Vinay N",
""
],
[
"Eswara",
"Rohan",
""
]
] |
new_dataset
| 0.997255 |
2306.02224
|
Hui Yang
|
Hui Yang, Sifu Yue, Yunzhong He
|
Auto-GPT for Online Decision Making: Benchmarks and Additional Opinions
| null | null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Auto-GPT is an autonomous agent that leverages recent advancements in
adapting Large Language Models (LLMs) for decision-making tasks. While there
has been a growing interest in Auto-GPT styled agents, questions remain
regarding the effectiveness and flexibility of Auto-GPT in solving real-world
decision-making tasks. Its limited capability for real-world engagement and the
absence of benchmarks contribute to these uncertainties. In this paper, we
present a comprehensive benchmark study of Auto-GPT styled agents in
decision-making tasks that simulate real-world scenarios. Our aim is to gain
deeper insights into this problem and understand the adaptability of GPT-based
agents. We compare the performance of popular LLMs such as GPT-4, GPT-3.5,
Claude, and Vicuna in Auto-GPT styled decision-making tasks. Furthermore, we
introduce the Additional Opinions algorithm, an easy and effective method that
incorporates supervised/imitation-based learners into the Auto-GPT scheme. This
approach enables lightweight supervised learning without requiring fine-tuning
of the foundational LLMs. We demonstrate through careful baseline comparisons
and ablation studies that the Additional Opinions algorithm significantly
enhances performance in online decision-making benchmarks, including WebShop
and ALFWorld.
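A rough sketch of how the Additional Opinions idea could be wired into an
agent loop; `llm` and `expert` are placeholder interfaces, and the prompt
format is an assumption rather than the paper's exact template.

```python
def decide_with_additional_opinions(llm, expert, observation, history):
    """Sketch of the Additional Opinions idea: a lightweight supervised or
    imitation learner proposes actions, and its suggestions are appended to
    the agent's prompt so the LLM can weigh them when choosing the next
    action. No fine-tuning of the foundational LLM is required."""
    suggestions = expert.top_k_actions(observation, k=3)  # placeholder learner
    prompt = (
        f"Task history:\n{history}\n"
        f"Current observation:\n{observation}\n"
        f"Additional opinions from an expert model (advisory only): "
        f"{', '.join(suggestions)}\n"
        "Choose the next action."
    )
    return llm.complete(prompt)  # placeholder LLM call
```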
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 01:07:20 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Yang",
"Hui",
""
],
[
"Yue",
"Sifu",
""
],
[
"He",
"Yunzhong",
""
]
] |
new_dataset
| 0.99191 |
2306.02230
|
Yu Cheng
|
Zhenchang Xing, Qing Huang, Yu Cheng, Liming Zhu, Qinghua Lu, Xiwei Xu
|
Prompt Sapper: LLM-Empowered Software Engineering Infrastructure for
AI-Native Services
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Foundation models, such as GPT-4 and DALL-E, have brought unprecedented AI
"operating system" effect and new forms of human-AI interaction, sparking a
wave of innovation in AI-native services, where natural language prompts serve
as executable "code" directly (prompt as executable code), eliminating the need
for programming language as an intermediary and opening up the door to personal
AI. Prompt Sapper has emerged in response, committed to support the development
of AI-native services by AI chain engineering. It creates a large language
model (LLM) empowered software engineering infrastructure for authoring AI
chains through human-AI collaborative intelligence, unleashing the AI
innovation potential of every individual, and forging a future where everyone
can be a master of AI innovation. This article will introduce the R\&D
motivation behind Prompt Sapper, along with its corresponding AI chain
engineering methodology and technical practices.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 01:47:42 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Xing",
"Zhenchang",
""
],
[
"Huang",
"Qing",
""
],
[
"Cheng",
"Yu",
""
],
[
"Zhu",
"Liming",
""
],
[
"Lu",
"Qinghua",
""
],
[
"Xu",
"Xiwei",
""
]
] |
new_dataset
| 0.980465 |
2306.02247
|
Lingfeng Shen
|
Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi
|
Sen2Pro: A Probabilistic Perspective to Sentence Embedding from
Pre-trained Language Model
|
Accepted to ACL2023 workshop Rep4NLP
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Sentence embedding is one of the most fundamental tasks in Natural Language
Processing and plays an important role in various applications. The recent
breakthrough in sentence embedding is achieved by pre-trained language models
(PLMs). Despite its success, an embedded vector (Sen2Vec) representing a point
estimate does not naturally express uncertainty in a task-agnostic way. This
paper thereby proposes an efficient framework for probabilistic sentence
embedding (Sen2Pro) from PLMs, which represents a sentence as a probability
density distribution in an embedding space to reflect both model uncertainty
and data uncertainty (i.e., the many-to-one nature) in the sentence representation.
The proposed framework performs in a plug-and-play way without retraining PLMs,
and it is easy to implement and generally applicable on top of any PLM.
The superiority of Sen2Pro over Sen2Vec has been theoretically verified and
practically illustrated on different NLP tasks.
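One simple way to realize the model-uncertainty component is Monte Carlo
dropout over a pre-trained encoder, as sketched below; this is an illustrative
reading under stated assumptions, and the paper's actual estimation procedure
(including its treatment of data uncertainty) may differ.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def sentence_distribution(sentence, model_name="bert-base-uncased", n_samples=20):
    """Approximate a sentence as a Gaussian in embedding space via MC dropout.

    Reflects model uncertainty only; a sketch, not Sen2Pro itself.
    """
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    model.train()  # keep dropout active at inference time
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        samples = torch.stack([
            model(**inputs).last_hidden_state.mean(dim=1).squeeze(0)  # mean pooling
            for _ in range(n_samples)
        ])
    return samples.mean(dim=0), samples.var(dim=0)  # mean and diagonal variance

mu, var = sentence_distribution("Probabilistic embeddings express uncertainty.")
print(mu.shape, float(var.mean()))
```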
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 03:26:43 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Shen",
"Lingfeng",
""
],
[
"Jiang",
"Haiyun",
""
],
[
"Liu",
"Lemao",
""
],
[
"Shi",
"Shuming",
""
]
] |
new_dataset
| 0.971528 |
2306.02258
|
Kazushi Kondo
|
Kazushi Kondo, Saku Sugawara, Akiko Aizawa
|
Probing Physical Reasoning with Counter-Commonsense Context
|
Accepted to ACL 2023(Short Paper)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we create a CConS (Counter-commonsense Contextual Size
comparison) dataset to investigate how physical commonsense affects the
contextualized size comparison task; the proposed dataset consists of both
contexts that fit physical commonsense and those that do not. This dataset
tests the ability of language models to predict the size relationship between
objects under various contexts generated from our curated noun list and
templates. We measure the ability of several masked language models and
generative models. The results show that while large language models can use
prepositions such as ``in'' and ``into'' in the provided context to infer size
relationships, they fail to use verbs and thus make incorrect judgments led by
their prior physical commonsense.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 04:24:43 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Kondo",
"Kazushi",
""
],
[
"Sugawara",
"Saku",
""
],
[
"Aizawa",
"Akiko",
""
]
] |
new_dataset
| 0.989112 |
2306.02263
|
Yuchen Huo
|
Jianrong Wang, Yuchen Huo, Li Liu, Tianyi Xu, Qi Li, Sen Li
|
MAVD: The First Open Large-Scale Mandarin Audio-Visual Dataset with
Depth Information
| null | null | null | null |
cs.SD cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio-visual speech recognition (AVSR) gains increasing attention from
researchers as an important part of human-computer interaction. However, the
existing available Mandarin audio-visual datasets are limited and lack
depth information. To address this issue, this work establishes MAVD, a new
large-scale Mandarin multimodal corpus comprising 12,484 utterances spoken by
64 native Chinese speakers. To ensure the dataset covers diverse real-world
scenarios, a pipeline for cleaning and filtering the raw text material has been
developed to create well-balanced reading material. In particular, Microsoft's
latest data acquisition device, the Azure Kinect, is used to capture depth
information in addition to the traditional audio signals and RGB images during
data acquisition. We also provide a baseline experiment, which could be used to
evaluate the effectiveness of the dataset. The dataset and code will be
released at https://github.com/SpringHuo/MAVD.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 05:00:12 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Wang",
"Jianrong",
""
],
[
"Huo",
"Yuchen",
""
],
[
"Liu",
"Li",
""
],
[
"Xu",
"Tianyi",
""
],
[
"Li",
"Qi",
""
],
[
"Li",
"Sen",
""
]
] |
new_dataset
| 0.999815 |
2306.02264
|
Adithya Athreya
|
Aravind Joshi, Akshara Kairali, Renju Raju, Adithya Athreya, Reena
Monica P, Sanjay Vishwakarma and Srinjoy Ganguly
|
Quantum Circuit Optimization of Arithmetic circuits using ZX Calculus
| null | null | null | null |
cs.ET quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum computing is an emerging technology in which quantum mechanical
properties are suitably utilized to perform certain compute-intensive
operations faster than classical computers. Quantum algorithms are designed as
a combination of quantum circuits that each require a large number of quantum
gates, which is a challenge considering the limited number of qubit resources
available in quantum computing systems. Our work proposes a technique to
optimize quantum arithmetic algorithms by reducing the hardware resources and
the number of qubits based on ZX calculus. We have utilised ZX calculus rewrite
rules for the optimization of fault-tolerant quantum multiplier circuits where
we are able to achieve a significant reduction in the number of ancilla bits
and T-gates as compared to the originally required numbers to achieve
fault-tolerance. Our work is the first step in the series of arithmetic circuit
optimization using graphical rewrite tools and it paves the way for advancing
the optimization of various complex quantum circuits and establishing the
potential for new applications of the same.
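The open-source pyzx library implements this style of ZX-calculus rewriting;
the sketch below reduces the T-count of a randomly generated Clifford+T
circuit. It illustrates the general technique only and is not necessarily the
authors' toolchain; the generation parameters are arbitrary.

```python
import pyzx as zx

# Random Clifford+T circuit as a ZX diagram (parameters are illustrative).
g = zx.generate.cliffordT(qubits=4, depth=80, p_t=0.2)
print("T-count before:", zx.tcount(g))

zx.simplify.full_reduce(g)   # apply ZX-calculus rewrite rules in place
print("T-count after:", zx.tcount(g))

# Extract an executable circuit back from the simplified diagram.
circuit = zx.extract_circuit(g.copy())
print(circuit.stats())
```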
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 05:05:57 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Joshi",
"Aravind",
""
],
[
"Kairali",
"Akshara",
""
],
[
"Raju",
"Renju",
""
],
[
"Athreya",
"Adithya",
""
],
[
"P",
"Reena Monica",
""
],
[
"Vishwakarma",
"Sanjay",
""
],
[
"Ganguly",
"Srinjoy",
""
]
] |
new_dataset
| 0.99884 |
2306.02299
|
Bruno Steffen
|
Bruno Steffen
|
DSL-driven Integration of HTTP Services in DIME
| null | null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As the integration of web services into web applications becomes more and
more common, it is necessary to find a solution for low-code or no-code
environments. This thesis is the first attempt to allow for the easy
integration of web services into the low-code immersive modeling environment
(IME) DIME, by means of a domain-specific language (DSL), the HTTP-DSL. DIME
users can specify HTTP requests to web services with few lines of code, and
then integrate these requests into the modeling languages provided by DIME.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 08:40:53 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Steffen",
"Bruno",
""
]
] |
new_dataset
| 0.9774 |
2306.02306
|
Zhengbin Zhang
|
Zhengbin Zhang, Zhenhao Xu, Xingsheng Gu, Juan Xiong
|
Cross-CBAM: A Lightweight network for Scene Segmentation
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene parsing is a great challenge for real-time semantic segmentation.
Although traditional semantic segmentation networks have made remarkable
leaps forward in semantic accuracy, their inference speed remains
unsatisfactory. Meanwhile, this progress is achieved with fairly large networks
and powerful computational resources. However, it is difficult to run extremely
large models on edge computing devices with limited computing power, which
poses a huge challenge to the real-time semantic segmentation tasks. In this
paper, we present the Cross-CBAM network, a novel lightweight network for
real-time semantic segmentation. Specifically, a Squeeze-and-Excitation Atrous
Spatial Pyramid Pooling Module (SE-ASPP) is proposed to capture variable
fields-of-view and multiscale information. We also propose a Cross Convolutional
Block Attention Module (CCBAM), in which a cross-multiply operation is employed
in the CCBAM module to make high-level semantic information guide low-level
detail information. Different from previous works, which use attention to
focus on the desired information in the backbone, CCBAM uses cross-attention
for feature fusion in the FPN structure. Extensive experiments on the
Cityscapes dataset and Camvid dataset demonstrate the effectiveness of the
proposed Cross-CBAM model by achieving a promising trade-off between
segmentation accuracy and inference speed. On the Cityscapes test set, we
achieve 73.4% mIoU with a speed of 240.9FPS and 77.2% mIoU with a speed of
88.6FPS on NVIDIA GTX 1080Ti.
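A minimal PyTorch sketch of the cross-multiply idea, where high-level
semantics gate low-level detail through channel and spatial attention; the
channel counts, kernel sizes, and fusion rule are illustrative assumptions,
not the authors' exact CCBAM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionFusion(nn.Module):
    """Sketch of a CCBAM-style fusion: high-level features produce attention
    maps that are cross-multiplied with low-level detail features."""
    def __init__(self, ch):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, low, high):
        # Upsample semantics to the detail resolution, then gate the details.
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        attended = low * self.channel_gate(high)       # channel attention from semantics
        attended = attended * self.spatial_gate(high)  # spatial attention from semantics
        return attended + high                         # fused feature for the FPN

x_low = torch.randn(1, 64, 64, 64)   # low-level detail features
x_high = torch.randn(1, 64, 16, 16)  # high-level semantic features
print(CrossAttentionFusion(64)(x_low, x_high).shape)  # (1, 64, 64, 64)
```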
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 09:03:05 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Zhang",
"Zhengbin",
""
],
[
"Xu",
"Zhenhao",
""
],
[
"Gu",
"Xingsheng",
""
],
[
"Xiong",
"Juan",
""
]
] |
new_dataset
| 0.989495 |
2306.02308
|
Orchid Chetia Phukan
|
Gautam Siddharth Kashyap, Alexander E. I. Brownlee, Orchid Chetia
Phukan, Karan Malik, Samar Wazir
|
Roulette-Wheel Selection-Based PSO Algorithm for Solving the Vehicle
Routing Problem with Time Windows
| null | null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
The well-known Vehicle Routing Problem with Time Windows (VRPTW) aims to
reduce the cost of moving goods between several destinations while
accommodating constraints like set time windows for certain locations and
vehicle capacity. Applications of the VRPTW problem in the real world include
Supply Chain Management (SCM) and logistic dispatching, both of which are
crucial to the economy and are expanding quickly as work habits change.
Therefore, to solve the VRPTW problem, metaheuristic algorithms i.e. Particle
Swarm Optimization (PSO) have been found to work effectively, however, they can
experience premature convergence. To lower the risk of PSO's premature
convergence, the authors have solved the VRPTW in this paper utilising a novel
form of the PSO methodology that uses the Roulette Wheel Method (RWPSO).
Computational experiments on the Solomon VRPTW benchmark datasets demonstrate
that RWPSO is competitive with other state-of-the-art algorithms from the
literature; in particular, comparisons with two cutting-edge algorithms show
how competitive the suggested algorithm is.
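The roulette-wheel mechanism itself is classic fitness-proportionate
selection; a minimal sketch for a minimisation problem such as the VRPTW is
shown below. Exactly where RWPSO applies this wheel inside the PSO update is a
detail of the paper, so the surrounding usage here is only illustrative.

```python
import random

def roulette_wheel_select(population, costs):
    """Fitness-proportionate selection: for a minimisation problem, invert
    costs so cheaper solutions get a larger slice of the wheel."""
    weights = [1.0 / (c + 1e-9) for c in costs]  # lower cost -> bigger slice
    pick, acc = random.uniform(0, sum(weights)), 0.0
    for individual, w in zip(population, weights):
        acc += w
        if acc >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

# Toy usage: three candidate routes with total travel costs.
routes = [["depot", "c1", "c2", "depot"],
          ["depot", "c2", "c1", "depot"],
          ["depot", "c1", "depot", "c2", "depot"]]
costs = [120.0, 95.0, 140.0]
print(roulette_wheel_select(routes, costs))  # the 95.0 route is most likely
```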
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 09:18:02 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Kashyap",
"Gautam Siddharth",
""
],
[
"Brownlee",
"Alexander E. I.",
""
],
[
"Phukan",
"Orchid Chetia",
""
],
[
"Malik",
"Karan",
""
],
[
"Wazir",
"Samar",
""
]
] |
new_dataset
| 0.990533 |
2306.02331
|
Lipeng Zhu
|
Lipeng Zhu, Wenyan Ma, Rui Zhang
|
Movable Antennas for Wireless Communication: Opportunities and
Challenges
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Movable antenna (MA) technology is a recent development that fully exploits
the wireless channel spatial variation in a confined region by enabling local
movement of the antenna. Specifically, the positions of antennas at the
transmitter and/or receiver can be dynamically changed to obtain better channel
conditions for improving the communication performance. In this article, we
first provide an overview of the promising applications for MA-aided wireless
communication. Then, we present the hardware architecture and channel
characterization for MA systems, based on which the variation of the channel
gain with respect to the MA's position is illustrated. Furthermore, we analyze
the performance advantages of MAs over conventional fixed-position antennas, in
terms of signal power improvement, interference mitigation, flexible
beamforming, and spatial multiplexing. Finally, we discuss the main design
challenges and their potential solutions for MA-aided communication systems.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 11:24:07 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Zhu",
"Lipeng",
""
],
[
"Ma",
"Wenyan",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.999052 |
2306.02346
|
Shuo Ye
|
Shuo Ye and Yufeng Shi and Ruxin Wang and Yu Wang and Jiamiao Xu and
Chuanwu Yang and Xinge You
|
CDLT: A Dataset with Concept Drift and Long-Tailed Distribution for
Fine-Grained Visual Categorization
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data is the foundation for the development of computer vision, and the
establishment of datasets plays an important role in advancing the techniques
of fine-grained visual categorization~(FGVC). In the existing FGVC datasets
used in computer vision, it is generally assumed that each collected instance
has fixed characteristics and the distribution of different categories is
relatively balanced. In contrast, real-world scenarios reveal that
the characteristics of instances tend to vary with time and exhibit a
long-tailed distribution. Hence, the collected datasets may mislead the
optimization of the fine-grained classifiers, resulting in unpleasant
performance in real applications. Starting from the real-world conditions and
to promote the practical progress of fine-grained visual categorization, we
present a Concept Drift and Long-Tailed Distribution dataset. Specifically, the
dataset is collected by gathering 11195 images of 250 instances in different
species for 47 consecutive months in their natural contexts. The collection
process involves dozens of crowd workers for photographing and domain experts
for labelling. Extensive baseline experiments using the state-of-the-art
fine-grained classification models demonstrate the issues of concept drift and
long-tailed distribution that exist in the dataset, which require the attention of
future research.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 12:42:45 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ye",
"Shuo",
""
],
[
"Shi",
"Yufeng",
""
],
[
"Wang",
"Ruxin",
""
],
[
"Wang",
"Yu",
""
],
[
"Xu",
"Jiamiao",
""
],
[
"Yang",
"Chuanwu",
""
],
[
"You",
"Xinge",
""
]
] |
new_dataset
| 0.999772 |
2306.02351
|
Zhitong Xiong
|
Zhitong Xiong, Yanfeng Liu, Qi Wang, Xiao Xiang Zhu
|
RSSOD-Bench: A large-scale benchmark dataset for Salient Object
Detection in Optical Remote Sensing Imagery
|
IGARSS 2023, 4 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present the RSSOD-Bench dataset for salient object detection (SOD) in
optical remote sensing imagery. While SOD has achieved success in natural scene
images with deep learning, research in SOD for remote sensing imagery (RSSOD)
is still in its early stages. Existing RSSOD datasets have limitations in terms
of scale and scene categories, which make them misaligned with real-world
applications. To address these shortcomings, we construct the RSSOD-Bench
dataset, which contains images from four different cities in the USA. The
dataset provides annotations for various salient object categories, such as
buildings, lakes, rivers, highways, bridges, aircraft, ships, athletic fields,
and more. The salient objects in RSSOD-Bench exhibit large-scale variations,
cluttered backgrounds, and different seasons. Unlike existing datasets,
RSSOD-Bench offers uniform distribution across scene categories. We benchmark
23 different state-of-the-art approaches from both the computer vision and
remote sensing communities. Experimental results demonstrate that more research
efforts are required for the RSSOD task.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 13:01:19 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Xiong",
"Zhitong",
""
],
[
"Liu",
"Yanfeng",
""
],
[
"Wang",
"Qi",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.99988 |
2306.02358
|
Sunwoo Kim
|
Sunwoo Kim, Fanchen Bu, Minyoung Choe, Jaemin Yoo, Kijung Shin
|
How Transitive Are Real-World Group Interactions? -- Measurement and
Reproduction
|
To be published in KDD 2023. 12 pages, 7 figures, and 11 tables
| null |
10.1145/3580305.3599382
| null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Many real-world interactions (e.g., researcher collaborations and email
communication) occur among multiple entities. These group interactions are
naturally modeled as hypergraphs. In graphs, transitivity is helpful to
understand the connections between node pairs sharing a neighbor, and it has
extensive applications in various domains. Hypergraphs, an extension of graphs,
are designed to represent group relations. However, to the best of our
knowledge, there has been no examination regarding the transitivity of
real-world group interactions. In this work, we investigate the transitivity of
group interactions in real-world hypergraphs. We first suggest intuitive axioms
as necessary characteristics of hypergraph transitivity measures. Then, we
propose a principled hypergraph transitivity measure HyperTrans, which
satisfies all the proposed axioms, with a fast computation algorithm
Fast-HyperTrans. After that, we analyze the transitivity patterns in real-world
hypergraphs distinguished from those in random hypergraphs. Lastly, we propose
a scalable hypergraph generator THera. It reproduces the observed transitivity
patterns by leveraging community structures, which are pervasive in real-world
hypergraphs. Our code and datasets are available at
https://github.com/kswoo97/hypertrans.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 13:35:38 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Kim",
"Sunwoo",
""
],
[
"Bu",
"Fanchen",
""
],
[
"Choe",
"Minyoung",
""
],
[
"Yoo",
"Jaemin",
""
],
[
"Shin",
"Kijung",
""
]
] |
new_dataset
| 0.995711 |
2306.02359
|
Jiancheng Zhao
|
Jiancheng Zhao, Jiaqi Yue, Liangjun Feng, Chunhui Zhao, and Jinliang
Ding
|
Addressing Domain Shift via Knowledge Space Sharing for Generalized
Zero-Shot Industrial Fault Diagnosis
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fault diagnosis is a critical aspect of industrial safety, and supervised
industrial fault diagnosis has been extensively researched. However, obtaining
fault samples of all categories for model training can be challenging due to
cost and safety concerns. As a result, the generalized zero-shot industrial
fault diagnosis has gained attention as it aims to diagnose both seen and
unseen faults. Nevertheless, the lack of unseen fault data for training poses a
challenging domain shift problem (DSP), where unseen faults are often
identified as seen faults. In this article, we propose a knowledge space
sharing (KSS) model to address the DSP in the generalized zero-shot industrial
fault diagnosis task. The KSS model includes a generation mechanism (KSS-G) and
a discrimination mechanism (KSS-D). KSS-G generates samples for rare faults by
recombining transferable attribute features extracted from seen samples under
the guidance of auxiliary knowledge. KSS-D is trained in a supervised way with
the help of generated samples, which aims to address the DSP by modeling seen
categories in the knowledge space. KSS-D avoids misclassifying rare faults as
seen faults and identifies seen fault samples. We conduct generalized zero-shot
diagnosis experiments on the benchmark Tennessee-Eastman process, and our
results show that our approach outperforms state-of-the-art methods for the
generalized zero-shot industrial fault diagnosis problem.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 13:50:01 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Zhao",
"Jiancheng",
""
],
[
"Yue",
"Jiaqi",
""
],
[
"Feng",
"Liangjun",
""
],
[
"Zhao",
"Chunhui",
""
],
[
"Ding",
"Jinliang",
""
]
] |
new_dataset
| 0.969064 |
2306.02361
|
Ruichun Ma
|
Ruichun Ma, R. Ivan Zelaya, Wenjun Hu
|
Softly, Deftly, Scrolls Unfurl Their Splendor: Rolling Flexible Surfaces
for Wideband Wireless
| null | null |
10.1145/3570361.3592520
| null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With new frequency bands opening up, emerging wireless IoT devices are
capitalizing on an increasingly divergent range of frequencies. However,
existing coverage provisioning practice is often tied to specific standards and
frequencies. There is little shareable wireless infrastructure for concurrent
links on different frequencies, across networks and standards. This paper
presents Scrolls, a frequency-tunable soft smart surface system to enhance
wideband, multi-network coverage. Scrolls' hardware comprises many rows of
rollable thin plastic film, each attached with flexible copper strips. When
rolled to different lengths, the copper strips act as wire antennas reflecting
signals on the corresponding frequencies. The surface control algorithm
determines the unrolled strip lengths for link enhancement by probing the
search space efficiently. We build a set of distributed, composable Scrolls
prototypes and deploy them in an office. Extensive evaluation shows that
Scrolls can adapt the antenna lengths effectively to provide link enhancement
across diverse standards on sub-6 GHz bands. For concurrent links on 900 MHz
(LoRa), 2.4 GHz (Wi-Fi), 3.7 GHz, and 5 GHz, Scrolls can provide received
signal strength gains to all links simultaneously, by a median of 4 dB and up
to 10 dB.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 13:58:07 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ma",
"Ruichun",
""
],
[
"Zelaya",
"R. Ivan",
""
],
[
"Hu",
"Wenjun",
""
]
] |
new_dataset
| 0.994672 |
2306.02444
|
Onel Luis Alcaraz L\'opez
|
Onel A. L\'opez, Osmel M. Rosabal, David Ruiz-Guirola, Prasoon
Raghuwanshi, Konstantin Mikhaylov, Lauri Lov\'en, Sridhar Iyer
|
Energy-Sustainable IoT Connectivity: Vision, Technological Enablers,
Challenges, and Future Directions
|
25 figures, 12 tables, submitted to IEEE Open Journal of the
Communications Society
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Technology solutions must effectively balance economic growth, social equity,
and environmental integrity to achieve a sustainable society. Notably, although
the Internet of Things (IoT) paradigm constitutes a key sustainability enabler,
critical issues such as the increasing maintenance operations, energy
consumption, and manufacturing/disposal of IoT devices have long-term negative
economic, societal, and environmental impacts and must be efficiently
addressed. This calls for self-sustainable IoT ecosystems requiring minimal
external resources and intervention, effectively utilizing renewable energy
sources, and recycling materials whenever possible, thus encompassing energy
sustainability. In this work, we focus on energy-sustainable IoT during the
operation phase, although our discussions sometimes extend to other
sustainability aspects and IoT lifecycle phases. Specifically, we provide a
fresh look at energy-sustainable IoT and identify energy provision, transfer,
and energy efficiency as the three main energy-related processes whose
harmonious coexistence pushes toward realizing self-sustainable IoT systems.
Their main related technologies, recent advances, challenges, and research
directions are also discussed. Moreover, we overview relevant performance
metrics to assess the energy-sustainability potential of a certain technique,
technology, device, or network and list some target values for the next
generation of wireless systems. Overall, this paper offers insights that are
valuable for advancing sustainability goals for present and future generations.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 19:22:20 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"López",
"Onel A.",
""
],
[
"Rosabal",
"Osmel M.",
""
],
[
"Ruiz-Guirola",
"David",
""
],
[
"Raghuwanshi",
"Prasoon",
""
],
[
"Mikhaylov",
"Konstantin",
""
],
[
"Lovén",
"Lauri",
""
],
[
"Iyer",
"Sridhar",
""
]
] |
new_dataset
| 0.996755 |
2306.02475
|
Omar Shaikh
|
Omar Shaikh, Caleb Ziems, William Held, Aryan J. Pariani, Fred
Morstatter, Diyi Yang
|
Modeling Cross-Cultural Pragmatic Inference with Codenames Duet
|
ACL 2023 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pragmatic reference enables efficient interpersonal communication. Prior work
uses simple reference games to test models of pragmatic reasoning, often with
unidentified speakers and listeners. In practice, however, speakers'
sociocultural background shapes their pragmatic assumptions. For example,
readers of this paper assume NLP refers to "Natural Language Processing," and
not "Neuro-linguistic Programming." This work introduces the Cultural Codes
dataset, which operationalizes sociocultural pragmatic inference in a simple
word reference game.
Cultural Codes is based on the multi-turn collaborative two-player game,
Codenames Duet. Our dataset consists of 794 games with 7,703 turns, distributed
across 153 unique players. Alongside gameplay, we collect information about
players' personalities, values, and demographics. Utilizing theories of
communication and pragmatics, we predict each player's actions via joint
modeling of their sociocultural priors and the game context. Our experiments
show that accounting for background characteristics significantly improves
model performance for tasks related to both clue giving and guessing,
indicating that sociocultural priors play a vital role in gameplay decisions.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 20:47:07 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Shaikh",
"Omar",
""
],
[
"Ziems",
"Caleb",
""
],
[
"Held",
"William",
""
],
[
"Pariani",
"Aryan J.",
""
],
[
"Morstatter",
"Fred",
""
],
[
"Yang",
"Diyi",
""
]
] |
new_dataset
| 0.986636 |
2306.02496
|
Elias Gr\"unewald
|
Elias Gr\"unewald, Jannis Kiesel, Siar-Remzi Akbayin, Frank Pallas
|
Hawk: DevOps-driven Transparency and Accountability in Cloud Native
Systems
|
preprint, accepted for the 16th IEEE International Conference on
Cloud Computing 2023, IEEE Cloud 2023
| null | null | null |
cs.DC cs.CR cs.CY cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transparency is one of the most important principles of modern privacy
regulations, such as the GDPR or CCPA. To be compliant with such regulatory
frameworks, data controllers must provide data subjects with precise
information about the collection, processing, storage, and transfer of personal
data. To do so, respective facts and details must be compiled and always kept
up to date. In traditional, rather static system environments, this inventory
(including details such as the purposes of processing or the storage duration
for each system component) could be done manually. In current circumstances of
agile, DevOps-driven, and cloud-native information systems engineering,
however, such manual practices no longer suffice, making it increasingly hard
for data controllers to achieve regulatory compliance. To allow for proper
collection and maintenance of always up-to-date transparency information
smoothly integrating into DevOps practices, we herein propose a set of novel
approaches explicitly tailored to specific phases of the DevOps lifecycle most
relevant in matters of privacy-related transparency and accountability at
runtime: Release, Operation, and Monitoring. For each of these phases, we
examine the specific challenges arising in determining the details of personal
data processing, develop a distinct approach and provide respective proof of
concept implementations that can easily be applied in cloud native systems. We
also demonstrate how these components can be integrated with each other to
establish transparency information comprising design- and runtime-elements.
Furthermore, our experimental evaluation indicates reasonable overheads. On
this basis, data controllers can fulfill their regulatory transparency
obligations in line with actual engineering practices.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 22:09:42 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Grünewald",
"Elias",
""
],
[
"Kiesel",
"Jannis",
""
],
[
"Akbayin",
"Siar-Remzi",
""
],
[
"Pallas",
"Frank",
""
]
] |
new_dataset
| 0.994615 |
2306.02508
|
Sam Leone
|
Samuel Leone, Aarthi Venkat, Guillaume Huguet, Alexander Tong, Guy
Wolf, Smita Krishnaswamy
|
Graph Fourier MMD for Signals on Graphs
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While numerous methods have been proposed for computing distances between
probability distributions in Euclidean space, relatively little attention has
been given to computing such distances for distributions on graphs. However,
there has been a marked increase in data that either lies on a graph (such as
protein interaction networks) or can be modeled as a graph (single cell data),
particularly in the biomedical sciences. Thus, it becomes important to find
ways to compare signals defined on such graphs. Here, we propose Graph Fourier
MMD (GFMMD), a novel distance between distributions and signals on graphs.
GFMMD is defined via an optimal witness function that is both smooth on the
graph and maximizes the difference in expectation between the pair of distributions
on the graph. We find an analytical solution to this optimization problem as
well as an embedding of distributions that results from this method. We also
prove several properties of this method including scale invariance and
applicability to disconnected graphs. We showcase it on graph benchmark
datasets as well on single cell RNA-sequencing data analysis. In the latter, we
use the GFMMD-based gene embeddings to find meaningful gene clusters. We also
propose a novel type of score for gene selection called "gene localization
score" which helps select genes for cellular state space characterization.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 00:01:17 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Leone",
"Samuel",
""
],
[
"Venkat",
"Aarthi",
""
],
[
"Huguet",
"Guillaume",
""
],
[
"Tong",
"Alexander",
""
],
[
"Wolf",
"Guy",
""
],
[
"Krishnaswamy",
"Smita",
""
]
] |
new_dataset
| 0.995827 |
2306.02514
|
Aryaman Arora
|
Aryaman Arora, Adam Farris, Samopriya Basu, Suresh Kolichala
|
Jambu: A historical linguistic database for South Asian languages
|
5 pages main text, 10 pages total. To appear at SIGMORPHON
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Jambu, a cognate database of South Asian languages which unifies
dozens of previous sources in a structured and accessible format. The database
includes 287k lemmata from 602 lects, grouped together in 23k sets of cognates.
We outline the data wrangling necessary to compile the dataset and train neural
models for reflex prediction on the Indo-Aryan subset of the data. We hope that
Jambu is an invaluable resource for all historical linguists and Indologists,
and look towards further improvement and expansion of the database.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 00:32:57 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Arora",
"Aryaman",
""
],
[
"Farris",
"Adam",
""
],
[
"Basu",
"Samopriya",
""
],
[
"Kolichala",
"Suresh",
""
]
] |
new_dataset
| 0.999435 |
2306.02546
|
Xiangzhe Xu
|
Xiangzhe Xu, Zhuo Zhang, Shiwei Feng, Yapeng Ye, Zian Su, Nan Jiang,
Siyuan Cheng, Lin Tan, Xiangyu Zhang
|
LmPa: Improving Decompilation by Synergy of Large Language Model and
Program Analysis
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Decompilation aims to recover the source code form of a binary executable. It
has many applications in security and software engineering such as malware
analysis, vulnerability detection and code reuse. A prominent challenge in
decompilation is to recover variable names. We propose a novel method that
leverages the synergy of large language model (LLM) and program analysis.
Language models encode rich multi-modal knowledge, but their limited input size
prevents providing sufficient global context for name recovery. We propose to
divide the task into many LLM queries and use program analysis to correlate and
propagate the query results, which in turn improves the performance of the LLM by
providing additional contextual information. Our results show that 75% of the
recovered names are considered good by users and our technique outperforms the
state-of-the-art technique by 16.5% and 20.23% in precision and recall,
respectively.
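A rough sketch of the divide-and-propagate loop: each decompiled function is
queried on its own, and names recovered for one function are fed back as hints
to its callers and callees. `query_llm` is a placeholder, and the propagation
rule is deliberately simplified relative to the paper's program analysis.

```python
from collections import defaultdict

def recover_names(functions, call_graph, query_llm, rounds=3):
    """functions: {function id -> decompiled code}; call_graph: [(caller,
    callee)]; query_llm(code, hints) -> {placeholder var -> recovered name}.
    Each function is queried separately to respect the LLM input-size limit."""
    names = defaultdict(dict)          # function id -> {placeholder: name}
    neighbours = defaultdict(set)
    for caller, callee in call_graph:  # undirected neighbourhood for hint flow
        neighbours[caller].add(callee)
        neighbours[callee].add(caller)
    for _ in range(rounds):
        for fn, code in functions.items():
            # Hand the LLM everything already recovered for adjacent functions.
            hints = {k: v for nb in neighbours[fn] for k, v in names[nb].items()}
            names[fn].update(query_llm(code, hints))
    return dict(names)
```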
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 02:39:48 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Xu",
"Xiangzhe",
""
],
[
"Zhang",
"Zhuo",
""
],
[
"Feng",
"Shiwei",
""
],
[
"Ye",
"Yapeng",
""
],
[
"Su",
"Zian",
""
],
[
"Jiang",
"Nan",
""
],
[
"Cheng",
"Siyuan",
""
],
[
"Tan",
"Lin",
""
],
[
"Zhang",
"Xiangyu",
""
]
] |
new_dataset
| 0.9872 |
2306.02593
|
Yayue Deng
|
Dengfeng Ke, Yayue Deng, Yukang Jia, Jinlong Xue, Qi Luo, Ya Li,
Jianqing Sun, Jiaen Liang, Binghuai Lin
|
Rhythm-controllable Attention with High Robustness for Long Sentence
Speech Synthesis
|
5 pages, 3 figures, Published in: 2022 13th International Symposium
on Chinese Spoken Language Processing (ISCSLP)
| null |
10.1109/ISCSLP57327.2022.10037822
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regressive Text-to-Speech (TTS) systems utilize an attention mechanism to
generate the alignment between the text and acoustic feature sequences.
Alignment determines synthesis robustness (e.g., the occurrence of skipping,
repeating, and collapse) and rhythm via duration control. However, current
attention algorithms used in speech synthesis cannot control rhythm using
external duration information to generate natural speech while ensuring
robustness. In this study, we propose Rhythm-controllable Attention
(RC-Attention) based on Tacotron2, which improves robustness and naturalness
simultaneously. The proposed attention adopts a trainable scalar learned from
four kinds of information to achieve rhythm control, which makes rhythm
control more robust and natural, even when synthesized sentences are much
longer than those in the training corpus. We use word error counting and an AB
preference test to measure the robustness of the proposed method and the
naturalness of the synthesized speech, respectively. Results show that
RC-Attention has the lowest word error rate of nearly 0.6%, compared with
11.8% for the baseline system. Moreover, nearly 60% of subjects prefer the
speech synthesized with RC-Attention to that with Forward Attention, because
the former has more natural rhythm.
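A minimal sketch of the blending mechanism, gating between a content-based
alignment and a duration-derived alignment with a trainable scalar; note the
paper learns this scalar from four kinds of information, whereas the sketch
uses a single free parameter.

```python
import torch
import torch.nn as nn

class RhythmGatedAttention(nn.Module):
    """Sketch: blend learned attention weights with an alignment derived from
    external duration information via a trainable scalar gate."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.tensor(0.0))  # trainable blending scalar

    def forward(self, content_attn, duration_attn):
        # Both inputs: (batch, decoder_steps, encoder_steps), rows sum to 1.
        g = torch.sigmoid(self.gate)
        return g * duration_attn + (1.0 - g) * content_attn

attn = RhythmGatedAttention()
a = torch.softmax(torch.randn(2, 5, 7), dim=-1)   # content-based alignment
b = torch.softmax(torch.randn(2, 5, 7), dim=-1)   # duration-based alignment
print(attn(a, b).shape)  # (2, 5, 7)
```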
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 04:52:33 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ke",
"Dengfeng",
""
],
[
"Deng",
"Yayue",
""
],
[
"Jia",
"Yukang",
""
],
[
"Xue",
"Jinlong",
""
],
[
"Luo",
"Qi",
""
],
[
"Li",
"Ya",
""
],
[
"Sun",
"Jianqing",
""
],
[
"Liang",
"Jiaen",
""
],
[
"Lin",
"Binghuai",
""
]
] |
new_dataset
| 0.961718 |
2306.02613
|
Zhe Zhang
|
Zhe Zhang, Yi Yu, Atsuhiro Takasu
|
Controllable Lyrics-to-Melody Generation
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lyrics-to-melody generation is an interesting and challenging topic in the AI
music research field. Due to the difficulty of learning the correlations
between lyrics and melody, previous methods suffer from low generation quality
and lack of controllability. Controllability of generative models enables human
interaction with models to generate desired contents, which is especially
important in music generation tasks towards human-centered AI that can
facilitate musicians in creative activities. To address these issues, we
propose a controllable lyrics-to-melody generation network, ConL2M, which is
able to generate realistic melodies from lyrics in user-desired musical style.
Our work contains three main novelties: 1) To model the dependencies of music
attributes across multiple sequences, inter-branch memory fusion (Memofu) is
proposed to enable information flow between multi-branch stacked LSTM
architecture; 2) Reference style embedding (RSE) is proposed to improve the
quality of generation as well as control the musical style of generated
melodies; 3) Sequence-level statistical loss (SeqLoss) is proposed to help the
model learn sequence-level features of melodies given lyrics. Verified by
evaluation metrics for music quality and controllability, our initial study of
controllable lyrics-to-melody generation shows better generation quality and
the feasibility of interacting with users to generate melodies in desired
musical styles for given lyrics.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 06:14:08 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Zhang",
"Zhe",
""
],
[
"Yu",
"Yi",
""
],
[
"Takasu",
"Atsuhiro",
""
]
] |
new_dataset
| 0.985975 |
2306.02680
|
Soumitri Chattopadhyay
|
Ahana Deb, Sayan Nag, Ayan Mahapatra, Soumitri Chattopadhyay, Aritra
Marik, Pijush Kanti Gayen, Shankha Sanyal, Archi Banerjee, Samir Karmakar
|
BeAts: Bengali Speech Acts Recognition using Multimodal Attention Fusion
|
Accepted at INTERSPEECH 2023
| null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Spoken languages often utilise intonation, rhythm, intensity, and structure
to communicate intention, which can be interpreted differently depending on the
rhythm of speech of their utterance. These speech acts provide the foundation
of communication and are unique in expression to the language. Recent
advancements in attention-based models, demonstrating their ability to learn
powerful representations from multilingual datasets, have performed well in
speech tasks and are ideal for modeling specific tasks in low-resource languages.
Here, we develop a novel multimodal approach combining two models, wav2vec2.0
for audio and MarianMT for text translation, by using multimodal attention
fusion to predict speech acts in our prepared Bengali speech corpus. We also
show that our model BeAts ($\underline{\textbf{Be}}$ngali speech acts
recognition using Multimodal $\underline{\textbf{At}}$tention
Fu$\underline{\textbf{s}}$ion) significantly outperforms both the unimodal
baseline using only speech data and a simpler bimodal fusion using both speech
and text data. Project page: https://soumitri2001.github.io/BeAts
|
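For illustration, a minimal cross-modal attention fusion over precomputed audio and text sequences might look like the sketch below; the dimensions, pooling, and classifier design are assumptions rather than the BeAts architecture:

```python
import torch
import torch.nn as nn

class MultimodalAttentionFusion(nn.Module):
    """Sketch: fuse audio (e.g. wav2vec2.0) and text (e.g. MarianMT encoder)
    sequences with bidirectional cross-attention, then classify."""
    def __init__(self, audio_dim=768, text_dim=512, dim=256, n_classes=4):
        super().__init__()
        self.pa = nn.Linear(audio_dim, dim)
        self.pt = nn.Linear(text_dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.clf = nn.Linear(2 * dim, n_classes)

    def forward(self, audio_seq, text_seq):
        a, t = self.pa(audio_seq), self.pt(text_seq)
        # audio queries attend over text keys/values, and vice versa
        a2t, _ = self.attn(a, t, t)
        t2a, _ = self.attn(t, a, a)
        pooled = torch.cat([a2t.mean(1), t2a.mean(1)], dim=-1)
        return self.clf(pooled)
```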
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 08:12:17 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Deb",
"Ahana",
""
],
[
"Nag",
"Sayan",
""
],
[
"Mahapatra",
"Ayan",
""
],
[
"Chattopadhyay",
"Soumitri",
""
],
[
"Marik",
"Aritra",
""
],
[
"Gayen",
"Pijush Kanti",
""
],
[
"Sanyal",
"Shankha",
""
],
[
"Banerjee",
"Archi",
""
],
[
"Karmakar",
"Samir",
""
]
] |
new_dataset
| 0.999483 |
2306.02742
|
Xinyu Jia
|
Xinyu Jia, Jun Yang, Kaixin Lu, Haoyong Yu
|
Motion Control based on Disturbance Estimation and Time-Varying Gain for
Robotic Manipulators
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
To achieve high-accuracy manipulation in the presence of unknown dynamics and
external disturbance, we propose an efficient and robust motion controller
(named TvUDE) for robotic manipulators. The controller incorporates a
disturbance estimation mechanism that utilizes reformulated robot dynamics and
filtering operations to obtain uncertainty and disturbance without requiring
measurement of acceleration. Furthermore, we design a time-varying control
input gain to enhance the control system's robustness. Finally, we analyze the
boundedness of the control signal and the stability of the closed-loop system,
and conduct a set of experiments on a six-DOF robotic manipulator. The
experimental results verify the effectiveness of TvUDE in handling internal
uncertainty and external static or transient disturbance.
|
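The abstract's key idea, estimating the lumped disturbance without measuring acceleration, can be sketched with a generic momentum-observer-style filter; this is a textbook construction under our own sign conventions, not the paper's exact formulation:

```python
import numpy as np

class DisturbanceEstimator:
    """Generic filter-based disturbance estimate for M(q)q'' + h = tau + d.
    The generalized momentum M(q)q' is compared with the integral of applied
    torque minus modeled dynamics, avoiding acceleration measurement.
    Coupling terms from dM/dt are folded into h for brevity (an assumption)."""
    def __init__(self, n: int, T_f: float, dt: float):
        self.z = np.zeros(n)        # integral of (tau - h + d_hat)
        self.d_hat = np.zeros(n)
        self.T_f, self.dt = T_f, dt

    def step(self, M, q_dot, tau, h):
        self.z += (tau - h + self.d_hat) * self.dt
        # low-pass-filtered residual between momentum and its predicted value
        self.d_hat = (M @ q_dot - self.z) / self.T_f
        return self.d_hat
```

A smaller filter constant `T_f` tracks fast disturbances more aggressively at the cost of amplifying measurement noise, which is one motivation for pairing such an estimator with a time-varying control gain.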
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 09:50:34 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Jia",
"Xinyu",
""
],
[
"Yang",
"Jun",
""
],
[
"Lu",
"Kaixin",
""
],
[
"Yu",
"Haoyong",
""
]
] |
new_dataset
| 0.984992 |
2306.02754
|
Hao Li
|
Hao Li, Yuping Wu, Viktor Schlegel, Riza Batista-Navarro, Thanh-Tung
Nguyen, Abhinav Ramesh Kashyap, Xiaojun Zeng, Daniel Beck, Stefan Winkler,
Goran Nenadic
|
PULSAR: Pre-training with Extracted Healthcare Terms for Summarising
Patients' Problems and Data Augmentation with Black-box Large Language Models
|
Accepted by ACL 2023's workshop BioNLP 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Medical progress notes play a crucial role in documenting a patient's
hospital journey, including his or her condition, treatment plan, and any
updates for healthcare providers. Automatic summarisation of a patient's
problems in the form of a problem list can aid stakeholders in understanding a
patient's condition, reducing workload and cognitive bias. BioNLP 2023 Shared
Task 1A focuses on generating a list of diagnoses and problems from the
provider's progress notes during hospitalisation. In this paper, we introduce
our proposed approach to this task, which integrates two complementary
components. One component employs large language models (LLMs) for data
augmentation; the other is an abstractive summarisation LLM with a novel
pre-training objective for generating the patients' problems summarised as a
list. Our approach was ranked second among all submissions to the shared task.
The performance of our model on the development and test datasets shows that
our approach is more robust on unseen data, with an improvement of up to 3.1
points over a model of the same size.
|
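The title suggests pre-training on extracted healthcare terms; a gap-text pair builder in that spirit is sketched below. The sentinel format and matching are our assumptions, not the exact PULSAR objective:

```python
import re

def build_pretrain_pair(note: str, terms: list[str]):
    """Illustrative gap-text builder: occurrences of extracted healthcare
    terms become sentinel tokens, and the target is the masked terms."""
    target_parts, masked = [], note
    for i, term in enumerate(terms):
        sentinel = f"<extra_id_{i}>"
        masked, n = re.subn(re.escape(term), sentinel, masked, count=1,
                            flags=re.IGNORECASE)
        if n:
            target_parts.append(f"{sentinel} {term}")
    return masked, " ".join(target_parts)

src, tgt = build_pretrain_pair(
    "Patient with hypertension started on lisinopril.",
    ["hypertension", "lisinopril"])
# src: "Patient with <extra_id_0> started on <extra_id_1>."
# tgt: "<extra_id_0> hypertension <extra_id_1> lisinopril"
```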
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 10:17:50 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Li",
"Hao",
""
],
[
"Wu",
"Yuping",
""
],
[
"Schlegel",
"Viktor",
""
],
[
"Batista-Navarro",
"Riza",
""
],
[
"Nguyen",
"Thanh-Tung",
""
],
[
"Kashyap",
"Abhinav Ramesh",
""
],
[
"Zeng",
"Xiaojun",
""
],
[
"Beck",
"Daniel",
""
],
[
"Winkler",
"Stefan",
""
],
[
"Nenadic",
"Goran",
""
]
] |
new_dataset
| 0.978461 |
2306.02845
|
Puneet Kumar
|
Puneet Kumar and Xiaobai Li
|
Interpretable Multimodal Emotion Recognition using Facial Features and
Physiological Signals
|
Accepted for Oral Presentation in DAI 2023
(https://rbcdsai.iitm.ac.in/DAI-2023/program.html)
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper aims to demonstrate the importance and feasibility of fusing
multimodal information for emotion recognition. It introduces a multimodal
framework for emotion understanding by fusing the information from visual
facial features and rPPG signals extracted from the input videos. An
interpretability technique based on permutation feature importance analysis has
also been implemented to compute the contributions of rPPG and visual
modalities toward classifying a given input video into a particular emotion
class. Experiments on the IEMOCAP dataset demonstrate that emotion
classification performance improves when the complementary information
from multiple modalities is combined.
|
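Permutation feature importance at the modality level can be sketched as follows: shuffle one modality across samples and measure the accuracy drop. The `model.predict` interface taking both modalities is an assumed placeholder:

```python
import numpy as np

def modality_importance(model, X_rppg, X_visual, y, n_repeats=5, seed=0):
    """Permutation-importance sketch over two modalities (numpy arrays)."""
    rng = np.random.default_rng(seed)
    base = np.mean(model.predict(X_rppg, X_visual) == y)
    drops = {}
    for name, X in (("rppg", X_rppg), ("visual", X_visual)):
        accs = []
        for _ in range(n_repeats):
            Xp = X[rng.permutation(len(y))]   # break sample alignment
            pred = (model.predict(Xp, X_visual) if name == "rppg"
                    else model.predict(X_rppg, Xp))
            accs.append(np.mean(pred == y))
        drops[name] = base - float(np.mean(accs))
    return drops  # larger drop => modality contributes more
```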
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 12:57:07 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Kumar",
"Puneet",
""
],
[
"Li",
"Xiaobai",
""
]
] |
new_dataset
| 0.964014 |
2306.02902
|
Bashar Talafha
|
Bashar Talafha, Abdul Waheed, Muhammad Abdul-Mageed
|
N-Shot Benchmarking of Whisper on Diverse Arabic Speech Recognition
|
4 pages, INTERSPEECH 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Whisper, the recently developed multilingual weakly supervised model, is
reported to perform well on multiple speech recognition benchmarks in both
monolingual and multilingual settings. However, it is not clear how Whisper
would fare under diverse conditions, even on languages it was evaluated on,
such as Arabic. In this work, we address this gap by comprehensively evaluating
Whisper on several varieties of Arabic speech for the ASR task. Our evaluation
covers most publicly available Arabic speech data and is performed under n-shot
(zero-, few-, and full) finetuning. We also investigate the robustness of
Whisper under completely novel conditions, such as in dialect-accented standard
Arabic and in unseen dialects for which we develop evaluation data. Our
experiments show that although Whisper zero-shot outperforms fully finetuned
XLS-R models on all datasets, its performance deteriorates significantly in the
zero-shot setting for five unseen dialects (i.e., Algeria, Jordan, Palestine,
UAE, and Yemen).
|
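A zero-shot Arabic evaluation loop with the open-source `whisper` and `jiwer` packages might look like this sketch; file paths and reference transcripts are placeholders:

```python
import whisper
import jiwer

model = whisper.load_model("large-v2")

def transcribe_ar(path: str) -> str:
    # task="transcribe" keeps Arabic output rather than translating to English
    result = model.transcribe(path, language="ar", task="transcribe")
    return result["text"].strip()

refs = ["...reference transcript..."]      # gold transcripts (placeholder)
hyps = [transcribe_ar("clip_0001.wav")]    # model outputs (placeholder path)
print("WER:", jiwer.wer(refs, hyps))
```

Few-shot and full finetuning would then reuse the same WER harness against finetuned checkpoints.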
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 14:09:25 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Talafha",
"Bashar",
""
],
[
"Waheed",
"Abdul",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
] |
new_dataset
| 0.997088 |
2306.03050
|
Yu-Hsuan Ho
|
Yu-Hsuan Ho, Cheng-Chun Lee, Nicholas D. Diaz, Samuel D. Brody, and
Ali Mostafavi
|
ELEV-VISION: Automated Lowest Floor Elevation Estimation from Segmenting
Street View Images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose an automated lowest floor elevation (LFE) estimation algorithm
based on computer vision techniques to leverage the latent information in
street view images. Flood depth-damage models use a combination of LFE and
flood depth for determining flood risk and extent of damage to properties. We
used image segmentation for detecting door bottoms and roadside edges from
Google Street View images. Because equirectangular projection represents
horizontal and vertical angles at constant pixel spacing, the pitch angle from
the camera to the door bottom can be extracted directly from image coordinates.
The depth from the camera to the door bottom was obtained from the depth map
paired with the Google Street View image. LFEs were calculated from the pitch
angle and the depth. The testbed for application of the proposed method is
Meyerland (Harris County, Texas). The results show that the proposed method achieved a mean
absolute error of 0.190 m (1.18 %) in estimating LFE. The height difference
between the street and the lowest floor (HDSL) was estimated to provide
information for flood damage estimation. The proposed automatic LFE estimation
algorithm using Street View images and image segmentation provides a rapid and
cost-effective method for LFE estimation compared with surveys using total
station theodolites and unmanned aerial systems. By obtaining more accurate and
up-to-date LFE data using the proposed method, city planners, emergency
planners and insurance companies could make a more precise estimation of flood
damage.
|
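The geometry described in the abstract reduces to a short computation; a sketch follows, in which treating the paired depth value as the slant range from the camera is our assumption:

```python
import math

def pitch_from_row(v: float, img_h: int) -> float:
    """Equirectangular images map vertical angle linearly to pixel rows:
    row 0 is +90 deg (up), the middle row is 0 deg, the bottom is -90 deg."""
    return (0.5 - v / img_h) * math.pi

def lowest_floor_elevation(v_door, img_h, depth_m, camera_elev_m):
    """LFE from the pitch angle to the detected door bottom plus depth."""
    pitch = pitch_from_row(v_door, img_h)     # negative when below horizon
    return camera_elev_m + depth_m * math.sin(pitch)

# Door bottom 60% down a 4096-px-tall panorama, 8 m away, camera at 12.5 m:
print(lowest_floor_elevation(0.6 * 4096, 4096, 8.0, 12.5))  # ~10.03 m
```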
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 17:22:27 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Ho",
"Yu-Hsuan",
""
],
[
"Lee",
"Cheng-Chun",
""
],
[
"Diaz",
"Nicholas D.",
""
],
[
"Brody",
"Samuel D.",
""
],
[
"Mostafavi",
"Ali",
""
]
] |
new_dataset
| 0.978785 |
2306.03090
|
Rose Wang
|
Rose E. Wang, Dorottya Demszky
|
Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For
Scoring and Providing Actionable Insights on Classroom Instruction
|
In the Proceedings of Innovative Use of NLP for Building Educational
Applications 2023; The code and model outputs are open-sourced here:
https://github.com/rosewang2008/zero-shot-teacher-feedback
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Coaching, which involves classroom observation and expert feedback, is a
widespread and fundamental part of teacher training. However, the majority of
teachers do not have access to consistent, high quality coaching due to limited
resources and access to expertise. We explore whether generative AI could
become a cost-effective complement to expert feedback by serving as an
automated teacher coach. In doing so, we propose three teacher coaching tasks
for generative AI: (A) scoring transcript segments based on classroom
observation instruments, (B) identifying highlights and missed opportunities
for good instructional strategies, and (C) providing actionable suggestions for
eliciting more student reasoning. We recruit expert math teachers to evaluate
the zero-shot performance of ChatGPT on each of these tasks for elementary math
classroom transcripts. Our results reveal that ChatGPT generates responses that
are relevant to improving instruction, but they are often not novel or
insightful. For example, 82% of the model's suggestions point to places in the
transcript where the teacher is already implementing that suggestion. Our work
highlights the challenges of producing insightful, novel and truthful feedback
for teachers while paving the way for future research to address these
obstacles and improve the capacity of generative AI to coach teachers.
|
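Task (A), zero-shot scoring of transcript segments, can be sketched with the 2023-era `openai` client; the rubric wording, model choice, and score scale below are placeholders, not the paper's prompts:

```python
import openai

openai.api_key = "YOUR_KEY"  # placeholder

def score_segment(transcript_segment: str, rubric_item: str) -> str:
    prompt = (
        f"Observation rubric item: {rubric_item}\n"
        f"Classroom transcript segment:\n{transcript_segment}\n\n"
        "Score this segment on the item from 1 (low) to 3 (high) and "
        "briefly justify the score."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic scoring for reproducibility
    )
    return resp["choices"][0]["message"]["content"]
```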
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 17:59:21 GMT"
}
] | 2023-06-06T00:00:00 |
[
[
"Wang",
"Rose E.",
""
],
[
"Demszky",
"Dorottya",
""
]
] |
new_dataset
| 0.962341 |
2005.11177
|
Muhammad Imran
|
Umair Qazi, Muhammad Imran, Ferda Ofli
|
GeoCoV19: A Dataset of Hundreds of Millions of Multilingual COVID-19
Tweets with Location Information
|
10 pages, 5 figures, accepted at ACM SIGSPATIAL Special May 2020
|
SIGSPATIAL Special 12, 1 (March 2020), 6-15
|
10.1145/3404820.3404823
| null |
cs.SI cs.CL cs.CY cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The past several years have witnessed a huge surge in the use of social media
platforms during mass convergence events such as health emergencies, natural or
human-induced disasters. These non-traditional data sources are becoming vital
for disease forecasts and surveillance when preparing for epidemic and pandemic
outbreaks. In this paper, we present GeoCoV19, a large-scale Twitter dataset
containing more than 524 million multilingual tweets posted over a period of 90
days starting February 1, 2020. Moreover, we employ a gazetteer-based approach to
infer the geolocation of tweets. We postulate that this large-scale,
multilingual, geolocated social media data can empower the research communities
to evaluate how societies are collectively coping with this unprecedented
global crisis as well as to develop computational methods to address challenges
such as identifying fake news, understanding communities' knowledge gaps,
building disease forecast and surveillance models, among others.
|
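A toy version of gazetteer-based geolocation inference is sketched below: normalize a user-reported location string and look its parts up in a name-to-place table. The gazetteer contents are illustrative only:

```python
# name -> (country, latitude, longitude); illustrative entries only
GAZETTEER = {
    "doha": ("Qatar", 25.2854, 51.5310),
    "new york": ("United States", 40.7128, -74.0060),
    "london": ("United Kingdom", 51.5074, -0.1278),
}

def infer_location(user_location: str):
    parts = user_location.lower().replace(",", " , ").split(" , ")
    for part in parts:
        hit = GAZETTEER.get(part.strip())
        if hit:
            return hit
    return None

print(infer_location("New York, USA"))  # ('United States', 40.7128, -74.006)
```

Real pipelines would add disambiguation (population priors, country context from the tweet), but the lookup structure is the core idea.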
[
{
"version": "v1",
"created": "Fri, 22 May 2020 13:30:42 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Qazi",
"Umair",
""
],
[
"Imran",
"Muhammad",
""
],
[
"Ofli",
"Ferda",
""
]
] |
new_dataset
| 0.999819 |
2108.12828
|
Firoj Alam
|
Firoj Alam, Tanvirul Alam, Md. Arid Hasan, Abul Hasnat, Muhammad
Imran, Ferda Ofli
|
MEDIC: A Multi-Task Learning Dataset for Disaster Image Classification
|
Multi-task Learning, Social media images, Image Classification,
Natural disasters, Crisis Informatics, Deep learning, Dataset
|
Neural Computing and Applications 35, 2609-2632 (2023)
|
10.1007/s00521-022-07717-0
| null |
cs.CV cs.CY cs.LG cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent research in disaster informatics demonstrates a practical and
important use case of artificial intelligence to save human lives and suffering
during natural disasters based on social media contents (text and images).
While notable progress has been made using texts, research on exploiting the
images remains relatively under-explored. To advance image-based approaches, we
propose MEDIC (Available at: https://crisisnlp.qcri.org/medic/index.html),
which is the largest social media image classification dataset for humanitarian
response consisting of 71,198 images to address four different tasks in a
multi-task learning setup. This is the first dataset of its kind: social media
images, disaster response, and multi-task learning research. An important
property of this dataset is its high potential to facilitate research on
multi-task learning, which has recently received much interest from the machine
learning community and has shown remarkable results in terms of memory,
inference speed, performance, and generalization capability. Therefore, the
proposed dataset is an important resource for advancing image-based disaster
management and multi-task machine learning research. We experiment with
different deep learning architectures and report promising results, which are
above the majority baselines for all tasks. Along with the dataset, we also
release all relevant scripts (https://github.com/firojalam/medic).
|
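A shared-encoder multi-task model for the four MEDIC tasks can be sketched as below; the backbone choice and head sizes are our assumptions for illustration:

```python
import torch.nn as nn
from torchvision import models

class MedicMultiTask(nn.Module):
    """One shared image encoder with a linear head per task; training
    typically sums a cross-entropy loss over the four heads."""
    def __init__(self, n_classes=(2, 2, 7, 4)):   # per-task class counts (assumed)
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # reuse trunk, drop classifier
        self.backbone = backbone
        self.heads = nn.ModuleList(nn.Linear(feat_dim, n) for n in n_classes)

    def forward(self, x):
        f = self.backbone(x)
        return [head(f) for head in self.heads]    # one logit set per task
```

Sharing the trunk is what yields the memory and inference-speed benefits the abstract mentions, since one forward pass serves all four tasks.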
[
{
"version": "v1",
"created": "Sun, 29 Aug 2021 11:55:50 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Sep 2021 20:03:26 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Dec 2021 19:51:05 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Jun 2022 19:39:41 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Alam",
"Firoj",
""
],
[
"Alam",
"Tanvirul",
""
],
[
"Hasan",
"Md. Arid",
""
],
[
"Hasnat",
"Abul",
""
],
[
"Imran",
"Muhammad",
""
],
[
"Ofli",
"Ferda",
""
]
] |
new_dataset
| 0.999858 |
2110.03664
|
Muhammad Imran
|
Muhammad Imran, Umair Qazi, Ferda Ofli
|
TBCOV: Two Billion Multilingual COVID-19 Tweets with Sentiment, Entity,
Geo, and Gender Labels
|
20 pages, 13 figures, 8 tables
|
Data. 2022; 7(1):8
|
10.3390/data7010008
| null |
cs.SI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The widespread usage of social networks during mass convergence events, such
as health emergencies and disease outbreaks, provides instant access to
citizen-generated data that carry rich information about public opinions,
sentiments, urgent needs, and situational reports. Such information can help
authorities understand the emergent situation and react accordingly. Moreover,
social media plays a vital role in tackling misinformation and disinformation.
This work presents TBCOV, a large-scale Twitter dataset comprising more than
two billion multilingual tweets related to the COVID-19 pandemic collected
worldwide over a continuous period of more than one year. More importantly,
several state-of-the-art deep learning models are used to enrich the data with
important attributes, including sentiment labels, named-entities (e.g.,
mentions of persons, organizations, locations), user types, and gender
information. Last but not least, a geotagging method is proposed to assign
country, state, county, and city information to tweets, enabling a myriad of
data analysis tasks to understand real-world issues. Our sentiment and trend
analyses reveal interesting insights and confirm TBCOV's broad coverage of
important topics.
|
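One of the enrichment steps, sentiment labeling, can be sketched with an off-the-shelf multilingual model; the checkpoint named here is a common public one, not necessarily the one used for TBCOV:

```python
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="cardiffnlp/twitter-xlm-roberta-base-sentiment")

tweets = ["Stay safe everyone, hospitals are overwhelmed."]
for tweet, out in zip(tweets, clf(tweets)):
    print(out["label"], round(out["score"], 3), "|", tweet)
```

The same pattern (batch inference, attach the label as a new column) applies to the named-entity, user-type, and gender attributes described above.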
[
{
"version": "v1",
"created": "Mon, 4 Oct 2021 06:17:12 GMT"
}
] | 2023-06-05T00:00:00 |
[
[
"Imran",
"Muhammad",
""
],
[
"Qazi",
"Umair",
""
],
[
"Ofli",
"Ferda",
""
]
] |
new_dataset
| 0.999856 |