id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
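The records below are arXiv paper metadata rows flattened from the viewer table above. As a minimal sketch of how such records could be loaded and filtered programmatically (the file name `arxiv_new_dataset.jsonl` is a hypothetical placeholder; the field names follow the schema above):

```python
import pandas as pd

# Hypothetical file: one JSON object per line, with the fields listed in the
# schema above (id, submitter, authors, title, ..., prediction, probability).
df = pd.read_json("arxiv_new_dataset.jsonl", lines=True)

# Keep only confident "new_dataset" predictions.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]

# `authors_parsed` is a list of [last, first, suffix] triples per record.
confident = confident.assign(
    first_author=confident["authors_parsed"].map(
        lambda authors: " ".join(part for part in reversed(authors[0]) if part)
    )
)
print(confident[["id", "title", "first_author", "probability"]].head())
```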
2305.11024
|
Xinyu Zhao
|
Xinyu Zhao, Sa Huang, Wei Pang, You Zhou
|
CDIDN: A Registration Model with High Deformation Impedance Capability
for Long-Term Tracking of Pulmonary Lesion Dynamics
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of registration for medical CT images from a novel
perspective -- sensitivity to the degree of deformation in CT images. Although
some learning-based methods have shown success in terms of average accuracy,
their ability to handle regions with local large deformation (LLD) may be
significantly weaker than on regions with minor deformation.
This motivates our research into this issue. Two main causes of LLDs are organ
motion and changes in tissue structure, with the latter often being a long-term
process. In this paper, we propose a novel registration model called
Cascade-Dilation Inter-Layer Differential Network (CDIDN), which exhibits both
high deformation impedance capability (DIC) and accuracy. CDIDN improves its
resilience to LLDs in CT images by enhancing LLDs in the displacement field
(DF). It uses a feature-based progressive decomposition of LLDs, blending
feature flows of different levels into a main flow in a top-down manner. It
leverages Inter-Layer Differential Module (IDM) at each level to locally refine
the main flow and globally smooth the feature flow, and also integrates feature
velocity fields that can effectively handle feature deformations of various
degrees. We assess CDIDN using lungs as representative organs with large
deformation. Our findings show that IDM significantly enhances LLDs of the DF,
thereby improving the DIC and accuracy of the model. Compared with other
outstanding learning-based methods, CDIDN exhibits the best DIC and excellent
accuracy. Based on vessel enhancement and the enhanced LLDs of the DF, we propose a
novel method to accurately track the appearance, disappearance, enlargement,
and shrinkage of pulmonary lesions, which effectively addresses the detection
of early and peripheral lung lesions, as well as false enlargement, false
shrinkage, and mutilation of lesions.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 15:05:55 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 12:45:44 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Zhao",
"Xinyu",
""
],
[
"Huang",
"Sa",
""
],
[
"Pang",
"Wei",
""
],
[
"Zhou",
"You",
""
]
] |
new_dataset
| 0.995615 |
2305.11176
|
Siyuan Huang
|
Siyuan Huang, Zhengkai Jiang, Hao Dong, Yu Qiao, Peng Gao, Hongsheng
Li
|
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions
with Large Language Model
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Foundation models have made significant strides in various applications,
including text-to-image generation, panoptic segmentation, and natural language
processing. This paper presents Instruct2Act, a framework that utilizes Large
Language Models to map multi-modal instructions to sequential actions for
robotic manipulation tasks. Specifically, Instruct2Act employs an LLM to
generate Python programs that constitute a comprehensive perception, planning,
and action loop for robotic tasks. In the perception section, pre-defined APIs
are used to access multiple foundation models where the Segment Anything Model
(SAM) accurately locates candidate objects, and CLIP classifies them. In this
way, the framework leverages the expertise of foundation models and robotic
abilities to convert complex high-level instructions into precise policy codes.
Our approach is adjustable and flexible in accommodating various instruction
modalities and input types and catering to specific task demands. We validated
the practicality and efficiency of our approach by assessing it on robotic
tasks in different scenarios within tabletop manipulation domains. Furthermore,
our zero-shot method outperformed many state-of-the-art learning-based policies
in several tasks. The code for our proposed approach is available at
https://github.com/OpenGVLab/Instruct2Act, serving as a robust benchmark for
high-level robotic instruction tasks with assorted modality inputs.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 17:59:49 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 15:24:17 GMT"
},
{
"version": "v3",
"created": "Wed, 24 May 2023 04:17:34 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Huang",
"Siyuan",
""
],
[
"Jiang",
"Zhengkai",
""
],
[
"Dong",
"Hao",
""
],
[
"Qiao",
"Yu",
""
],
[
"Gao",
"Peng",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.997433 |
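As a rough illustration of the perception-plan-act loop the Instruct2Act abstract describes, the sketch below has an LLM emit a Python program over perception APIs and executes it. The helper names (`locate_candidates`, `classify_crop`) and the prompt are hypothetical stand-ins for the paper's SAM- and CLIP-backed APIs, not their real interfaces.

```python
from typing import Callable

# Hypothetical stand-ins for the perception APIs the paper wires in
# (SAM to locate candidate objects, CLIP to classify them).
def locate_candidates(image) -> list:            # stand-in for a SAM-based API
    raise NotImplementedError

def classify_crop(image, crop, labels) -> str:   # stand-in for a CLIP-based API
    raise NotImplementedError

PERCEPTION_APIS = {"locate_candidates": locate_candidates,
                   "classify_crop": classify_crop}

def instruct_to_act(llm: Callable[[str], str], instruction: str, image) -> None:
    """Ask the LLM for a Python program over the perception APIs, then run it."""
    prompt = (
        "You can call locate_candidates(image) and classify_crop(image, crop, labels).\n"
        f"Write a Python program that performs: {instruction}\n"
    )
    program = llm(prompt)
    # Execute the generated policy code with the APIs in scope
    # (a real system would sandbox this).
    exec(program, {**PERCEPTION_APIS, "image": image})
```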
2305.11651
|
Yulin Shao
|
Pengfei Shen, Yulin Shao, Haoyuan Pan, Lu Lu
|
Channel Cycle Time: A New Measure of Short-term Fairness
| null | null | null | null |
cs.IT cs.MA cs.PF cs.SY eess.SY math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper puts forth a new metric, dubbed channel cycle time, to measure the
short-term fairness of communication networks. Channel cycle time characterizes
the average duration between two successful transmissions of a user, during
which all other users have successfully accessed the channel at least once.
Compared with existing short-term fairness measures, channel cycle time
provides a comprehensive picture of the transient behavior of communication
networks, and is a single real value that is easy to compute. To demonstrate
the effectiveness of our new approach, we analytically characterize the channel
cycle time of slotted Aloha and CSMA/CA. It is shown that CSMA/CA is a
fairer protocol than slotted Aloha in the short term. Channel cycle time can serve as
a promising design principle for future communication networks, placing greater
emphasis on optimizing short-term behaviors like fairness, delay, and jitter.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 12:58:42 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 00:23:32 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Shen",
"Pengfei",
""
],
[
"Shao",
"Yulin",
""
],
[
"Pan",
"Haoyuan",
""
],
[
"Lu",
"Lu",
""
]
] |
new_dataset
| 0.996227 |
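The sketch below computes per-user channel cycle times from a log of successful transmissions, under one plausible reading of the definition given in the abstract; the paper's formal definition may differ in edge cases.

```python
def channel_cycle_times(successes, users):
    """Per-user cycle durations from a log of (time, user) successes.
    A cycle for user u runs from one success of u to the next success of u
    by which every other user has succeeded at least once."""
    users = set(users)
    cycle_start, others_seen = {}, {}
    durations = {u: [] for u in users}
    for t, u in successes:
        for v in cycle_start:                 # u counts toward others' open cycles
            if v != u:
                others_seen[v].add(u)
        if u not in cycle_start:              # open u's first cycle
            cycle_start[u], others_seen[u] = t, set()
        elif others_seen[u] >= users - {u}:   # all others seen: close, reopen
            durations[u].append(t - cycle_start[u])
            cycle_start[u], others_seen[u] = t, set()
        # otherwise the cycle stays open until all other users have appeared
    return durations

# e.g. channel_cycle_times([(0, "A"), (1, "B"), (2, "A")], ["A", "B"])
# -> {"A": [2], "B": []}
```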
2305.11938
|
Jonathan Clark
|
Sebastian Ruder, Jonathan H. Clark, Alexander Gutkin, Mihir Kale, Min
Ma, Massimo Nicosia, Shruti Rijhwani, Parker Riley, Jean-Michel A. Sarr,
Xinyi Wang, John Wieting, Nitish Gupta, Anna Katanova, Christo Kirov, Dana L.
Dickinson, Brian Roark, Bidisha Samanta, Connie Tao, David I. Adelani, Vera
Axelrod, Isaac Caswell, Colin Cherry, Dan Garrette, Reeve Ingle, Melvin
Johnson, Dmitry Panteleev, Partha Talukdar
|
XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented
Languages
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Data scarcity is a crucial issue for the development of highly multilingual
NLP systems. Yet for many under-represented languages (ULs) -- languages for
which NLP research is particularly far behind in meeting user needs -- it is
feasible to annotate small amounts of data. Motivated by this, we propose
XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather
than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by
speakers of high-resource languages; and its focus on under-represented
languages where this scarce-data scenario tends to be most realistic. XTREME-UP
evaluates the capabilities of language models across 88 under-represented
languages over 9 key user-centric technologies including ASR, OCR, MT, and
information access tasks that are of general utility. We create new datasets
for OCR, autocomplete, semantic parsing, and transliteration, and build on and
refine existing datasets for other tasks. XTREME-UP provides methodology for
evaluating many modeling scenarios including text-only, multi-modal (vision,
audio, and text), supervised parameter tuning, and in-context learning. We
evaluate commonly used models on the benchmark. We release all code and scripts
to train and evaluate models.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 18:00:03 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 06:09:28 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Ruder",
"Sebastian",
""
],
[
"Clark",
"Jonathan H.",
""
],
[
"Gutkin",
"Alexander",
""
],
[
"Kale",
"Mihir",
""
],
[
"Ma",
"Min",
""
],
[
"Nicosia",
"Massimo",
""
],
[
"Rijhwani",
"Shruti",
""
],
[
"Riley",
"Parker",
""
],
[
"Sarr",
"Jean-Michel A.",
""
],
[
"Wang",
"Xinyi",
""
],
[
"Wieting",
"John",
""
],
[
"Gupta",
"Nitish",
""
],
[
"Katanova",
"Anna",
""
],
[
"Kirov",
"Christo",
""
],
[
"Dickinson",
"Dana L.",
""
],
[
"Roark",
"Brian",
""
],
[
"Samanta",
"Bidisha",
""
],
[
"Tao",
"Connie",
""
],
[
"Adelani",
"David I.",
""
],
[
"Axelrod",
"Vera",
""
],
[
"Caswell",
"Isaac",
""
],
[
"Cherry",
"Colin",
""
],
[
"Garrette",
"Dan",
""
],
[
"Ingle",
"Reeve",
""
],
[
"Johnson",
"Melvin",
""
],
[
"Panteleev",
"Dmitry",
""
],
[
"Talukdar",
"Partha",
""
]
] |
new_dataset
| 0.997178 |
2305.12524
|
Wenhu Chen
|
Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu
Xu, Xinyi Wang, Tony Xia
|
TheoremQA: A Theorem-driven Question Answering dataset
|
Work in Progress
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent LLMs like GPT-4 and PaLM-2 have made tremendous progress in
solving fundamental math problems like GSM8K by achieving over 90% accuracy.
However, their capabilities to solve more challenging math problems which
require domain-specific knowledge (i.e. theorem) have yet to be investigated.
In this paper, we introduce TheoremQA, the first theorem-driven
question-answering dataset designed to evaluate AI models' capabilities to
apply theorems to solve challenging science problems. TheoremQA is curated by
domain experts and contains 800 high-quality questions covering 350 theorems
(e.g. Taylor's theorem, Lagrange's theorem, Huffman coding, Quantum Theorem,
Elasticity Theorem, etc) from Math, Physics, EE&CS, and Finance. We evaluate a
wide spectrum of 16 large language and code models with different prompting
strategies like Chain-of-Thoughts and Program-of-Thoughts. We found that
GPT-4's capabilities to solve these problems are unparalleled, achieving an
accuracy of 51% with Program-of-Thoughts Prompting. All the existing
open-sourced models are below 15%, barely surpassing the random-guess baseline.
Given the diversity and broad coverage of TheoremQA, we believe it can be used
as a better benchmark to evaluate LLMs' capabilities to solve challenging
science problems. The data and code are released in
https://github.com/wenhuchen/TheoremQA.
|
[
{
"version": "v1",
"created": "Sun, 21 May 2023 17:51:35 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 22:35:20 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Chen",
"Wenhu",
""
],
[
"Yin",
"Ming",
""
],
[
"Ku",
"Max",
""
],
[
"Lu",
"Pan",
""
],
[
"Wan",
"Yixin",
""
],
[
"Ma",
"Xueguang",
""
],
[
"Xu",
"Jianyu",
""
],
[
"Wang",
"Xinyi",
""
],
[
"Xia",
"Tony",
""
]
] |
new_dataset
| 0.998039 |
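As background on the Program-of-Thoughts prompting mentioned in the abstract, here is a minimal sketch in which the model writes Python that computes the answer; the prompt wording and the `ans` convention are illustrative assumptions, not TheoremQA's actual harness.

```python
def program_of_thoughts(llm, question: str) -> float:
    """Minimal Program-of-Thoughts-style loop: ask the model for Python code
    whose final answer is stored in a variable named `ans`, then execute it."""
    prompt = (
        "Write Python code that computes the answer to the question below.\n"
        "Store the final numeric answer in a variable named `ans`.\n\n"
        f"Question: {question}\n"
    )
    code = llm(prompt)
    scope: dict = {}
    exec(code, scope)   # a real evaluation harness should sandbox this
    return scope["ans"]
```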
2305.13117
|
Michael Sejr Schlichtkrull
|
Michael Schlichtkrull, Zhijiang Guo, Andreas Vlachos
|
AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from
the Web
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing datasets for automated fact-checking have substantial limitations,
such as relying on artificial claims, lacking annotations for evidence and
intermediate reasoning, or including evidence published after the claim. In
this paper we introduce AVeriTeC, a new dataset of 4,568 real-world claims
covering fact-checks by 50 different organizations. Each claim is annotated
with question-answer pairs supported by evidence available online, as well as
textual justifications explaining how the evidence combines to produce a
verdict. Through a multi-round annotation process, we avoid common pitfalls
including context dependence, evidence insufficiency, and temporal leakage, and
reach a substantial inter-annotator agreement of $\kappa=0.619$ on verdicts. We
develop a baseline as well as an evaluation scheme for verifying claims through
several question-answering steps against the open web.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 15:17:18 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 10:44:08 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Schlichtkrull",
"Michael",
""
],
[
"Guo",
"Zhijiang",
""
],
[
"Vlachos",
"Andreas",
""
]
] |
new_dataset
| 0.999851 |
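For reference on the reported inter-annotator agreement of $\kappa=0.619$, a generic Cohen's kappa computation looks like the sketch below; the paper's exact agreement statistic and verdict labels may differ.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two annotators' verdict lists: observed agreement
    corrected for the agreement expected by chance."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)       # chance agreement
    return (p_o - p_e) / (1 - p_e)

# e.g. cohen_kappa(["Supported", "Refuted", "Refuted"],
#                  ["Supported", "Refuted", "Supported"])
```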
2305.13162
|
Marc Brooker
|
Marc Brooker and Mike Danilov and Chris Greenwood and Phil Piwonka
|
On-demand Container Loading in AWS Lambda
| null | null | null | null |
cs.DC cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
AWS Lambda is a serverless event-driven compute service, part of a category
of cloud compute offerings sometimes called Function-as-a-service (FaaS). When
we first released AWS Lambda, functions were limited to 250MB of code and
dependencies, packaged as a simple compressed archive. In 2020, we released
support for deploying container images as large as 10GiB as Lambda functions,
allowing customers to bring much larger code bases and sets of dependencies to
Lambda. Supporting larger packages, while still meeting Lambda's goals of rapid
scale (adding up to 15,000 new containers per second for a single customer, and
much more in aggregate), high request rate (millions of requests per second),
high scale (millions of unique workloads), and low start-up times (as low as
50ms) presented a significant challenge.
We describe the storage and caching system we built, optimized for delivering
container images on-demand, and our experiences designing, building, and
operating it at scale. We focus on challenges around security, efficiency,
latency, and cost, and how we addressed these challenges in a system that
combines caching, deduplication, convergent encryption, erasure coding, and
block-level demand loading.
Since building this system, it has reliably processed hundreds of trillions
of Lambda invocations for over a million AWS customers, and has shown excellent
resilience to load and infrastructure failures.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 15:48:37 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 13:21:11 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Brooker",
"Marc",
""
],
[
"Danilov",
"Mike",
""
],
[
"Greenwood",
"Chris",
""
],
[
"Piwonka",
"Phil",
""
]
] |
new_dataset
| 0.994624 |
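Among the techniques the abstract lists, convergent encryption is what lets a system deduplicate encrypted chunks: the key is derived from the chunk's own content, so identical plaintexts yield identical ciphertexts. A minimal sketch of the idea (not Lambda's production scheme, which also layers erasure coding and block-level demand loading):

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(chunk: bytes) -> tuple[bytes, bytes]:
    """Encrypt a chunk under a key derived from its own content, so identical
    chunks encrypt identically and can be deduplicated after encryption."""
    key = hashlib.sha256(chunk).digest()        # content-derived 32-byte key
    nonce = hashlib.sha256(key).digest()[:12]   # deterministic nonce for dedup
    ciphertext = AESGCM(key).encrypt(nonce, chunk, None)
    return key, ciphertext

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce = hashlib.sha256(key).digest()[:12]
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```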
2305.14352
|
Trevor Standley
|
Trevor Standley, Ruohan Gao, Dawn Chen, Jiajun Wu, Silvio Savarese
|
An Extensible Multimodal Multi-task Object Dataset with Materials
|
ICLR 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present EMMa, an Extensible, Multimodal dataset of Amazon product listings
that contains rich Material annotations. It contains more than 2.8 million
objects, each with image(s), listing text, mass, price, product ratings, and
position in Amazon's product-category taxonomy. We also design a comprehensive
taxonomy of 182 physical materials (e.g., Plastic $\rightarrow$ Thermoplastic
$\rightarrow$ Acrylic). Objects are annotated with one or more materials from
this taxonomy. With the numerous attributes available for each object, we
develop a Smart Labeling framework to quickly add new binary labels to all
objects with very little manual labeling effort, making the dataset extensible.
Each object attribute in our dataset can be included in either the model inputs
or outputs, leading to combinatorial possibilities in task configurations. For
example, we can train a model to predict the object category from the listing
text, or the mass and price from the product listing image. EMMa offers a new
benchmark for multi-task learning in computer vision and NLP, and allows
practitioners to efficiently add new tasks and object attributes at scale.
|
[
{
"version": "v1",
"created": "Sat, 29 Apr 2023 09:13:40 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Standley",
"Trevor",
""
],
[
"Gao",
"Ruohan",
""
],
[
"Chen",
"Dawn",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Savarese",
"Silvio",
""
]
] |
new_dataset
| 0.99985 |
2305.14384
|
Lora Aroyo
|
Alicia Parrish, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max
Bartolo, Oana Inel, Juan Ciro, Rafael Mosquera, Addison Howard, Will
Cukierski, D. Sculley, Vijay Janapa Reddi, Lora Aroyo
|
Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety
of Text-to-Image Models
| null | null | null | null |
cs.LG cs.AI cs.CR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The generative AI revolution in recent years has been spurred by an expansion
in compute power and data quantity, which together enable extensive
pre-training of powerful text-to-image (T2I) models. With their greater
capabilities to generate realistic and creative content, these T2I models like
DALL-E, MidJourney, Imagen or Stable Diffusion are reaching ever wider
audiences. Any unsafe behaviors inherited from pretraining on uncurated
internet-scraped datasets thus have the potential to cause wide-reaching harm,
for example, through generated images which are violent, sexually explicit, or
contain biased and derogatory stereotypes. Despite this risk of harm, we lack
systematic and structured evaluation datasets to scrutinize model behavior,
especially adversarial attacks that bypass existing safety filters. A typical
bottleneck in safety evaluation is achieving a wide coverage of different types
of challenging examples in the evaluation set, i.e., identifying 'unknown
unknowns' or long-tail problems. To address this need, we introduce the
Adversarial Nibbler challenge. The goal of this challenge is to crowdsource a
diverse set of failure modes and reward challenge participants for successfully
finding safety vulnerabilities in current state-of-the-art T2I models.
Ultimately, we aim to provide greater awareness of these issues and assist
developers in improving the future safety and reliability of generative AI
models. Adversarial Nibbler is a data-centric challenge, part of the DataPerf
challenge suite, organized and supported by Kaggle and MLCommons.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 15:02:40 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Parrish",
"Alicia",
""
],
[
"Kirk",
"Hannah Rose",
""
],
[
"Quaye",
"Jessica",
""
],
[
"Rastogi",
"Charvi",
""
],
[
"Bartolo",
"Max",
""
],
[
"Inel",
"Oana",
""
],
[
"Ciro",
"Juan",
""
],
[
"Mosquera",
"Rafael",
""
],
[
"Howard",
"Addison",
""
],
[
"Cukierski",
"Will",
""
],
[
"Sculley",
"D.",
""
],
[
"Reddi",
"Vijay Janapa",
""
],
[
"Aroyo",
"Lora",
""
]
] |
new_dataset
| 0.99973 |
2305.14392
|
Amogh Joshi
|
Amogh Joshi, Adarsh Kosta, Wachirawit Ponghiran, Manish Nagaraj,
Kaushik Roy
|
FEDORA: Flying Event Dataset fOr Reactive behAvior
| null | null | null | null |
cs.CV cs.ET cs.LG cs.NE cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability of living organisms to perform complex high-speed manoeuvres in
flight with a very small number of neurons and an incredibly low failure rate
highlights the efficacy of these resource-constrained biological systems.
Event-driven hardware has emerged, in recent years, as a promising avenue for
implementing complex vision tasks in resource-constrained environments.
Vision-based autonomous navigation and obstacle avoidance consists of several
independent but related tasks such as optical flow estimation, depth
estimation, Simultaneous Localization and Mapping (SLAM), object detection, and
recognition. To ensure coherence between these tasks, it is imperative that
they be trained on a single dataset. However, most existing datasets provide
only a selected subset of the required data. This makes inter-network coherence
difficult to achieve. Another limitation of existing datasets is the limited
temporal resolution they provide. To address these limitations, we present
FEDORA, a first-of-its-kind fully synthetic dataset for vision-based tasks,
with ground truths for depth, pose, ego-motion, and optical flow. FEDORA is the
first dataset to provide optical flow at three different frequencies: 10 Hz,
25 Hz, and 50 Hz.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 22:59:05 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Joshi",
"Amogh",
""
],
[
"Kosta",
"Adarsh",
""
],
[
"Ponghiran",
"Wachirawit",
""
],
[
"Nagaraj",
"Manish",
""
],
[
"Roy",
"Kaushik",
""
]
] |
new_dataset
| 0.999837 |
2305.14467
|
Anatol Garioud
|
Anatol Garioud, Apolline De Wit, Marc Poup\'ee, Marion Valette,
S\'ebastien Giordano, Boris Wattrelos
|
FLAIR #2: textural and temporal information for semantic segmentation
from multi-source optical imagery
| null | null |
10.13140/RG.2.2.30938.93128/1
| null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The FLAIR #2 dataset hereby presented includes two very distinct types of
data, which are exploited for a semantic segmentation task aimed at mapping
land cover. The data fusion workflow proposes the exploitation of the fine
spatial and textural information of very high spatial resolution (VHR)
mono-temporal aerial imagery and the temporal and spectral richness of high
spatial resolution (HR) time series of Copernicus Sentinel-2 satellite images.
The French National Institute of Geographical and Forest Information (IGN), in
response to the growing availability of high-quality Earth Observation (EO)
data, is actively exploring innovative strategies to integrate these data with
heterogeneous characteristics. IGN is therefore offering this dataset to
promote innovation and improve our knowledge of our territories.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 18:47:19 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Garioud",
"Anatol",
""
],
[
"De Wit",
"Apolline",
""
],
[
"Poupée",
"Marc",
""
],
[
"Valette",
"Marion",
""
],
[
"Giordano",
"Sébastien",
""
],
[
"Wattrelos",
"Boris",
""
]
] |
new_dataset
| 0.988244 |
2305.14470
|
Mark Van der Merwe
|
Mark Van der Merwe, Youngsun Wi, Dmitry Berenson, Nima Fazeli
|
Integrated Object Deformation and Contact Patch Estimation from
Visuo-Tactile Feedback
|
12 pages
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reasoning over the interplay between object deformation and force
transmission through contact is central to the manipulation of compliant
objects. In this paper, we propose Neural Deforming Contact Field (NDCF), a
representation that jointly models object deformations and contact patches from
visuo-tactile feedback using implicit representations. Representing the object
geometry and contact with the environment implicitly allows a single model to
predict contact patches of varying complexity. Additionally, learning geometry
and contact simultaneously allows us to enforce physical priors, such as
ensuring contacts lie on the surface of the object. We propose a neural network
architecture to learn an NDCF, and train it using simulated data. We then
demonstrate that the learned NDCF transfers directly to the real-world without
the need for fine-tuning. We benchmark our proposed approach against a baseline
representing geometry and contact patches with point clouds. We find that NDCF
performs better on simulated data and in transfer to the real-world.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 18:53:24 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Van der Merwe",
"Mark",
""
],
[
"Wi",
"Youngsun",
""
],
[
"Berenson",
"Dmitry",
""
],
[
"Fazeli",
"Nima",
""
]
] |
new_dataset
| 0.986492 |
2305.14471
|
Xuanyu Zhang
|
Xuanyu Zhang and Bingbing Li and Qing Yang
|
CGCE: A Chinese Generative Chat Evaluation Benchmark for General and
Financial Domains
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Generative chat models, such as ChatGPT and GPT-4, have revolutionized
natural language generation (NLG) by incorporating instructions and human
feedback to achieve significant performance improvements. However, the lack of
standardized evaluation benchmarks for chat models, particularly for Chinese
and domain-specific models, hinders their assessment and progress. To address
this gap, we introduce the Chinese Generative Chat Evaluation (CGCE) benchmark,
focusing on general and financial domains. The CGCE benchmark encompasses
diverse tasks, including 200 questions in the general domain and 150 specific
professional questions in the financial domain. Manual scoring evaluates
factors such as accuracy, coherence, expression clarity, and completeness. The
CGCE benchmark provides researchers with a standardized framework to assess and
compare Chinese generative chat models, fostering advancements in NLG research.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 18:54:15 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Zhang",
"Xuanyu",
""
],
[
"Li",
"Bingbing",
""
],
[
"Yang",
"Qing",
""
]
] |
new_dataset
| 0.998652 |
2305.14480
|
Zihao Fu
|
Zihao Fu, Meiru Zhang, Zaiqiao Meng, Yannan Shen, Anya Okhmatovskaia,
David Buckeridge, Nigel Collier
|
BAND: Biomedical Alert News Dataset
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Infectious disease outbreaks continue to pose a significant threat to human
health and well-being. To improve disease surveillance and understanding of
disease spread, several surveillance systems have been developed to monitor
daily news alerts and social media. However, existing systems lack thorough
epidemiological analysis in relation to corresponding alerts or news, largely
due to the scarcity of well-annotated report data. To address this gap, we
introduce the Biomedical Alert News Dataset (BAND), which includes 1,508
samples from existing reported news articles, open emails, and alerts, as well
as 30 epidemiology-related questions. These questions necessitate the model's
expert reasoning abilities, thereby offering valuable insights into the
outbreak of the disease. The BAND dataset brings new challenges to the NLP
world, requiring better disguise capability of the content and the ability to
infer important information. We provide several benchmark tasks, including
Named Entity Recognition (NER), Question Answering (QA), and Event Extraction
(EE), to show how existing models are capable of handling these tasks in the
epidemiology domain. To the best of our knowledge, the BAND corpus is the
largest corpus of well-annotated biomedical outbreak alert news with
elaborately designed questions, making it a valuable resource for
epidemiologists and NLP researchers alike.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 19:21:00 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Fu",
"Zihao",
""
],
[
"Zhang",
"Meiru",
""
],
[
"Meng",
"Zaiqiao",
""
],
[
"Shen",
"Yannan",
""
],
[
"Okhmatovskaia",
"Anya",
""
],
[
"Buckeridge",
"David",
""
],
[
"Collier",
"Nigel",
""
]
] |
new_dataset
| 0.999797 |
2305.14485
|
Arijit Khan
|
Arijit Khan
|
Knowledge Graphs Querying
|
accepted at ACM SIGMOD Record 2023
|
ACM SIGMOD Record 2023
| null | null |
cs.DB cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge graphs (KGs) such as DBpedia, Freebase, YAGO, Wikidata, and NELL
were constructed to store large-scale, real-world facts as (subject, predicate,
object) triples -- that can also be modeled as a graph, where a node (a subject
or an object) represents an entity with attributes, and a directed edge (a
predicate) is a relationship between two entities. Querying KGs is critical in
web search, question answering (QA), semantic search, personal assistants, fact
checking, and recommendation. While significant progress has been made on KG
construction and curation, thanks to deep learning we have recently seen a
surge of research on KG querying and QA. The objectives of our survey are
two-fold. First, research on KG querying has been conducted by several
communities, such as databases, data mining, semantic web, machine learning,
information retrieval, and natural language processing (NLP), with different
focus and terminologies; and also in diverse topics ranging from graph
databases, query languages, join algorithms, graph patterns matching, to more
sophisticated KG embedding and natural language questions (NLQs). We aim at
uniting different interdisciplinary topics and concepts that have been
developed for KG querying. Second, many recent advances on KG and query
embedding, multimodal KG, and KG-QA come from deep learning, IR, NLP, and
computer vision domains. We identify important challenges of KG querying that
received less attention by graph databases, and by the DB community in general,
e.g., incomplete KG, semantic matching, multimodal data, and NLQs. We conclude
by discussing interesting opportunities for the data management community, for
instance, KG as a unified data model and vector-based query processing.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 19:32:42 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Khan",
"Arijit",
""
]
] |
new_dataset
| 0.997482 |
2305.14490
|
Xiang Zhang
|
Xiang Zhang, Yu Gu, Huan Yan, Yantong Wang, Mianxiong Dong, Kaoru Ota,
Fuji Ren, Yusheng Ji
|
Wital: A COTS WiFi Devices Based Vital Signs Monitoring System Using
NLOS Sensing Model
|
Accepted by IEEE THMS
|
IEEE Transactions on Human-Machine Systems,2023
|
10.1109/THMS.2023.3264247
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Vital sign (breathing and heartbeat) monitoring is essential for patient care
and sleep disease prevention. Most current solutions are based on wearable
sensors or cameras; however, the former could affect sleep quality, while the
latter often present privacy concerns. To address these shortcomings, we
propose Wital, a contactless vital sign monitoring system based on low-cost and
widespread commercial off-the-shelf (COTS) Wi-Fi devices. There are two
challenges that need to be overcome. First, the torso deformations caused by
breathing/heartbeats are weak. How can such deformations be effectively
captured? Second, movements such as turning over affect the accuracy of vital
sign monitoring. How can such detrimental effects be avoided? For the former,
we propose a non-line-of-sight (NLOS) sensing model for modeling the
relationship between the energy ratio of line-of-sight (LOS) to NLOS signals
and the vital sign monitoring capability using Ricean K theory and use this
model to guide the system construction to better capture the deformations
caused by breathing/heartbeats. For the latter, we propose a motion
segmentation method based on motion regularity detection that accurately
distinguishes respiration from other motions, and we remove periods that
include movements such as turning over to eliminate detrimental effects. We
have implemented and validated Wital on low-cost COTS devices. The experimental
results demonstrate the effectiveness of Wital in monitoring vital signs.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 19:38:40 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Zhang",
"Xiang",
""
],
[
"Gu",
"Yu",
""
],
[
"Yan",
"Huan",
""
],
[
"Wang",
"Yantong",
""
],
[
"Dong",
"Mianxiong",
""
],
[
"Ota",
"Kaoru",
""
],
[
"Ren",
"Fuji",
""
],
[
"Ji",
"Yusheng",
""
]
] |
new_dataset
| 0.99797 |
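Wital's NLOS sensing model builds on Ricean K theory, where K is the ratio of LOS power to scattered (NLOS) power. As background, a standard moment-based K estimate from received envelope samples looks like this; it is a generic sketch, not the paper's actual pipeline.

```python
import numpy as np

def ricean_k_moment_estimate(envelope: np.ndarray) -> float:
    """Moment-based Ricean K estimate from envelope samples R, using
    Var[R^2] / E[R^2]^2 = (2K + 1) / (K + 1)^2."""
    power = envelope.astype(float) ** 2
    gamma = power.var() / power.mean() ** 2
    root = np.sqrt(max(1.0 - gamma, 0.0))     # equals K / (K + 1)
    return root / (1.0 - root) if root < 1.0 else float("inf")
```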
2305.14522
|
Yutong Zhou
|
Yutong Zhou
|
Design a Delicious Lunchbox in Style
|
Accepted by WiCV @CVPR2023 (In Progress). Dataset:
https://github.com/Yutong-Zhou-cv/Bento800_Dataset
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a cyclic generative adversarial network with spatial-wise and
channel-wise attention modules for text-to-image synthesis. To accurately
depict and design scenes with multiple occluded objects, we design a
pre-trained ordering recovery model and a generative adversarial network to
predict layout and composite novel box lunch presentations. In the experiments,
we devise the Bento800 dataset to evaluate the performance of the text-to-image
synthesis model and the layout generation & image composition model. This paper
is a continuation of our previous work. We also present additional
experiments and qualitative performance comparisons to verify the effectiveness
of our proposed method. Bento800 dataset is available at
https://github.com/Yutong-Zhou-cv/Bento800_Dataset
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 05:16:12 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Zhou",
"Yutong",
""
]
] |
new_dataset
| 0.999273 |
2305.14531
|
David Mohaisen
|
Mohammed Alqadhi, Ali Alkinoon, Saeed Salem, David Mohaisen
|
Understanding the Country-Level Security of Free Content Websites and
their Hosting Infrastructure
|
10 pages, 2 figures, 4 tables
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper examines free content websites (FCWs) and premium content websites
(PCWs) in different countries, comparing them to general websites. The focus is
on the distribution of malicious websites and their correlation with the
national cyber security index (NCSI), which measures a country's cyber security
maturity and its ability to deter the hosting of such malicious websites. By
analyzing a dataset comprising 1,562 FCWs and PCWs, along with Alexa's top
million websites dataset sample, we discovered that a majority of the
investigated websites are hosted in the United States. Interestingly, the
United States has a relatively low NCSI, mainly due to a lower score in privacy
policy development. Similar patterns were observed for other countries with
varying NCSI criteria. Furthermore, we present the distribution of various
categories of FCWs and PCWs across countries. We identify the top hosting
countries for each category and provide the percentage of discovered malicious
websites in those countries. Ultimately, the goal of this study is to identify
regional vulnerabilities in hosting FCWs and guide policy improvements at the
country level to mitigate potential cyber threats.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 21:31:02 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Alqadhi",
"Mohammed",
""
],
[
"Alkinoon",
"Ali",
""
],
[
"Salem",
"Saeed",
""
],
[
"Mohaisen",
"David",
""
]
] |
new_dataset
| 0.999528 |
2305.14536
|
Jakub Macina
|
Jakub Macina, Nico Daheim, Sankalan Pal Chowdhury, Tanmay Sinha, Manu
Kapur, Iryna Gurevych, Mrinmaya Sachan
|
MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties
Grounded in Math Reasoning Problems
|
Jakub Macina, Nico Daheim, and Sankalan Pal Chowdhury contributed
equally to this work. Code and dataset available:
https://github.com/eth-nlped/mathdial
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Although automatic dialogue tutors hold great potential in making education
personalized and more accessible, research on such systems has been hampered by
a lack of sufficiently large and high-quality datasets. However, collecting
such datasets remains challenging, as recording tutoring sessions raises
privacy concerns and crowdsourcing leads to insufficient data quality. To
address this problem, we propose a framework to semi-synthetically generate
such dialogues by pairing real teachers with a large language model (LLM)
scaffolded to represent common student errors. In this paper, we describe our
ongoing efforts to use this framework to collect MathDial, a dataset of
currently ca. 1.5k tutoring dialogues grounded in multi-step math word
problems. We show that our dataset exhibits rich pedagogical properties,
focusing on guiding students using sense-making questions to let them explore
problems. Moreover, we outline that MathDial and its grounding annotations can
be used to finetune language models to be more effective tutors (and not just
solvers) and highlight remaining challenges that need to be addressed by the
research community. We will release our dataset publicly to foster research in
this socially important area of NLP.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 21:44:56 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Macina",
"Jakub",
""
],
[
"Daheim",
"Nico",
""
],
[
"Chowdhury",
"Sankalan Pal",
""
],
[
"Sinha",
"Tanmay",
""
],
[
"Kapur",
"Manu",
""
],
[
"Gurevych",
"Iryna",
""
],
[
"Sachan",
"Mrinmaya",
""
]
] |
new_dataset
| 0.999608 |
2305.14541
|
Eric Ruzomberka
|
Eric Ruzomberka and Yongkyu Jang and David J. Love and H. Vincent Poor
|
Adversarial Channels with O(1)-Bit Partial Feedback
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider point-to-point communication over $q$-ary adversarial channels
with partial noiseless feedback. In this setting, a sender Alice transmits $n$
symbols from a $q$-ary alphabet over a noisy forward channel to a receiver Bob,
while Bob sends feedback to Alice over a noiseless reverse channel. In the
forward channel, an adversary can inject both symbol errors and erasures up to
an error fraction $p \in [0,1]$ and erasure fraction $r \in [0,1]$,
respectively. In the reverse channel, Bob's feedback is partial such that he
can send at most $B(n) \geq 0$ bits during the communication session.
As a case study on minimal partial feedback, we initiate the study of the
$O(1)$-bit feedback setting in which $B$ is $O(1)$ in $n$. As our main result,
we provide a tight characterization of zero-error capacity under $O(1)$-bit
feedback for all $q \geq 2$, $p \in [0,1]$ and $r \in [0,1]$; we prove
this result via novel achievability and converse schemes inspired by recent
studies of causal adversarial channels without feedback. Perhaps surprisingly,
we show that $O(1)$-bits of feedback are sufficient to achieve the zero-error
capacity of the $q$-ary adversarial error channel with full feedback when the
error fraction $p$ is sufficiently small.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 21:51:38 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Ruzomberka",
"Eric",
""
],
[
"Jang",
"Yongkyu",
""
],
[
"Love",
"David J.",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.995825 |
2305.14564
|
Simeng Sun
|
Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, Mohit Iyyer
|
PEARL: Prompting Large Language Models to Plan and Execute Actions Over
Long Documents
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Strategies such as chain-of-thought prompting improve the performance of
large language models (LLMs) on complex reasoning tasks by decomposing input
examples into intermediate steps. However, it remains unclear how to apply such
methods to reason over long input documents, in which both the decomposition
and the output of each intermediate step are non-trivial to obtain. In this
work, we propose PEARL, a prompting framework to improve reasoning over long
documents, which consists of three stages: action mining, plan formulation, and
plan execution. More specifically, given a question about a long document,
PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE,
FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain
the answer. Each stage of PEARL is implemented via zero-shot or few-shot
prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate
PEARL on a challenging subset of the QuALITY dataset, which contains questions
that require complex reasoning over long narrative texts. PEARL outperforms
zero-shot and chain-of-thought prompting on this dataset, and ablation
experiments show that each stage of PEARL is critical to its performance.
Overall, PEARL is a first step towards leveraging LLMs to reason over long
documents.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 23:06:04 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Sun",
"Simeng",
""
],
[
"Liu",
"Yang",
""
],
[
"Wang",
"Shuohang",
""
],
[
"Zhu",
"Chenguang",
""
],
[
"Iyyer",
"Mohit",
""
]
] |
new_dataset
| 0.971405 |
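A skeleton of PEARL's plan-formulation and plan-execution stages might look like the sketch below; the prompts, action names, and parsing are illustrative assumptions rather than the paper's exact implementation.

```python
def pearl_answer(llm, document: str, question: str, actions: dict) -> str:
    """Plan-then-execute sketch: formulate a plan of actions, run each action
    over the evolving context, then answer from the final context."""
    plan_prompt = (
        f"Available actions: {', '.join(actions)}.\n"
        f"Question: {question}\n"
        "Output one action per line as ACTION(argument)."
    )
    plan = llm(plan_prompt).strip().splitlines()   # plan formulation

    context = document
    for step in plan:                              # plan execution
        name, _, arg = step.partition("(")
        executor = actions.get(name.strip())
        if executor:
            context = executor(llm, context, arg.rstrip(")"))
    return llm(f"{context}\n\nAnswer the question: {question}")

# e.g. actions = {"SUMMARIZE": lambda llm, ctx, arg: llm("Summarize:\n" + ctx)}
```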
2305.14603
|
Li Zhang
|
Li Zhang, Hainiu Xu, Abhinav Kommula, Niket Tandon, Chris
Callison-Burch
|
OpenPI2.0: An Improved Dataset for Entity Tracking in Texts
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Representing texts as information about entities has long been deemed
effective in event reasoning. We propose OpenPI2.0, an improved dataset for
tracking entity states in procedural texts. OpenPI2.0 features not only
canonicalized entities that facilitate evaluation, but also salience
annotations including both manual labels and automatic predictions. Regarding
entity salience, we provide a survey on annotation subjectivity, modeling
feasibility, and downstream applications in tasks such as question answering
and classical planning.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 00:57:35 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Zhang",
"Li",
""
],
[
"Xu",
"Hainiu",
""
],
[
"Kommula",
"Abhinav",
""
],
[
"Tandon",
"Niket",
""
],
[
"Callison-Burch",
"Chris",
""
]
] |
new_dataset
| 0.999566 |
2305.14610
|
Bryan Li
|
Bryan Li, Chris Callison-Burch
|
This Land is {Your, My} Land: Evaluating Geopolitical Biases in Language
Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the notion of geopolitical bias -- a tendency to report
different geopolitical knowledge depending on the linguistic context. As a case
study, we consider territorial disputes between countries. For example, for the
widely contested Spratly Islands, would an LM be more likely to say they belong
to China if asked in Chinese, vs. to the Philippines if asked in Tagalog? To
evaluate if such biases exist, we first collect a dataset of territorial
disputes from Wikipedia, then associate each territory with a set of
multilingual, multiple-choice questions. This dataset, termed BorderLines,
consists of 250 territories with questions in 45 languages. We pose these
question sets to language models, and analyze geopolitical bias in their
responses through several proposed quantitative metrics. The metrics compare
between responses in different question languages as well as to the actual
geopolitical situation. The phenomenon of geopolitical bias is a uniquely
cross-lingual evaluation, contrasting with prior work's monolingual (mostly
English) focus on bias evaluation. Its existence shows that the knowledge of
LMs, unlike multilingual humans, is inconsistent across languages.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 01:16:17 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Li",
"Bryan",
""
],
[
"Callison-Burch",
"Chris",
""
]
] |
new_dataset
| 0.987972 |
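One simple consistency score in the spirit of the paper's proposed metrics is the fraction of territories for which a model names the same controller in every query language; the data layout below is an assumption for illustration, and the paper's actual metrics are more fine-grained.

```python
def cross_lingual_agreement(responses: dict) -> float:
    """Fraction of territories for which the model gives the same answer in
    every query language. `responses` maps territory -> {language: country}."""
    consistent = sum(
        1 for answers in responses.values() if len(set(answers.values())) == 1
    )
    return consistent / len(responses)

# e.g. cross_lingual_agreement(
#     {"Spratly Islands": {"en": "China", "tl": "Philippines"}}) -> 0.0
```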
2305.14611
|
Mulong Xie
|
Mulong Xie, Jiaming Ye, Zhenchang Xing, Lei Ma
|
NiCro: Purely Vision-based, Non-intrusive Cross-Device and
Cross-Platform GUI Testing
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
To ensure app compatibility and smoothness of user experience across diverse
devices and platforms, developers have to perform cross-device, cross-platform
testing of their apps, which is laborious. There is a growing recent
trend of using a record-and-replay approach to facilitate the testing process.
However, the graphic user interface (GUI) of an app running on different
devices and platforms differs dramatically. This complicates the record and
replay process as the presence, appearance and layout of the GUI widgets in the
recording phase and replaying phase can be inconsistent. Existing techniques
resort to instrumenting into the underlying system to obtain the app metadata
for widget identification and matching between various devices. But such
intrusive practices are limited by the accessibility and accuracy of the
metadata on different platforms. On the other hand, several recent works
attempt to derive the GUI information by analyzing the GUI image. Nevertheless,
their performance is curbed by the applied preliminary visual approaches and
the failure to consider the divergence of the same GUI displayed on different
devices. To address the challenge, we propose a non-intrusive cross-device and
cross-platform system NiCro. NiCro utilizes the state-of-the-art GUI widget
detector to detect widgets from GUI images and then analyses a set of
comprehensive information to match the widgets across diverse devices. At the
system level, NiCro can interact with a virtual device farm and a robotic arm
system to perform cross-device, cross-platform testing non-intrusively. We
first evaluated NiCro by comparing its multi-modal widget and GUI matching
approach with 4 commonly used matching techniques. Then, we further examined
its overall performance on 8 various devices, using it to record and replay 107
test cases of 28 popular apps and the home page to show its effectiveness.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 01:19:05 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Xie",
"Mulong",
""
],
[
"Ye",
"Jiaming",
""
],
[
"Xing",
"Zhenchang",
""
],
[
"Ma",
"Lei",
""
]
] |
new_dataset
| 0.999517 |
2305.14617
|
Sahithya Ravi
|
Sahithya Ravi, Raymond Ng, Vered Shwartz
|
COMET-M: Reasoning about Multiple Events in Complex Sentences
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the speaker's intended meaning often involves drawing
commonsense inferences to reason about what is not stated explicitly. In
multi-event sentences, it requires understanding the relationships between
events based on contextual knowledge. We propose COMET-M (Multi-Event), an
event-centric commonsense model capable of generating commonsense inferences
for a target event within a complex sentence. COMET-M builds upon COMET
(Bosselut et al., 2019), which excels at generating event-centric inferences
for simple sentences, but struggles with the complexity of multi-event
sentences prevalent in natural text. To overcome this limitation, we curate a
multi-event inference dataset of 35K human-written inferences. We trained
COMET-M on the human-written inferences and also created baselines using
automatically labeled examples. Experimental results demonstrate the
significant performance improvement of COMET-M over COMET in generating
multi-event inferences. Moreover, COMET-M successfully produces distinct
inferences for each target event, taking the complete context into
consideration. COMET-M holds promise for downstream tasks involving natural
text such as coreference resolution, dialogue, and story understanding.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 01:35:01 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Ravi",
"Sahithya",
""
],
[
"Ng",
"Raymond",
""
],
[
"Shwartz",
"Vered",
""
]
] |
new_dataset
| 0.999077 |
2305.14644
|
Hemanth Manjunatha
|
Hemanth Manjunatha, Andrey Pak, Dimitar Filev, Panagiotis Tsiotras
|
KARNet: Kalman Filter Augmented Recurrent Neural Network for Learning
World Models in Autonomous Driving Tasks
|
arXiv admin note: substantial text overlap with arXiv:2205.08712
| null | null | null |
cs.LG cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Autonomous driving has received a great deal of attention in the automotive
industry and is often seen as the future of transportation. The development of
autonomous driving technology has been greatly accelerated by the growth of
end-to-end machine learning techniques that have been successfully used for
perception, planning, and control tasks. An important aspect of autonomous
driving planning is knowing how the environment evolves in the immediate future
and taking appropriate actions. An autonomous driving system should effectively
use the information collected from the various sensors to form an abstract
representation of the world to maintain situational awareness. For this
purpose, deep learning models can be used to learn compact latent
representations from a stream of incoming data. However, most deep learning
models are trained end-to-end and do not incorporate any prior knowledge (e.g.,
from physics) of the vehicle in the architecture. In this direction, many works
have explored physics-infused neural network (PINN) architectures to infuse
physics models during training. Inspired by this observation, we present a
Kalman filter augmented recurrent neural network architecture to learn the
latent representation of the traffic flow using front camera images only. We
demonstrate the efficacy of the proposed model in both imitation and
reinforcement learning settings using both simulated and real-world datasets.
The results show that incorporating an explicit model of the vehicle (states
estimated using Kalman filtering) in the end-to-end learning significantly
increases performance.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 02:27:34 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Manjunatha",
"Hemanth",
""
],
[
"Pak",
"Andrey",
""
],
[
"Filev",
"Dimitar",
""
],
[
"Tsiotras",
"Panagiotis",
""
]
] |
new_dataset
| 0.997758 |
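As background on the explicit vehicle model KARNet infuses, one predict/update cycle of a generic linear Kalman filter is sketched below; the paper's exact formulation and its coupling with the recurrent network differ.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter with state x,
    covariance P, measurement z, dynamics F, observation H, and noise Q, R."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```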
2305.14654
|
Atil Iscen
|
Ken Caluwaerts, Atil Iscen, J. Chase Kew, Wenhao Yu, Tingnan Zhang,
Daniel Freeman, Kuang-Huei Lee, Lisa Lee, Stefano Saliceti, Vincent Zhuang,
Nathan Batchelor, Steven Bohez, Federico Casarini, Jose Enrique Chen, Omar
Cortes, Erwin Coumans, Adil Dostmohamed, Gabriel Dulac-Arnold, Alejandro
Escontrela, Erik Frey, Roland Hafner, Deepali Jain, Bauyrjan Jyenis, Yuheng
Kuang, Edward Lee, Linda Luu, Ofir Nachum, Ken Oslund, Jason Powell, Diego
Reyes, Francesco Romano, Feresteh Sadeghi, Ron Sloat, Baruch Tabanpour,
Daniel Zheng, Michael Neunert, Raia Hadsell, Nicolas Heess, Francesco Nori,
Jeff Seto, Carolina Parada, Vikas Sindhwani, Vincent Vanhoucke, and Jie Tan
|
Barkour: Benchmarking Animal-level Agility with Quadruped Robots
|
17 pages, 19 figures
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Animals have evolved various agile locomotion strategies, such as sprinting,
leaping, and jumping. There is a growing interest in developing legged robots
that move like their biological counterparts and show various agile skills to
navigate complex environments quickly. Despite the interest, the field lacks
systematic benchmarks to measure the performance of control policies and
hardware in agility. We introduce the Barkour benchmark, an obstacle course to
quantify agility for legged robots. Inspired by dog agility competitions, it
consists of diverse obstacles and a time based scoring mechanism. This
encourages researchers to develop controllers that not only move fast, but do
so in a controllable and versatile way. To set strong baselines, we present two
methods for tackling the benchmark. In the first approach, we train specialist
locomotion skills using on-policy reinforcement learning methods and combine
them with a high-level navigation controller. In the second approach, we
distill the specialist skills into a Transformer-based generalist locomotion
policy, named Locomotion-Transformer, that can handle various terrains and
adjust the robot's gait based on the perceived environment and robot states.
Using a custom-built quadruped robot, we demonstrate that our method can
complete the course at half the speed of a dog. We hope that our work
represents a step towards creating controllers that enable robots to reach
animal-level agility.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 02:49:43 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Caluwaerts",
"Ken",
""
],
[
"Iscen",
"Atil",
""
],
[
"Kew",
"J. Chase",
""
],
[
"Yu",
"Wenhao",
""
],
[
"Zhang",
"Tingnan",
""
],
[
"Freeman",
"Daniel",
""
],
[
"Lee",
"Kuang-Huei",
""
],
[
"Lee",
"Lisa",
""
],
[
"Saliceti",
"Stefano",
""
],
[
"Zhuang",
"Vincent",
""
],
[
"Batchelor",
"Nathan",
""
],
[
"Bohez",
"Steven",
""
],
[
"Casarini",
"Federico",
""
],
[
"Chen",
"Jose Enrique",
""
],
[
"Cortes",
"Omar",
""
],
[
"Coumans",
"Erwin",
""
],
[
"Dostmohamed",
"Adil",
""
],
[
"Dulac-Arnold",
"Gabriel",
""
],
[
"Escontrela",
"Alejandro",
""
],
[
"Frey",
"Erik",
""
],
[
"Hafner",
"Roland",
""
],
[
"Jain",
"Deepali",
""
],
[
"Jyenis",
"Bauyrjan",
""
],
[
"Kuang",
"Yuheng",
""
],
[
"Lee",
"Edward",
""
],
[
"Luu",
"Linda",
""
],
[
"Nachum",
"Ofir",
""
],
[
"Oslund",
"Ken",
""
],
[
"Powell",
"Jason",
""
],
[
"Reyes",
"Diego",
""
],
[
"Romano",
"Francesco",
""
],
[
"Sadeghi",
"Feresteh",
""
],
[
"Sloat",
"Ron",
""
],
[
"Tabanpour",
"Baruch",
""
],
[
"Zheng",
"Daniel",
""
],
[
"Neunert",
"Michael",
""
],
[
"Hadsell",
"Raia",
""
],
[
"Heess",
"Nicolas",
""
],
[
"Nori",
"Francesco",
""
],
[
"Seto",
"Jeff",
""
],
[
"Parada",
"Carolina",
""
],
[
"Sindhwani",
"Vikas",
""
],
[
"Vanhoucke",
"Vincent",
""
],
[
"Tan",
"Jie",
""
]
] |
new_dataset
| 0.998048 |
2305.14660
|
Anna Martin-Boyle
|
Anna Martin-Boyle, Andrew Head, Kyle Lo, Risham Sidhu, Marti A.
Hearst, and Dongyeop Kang
|
Complex Mathematical Symbol Definition Structures: A Dataset and Model
for Coordination Resolution in Definition Extraction
|
9 pages, 4 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Mathematical symbol definition extraction is important for improving
scholarly reading interfaces and scholarly information extraction (IE).
However, the task poses several challenges: math symbols are difficult to
process as they are not composed of natural language morphemes; and scholarly
papers often contain sentences that require resolving complex coordinate
structures. We present SymDef, an English language dataset of 5,927 sentences
from full-text scientific papers where each sentence is annotated with all
mathematical symbols linked with their corresponding definitions. This dataset
focuses specifically on complex coordination structures such as "respectively"
constructions, which often contain overlapping definition spans. We also
introduce a new definition extraction method that masks mathematical symbols,
creates a copy of each sentence for each symbol, specifies a target symbol, and
predicts its corresponding definition spans using slot filling. Our experiments
show that our definition extraction model significantly outperforms RoBERTa and
other strong IE baseline systems by 10.9 points with a macro F1 score of 84.82.
With our dataset and model, we can detect complex definitions in scholarly
documents to make scientific writing more readable.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 02:53:48 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Martin-Boyle",
"Anna",
""
],
[
"Head",
"Andrew",
""
],
[
"Lo",
"Kyle",
""
],
[
"Sidhu",
"Risham",
""
],
[
"Hearst",
"Marti A.",
""
],
[
"Kang",
"Dongyeop",
""
]
] |
new_dataset
| 0.998527 |
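The per-symbol masking procedure the abstract describes can be sketched as follows; the special tokens and tokenized input format are assumptions for illustration, not SymDef's exact preprocessing.

```python
def make_symbol_queries(tokens, symbol_positions, mask="[MASK]", target="[SYM]"):
    """One copy of the sentence per math symbol, with the target symbol
    marked and all other symbols masked, ready for a slot-filling tagger."""
    queries = []
    for target_pos in symbol_positions:
        copy = list(tokens)
        for pos in symbol_positions:
            copy[pos] = target if pos == target_pos else mask
        queries.append((target_pos, copy))
    return queries

# e.g. tokens = "let x and y be width and height respectively".split()
# make_symbol_queries(tokens, [1, 3]) yields one marked copy per symbol
```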
2305.14682
|
Jian Wu
|
Jian Wu, Yicheng Xu, Yan Gao, Jian-Guang Lou, B\"orje F. Karlsson,
Manabu Okumura
|
TACR: A Table-alignment-based Cell-selection and Reasoning Model for
Hybrid Question-Answering
|
Accepted at Findings of ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Hybrid Question-Answering (HQA), which targets reasoning over tables and
passages linked from table cells, has witnessed significant research in recent
years. A common challenge in HQA and other passage-table QA datasets is that it
is generally unrealistic to iterate over all table rows, columns, and linked
passages to retrieve evidence. Such a challenge made it difficult for previous
studies to show their reasoning ability in retrieving answers. To bridge this
gap, we propose a novel Table-alignment-based Cell-selection and Reasoning
model (TACR) for hybrid text and table QA, evaluated on the HybridQA and
WikiTableQuestions datasets. In evidence retrieval, we design a
table-question-alignment enhanced cell-selection method to retrieve
fine-grained evidence. In answer reasoning, we incorporate a QA module that
treats the row containing selected cells as context. Experimental results over
the HybridQA and WikiTableQuestions (WTQ) datasets show that TACR achieves
state-of-the-art results on cell selection and outperforms fine-grained
evidence retrieval baselines on HybridQA, while achieving competitive
performance on WTQ. We also conducted a detailed analysis demonstrating that
aligning questions to tables in the cell-selection stage yields important
gains, with over 90\% table row and column selection accuracy, while also
improving output explainability.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 03:42:44 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Wu",
"Jian",
""
],
[
"Xu",
"Yicheng",
""
],
[
"Gao",
"Yan",
""
],
[
"Lou",
"Jian-Guang",
""
],
[
"Karlsson",
"Börje F.",
""
],
[
"Okumura",
"Manabu",
""
]
] |
new_dataset
| 0.998584 |
2305.14719
|
Michael Kranzlein
|
Michael Kranzlein, Nathan Schneider, Kevin Tobia
|
CuRIAM: Corpus re Interpretation and Metalanguage in U.S. Supreme Court
Opinions
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most judicial decisions involve the interpretation of legal texts; as such,
judicial opinion requires the use of language as a medium to comment on or draw
attention to other language. Language used this way is called metalanguage. We
develop an annotation schema for categorizing types of legal metalanguage and
apply our schema to a set of U.S. Supreme Court opinions, yielding a corpus
totaling 59k tokens. We remark on several patterns observed in the kinds of
metalanguage used by the justices.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 04:47:55 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Kranzlein",
"Michael",
""
],
[
"Schneider",
"Nathan",
""
],
[
"Tobia",
"Kevin",
""
]
] |
new_dataset
| 0.995877 |
2305.14725
|
Barry Menglong Yao
|
Barry Menglong Yao, Yu Chen, Qifan Wang, Sijia Wang, Minqian Liu,
Zhiyang Xu, Licheng Yu, Lifu Huang
|
AMELI: Enhancing Multimodal Entity Linking with Fine-Grained Attributes
|
12 pages, 4 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We propose attribute-aware multimodal entity linking, where the input is a
mention described with a text and image, and the goal is to predict the
corresponding target entity from a multimodal knowledge base (KB) where each
entity is also described with a text description, a visual image and a set of
attributes and values. To support this research, we construct AMELI, a
large-scale dataset consisting of 18,472 reviews and 35,598 products. To
establish baseline performance on AMELI, we experiment with the current
state-of-the-art multimodal entity linking approaches and our enhanced
attribute-aware model and demonstrate the importance of incorporating the
attribute information into the entity linking process. To the best of our
knowledge, we are the first to build a benchmark dataset and solutions for the
attribute-aware multimodal entity linking task. The dataset and code will be made
publicly available.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 05:01:48 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Yao",
"Barry Menglong",
""
],
[
"Chen",
"Yu",
""
],
[
"Wang",
"Qifan",
""
],
[
"Wang",
"Sijia",
""
],
[
"Liu",
"Minqian",
""
],
[
"Xu",
"Zhiyang",
""
],
[
"Yu",
"Licheng",
""
],
[
"Huang",
"Lifu",
""
]
] |
new_dataset
| 0.99894 |
2305.14751
|
Tianyu Liu
|
Zefan Cai, Xin Zheng, Tianyu Liu, Xu Wang, Haoran Meng, Jiaqi Han,
Gang Yuan, Binghuai Lin, Baobao Chang and Yunbo Cao
|
DialogVCS: Robust Natural Language Understanding in Dialogue System
Upgrade
|
work in progress. The first three authors contribute equally
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With each update of a production dialogue system, we need to retrain the
natural language understanding (NLU) model as new data from real users is
merged into the existing data accumulated in previous updates. Within the
newly added data, new intents emerge and may be semantically entangled with
existing intents; e.g., new intents that are semantically too specific or too
generic are actually subsets or supersets of existing intents in the semantic
space, thus impairing the robustness of the NLU model. As a first attempt to
solve this problem, we set up a new benchmark consisting
of 4 Dialogue Version Control dataSets (DialogVCS). We formulate the intent
detection with imperfect data in the system update as a multi-label
classification task with positive but unlabeled intents, which asks the models
to recognize all the proper intents, including the ones with semantic
entanglement, at inference time. We also propose comprehensive baseline models
and conduct in-depth analyses for the benchmark, showing that the semantically
entangled intents can be effectively recognized with an automatic workflow.
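A minimal sketch of the multi-label inference this formulation implies; the logits, intent names, and threshold are illustrative, not the benchmark's code:

```python
# Multi-label intent prediction: accept every intent whose sigmoid score
# clears a threshold, so semantically entangled intents can all be recognized.
import numpy as np

def predict_intents(logits, intent_names, threshold=0.5):
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits)))
    return [name for name, p in zip(intent_names, probs) if p >= threshold]

intents = ["book_flight", "book_international_flight", "cancel_flight"]
# A query about an overseas trip may legitimately match both booking intents.
print(predict_intents([2.1, 1.3, -3.0], intents))
# ['book_flight', 'book_international_flight']
```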
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 05:53:38 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Cai",
"Zefan",
""
],
[
"Zheng",
"Xin",
""
],
[
"Liu",
"Tianyu",
""
],
[
"Wang",
"Xu",
""
],
[
"Meng",
"Haoran",
""
],
[
"Han",
"Jiaqi",
""
],
[
"Yuan",
"Gang",
""
],
[
"Lin",
"Binghuai",
""
],
[
"Chang",
"Baobao",
""
],
[
"Cao",
"Yunbo",
""
]
] |
new_dataset
| 0.962646 |
2305.14761
|
Parsa Kavehzadeh
|
Ahmed Masry, Parsa Kavehzadeh, Xuan Long Do, Enamul Hoque, Shafiq Joty
|
UniChart: A Universal Vision-language Pretrained Model for Chart
Comprehension and Reasoning
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Charts are very popular for analyzing data, visualizing key insights and
answering complex reasoning questions about data. To facilitate chart-based
data analysis using natural language, several downstream tasks have been
introduced recently such as chart question answering and chart summarization.
However, most of the methods that solve these tasks use pretraining on language
or vision-language tasks that do not attempt to explicitly model the structure
of the charts (e.g., how data is visually encoded and how chart elements are
related to each other). To address this, we first build a large corpus of
charts covering a wide variety of topics and visual styles. We then present
UniChart, a pretrained model for chart comprehension and reasoning. UniChart
encodes the relevant text, data, and visual elements of charts and then uses a
chart-grounded text decoder to generate the expected output in natural
language. We propose several chart-specific pretraining tasks that include: (i)
low-level tasks to extract the visual elements (e.g., bars, lines) and data
from charts, and (ii) high-level tasks to acquire chart understanding and
reasoning skills. We find that pretraining the model on a large corpus with
chart-specific low- and high-level tasks, followed by finetuning, results in
state-of-the-art performance on three downstream tasks.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 06:11:17 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Masry",
"Ahmed",
""
],
[
"Kavehzadeh",
"Parsa",
""
],
[
"Do",
"Xuan Long",
""
],
[
"Hoque",
"Enamul",
""
],
[
"Joty",
"Shafiq",
""
]
] |
new_dataset
| 0.995 |
2305.14783
|
Zihong Liang
|
Zihong Liang, Xiaojun Quan, Qifan Wang
|
Disentangled Phonetic Representation for Chinese Spelling Correction
|
Accepted to ACL 2023 Main Conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Chinese Spelling Correction (CSC) aims to detect and correct erroneous
characters in Chinese texts. Although efforts have been made to introduce
phonetic information (Hanyu Pinyin) in this task, they typically merge phonetic
representations with character representations, which tends to weaken the
representation effect of normal texts. In this work, we propose to disentangle
the two types of features to allow for direct interaction between textual and
phonetic information. To learn useful phonetic representations, we introduce a
pinyin-to-character objective to ask the model to predict the correct
characters based solely on phonetic information, where a separation mask is
imposed to disable attention from phonetic input to text. To avoid overfitting
the phonetics, we further design a self-distillation module to ensure that
semantic information plays a major role in the prediction. Extensive
experiments on three CSC benchmarks demonstrate the superiority of our method
in using phonetic information.
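A minimal sketch of the separation-mask idea, assuming a standard additive attention mask; shapes and the flag layout are illustrative, not the paper's code:

```python
# Separation mask: phonetic (pinyin) query positions are barred from attending
# to text key positions, so pinyin-to-character prediction must rely on
# phonetic information alone.
import torch

def separation_mask(is_phonetic):
    """is_phonetic: bool tensor [seq_len]; True marks pinyin tokens.
    Returns an additive attention mask [seq_len, seq_len] where attention
    from phonetic queries to text keys is disabled with -inf."""
    q_phon = is_phonetic.unsqueeze(1)      # query positions
    k_text = (~is_phonetic).unsqueeze(0)   # key positions that are text
    mask = torch.zeros(len(is_phonetic), len(is_phonetic))
    mask[q_phon & k_text] = float("-inf")
    return mask

flags = torch.tensor([False, False, True, True])  # 2 text tokens, 2 pinyin tokens
print(separation_mask(flags))
```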
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 06:39:12 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Liang",
"Zihong",
""
],
[
"Quan",
"Xiaojun",
""
],
[
"Wang",
"Qifan",
""
]
] |
new_dataset
| 0.995847 |
2305.14784
|
Ameet Deshpande
|
Ameet Deshpande, Tanmay Rajpurohit, Karthik Narasimhan, Ashwin Kalyan
|
Anthropomorphization of AI: Opportunities and Risks
| null | null | null | null |
cs.AI cs.CL cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anthropomorphization is the tendency to attribute human-like traits to
non-human entities. It is prevalent in many social contexts -- children
anthropomorphize toys, adults do so with brands, and it is a literary device.
It is also a versatile tool in science, with behavioral psychology and
evolutionary biology meticulously documenting its consequences. With widespread
adoption of AI systems, and the push from stakeholders to make it human-like
through alignment techniques, human voice, and pictorial avatars, the tendency
for users to anthropomorphize it increases significantly. We take a dyadic
approach to understanding this phenomenon with large language models (LLMs) by
studying (1) the objective legal implications, as analyzed through the lens of
the recent blueprint for an AI bill of rights, and (2) the subtle psychological
aspects of customization and anthropomorphization. We find that anthropomorphized
LLMs customized for different user bases violate multiple provisions in the
legislative blueprint. In addition, we point out that anthropomorphization of
LLMs affects the influence they can have on their users, thus having the
potential to fundamentally change the nature of human-AI interaction, with
potential for manipulation and negative influence. With LLMs being
hyper-personalized for vulnerable groups like children and patients among
others, our work is a timely and important contribution. We propose a
conservative strategy for the cautious use of anthropomorphization to improve
trustworthiness of AI systems.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 06:39:45 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Deshpande",
"Ameet",
""
],
[
"Rajpurohit",
"Tanmay",
""
],
[
"Narasimhan",
"Karthik",
""
],
[
"Kalyan",
"Ashwin",
""
]
] |
new_dataset
| 0.99785 |
2305.14787
|
Michael Baltaxe
|
Michael Baltaxe, Tomer Pe'er, Dan Levi
|
Polarimetric Imaging for Perception
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous driving and advanced driver-assistance systems rely on a set of
sensors and algorithms to perform the appropriate actions and provide alerts as
a function of the driving scene. Typically, the sensors include color cameras,
radar, lidar and ultrasonic sensors. Strikingly, however, although light
polarization is a fundamental property of light, it is seldom harnessed for
perception tasks. In this work we analyze the potential for improvement in
perception tasks when using an RGB-polarimetric camera, as compared to an RGB
camera. We examine monocular depth estimation and free space detection during
the middle of the day, when polarization is independent of subject heading, and
show that a quantifiable improvement can be achieved for both of them using
state-of-the-art deep neural networks, with a minimum of architectural changes.
We also present a new dataset composed of RGB-polarimetric images, lidar scans,
GNSS / IMU readings and free space segmentations that further supports
developing perception algorithms that take advantage of light polarization.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 06:42:27 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Baltaxe",
"Michael",
""
],
[
"Pe'er",
"Tomer",
""
],
[
"Levi",
"Dan",
""
]
] |
new_dataset
| 0.999615 |
2305.14810
|
Jiongnan Liu
|
Jiongnan Liu, Zhicheng Dou, Guoyu Tang, Sulong Xu
|
JDsearch: A Personalized Product Search Dataset with Real Queries and
Full Interactions
|
Accepted to SIGIR 2023
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recently, personalized product search has attracted great attention and many
models have been proposed. To evaluate the effectiveness of these models,
previous studies mainly utilize the simulated Amazon recommendation dataset,
which contains automatically generated queries and excludes cold users and tail
products. We argue that evaluating with such a dataset may yield unreliable
results and conclusions, and deviate from real user satisfaction. To overcome
these problems, in this paper, we release a personalized product search dataset
comprised of real user queries and diverse user-product interaction types
(clicking, adding to cart, following, and purchasing) collected from JD.com, a
popular Chinese online shopping platform. More specifically, we sample about
170,000 active users on a specific date, then record all their interacted
products and issued queries in one year, without removing any tail users and
products. This finally results in roughly 12,000,000 products, 9,400,000 real
searches, and 26,000,000 user-product interactions. We study the
characteristics of this dataset from various perspectives and evaluate
representative personalization models to verify its feasibility. The dataset
can be publicly accessed at Github: https://github.com/rucliujn/JDsearch.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 07:06:21 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Liu",
"Jiongnan",
""
],
[
"Dou",
"Zhicheng",
""
],
[
"Tang",
"Guoyu",
""
],
[
"Xu",
"Sulong",
""
]
] |
new_dataset
| 0.997255 |
2305.14836
|
Tianwen Qian
|
Tianwen Qian, Jingjing Chen, Linhai Zhuo, Yang Jiao, Yu-Gang Jiang
|
NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for
Autonomous Driving Scenario
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel visual question answering (VQA) task in the context of
autonomous driving, aiming to answer natural language questions based on
street-view clues. Compared to traditional VQA tasks, VQA in the autonomous
driving scenario presents more challenges. Firstly, the raw visual data are
multi-modal, including images and point clouds captured by camera and LiDAR,
respectively. Secondly, the data are multi-frame due to the continuous,
real-time acquisition. Thirdly, the outdoor scenes exhibit both moving
foreground and static background. Existing VQA benchmarks fail to adequately
address these complexities. To bridge this gap, we propose NuScenes-QA, the
first benchmark for VQA in the autonomous driving scenario, encompassing 34K
visual scenes and 460K question-answer pairs. Specifically, we leverage
existing 3D detection annotations to generate scene graphs and design question
templates manually. Subsequently, the question-answer pairs are generated
programmatically based on these templates. Comprehensive statistics prove that
our NuScenes-QA is a balanced large-scale benchmark with diverse question
formats. Built upon it, we develop a series of baselines that employ advanced
3D detection and VQA techniques. Our extensive experiments highlight the
challenges posed by this new task. Codes and dataset are available at
https://github.com/qiantianwen/NuScenes-QA.
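A toy illustration of the template-based generation step; the scene-graph format and templates are invented for the example and are not the authors' pipeline:

```python
# Programmatic QA generation from detection annotations: fill question
# templates per object category and compute answers from the scene graph.
scene_graph = {"objects": [
    {"category": "car", "status": "moving"},
    {"category": "car", "status": "parked"},
    {"category": "pedestrian", "status": "moving"},
]}

templates = [
    ("How many {cat}s are there?",
     lambda objs, cat: sum(o["category"] == cat for o in objs)),
    ("Are there any moving {cat}s?",
     lambda objs, cat: "yes" if any(
         o["category"] == cat and o["status"] == "moving" for o in objs) else "no"),
]

for cat in ("car", "pedestrian"):
    for question, answer_fn in templates:
        print(question.format(cat=cat), "->",
              answer_fn(scene_graph["objects"], cat))
```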
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 07:40:50 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Qian",
"Tianwen",
""
],
[
"Chen",
"Jingjing",
""
],
[
"Zhuo",
"Linhai",
""
],
[
"Jiao",
"Yang",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] |
new_dataset
| 0.999582 |
2305.14838
|
Chenyang Le
|
Chenyang Le, Yao Qian, Long Zhou, Shujie Liu, Michael Zeng, Xuedong
Huang
|
ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text
Translation
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Joint speech-language training is challenging due to the large demand for
training data and GPU consumption, as well as the modality gap between speech
and language. We present ComSL, a speech-language model built atop a composite
architecture of public pretrained speech-only and language-only models and
optimized data-efficiently for spoken language tasks. Particularly, we propose
to incorporate cross-modality learning into transfer learning and conduct them
simultaneously for downstream tasks in a multi-task learning manner. Our
approach has demonstrated effectiveness in end-to-end speech-to-text
translation tasks, achieving a new state-of-the-art average BLEU score of 31.5
on the multilingual speech to English text translation task for 21 languages,
as measured on the public CoVoST2 evaluation set.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 07:42:15 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Le",
"Chenyang",
""
],
[
"Qian",
"Yao",
""
],
[
"Zhou",
"Long",
""
],
[
"Liu",
"Shujie",
""
],
[
"Zeng",
"Michael",
""
],
[
"Huang",
"Xuedong",
""
]
] |
new_dataset
| 0.999101 |
2305.14857
|
Akari Asai
|
Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, Terra Blevins, Hila
Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, Hannaneh Hajishirzi
|
BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual
Transfer
|
The data and code is available at https://buffetfs.github.io/
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Despite remarkable advancements in few-shot generalization in natural
language processing, most models are developed and evaluated primarily in
English. To facilitate research on few-shot cross-lingual transfer, we
introduce a new benchmark, called BUFFET, which unifies 15 diverse tasks across
54 languages in a sequence-to-sequence format and provides a fixed set of
few-shot examples and instructions. BUFFET is designed to establish a rigorous
and equitable evaluation framework for few-shot cross-lingual transfer across a
broad range of tasks and languages. Using BUFFET, we perform thorough
evaluations of state-of-the-art multilingual large language models with
different transfer methods, namely in-context learning and fine-tuning. Our
findings reveal significant room for improvement in few-shot in-context
cross-lingual transfer. In particular, ChatGPT with in-context learning often
performs worse than much smaller mT5-base models fine-tuned on English task
data and few-shot in-language examples. Our analysis suggests various avenues
for future research in few-shot cross-lingual transfer, such as improved
pretraining, understanding, and future evaluations.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 08:06:33 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Asai",
"Akari",
""
],
[
"Kudugunta",
"Sneha",
""
],
[
"Yu",
"Xinyan Velocity",
""
],
[
"Blevins",
"Terra",
""
],
[
"Gonen",
"Hila",
""
],
[
"Reid",
"Machel",
""
],
[
"Tsvetkov",
"Yulia",
""
],
[
"Ruder",
"Sebastian",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
new_dataset
| 0.999054 |
2305.14874
|
Peter Jansen
|
Peter Jansen
|
From Words to Wires: Generating Functioning Electronic Devices from
Natural Language Descriptions
|
13 pages, 4 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this work, we show that contemporary language models have a previously
unknown skill -- the capacity for electronic circuit design from high-level
textual descriptions, akin to code generation. We introduce two benchmarks:
Pins100, assessing model knowledge of electrical components, and Micro25,
evaluating a model's capability to design common microcontroller circuits and
code in the Arduino ecosystem that involve input, output, sensors, motors,
protocols, and logic -- with models such as GPT-4 and Claude-V1 achieving
between 60% and 96% Pass@1 on generating full devices. We include six case
studies of using language models as a design assistant for moderately complex
devices, such as a radiation-powered random number generator, an emoji
keyboard, a visible spectrometer, and several assistive devices, while offering
a qualitative analysis of performance, outlining evaluation challenges, and
suggesting areas of development to improve complex circuit design and practical
utility. With this work, we aim to spur research at the juncture of natural
language processing and electronic design.
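For concreteness, the Pass@1 figures quoted above presumably follow the standard unbiased pass@k estimator (Chen et al., 2021); a small self-contained sketch, not tied to this paper's code:

```python
# Unbiased pass@k estimator: with n samples per problem and c passing,
# the chance that at least one of k drawn samples passes is
# 1 - C(n-c, k) / C(n, k).
import math

def pass_at_k(n, c, k):
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

print(pass_at_k(n=10, c=6, k=1))  # equals c/n = 0.6 for k=1
```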
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 08:28:59 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Jansen",
"Peter",
""
]
] |
new_dataset
| 0.999425 |
2305.14879
|
Peter Jansen
|
Ruoyao Wang, Graham Todd, Eric Yuan, Ziang Xiao, Marc-Alexandre
Côté, Peter Jansen
|
ByteSized32: A Corpus and Challenge Task for Generating Task-Specific
World Models Expressed as Text Games
|
10 pages
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this work we examine the ability of language models to generate explicit
world models of scientific and common-sense reasoning tasks by framing this as
a problem of generating text-based games. To support this, we introduce
ByteSized32, a corpus of 32 highly-templated text games written in Python
totaling 24k lines of code, each centered around a particular task, and paired
with a set of 16 unseen text game specifications for evaluation. We propose a
suite of automatic and manual metrics for assessing simulation validity,
compliance with task specifications, playability, winnability, and alignment
with the physical world. In a single-shot evaluation of GPT-4 on this
simulation-as-code-generation task, we find it capable of producing runnable
games in 27% of cases, highlighting the difficulty of this challenge task. We
discuss areas of future improvement, including GPT-4's apparent capacity to
perform well at simulating near canonical task solutions, with performance
dropping off as simulations include distractors or deviate from canonical
solutions in the action space.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 08:31:30 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Wang",
"Ruoyao",
""
],
[
"Todd",
"Graham",
""
],
[
"Yuan",
"Eric",
""
],
[
"Xiao",
"Ziang",
""
],
[
"Côté",
"Marc-Alexandre",
""
],
[
"Jansen",
"Peter",
""
]
] |
new_dataset
| 0.999708 |
2305.14904
|
Alexander Spangher
|
Alexander Spangher, Nanyun Peng, Jonathan May, Emilio Ferrara
|
Identifying Informational Sources in News Articles
|
13 pages
| null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
News articles are driven by the informational sources journalists use in
reporting. Modeling when, how and why sources get used together in stories can
help us better understand the information we consume and even help journalists
with the task of producing it. In this work, we take steps toward this goal by
constructing the largest and widest-ranging annotated dataset, to date, of
informational sources used in news writing. We show that our dataset can be
used to train high-performing models for information detection and source
attribution. We further introduce a novel task, source prediction, to study the
compositionality of sources in news articles. We show good performance on this
task, which we argue is an important proof of concept for narrative science exploring the
internal structure of news articles and aiding in planning-based language
generation, and an important step towards a source-recommendation system to aid
journalists.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 08:56:35 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Spangher",
"Alexander",
""
],
[
"Peng",
"Nanyun",
""
],
[
"May",
"Jonathan",
""
],
[
"Ferrara",
"Emilio",
""
]
] |
new_dataset
| 0.988273 |
2305.14914
|
Zhitong Xiong
|
Zhitong Xiong, Sining Chen, Yi Wang, Lichao Mou, Xiao Xiang Zhu
|
GAMUS: A Geometry-aware Multi-modal Semantic Segmentation Benchmark for
Remote Sensing Data
|
13 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Geometric information in the normalized digital surface models (nDSM) is
highly correlated with the semantic class of the land cover. Exploiting two
modalities (RGB and nDSM (height)) jointly has great potential to improve the
segmentation performance. However, it is still an under-explored field in
remote sensing due to the following challenges. First, the scales of existing
datasets are relatively small and their diversity is limited, which restricts
validation. Second, there is a lack of
unified benchmarks for performance assessment, which leads to difficulties in
comparing the effectiveness of different models. Last, sophisticated
multi-modal semantic segmentation methods have not been deeply explored for
remote sensing data. To cope with these challenges, in this paper, we introduce
a new remote-sensing benchmark dataset for multi-modal semantic segmentation
based on RGB-Height (RGB-H) data. Towards a fair and comprehensive analysis of
existing methods, the proposed benchmark consists of 1) a large-scale dataset
including co-registered RGB and nDSM pairs and pixel-wise semantic labels; 2) a
comprehensive evaluation and analysis of existing multi-modal fusion strategies
for both convolutional and Transformer-based networks on remote sensing data.
Furthermore, we propose a novel and effective Transformer-based intermediary
multi-modal fusion (TIMF) module to improve the semantic segmentation
performance through adaptive token-level multi-modal fusion. The designed
benchmark can foster future research on developing new methods for multi-modal
learning on remote sensing data. Extensive analyses of those methods are
conducted and valuable insights are provided through the experimental results.
Code for the benchmark and baselines can be accessed at
\url{https://github.com/EarthNets/RSI-MMSegmentation}.
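As a loose, hypothetical sketch of token-level cross-modal fusion (not the authors' TIMF module; dimensions are illustrative):

```python
# Generic cross-attention fusion block: RGB patch tokens query the nDSM
# (height) tokens and absorb geometric cues, with a residual connection.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, height_tokens):
        fused, _ = self.attn(rgb_tokens, height_tokens, height_tokens)
        return self.norm(rgb_tokens + fused)

rgb = torch.randn(2, 196, 64)     # batch of 2, 14x14 patch tokens
height = torch.randn(2, 196, 64)  # tokens from the nDSM branch
print(CrossModalFusion()(rgb, height).shape)  # torch.Size([2, 196, 64])
```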
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 09:03:18 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Xiong",
"Zhitong",
""
],
[
"Chen",
"Sining",
""
],
[
"Wang",
"Yi",
""
],
[
"Mou",
"Lichao",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.996796 |
2305.14950
|
Muhao Chen
|
Jiongxiao Wang, Zichen Liu, Keun Hee Park, Muhao Chen, Chaowei Xiao
|
Adversarial Demonstration Attacks on Large Language Models
|
Work in Progress
| null | null | null |
cs.CL cs.AI cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With the emergence of more powerful large language models (LLMs), such as
ChatGPT and GPT-4, in-context learning (ICL) has gained significant prominence
in leveraging these models for specific tasks by utilizing data-label pairs as
precondition prompts. While incorporating demonstrations can greatly enhance
the performance of LLMs across various tasks, it may introduce a new security
concern: attackers can manipulate only the demonstrations without changing the
input to perform an attack. In this paper, we investigate the security concern
of ICL from an adversarial perspective, focusing on the impact of
demonstrations. We propose an ICL attack based on TextAttack, which aims to
only manipulate the demonstration without changing the input to mislead the
models. Our results demonstrate that as the number of demonstrations increases,
the robustness of in-context learning decreases. Furthermore, we also
observe that adversarially attacked demonstrations exhibit transferability to
diverse input examples. These findings emphasize the critical security risks
associated with ICL and underscore the necessity for extensive research on the
robustness of ICL, particularly given its increasing significance in the
advancement of LLMs.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 09:40:56 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Wang",
"Jiongxiao",
""
],
[
"Liu",
"Zichen",
""
],
[
"Park",
"Keun Hee",
""
],
[
"Chen",
"Muhao",
""
],
[
"Xiao",
"Chaowei",
""
]
] |
new_dataset
| 0.992812 |
2305.14963
|
Yau-Shian Wang
|
Yau-Shian Wang and Ta-Chung Chi and Ruohong Zhang and Yiming Yang
|
PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text
Classification
|
accepted by ACL 2023
|
ACL 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present PESCO, a novel contrastive learning framework that substantially
improves the performance of zero-shot text classification. We formulate text
classification as a neural text matching problem where each document is treated
as a query, and the system learns the mapping from each query to the relevant
class labels by (1) adding prompts to enhance label matching, and (2) using
retrieved labels to enrich the training set in a self-training loop of
contrastive learning. PESCO achieves state-of-the-art performance on four
benchmark text classification datasets. On DBpedia, we achieve 98.5\% accuracy
without any labeled data, which is close to the fully-supervised result.
Extensive experiments and analyses show all the components of PESCO are
necessary for improving the performance of zero-shot text classification.
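A toy sketch of the text-matching view of zero-shot classification; the bag-of-words encoder stands in for a real sentence encoder, and the prompt template is illustrative:

```python
# Zero-shot classification as matching: the document is a query, each label
# is wrapped in a prompt, and the highest-similarity prompt wins.
import collections, math

def embed(text):
    words = collections.Counter(text.lower().replace(".", "").split())
    norm = math.sqrt(sum(v * v for v in words.values()))
    return {w: v / norm for w, v in words.items()}

def cosine(a, b):
    return sum(v * b.get(w, 0.0) for w, v in a.items())

labels = ["sports", "politics"]
prompts = {lab: f"This text is about {lab}." for lab in labels}
doc_vec = embed("A thrilling sports match ended in overtime.")
print(max(labels, key=lambda lab: cosine(doc_vec, embed(prompts[lab]))))
# 'sports' -- the label prompt sharing vocabulary with the document wins
```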
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 09:57:06 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Wang",
"Yau-Shian",
""
],
[
"Chi",
"Ta-Chung",
""
],
[
"Zhang",
"Ruohong",
""
],
[
"Yang",
"Yiming",
""
]
] |
new_dataset
| 0.993647 |
2305.14969
|
Yichen Yan
|
Yichen Yan, Xingjian He, Wenxuan Wan, Jing Liu
|
MMNet: Multi-Mask Network for Referring Image Segmentation
|
10 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Referring image segmentation aims to segment an object referred to by natural
language expression from an image. However, this task is challenging due to the
distinct data properties between text and image, and the randomness introduced
by diverse objects and unrestricted language expression. Most previous work
focuses on improving cross-modal feature fusion while not fully addressing the
inherent uncertainty caused by diverse objects and unrestricted language. To
tackle these problems, we propose an end-to-end Multi-Mask Network for
referring image segmentation (MMNet). We first combine the image and language features and
then employ an attention mechanism to generate multiple queries that represent
different aspects of the language expression. We then utilize these queries to
produce a series of corresponding segmentation masks, assigning a score to each
mask that reflects its importance. The final result is obtained through the
weighted sum of all masks, which greatly reduces the randomness of the language
expression. Our proposed framework demonstrates superior performance compared
to state-of-the-art approaches on the three most commonly used datasets,
RefCOCO, RefCOCO+, and G-Ref, without the need for any post-processing. This further
validates the efficacy of our proposed framework.
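A minimal sketch of the weighted mask aggregation described above; shapes are illustrative and this is not the authors' implementation:

```python
# Multi-mask aggregation: each query yields a mask and an importance score;
# the output is the softmax-weighted sum of the per-query masks.
import torch

def aggregate_masks(masks, scores):
    """masks: [num_queries, H, W] logits; scores: [num_queries] logits.
    Returns a single [H, W] soft mask."""
    weights = torch.softmax(scores, dim=0)
    return (weights[:, None, None] * masks.sigmoid()).sum(dim=0)

masks = torch.randn(5, 4, 4)   # 5 queries, tiny 4x4 masks for the example
scores = torch.randn(5)
print(aggregate_masks(masks, scores).shape)  # torch.Size([4, 4])
```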
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 10:02:27 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Yan",
"Yichen",
""
],
[
"He",
"Xingjian",
""
],
[
"Wan",
"Wenxuan",
""
],
[
"Liu",
"Jing",
""
]
] |
new_dataset
| 0.998467 |
2305.14982
|
Firoj Alam
|
Ahmed Abdelali, Hamdy Mubarak, Shammur Absar Chowdhury, Maram
Hasanain, Basel Mousi, Sabri Boughorbel, Yassine El Kheir, Daniel Izham,
Fahim Dalvi, Majd Hawasly, Nizi Nazar, Yousseif Elshahawy, Ahmed Ali, Nadir
Durrani, Natasa Milic-Frayling, Firoj Alam
|
Benchmarking Arabic AI with Large Language Models
|
Foundation Models, Large Language Models, Arabic NLP, Arabic Speech,
Arabic AI, ChatGPT Evaluation, USM Evaluation, Whisper Evaluation
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With large Foundation Models (FMs), language technologies (AI in general) are
entering a new paradigm: eliminating the need for developing large-scale
task-specific datasets and supporting a variety of tasks through set-ups
ranging from zero-shot to few-shot learning. However, understanding FMs
capabilities requires a systematic benchmarking effort by comparing FMs
performance with the state-of-the-art (SOTA) task-specific models. With that
goal, past work focused on the English language and included a few efforts with
multiple languages. Our study contributes to ongoing research by evaluating FMs
performance for standard Arabic NLP and Speech processing, including a range of
tasks from sequence tagging to content classification across diverse domains.
We start with zero-shot learning using GPT-3.5-turbo, Whisper, and USM,
addressing 33 unique tasks using 59 publicly available datasets resulting in 96
test setups. For a few tasks, FMs perform on par with or exceed the performance
of the SOTA models, but for the majority they underperform. Given the importance
of prompts for FM performance, we discuss our prompt strategies in detail and
elaborate on our findings. Our future work on Arabic AI will explore few-shot
prompting, expand the range of tasks, and investigate additional open-source
models.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 10:16:16 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Abdelali",
"Ahmed",
""
],
[
"Mubarak",
"Hamdy",
""
],
[
"Chowdhury",
"Shammur Absar",
""
],
[
"Hasanain",
"Maram",
""
],
[
"Mousi",
"Basel",
""
],
[
"Boughorbel",
"Sabri",
""
],
[
"Kheir",
"Yassine El",
""
],
[
"Izham",
"Daniel",
""
],
[
"Dalvi",
"Fahim",
""
],
[
"Hawasly",
"Majd",
""
],
[
"Nazar",
"Nizi",
""
],
[
"Elshahawy",
"Yousseif",
""
],
[
"Ali",
"Ahmed",
""
],
[
"Durrani",
"Nadir",
""
],
[
"Milic-Frayling",
"Natasa",
""
],
[
"Alam",
"Firoj",
""
]
] |
new_dataset
| 0.999429 |
2305.14989
|
Abdelrahim Elmadany
|
El Moatez Billah Nagoudi, Ahmed El-Shangiti, AbdelRahim Elmadany,
Muhammad Abdul-Mageed
|
Dolphin: A Challenging and Diverse Benchmark for Arabic NLG
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Dolphin, a novel benchmark that addresses the need for an
evaluation framework for the wide collection of Arabic languages and varieties.
The proposed benchmark encompasses a broad range of 13 different NLG tasks,
including text summarization, machine translation, question answering, and
dialogue generation, among others. Dolphin comprises a substantial corpus of 40
diverse and representative public datasets across 50 test splits, carefully
curated to reflect real-world scenarios and the linguistic richness of Arabic.
It sets a new standard for evaluating the performance and generalization
capabilities of Arabic and multilingual models, promising to enable researchers
to push the boundaries of current methodologies. We provide an extensive
analysis of Dolphin, highlighting its diversity and identifying gaps in current
Arabic NLG research. We also evaluate several Arabic and multilingual models on
our benchmark, allowing us to set strong baselines against which researchers
can compare.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 10:24:10 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Nagoudi",
"El Moatez Billah",
""
],
[
"El-Shangiti",
"Ahmed",
""
],
[
"Elmadany",
"AbdelRahim",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
] |
new_dataset
| 0.999395 |
2305.14996
|
Yanxia Qin
|
Shaurya Rohatgi, Yanxia Qin, Benjamin Aw, Niranjana Unnithan, Min-Yen
Kan
|
The ACL OCL Corpus: advancing Open science in Computational Linguistics
| null | null | null | null |
cs.CL cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a scholarly corpus from the ACL Anthology to assist Open
scientific research in the Computational Linguistics domain, named ACL OCL.
Compared with previous ARC and AAN versions, ACL OCL includes structured
full-texts with logical sections, references to figures, and links to a large
knowledge resource (semantic scholar). ACL OCL contains 74k scientific papers,
together with 210k figures extracted up to September 2022. To observe the
development in the computational linguistics domain, we detect the topics of
all OCL papers with a supervised neural model. We observe that the ''Syntax:
Tagging, Chunking and Parsing'' topic is shrinking significantly and ''Natural
Language Generation'' is resurging. Our dataset is open and available to download from
HuggingFace in https://huggingface.co/datasets/ACL-OCL/ACL-OCL-Corpus.
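A minimal loading sketch, assuming the HuggingFace URL above corresponds to the dataset id ACL-OCL/ACL-OCL-Corpus; splits and column names should be inspected rather than assumed:

```python
# Load the corpus from the HuggingFace Hub and inspect its structure.
from datasets import load_dataset

ocl = load_dataset("ACL-OCL/ACL-OCL-Corpus")
print(ocl)  # check the available splits and columns before relying on names
```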
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 10:35:56 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Rohatgi",
"Shaurya",
""
],
[
"Qin",
"Yanxia",
""
],
[
"Aw",
"Benjamin",
""
],
[
"Unnithan",
"Niranjana",
""
],
[
"Kan",
"Min-Yen",
""
]
] |
new_dataset
| 0.999631 |
2305.15028
|
Qingxiu Dong
|
Heming Xia, Qingxiu Dong, Lei Li, Jingjing Xu, Ziwei Qin, Zhifang Sui
|
ImageNetVC: Zero-Shot Visual Commonsense Evaluation on 1000 ImageNet
Categories
| null | null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Pretrained Language Models (PLMs) have been serving as
general-purpose interfaces, posing a significant demand for comprehensive
visual knowledge. However, it remains unclear how well current PLMs and their
visually augmented counterparts (VaLMs) can master visual commonsense
knowledge. To investigate this, we propose ImageNetVC, a fine-grained,
human-annotated dataset specifically designed for zero-shot visual commonsense
evaluation across 1,000 ImageNet categories. Utilizing ImageNetVC, we delve
into the fundamental visual commonsense knowledge of both unimodal PLMs and
VaLMs, uncovering the scaling law and the influence of the backbone model on
VaLMs. Furthermore, we investigate the factors affecting the visual commonsense
knowledge of large-scale models, providing insights into the development of
language models enriched with visual commonsense knowledge. Our code and
dataset are available at https://github.com/hemingkx/ImageNetVC.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 11:14:31 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Xia",
"Heming",
""
],
[
"Dong",
"Qingxiu",
""
],
[
"Li",
"Lei",
""
],
[
"Xu",
"Jingjing",
""
],
[
"Qin",
"Ziwei",
""
],
[
"Sui",
"Zhifang",
""
]
] |
new_dataset
| 0.997164 |
2305.15035
|
Wei-Lin Chen
|
Wei-Lin Chen, Cheng-Kuang Wu, Hsin-Hsi Chen
|
Self-ICL: Zero-Shot In-Context Learning with Self-Generated
Demonstrations
|
Work in progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LMs) have exhibited superior in-context learning (ICL)
ability to adapt to target tasks by prompting with a few input-output
demonstrations. Towards better ICL, different methods are proposed to select
representative demonstrations from existing training corpora. However, such a
setting is not aligned with real-world practices, as end-users usually query
LMs without access to demonstration pools. Inspired by evidence suggesting that
LMs' zero-shot capabilities are underrated, and that the role of demonstrations
is primarily to expose models' intrinsic functionalities, we introduce
Self-ICL, a simple framework for zero-shot ICL. Given a test input, Self-ICL
first prompts the model to generate pseudo-inputs. Next, the model predicts
pseudo-labels for the pseudo-inputs via zero-shot prompting. Finally, we
construct pseudo-demonstrations from pseudo-input-label pairs, and perform ICL
for the test input. Evaluation on BIG-Bench Hard shows Self-ICL steadily
surpasses zero-shot and zero-shot chain-of-thought baselines on head-to-head
and all-task average performance. Our findings suggest the possibility to
bootstrap LMs' intrinsic capabilities towards better zero-shot performance.
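A schematic version of the three-step loop; the prompts are paraphrased and `lm` stands in for any text-completion API, so this is a sketch rather than the paper's exact prompting:

```python
# Self-ICL: generate pseudo-inputs, zero-shot label them, then use the
# pseudo-demonstrations as in-context examples for the real test input.
def self_icl(lm, task_description, test_input, num_shots=3):
    # Step 1: have the model invent pseudo-inputs for the task.
    pseudo_inputs = [lm(f"{task_description}\nWrite one new example input:\n")
                     for _ in range(num_shots)]
    # Step 2: zero-shot predict a pseudo-label for each pseudo-input.
    pseudo_labels = [lm(f"{task_description}\nInput: {x}\nAnswer:")
                     for x in pseudo_inputs]
    # Step 3: prompt with the pseudo-demonstrations and the test input.
    demos = "\n".join(f"Input: {x}\nAnswer: {y}"
                      for x, y in zip(pseudo_inputs, pseudo_labels))
    return lm(f"{task_description}\n{demos}\nInput: {test_input}\nAnswer:")

# Tiny fake LM so the sketch runs end to end.
print(self_icl(lambda prompt: "42", "Answer with a number.", "6 * 7"))
```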
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 11:22:34 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Chen",
"Wei-Lin",
""
],
[
"Wu",
"Cheng-Kuang",
""
],
[
"Chen",
"Hsin-Hsi",
""
]
] |
new_dataset
| 0.989265 |
2305.15060
|
Jamin Shin
|
Taehyun Lee, Seokhee Hong, Jaewoo Ahn, Ilgee Hong, Hwaran Lee, Sangdoo
Yun, Jamin Shin, Gunhee Kim
|
Who Wrote this Code? Watermarking for Code Generation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models for code have recently shown remarkable performance in
generating executable code. However, this rapid advancement has been
accompanied by many legal and ethical concerns, such as code licensing issues,
code plagiarism, and malware generation, making watermarking machine-generated
code a very timely problem. Despite such imminent needs, we discover that
existing watermarking and machine-generated text detection methods for LLMs
fail to function with code generation tasks properly. Hence, in this work, we
propose a new watermarking method, SWEET, that significantly improves upon
previous approaches when watermarking machine-generated code. Our proposed
method selectively applies watermarking to tokens whose entropy surpasses a
defined threshold. The experiments on code generation benchmarks
show that our watermarked code has superior quality compared to code produced
by the previous state-of-the-art LLM watermarking method. Furthermore, our
watermark method also outperforms DetectGPT for the task of machine-generated
code detection.
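A minimal sketch of entropy-gated watermarking in the spirit described above; the threshold, bias delta, and seed-keyed green list are illustrative choices, not SWEET's released implementation:

```python
# Bias a context-keyed "green list" of tokens only when the next-token
# distribution is high-entropy; low-entropy (highly constrained) code tokens
# are left untouched to preserve code quality.
import torch

def watermark_logits(logits, prev_token, vocab_size,
                     entropy_threshold=2.0, delta=2.0):
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-9)).sum()
    if entropy < entropy_threshold:
        return logits                       # low entropy: no watermark here
    g = torch.Generator().manual_seed(int(prev_token))
    green = torch.randperm(vocab_size, generator=g)[: vocab_size // 2]
    biased = logits.clone()
    biased[green] += delta                  # nudge sampling toward green tokens
    return biased

logits = torch.randn(100)                   # toy vocabulary of 100 tokens
print(watermark_logits(logits, prev_token=7, vocab_size=100).shape)
```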
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 11:49:52 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Lee",
"Taehyun",
""
],
[
"Hong",
"Seokhee",
""
],
[
"Ahn",
"Jaewoo",
""
],
[
"Hong",
"Ilgee",
""
],
[
"Lee",
"Hwaran",
""
],
[
"Yun",
"Sangdoo",
""
],
[
"Shin",
"Jamin",
""
],
[
"Kim",
"Gunhee",
""
]
] |
new_dataset
| 0.983094 |
2305.15062
|
Quzhe Huang
|
Quzhe Huang, Mingxu Tao, Zhenwei An, Chen Zhang, Cong Jiang, Zhibin
Chen, Zirui Wu, Yansong Feng
|
Lawyer LLaMA Technical Report
|
Work in progress
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs), like LLaMA, have exhibited remarkable
performances across various tasks. Nevertheless, when deployed to specific
domains such as law or medicine, the models still confront the challenge of a
deficiency in domain-specific knowledge and an inadequate capability to
leverage that knowledge to resolve domain-related problems. In this paper, we
focus on the legal domain and explore how to inject domain knowledge during the
continual training stage and how to design proper supervised finetune tasks to
help the model tackle practical issues. Moreover, to alleviate the
hallucination problem during the model's generation, we add a retrieval module and
extract relevant articles before the model answers any queries. Augmenting with
the extracted evidence, our model could generate more reliable responses. We
release our data and model at https://github.com/AndrewZhe/lawyer-llama.
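A schematic retrieve-then-answer loop matching this description; the word-overlap retriever and prompt format are placeholders, not the released code:

```python
# Rank legal articles by relevance, keep the top few as evidence, and
# prepend them to the prompt before the model answers.
def answer_with_retrieval(lm, retriever, articles, query, top_k=2):
    ranked = sorted(articles, key=lambda a: retriever(query, a), reverse=True)
    evidence = "\n".join(ranked[:top_k])
    return lm(f"Relevant articles:\n{evidence}\n\nQuestion: {query}\nAnswer:")

# Toy retriever: word-overlap score; a real system would use dense retrieval.
overlap = lambda q, a: len(set(q.lower().split()) & set(a.lower().split()))
articles = ["Article 10: contracts require mutual consent.",
            "Article 22: theft is punishable by fine."]
print(answer_with_retrieval(lambda p: "[model answer]", overlap, articles,
                            "Is mutual consent required for contracts?"))
```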
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 11:52:07 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Huang",
"Quzhe",
""
],
[
"Tao",
"Mingxu",
""
],
[
"An",
"Zhenwei",
""
],
[
"Zhang",
"Chen",
""
],
[
"Jiang",
"Cong",
""
],
[
"Chen",
"Zhibin",
""
],
[
"Wu",
"Zirui",
""
],
[
"Feng",
"Yansong",
""
]
] |
new_dataset
| 0.995988 |
2305.15068
|
Lingyu Gao
|
Xiaomeng Ma, Lingyu Gao, Qihui Xu
|
ToMChallenges: A Principle-Guided Dataset and Diverse Evaluation Tasks
for Exploring Theory of Mind
|
work in progress
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Theory of Mind (ToM), the capacity to comprehend the mental states of
distinct individuals, is essential for numerous practical applications. With
the development of large language models, there is a heated debate about
whether they are able to perform ToM tasks. Previous studies have used
different tasks and prompts to test ToM in large language models, and the
results are inconsistent: some studies asserted these models are capable of
exhibiting ToM, while others suggest the opposite. In this study, we present
ToMChallenges, a dataset for comprehensively evaluating Theory of Mind based on
Sally-Anne and Smarties tests. We created 30 variations of each test (e.g.,
changing the person's name, location, and items). For each variation, we test
the model's understanding of different aspects: reality, belief, 1st order
belief, and 2nd order belief. We adapt our data for various tasks by creating
unique prompts tailored for each task category: Fill-in-the-Blank, Multiple
Choice, True/False, Chain-of-Thought True/False, Question Answering, and Text
Completion. If the model has a robust ToM, it should be able to achieve good
performance for different prompts across different tests. We evaluated two
GPT-3.5 models, text-davinci-003 and gpt-3.5-turbo-0301, with our datasets. Our
results indicate that consistent performance in ToM tasks remains a challenge.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 11:54:07 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Ma",
"Xiaomeng",
""
],
[
"Gao",
"Lingyu",
""
],
[
"Xu",
"Qihui",
""
]
] |
new_dataset
| 0.99979 |
2305.15084
|
Arian Bakhtiarnia
|
Błażej Leporowski, Arian Bakhtiarnia, Nicole Bonnici, Adrian
Muscat, Luca Zanella, Yiming Wang and Alexandros Iosifidis
|
Audio-Visual Dataset and Method for Anomaly Detection in Traffic Videos
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the first audio-visual dataset for traffic anomaly detection
taken from real-world scenes, called MAVAD, with a diverse range of weather and
illumination conditions. In addition, we propose a novel method named AVACA
that combines visual and audio features extracted from video sequences by means
of cross-attention to detect anomalies. We demonstrate that the addition of
audio improves the performance of AVACA by up to 5.2%. We also evaluate the
impact of image anonymization, showing only a minor decrease in performance
averaging 1.7%.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 12:02:42 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Leporowski",
"Błażej",
""
],
[
"Bakhtiarnia",
"Arian",
""
],
[
"Bonnici",
"Nicole",
""
],
[
"Muscat",
"Adrian",
""
],
[
"Zanella",
"Luca",
""
],
[
"Wang",
"Yiming",
""
],
[
"Iosifidis",
"Alexandros",
""
]
] |
new_dataset
| 0.998991 |
2305.15087
|
Philipp Sadler
|
Philipp Sadler and David Schlangen
|
Pento-DIARef: A Diagnostic Dataset for Learning the Incremental
Algorithm for Referring Expression Generation from Examples
|
9 pages, Accepted to EACL 2023
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
NLP tasks are typically defined extensionally through datasets containing
example instantiations (e.g., pairs of image i and text t), but motivated
intensionally through capabilities invoked in verbal descriptions of the task
(e.g., "t is a description of i, for which the content of i needs to be
recognised and understood"). We present Pento-DIARef, a diagnostic dataset in a
visual domain of puzzle pieces where referring expressions are generated by a
well-known symbolic algorithm (the "Incremental Algorithm"), which itself is
motivated by appeal to a hypothesised capability (eliminating distractors
through application of Gricean maxims). Our question then is whether the
extensional description (the dataset) is sufficient for a neural model to pick
up the underlying regularity and exhibit this capability given the simple task
definition of producing expressions from visual inputs. We find that a model
supported by a vision detection step and a targeted data generation scheme
achieves an almost perfect BLEU@1 score and sentence accuracy, whereas simpler
baselines do not.
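For reference, a toy implementation of the Incremental Algorithm (Dale & Reiter, 1995) that underlies the dataset's expressions; the puzzle-piece domain, attribute names, and preference order below are invented for illustration:

```python
# Incremental Algorithm: walk a fixed attribute preference order and keep
# any attribute of the target that rules out at least one remaining distractor.
def incremental_algorithm(target, distractors, preference_order):
    description, remaining = {}, list(distractors)
    for attr in preference_order:
        value = target[attr]
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:               # all distractors eliminated
            break
    return description

pieces = [
    {"shape": "T", "color": "red", "position": "left"},
    {"shape": "T", "color": "blue", "position": "left"},
    {"shape": "L", "color": "red", "position": "right"},
]
print(incremental_algorithm(pieces[0], pieces[1:], ["shape", "color", "position"]))
# {'shape': 'T', 'color': 'red'} -> "the red T piece"
```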
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 12:05:53 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Sadler",
"Philipp",
""
],
[
"Schlangen",
"David",
""
]
] |
new_dataset
| 0.999863 |
2305.15186
|
Tetsu Kasanishi
|
Tetsu Kasanishi, Masaru Isonuma, Junichiro Mori, Ichiro Sakata
|
SciReviewGen: A Large-scale Dataset for Automatic Literature Review
Generation
|
ACL findings 2023 (to be appeared). arXiv admin note: text overlap
with arXiv:1810.04020 by other authors
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic literature review generation is one of the most challenging tasks
in natural language processing. Although large language models have tackled
literature review generation, the absence of large-scale datasets has been a
stumbling block to progress. We release SciReviewGen, consisting of over
10,000 literature reviews and 690,000 papers cited in the reviews. Based on the
dataset, we evaluate recent transformer-based summarization models on the
literature review generation task, including Fusion-in-Decoder extended for
literature review generation. Human evaluation results show that some
machine-generated summaries are comparable to human-written reviews, while
revealing the challenges of automatic literature review generation such as
hallucinations and a lack of detailed information. Our dataset and code are
available at https://github.com/tetsu9923/SciReviewGen.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 14:26:30 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Kasanishi",
"Tetsu",
""
],
[
"Isonuma",
"Masaru",
""
],
[
"Mori",
"Junichiro",
""
],
[
"Sakata",
"Ichiro",
""
]
] |
new_dataset
| 0.999811 |
2305.15191
|
Aldin Vehabovic
|
Farooq Shaikh, Elias Bou-Harb, Aldin Vehabovic, Jorge Crichigno,
Aysegul Yayimli and Nasir Ghani
|
IoT Threat Detection Testbed Using Generative Adversarial Networks
|
8 pages, 5 figures
| null |
10.1109/BlackSeaCom54372.2022.9858239
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The Internet of Things (IoT) paradigm provides persistent sensing and data
collection capabilities and is becoming increasingly prevalent across many
market sectors. However, most IoT devices emphasize usability and function over
security, making them very vulnerable to malicious exploits. This concern is
evidenced by the increased use of compromised IoT devices in large scale bot
networks (botnets) to launch distributed denial of service (DDoS) attacks
against high value targets. Unsecured IoT systems can also provide entry points
to private networks, allowing adversaries relatively easy access to valuable
resources and services. Indeed, these evolving IoT threat vectors (ranging from
brute force attacks to remote code execution exploits) are posing key
challenges. Moreover, many traditional security mechanisms are not amenable for
deployment on smaller resource-constrained IoT platforms. As a result,
researchers have been developing a range of methods for IoT security, with many
strategies using advanced machine learning (ML) techniques. Along these lines,
this paper presents a novel generative adversarial network (GAN) solution to
detect threats from malicious IoT devices both inside and outside a network.
This model is trained using both benign IoT traffic and global darknet data and
further evaluated in a testbed with real IoT devices and malware threats.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 14:29:46 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Shaikh",
"Farooq",
""
],
[
"Bou-Harb",
"Elias",
""
],
[
"Vehabovic",
"Aldin",
""
],
[
"Crichigno",
"Jorge",
""
],
[
"Yayimli",
"Aysegul",
""
],
[
"Ghani",
"Nasir",
""
]
] |
new_dataset
| 0.962843 |
2305.15320
|
Furkan Danisman
|
Furkan Berk Danisman, Ilyurek Kilic, Gizem Sarul, Sena Aktaş,
Niyousha Amini, Osman Orcun Ada
|
METU Students' college life satisfaction
|
13 tables, 18 figures, 34 pages
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The research was conducted to identify the factors that influence college
students' satisfaction with their college experience. Firstly, the study was
focused on the literature review to determine relevant factors that have been
previously studied in the literature. Then, the survey analysis examined three
main independent factors that have been found to be related to college
students' satisfaction: Major Satisfaction, Social Self-Efficacy, and Academic
Performance. The findings of the study suggested that the most important factor
affecting students' satisfaction with their college experience is their
satisfaction with their chosen major. This means that students who are
satisfied with the major they have chosen are more likely to be overall
satisfied with their college experience. It's worth noting that, while the
study found that major satisfaction is the most crucial factor, it doesn't mean
that other factors such as Social Self-Efficacy, Academic Performance, and
Campus Life Satisfaction are not important. Based on these findings, it is
recommended that students prioritize their major satisfaction when making college
choices in order to maximize their overall satisfaction with their college
experience.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 23:49:06 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Danisman",
"Furkan Berk",
""
],
[
"Kilic",
"Ilyurek",
""
],
[
"Sarul",
"Gizem",
""
],
[
"Aktaş",
"Sena",
""
],
[
"Amini",
"Niyousha",
""
],
[
"Ada",
"Osman Orcun",
""
]
] |
new_dataset
| 0.970229 |
2305.15365
|
Mahla Abdolahnejad
|
Mahla Abdolahnejad, Justin Lee, Hannah Chan, Alex Morzycki, Olivier
Ethier, Anthea Mo, Peter X. Liu, Joshua N. Wong, Colin Hong, Rakesh Joshi
|
Boundary Attention Mapping (BAM): Fine-grained saliency maps for
segmentation of Burn Injuries
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Burn injuries can result from mechanisms such as thermal, chemical, and
electrical insults. A prompt and accurate assessment of burns is essential for
deciding definitive clinical treatments. Currently, the primary approach for
burn assessments, via visual and tactile observations, is approximately 60%-80%
accurate. The gold standard is biopsy and a close second would be non-invasive
methods like Laser Doppler Imaging (LDI) assessments, which have up to 97%
accuracy in predicting burn severity and the required healing time. In this
paper, we introduce a machine learning pipeline for assessing burn severities
and segmenting the regions of skin that are affected by burn. Segmenting 2D
colour images of burns allows for the injured versus non-injured skin to be
delineated, clearly marking the extent and boundaries of the localized
burn/region-of-interest, even during remote monitoring of a burn patient. We
trained a convolutional neural network (CNN) to classify four severities of
burns. We built a saliency mapping method, Boundary Attention Mapping (BAM),
that utilises this trained CNN for the purpose of accurately localizing and
segmenting the burn regions from skin burn images. We demonstrated the
effectiveness of our proposed pipeline through extensive experiments and
evaluations using two datasets; 1) A larger skin burn image dataset consisting
of 1684 skin burn images of four burn severities, 2) An LDI dataset that
consists of a total of 184 skin burn images with their associated LDI scans.
The CNN trained using the first dataset achieved an average F1-Score of 78% and
micro/macro- average ROC of 85% in classifying the four burn severities.
Moreover, a comparison between the BAM results and LDI results for measuring
injury boundary showed that the segmentations generated by our method achieved
91.60% accuracy, 78.17% sensitivity, and 93.37% specificity.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 17:15:19 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Abdolahnejad",
"Mahla",
""
],
[
"Lee",
"Justin",
""
],
[
"Chan",
"Hannah",
""
],
[
"Morzycki",
"Alex",
""
],
[
"Ethier",
"Olivier",
""
],
[
"Mo",
"Anthea",
""
],
[
"Liu",
"Peter X.",
""
],
[
"Wong",
"Joshua N.",
""
],
[
"Hong",
"Colin",
""
],
[
"Joshi",
"Rakesh",
""
]
] |
new_dataset
| 0.998889 |
2305.15393
|
Wanrong Zhu
|
Weixi Feng, Wanrong Zhu, Tsu-jui Fu, Varun Jampani, Arjun Akula,
Xuehai He, Sugato Basu, Xin Eric Wang, William Yang Wang
|
LayoutGPT: Compositional Visual Planning and Generation with Large
Language Models
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Attaining a high degree of user controllability in visual generation often
requires intricate, fine-grained inputs like layouts. However, such inputs
impose a substantial burden on users when compared to simple text inputs. To
address the issue, we study how Large Language Models (LLMs) can serve as
visual planners by generating layouts from text conditions, and thus
collaborate with visual generative models. We propose LayoutGPT, a method to
compose in-context visual demonstrations in style sheet language to enhance the
visual planning skills of LLMs. LayoutGPT can generate plausible layouts in
multiple domains, ranging from 2D images to 3D indoor scenes. LayoutGPT also
shows superior performance in converting challenging language concepts like
numerical and spatial relations to layout arrangements for faithful
text-to-image generation. When combined with a downstream image generation
model, LayoutGPT outperforms text-to-image models/systems by 20-40% and
achieves comparable performance as human users in designing visual layouts for
numerical and spatial correctness. Lastly, LayoutGPT achieves comparable
performance to supervised methods in 3D indoor scene synthesis, demonstrating
its effectiveness and potential in multiple visual domains.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 17:56:16 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Feng",
"Weixi",
""
],
[
"Zhu",
"Wanrong",
""
],
[
"Fu",
"Tsu-jui",
""
],
[
"Jampani",
"Varun",
""
],
[
"Akula",
"Arjun",
""
],
[
"He",
"Xuehai",
""
],
[
"Basu",
"Sugato",
""
],
[
"Wang",
"Xin Eric",
""
],
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.991321 |
2305.15399
|
Rundi Wu
|
Rundi Wu, Ruoshi Liu, Carl Vondrick, Changxi Zheng
|
Sin3DM: Learning a Diffusion Model from a Single 3D Textured Shape
|
Project page: https://Sin3DM.github.io, Code:
https://github.com/Sin3DM/Sin3DM
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthesizing novel 3D models that resemble the input example has long been
pursued by researchers and artists in computer graphics. In this paper, we
present Sin3DM, a diffusion model that learns the internal patch distribution
from a single 3D textured shape and generates high-quality variations with fine
geometry and texture details. Training a diffusion model directly in 3D would
induce large memory and computational cost. Therefore, we first compress the
input into a lower-dimensional latent space and then train a diffusion model on
it. Specifically, we encode the input 3D textured shape into triplane feature
maps that represent the signed distance and texture fields of the input. The
denoising network of our diffusion model has a limited receptive field to avoid
overfitting, and uses triplane-aware 2D convolution blocks to improve the
result quality. Aside from randomly generating new samples, our model also
facilitates applications such as retargeting, outpainting and local editing.
Through extensive qualitative and quantitative evaluation, we show that our
model can generate 3D shapes of various types with better quality than prior
methods.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 17:57:15 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Wu",
"Rundi",
""
],
[
"Liu",
"Ruoshi",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Zheng",
"Changxi",
""
]
] |
new_dataset
| 0.969844 |
2305.15403
|
Rongjie Huang
|
Rongjie Huang, Huadai Liu, Xize Cheng, Yi Ren, Linjun Li, Zhenhui Ye,
Jinzheng He, Lichao Zhang, Jinglin Liu, Xiang Yin, Zhou Zhao
|
AV-TranSpeech: Audio-Visual Robust Speech-to-Speech Translation
|
Accepted to ACL 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Direct speech-to-speech translation (S2ST) aims to convert speech from one
language into another, and has demonstrated significant progress to date.
Despite the recent success, current S2ST models still suffer from distinct
degradation in noisy environments and fail to translate visual speech (i.e.,
the movement of lips and teeth). In this work, we present AV-TranSpeech, the
first audio-visual speech-to-speech (AV-S2ST) translation model without relying
on intermediate text. AV-TranSpeech complements the audio stream with visual
information to promote system robustness and opens up a host of practical
applications: dictation or dubbing archival films. To mitigate the data
scarcity with limited parallel AV-S2ST data, we 1) explore self-supervised
pre-training with unlabeled audio-visual data to learn contextual
representation, and 2) introduce cross-modal distillation with S2ST models
trained on the audio-only corpus to further reduce the requirements of visual
data. Experimental results on two language pairs demonstrate that AV-TranSpeech
outperforms audio-only models under all settings regardless of the type of
noise. With low-resource audio-visual data (10h, 30h), cross-modal distillation
yields an improvement of 7.6 BLEU on average compared with baselines. Audio
samples are available at https://AV-TranSpeech.github.io
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 17:59:03 GMT"
}
] | 2023-05-25T00:00:00 |
[
[
"Huang",
"Rongjie",
""
],
[
"Liu",
"Huadai",
""
],
[
"Cheng",
"Xize",
""
],
[
"Ren",
"Yi",
""
],
[
"Li",
"Linjun",
""
],
[
"Ye",
"Zhenhui",
""
],
[
"He",
"Jinzheng",
""
],
[
"Zhang",
"Lichao",
""
],
[
"Liu",
"Jinglin",
""
],
[
"Yin",
"Xiang",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.993752 |
2104.04671
|
Changchun Zou
|
Edward L. Amoruso and Raghu Avula and Stephen P. Johnson and Cliff C.
Zou
|
A Web Infrastructure for Certifying Multimedia News Content for Fake
News Defense
|
7 pages, 6 figures
| null |
10.1109/ISCC55528.2022.9912787
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In dealing with altered multimedia news content, also referred to as fake
news, we present a ready-to-deploy scheme based on existing public key
infrastructure as a new fake news defense paradigm. This scheme enables news
organizations to certify/endorse a newsworthy multimedia news content and
securely and conveniently pass this trust information to end users. A news
organization can use our program to digitally sign the multimedia news content
with its private key. By installing a browser extension, an end user can easily
verify whether a news content has been endorsed and by which organization. It
is totally up to the end user whether to trust the news or the endorsing news
organization. The underlining principles of our scheme are that fake news will
sooner or later be identified as fake by general population, and a news
organization puts its long-term reputation on the line when endorsing a news
content.
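A minimal sketch of the sign-and-verify flow underlying such a scheme, using
Ed25519 from Python's `cryptography` package; the key handling and content
framing here are illustrative assumptions, not the paper's actual protocol:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# News organization: generate a key pair and sign the multimedia content bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"bytes of a multimedia news asset"  # placeholder content
signature = private_key.sign(content)

# End user (browser extension): verify with the organization's public key,
# which in practice would be distributed via the existing PKI (X.509 certs).
try:
    public_key.verify(signature, content)
    print("Content endorsed by the news organization.")
except InvalidSignature:
    print("Signature invalid: content altered or not endorsed.")
```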
|
[
{
"version": "v1",
"created": "Sat, 10 Apr 2021 03:05:34 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 14:04:57 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Amoruso",
"Edward L.",
""
],
[
"Avula",
"Raghu",
""
],
[
"Johnson",
"Stephen P.",
""
],
[
"Zou",
"Cliff C.",
""
]
] |
new_dataset
| 0.999615 |
2110.08994
|
Tengfei Liang
|
Tengfei Liang, Yi Jin, Yajun Gao, Wu Liu, Songhe Feng, Tao Wang,
Yidong Li
|
CMTR: Cross-modality Transformer for Visible-infrared Person
Re-identification
|
11 pages, 7 figures, 7 tables
|
2023 IEEE Transactions on Multimedia (TMM)
|
10.1109/TMM.2023.3237155
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visible-infrared cross-modality person re-identification is a challenging
ReID task, which aims to retrieve and match the same identity's images between
the heterogeneous visible and infrared modalities. Thus, the core of this task
is to bridge the huge gap between these two modalities. The existing
convolutional neural network-based methods mainly face the problem of
insufficient perception of modalities' information, and cannot learn good
discriminative modality-invariant embeddings for identities, which limits their
performance. To solve these problems, we propose a cross-modality
transformer-based method (CMTR) for the visible-infrared person
re-identification task, which can explicitly mine the information of each
modality and generate better discriminative features based on it. Specifically,
to capture modalities' characteristics, we design the novel modality
embeddings, which are fused with token embeddings to encode modalities'
information. Furthermore, to enhance representation of modality embeddings and
adjust matching embeddings' distribution, we propose a modality-aware
enhancement loss based on the learned modalities' information, reducing
intra-class distance and enlarging inter-class distance. To our knowledge, this
is the first work of applying transformer network to the cross-modality
re-identification task. We implement extensive experiments on the public
SYSU-MM01 and RegDB datasets, and our proposed CMTR model's performance
significantly surpasses existing outstanding CNN-based methods.
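As a minimal PyTorch sketch of the modality-embedding idea described above, a
learnable per-modality vector can be fused with the token embeddings; the
dimensions and the fusion-by-addition choice are illustrative assumptions, not
the paper's exact design:

```python
import torch
import torch.nn as nn

class ModalityEmbedding(nn.Module):
    """Add a learnable modality vector (0 = visible, 1 = infrared) to tokens."""
    def __init__(self, dim: int, num_modalities: int = 2):
        super().__init__()
        self.embed = nn.Embedding(num_modalities, dim)

    def forward(self, tokens: torch.Tensor, modality_id: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim); modality_id: (batch,)
        return tokens + self.embed(modality_id).unsqueeze(1)

tokens = torch.randn(4, 197, 768)         # e.g. transformer patch tokens
modality_id = torch.tensor([0, 0, 1, 1])  # visible vs. infrared images
fused = ModalityEmbedding(768)(tokens, modality_id)
```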
|
[
{
"version": "v1",
"created": "Mon, 18 Oct 2021 03:12:59 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Liang",
"Tengfei",
""
],
[
"Jin",
"Yi",
""
],
[
"Gao",
"Yajun",
""
],
[
"Liu",
"Wu",
""
],
[
"Feng",
"Songhe",
""
],
[
"Wang",
"Tao",
""
],
[
"Li",
"Yidong",
""
]
] |
new_dataset
| 0.998754 |
2201.13405
|
Evgeniia Razumovskaia
|
Olga Majewska, Evgeniia Razumovskaia, Edoardo Maria Ponti, Ivan
Vuli\'c, Anna Korhonen
|
Cross-Lingual Dialogue Dataset Creation via Outline-Based Generation
| null | null |
10.1162/tacl_a_00539
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Multilingual task-oriented dialogue (ToD) facilitates access to services and
information for many (communities of) speakers. Nevertheless, the potential of
this technology is not fully realised, as current datasets for multilingual ToD
- both for modular and end-to-end modelling - suffer from severe limitations.
1) When created from scratch, they are usually small in scale and fail to cover
many possible dialogue flows. 2) Translation-based ToD datasets might lack
naturalness and cultural specificity in the target language. In this work, to
tackle these limitations we propose a novel outline-based annotation process
for multilingual ToD datasets, where domain-specific abstract schemata of
dialogue are mapped into natural language outlines. These in turn guide the
target language annotators in writing a dialogue by providing instructions
about each turn's intents and slots. Through this process we annotate a new
large-scale dataset for training and evaluation of multilingual and
cross-lingual ToD systems. Our Cross-lingual Outline-based Dialogue dataset
(termed COD) enables natural language understanding, dialogue state tracking,
and end-to-end dialogue modelling and evaluation in 4 diverse languages:
Arabic, Indonesian, Russian, and Kiswahili. Qualitative and quantitative
analyses of COD versus an equivalent translation-based dataset demonstrate
improvements in data quality, unlocked by the outline-based approach. Finally,
we benchmark a series of state-of-the-art systems for cross-lingual ToD,
setting reference scores for future work and demonstrating that COD prevents
over-inflated performance, typically met with prior translation-based ToD
datasets.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 18:11:21 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Majewska",
"Olga",
""
],
[
"Razumovskaia",
"Evgeniia",
""
],
[
"Ponti",
"Edoardo Maria",
""
],
[
"Vulić",
"Ivan",
""
],
[
"Korhonen",
"Anna",
""
]
] |
new_dataset
| 0.999143 |
2204.00853
|
Chengyin Hu
|
Chengyin Hu, Weiwen Shi, Wen Li
|
Adversarial Neon Beam: A Light-based Physical Attack to DNNs
| null | null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the physical world, deep neural networks (DNNs) are impacted by light and
shadow, which can have a significant effect on their performance. While
stickers have traditionally been used as perturbations in most physical
attacks, their perturbations can often be easily detected. To address this,
some studies have explored the use of light-based perturbations, such as lasers
or projectors, to generate more subtle perturbations, which are artificial
rather than natural. In this study, we introduce a novel light-based attack
called the adversarial neon beam (AdvNB), which utilizes common neon beams to
create a natural black-box physical attack. Our approach is evaluated on three
key criteria: effectiveness, stealthiness, and robustness. Quantitative results
obtained in simulated environments demonstrate the effectiveness of the
proposed method, and in physical scenarios, we achieve an attack success rate
of 81.82%, surpassing the baseline. By using common neon beams as
perturbations, we enhance the stealthiness of the proposed attack, enabling
physical samples to appear more natural. Moreover, we validate the robustness
of our approach by successfully attacking advanced DNNs with a success rate of
over 75% in all cases. We also discuss defense strategies against the AdvNB
attack and put forward other light-based physical attacks.
|
[
{
"version": "v1",
"created": "Sat, 2 Apr 2022 12:57:00 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 08:04:33 GMT"
},
{
"version": "v3",
"created": "Tue, 23 May 2023 07:42:50 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Hu",
"Chengyin",
""
],
[
"Shi",
"Weiwen",
""
],
[
"Li",
"Wen",
""
]
] |
new_dataset
| 0.999292 |
2205.14268
|
Connor Pryor
|
Connor Pryor, Charles Dickens, Eriq Augustine, Alon Albalak, William
Wang, Lise Getoor
|
NeuPSL: Neural Probabilistic Soft Logic
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce Neural Probabilistic Soft Logic (NeuPSL), a novel
neuro-symbolic (NeSy) framework that unites state-of-the-art symbolic reasoning
with the low-level perception of deep neural networks. To model the boundary
between neural and symbolic representations, we propose a family of
energy-based models, NeSy Energy-Based Models, and show that they are general
enough to include NeuPSL and many other NeSy approaches. Using this framework,
we show how to seamlessly integrate neural and symbolic parameter learning and
inference in NeuPSL. Through an extensive empirical evaluation, we demonstrate
the benefits of using NeSy methods, achieving upwards of 30% improvement over
independent neural network models. On a well-established NeSy task,
MNIST-Addition, NeuPSL demonstrates its joint reasoning capabilities by
outperforming existing NeSy approaches by up to 10% in low-data settings.
Furthermore, NeuPSL achieves a 5% boost in performance over state-of-the-art
NeSy methods in a canonical citation network task with up to a 40 times speed
up.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 23:06:52 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 22:49:50 GMT"
},
{
"version": "v3",
"created": "Tue, 23 May 2023 15:47:49 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Pryor",
"Connor",
""
],
[
"Dickens",
"Charles",
""
],
[
"Augustine",
"Eriq",
""
],
[
"Albalak",
"Alon",
""
],
[
"Wang",
"William",
""
],
[
"Getoor",
"Lise",
""
]
] |
new_dataset
| 0.998511 |
2206.01034
|
Chengyin Hu
|
Chengyin Hu, Yilong Wang, Kalibinuer Tiliwalidi, Wen Li
|
Adversarial Laser Spot: Robust and Covert Physical-World Attack to DNNs
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing deep neural networks (DNNs) are easily disturbed by slight
noise. However, there is little research on physical attacks that deploy
lighting equipment. Light-based physical attacks offer excellent covertness,
which brings great security risks to many vision-based applications (such as
self-driving). Therefore, we propose a light-based physical attack, called
adversarial laser spot (AdvLS), which optimizes the physical parameters of
laser spots through a genetic algorithm to perform physical attacks. It
realizes robust and covert physical attacks using low-cost laser equipment. As
far as we know, AdvLS is the first light-based physical attack that performs
physical attacks in the daytime. A large number of experiments in the digital
and physical environments show that AdvLS has excellent robustness and
covertness. In addition, through in-depth analysis of the experimental data, we
find that the adversarial perturbations generated by AdvLS have superior attack
transferability. The experimental results show that AdvLS imposes serious
interference on advanced DNNs; we therefore call for attention to the threat it
poses. The code of AdvLS is available at: https://github.com/ChengYinHu/AdvLS
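A heavily simplified sketch of this kind of genetic-algorithm search,
optimizing laser-spot parameters (position, radius, intensity) to lower a
classifier's confidence. The rendering, fitness function, and parameter bounds
are placeholder assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def render_spot(image, params):
    """Paint a crude circular 'laser spot' (x, y, radius, intensity) onto a copy."""
    x, y, r, a = params
    out = image.copy()
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2
    out[mask] = np.clip(out[mask] + a, 0.0, 1.0)
    return out

def fitness(image, params, confidence):
    # Higher fitness = lower true-class confidence (a black-box query in practice).
    return -confidence(render_spot(image, params))

def genetic_attack(image, confidence, pop=20, gens=50):
    bounds = np.array([image.shape[1], image.shape[0], 30.0, 1.0])
    population = rng.random((pop, 4)) * bounds
    for _ in range(gens):
        scores = np.array([fitness(image, p, confidence) for p in population])
        parents = population[np.argsort(scores)[-(pop // 2):]]        # selection
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = children + rng.normal(0.0, 0.05, children.shape) * bounds  # mutation
        population = np.vstack([parents, np.clip(children, 0.0, bounds)])
    scores = np.array([fitness(image, p, confidence) for p in population])
    return population[np.argmax(scores)]

# Example: attack a dummy "classifier" on a grayscale image in [0, 1].
best_params = genetic_attack(np.zeros((224, 224)), lambda img: 1.0 - img.mean())
```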
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 13:15:08 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 09:39:05 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Hu",
"Chengyin",
""
],
[
"Wang",
"Yilong",
""
],
[
"Tiliwalidi",
"Kalibinuer",
""
],
[
"Li",
"Wen",
""
]
] |
new_dataset
| 0.99919 |
2206.12251
|
Chengyin Hu
|
Chengyin Hu, Weiwen Shi
|
Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs
| null | null | null | null |
cs.CR cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although deep neural networks (DNNs) are known to be fragile, no one has
studied the effects of zooming in and out of images in the physical
world on DNN performance. In this paper, we demonstrate a novel physical
adversarial attack technique called Adversarial Zoom Lens (AdvZL), which uses a
zoom lens to zoom in and out of pictures of the physical world, fooling DNNs
without changing the characteristics of the target object. The proposed method
is so far the only adversarial attack technique that attacks DNNs without
adding any physical adversarial perturbation. In a digital environment, we
construct a dataset based on AdvZL to verify the adversarial effect of
equal-scale enlarged images on DNNs. In the physical environment, we manipulate
the zoom lens to zoom in and out of the target object, and generate adversarial
samples. The experimental results demonstrate the effectiveness of AdvZL in
both digital and physical environments. We further analyze the adversarial
effect of the proposed dataset on improved DNNs. On the other hand, we provide
a guideline for defense against AdvZL by means of adversarial training.
Finally, we discuss the threat the proposed approach poses to future autonomous
driving, along with variant attack ideas similar to the proposed attack.
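A minimal sketch of how such a zoom-in can be simulated digitally, as in the
digital-environment experiments above: center-crop by a zoom factor and resize
back to the original resolution. The interpolation choice and zoom range are
illustrative assumptions:

```python
from PIL import Image

def simulate_zoom_in(img: Image.Image, zoom: float) -> Image.Image:
    """Simulate an optical zoom-in: center-crop 1/zoom of the frame and
    resize back to the original resolution (zoom > 1 enlarges the subject)."""
    w, h = img.size
    cw, ch = int(w / zoom), int(h / zoom)
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch))
    return crop.resize((w, h), Image.Resampling.BICUBIC)

# Sweep zoom factors to probe a classifier's sensitivity (classifier omitted).
frames = [simulate_zoom_in(Image.new("RGB", (224, 224)), z) for z in (1.1, 1.5, 2.0)]
```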
|
[
{
"version": "v1",
"created": "Thu, 23 Jun 2022 13:03:08 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 15:41:03 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Hu",
"Chengyin",
""
],
[
"Shi",
"Weiwen",
""
]
] |
new_dataset
| 0.998215 |
2208.04191
|
Yiqin Wang
|
Yiqin Wang, Yuanbo Li, Chong Han, Yi Chen, and Ziming Yu
|
300 GHz Dual-Band Channel Measurement, Analysis and Modeling in an
L-shaped Hallway
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Terahertz (THz) band (0.1-10 THz) has been envisioned as one of the
promising spectrum bands for sixth-generation (6G) and beyond communications.
In this paper, a dual-band angular-resolvable wideband channel measurement in
an indoor L-shaped hallway is presented and THz channel characteristics at
306-321 GHz and 356-371 GHz are analyzed. It is found that conventional
close-in and alpha-beta path loss models cannot take good care of large-scale
fading in the non-line-of-sight (NLoS) case, for which a modified alpha-beta
path loss model for the NLoS case is proposed and verified in the NLoS case for
both indoor and outdoor L-shaped scenarios. To describe both large-scale and
small-scale fading, a ray-tracing (RT)-statistical hybrid channel model is
proposed in the THz hallway scenario. Specifically, the deterministic part of
the hybrid model uses RT modeling of dominant
multi-path components (MPCs), i.e., LoS and multi-bounce reflected paths in the
near-NLoS region, while dominant MPCs at far-NLoS positions can be deduced
based on the developed statistical evolving model. The evolving model describes
the continuous change of arrival angle, power and delay of dominant MPCs in the
NLoS region. On the other hand, non-dominant MPCs are generated statistically.
The proposed hybrid approach reduces the computational cost and compensates for
the inaccuracy, or even absence, of dominant MPCs obtained through RT at
far-NLoS positions.
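For reference, the two standard large-scale models named above take the forms
below, where n is the path loss exponent, alpha and beta are the floating
intercept and slope, and X_sigma is zero-mean Gaussian shadowing in dB; the
paper's modified NLoS model is not reproduced here:

```latex
% Close-in (CI) model with a 1 m free-space reference distance:
\mathrm{PL}^{\mathrm{CI}}(f, d) = \mathrm{FSPL}(f, 1\,\mathrm{m})
  + 10\, n \log_{10}\!\left(\tfrac{d}{1\,\mathrm{m}}\right) + X_\sigma
% Alpha-beta (floating-intercept) model:
\mathrm{PL}^{\mathrm{AB}}(d) = \alpha + 10\, \beta \log_{10}(d) + X_\sigma
```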
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 14:45:32 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 07:11:12 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Wang",
"Yiqin",
""
],
[
"Li",
"Yuanbo",
""
],
[
"Han",
"Chong",
""
],
[
"Chen",
"Yi",
""
],
[
"Yu",
"Ziming",
""
]
] |
new_dataset
| 0.99713 |
2209.02430
|
Chengyin Hu
|
Chengyin Hu, Weiwen Shi
|
Adversarial Color Film: Effective Physical-World Attack to DNNs
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well known that the performance of deep neural networks (DNNs) is
susceptible to subtle interference. So far, camera-based physical adversarial
attacks have received little attention, leaving a gap in physical attack
research. In this paper, we propose a simple and efficient camera-based
physical attack called Adversarial Color Film (AdvCF), which manipulates the
physical parameters of a color film to perform attacks. Carefully designed
experiments show the effectiveness of the proposed method in both digital and
physical environments. In addition, experimental results show that the
adversarial samples generated by AdvCF have excellent attack transferability,
which enables effective black-box attacks. At the same time, we provide
guidance for defending against AdvCF by means of adversarial training. Finally,
we look into AdvCF's threat to future vision-based systems and propose some
promising directions for camera-based physical attacks.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 08:22:32 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 12:29:21 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Hu",
"Chengyin",
""
],
[
"Shi",
"Weiwen",
""
]
] |
new_dataset
| 0.997998 |
2209.11739
|
Chengyin Hu
|
Chengyin Hu, Weiwen Shi
|
Adversarial Catoptric Light: An Effective, Stealthy and Robust
Physical-World Attack to DNNs
|
arXiv admin note: substantial text overlap with arXiv:2209.09652,
arXiv:2209.02430
| null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks (DNNs) have demonstrated exceptional success across
various tasks, underscoring the need to evaluate the robustness of advanced
DNNs. However, traditional methods using stickers as physical perturbations to
deceive classifiers present challenges in achieving stealthiness and suffer
from printing loss. Recent advancements in physical attacks have utilized light
beams such as lasers and projectors to perform attacks, where the optical
patterns generated are artificial rather than natural. In this study, we
introduce a novel physical attack, adversarial catoptric light (AdvCL), where
adversarial perturbations are generated using a common natural phenomenon,
catoptric light, to achieve stealthy and naturalistic adversarial attacks
against advanced DNNs in a black-box setting. We evaluate the proposed method
in three aspects: effectiveness, stealthiness, and robustness. Quantitative
results obtained in simulated environments demonstrate the effectiveness of the
proposed method, and in physical scenarios, we achieve an attack success rate
of 83.5%, surpassing the baseline. We use common catoptric light as a
perturbation to enhance the stealthiness of the method and make physical
samples appear more natural. Robustness is validated by successfully attacking
advanced and robust DNNs with a success rate over 80% in all cases.
Additionally, we discuss defense strategies against AdvCL and put forward other
light-based physical attacks.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 12:33:46 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 14:05:08 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Hu",
"Chengyin",
""
],
[
"Shi",
"Weiwen",
""
]
] |
new_dataset
| 0.996057 |
2209.14402
|
Giuseppe Serra
|
Giuseppe Serra, Mathias Niepert
|
L2XGNN: Learning to Explain Graph Neural Networks
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Graph Neural Networks (GNNs) are a popular class of machine learning models.
Inspired by the learning to explain (L2X) paradigm, we propose L2XGNN, a
framework for explainable GNNs which provides faithful explanations by design.
L2XGNN learns a mechanism for selecting explanatory subgraphs (motifs) which
are exclusively used in the GNN's message-passing operations. L2XGNN is able to
select, for each input graph, a subgraph with specific properties such as being
sparse and connected. Imposing such constraints on the motifs often leads to
more interpretable and effective explanations. Experiments on several datasets
suggest that L2XGNN achieves the same classification accuracy as baseline
methods using the entire input graph while ensuring that only the provided
explanations are used to make predictions. Moreover, we show that L2XGNN is
able to identify motifs responsible for the graph's properties it is intended
to predict.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 20:03:57 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 09:52:54 GMT"
},
{
"version": "v3",
"created": "Tue, 23 May 2023 09:41:31 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Serra",
"Giuseppe",
""
],
[
"Niepert",
"Mathias",
""
]
] |
new_dataset
| 0.990706 |
2210.01989
|
Yufan Zhuang
|
Yufan Zhuang, Zihan Wang, Fangbo Tao, Jingbo Shang
|
WavSpA: Wavelet Space Attention for Boosting Transformers' Long Sequence
Learning Ability
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Transformer and its variants are fundamental neural architectures in deep
learning. Recent works show that learning attention in the Fourier space can
improve the long sequence learning capability of Transformers. We argue that
the wavelet transform is a better choice because it captures both position
and frequency information with linear time complexity. Therefore, in this
paper, we systematically study the synergy between wavelet transform and
Transformers. We propose Wavelet Space Attention (WavSpA) that facilitates
attention learning in a learnable wavelet coefficient space which replaces the
attention in Transformers by (1) applying forward wavelet transform to project
the input sequences to multi-resolution bases, (2) conducting attention
learning in the wavelet coefficient space, and (3) reconstructing the
representation in input space via backward wavelet transform. Extensive
experiments on the Long Range Arena demonstrate that learning attention in the
wavelet space using either fixed or adaptive wavelets can consistently improve
Transformer's performance and also significantly outperform learning in Fourier
space. We further show our method can enhance Transformer's reasoning
extrapolation capability over distance on the LEGO chain-of-reasoning task.
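A minimal numpy/pywt sketch of the three-step pipeline described above, with a
toy single-head self-attention standing in for the Transformer layer; the
wavelet family ("db2") and the single-level decomposition are arbitrary
illustrative choices:

```python
import numpy as np
import pywt

def toy_attention(x):
    """Single-head scaled dot-product self-attention with identity projections."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def wavspa_layer(x, wavelet="db2"):
    # (1) forward DWT along the sequence axis -> approximation/detail coefficients
    cA, cD = pywt.dwt(x, wavelet, axis=0)
    # (2) attention in the wavelet coefficient space
    cA, cD = toy_attention(cA), toy_attention(cD)
    # (3) inverse DWT reconstructs the representation in the input space
    return pywt.idwt(cA, cD, wavelet, axis=0)

out = wavspa_layer(np.random.randn(128, 64))  # (sequence_length, model_dim)
```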
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 02:37:59 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 22:54:32 GMT"
},
{
"version": "v3",
"created": "Mon, 22 May 2023 22:42:47 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Zhuang",
"Yufan",
""
],
[
"Wang",
"Zihan",
""
],
[
"Tao",
"Fangbo",
""
],
[
"Shang",
"Jingbo",
""
]
] |
new_dataset
| 0.999012 |
2211.01335
|
Junyang Lin
|
An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren
Zhou, Chang Zhou
|
Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The tremendous success of CLIP (Radford et al., 2021) has promoted the
research and application of contrastive learning for vision-language
pretraining. In this work, we construct a large-scale dataset of image-text
pairs in Chinese, where most data are retrieved from publicly available
datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5
Chinese CLIP models of multiple sizes, spanning from 77 to 958 million
parameters. Furthermore, we propose a two-stage pretraining method, where the
model is first trained with the image encoder frozen and then trained with all
parameters being optimized, to achieve enhanced model performance. Our
comprehensive experiments demonstrate that Chinese CLIP can achieve the
state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups
of zero-shot learning and finetuning, and it is able to achieve competitive
performance in zero-shot image classification based on the evaluation on the
ELEVATER benchmark (Li et al., 2022). We have released our codes, models, and
demos in https://github.com/OFA-Sys/Chinese-CLIP
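A minimal PyTorch sketch of the two-stage recipe described above, using the
standard symmetric contrastive (InfoNCE) objective; the encoders are trivial
stand-ins for the actual towers and the training loop is elided:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

image_encoder = nn.Linear(2048, 512)  # stand-in for a ViT image tower
text_encoder = nn.Linear(768, 512)    # stand-in for a BERT text tower

def clip_loss(img_feats, txt_feats, temperature=0.07):
    img = F.normalize(img_feats, dim=-1)
    txt = F.normalize(txt_feats, dim=-1)
    logits = img @ txt.t() / temperature
    labels = torch.arange(len(logits))
    # symmetric cross-entropy over image->text and text->image matching
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

# Stage 1: freeze the image tower, adapt only the text tower.
for p in image_encoder.parameters():
    p.requires_grad_(False)
# ... train with clip_loss ...

# Stage 2: unfreeze everything and optimize all parameters jointly.
for p in image_encoder.parameters():
    p.requires_grad_(True)
loss = clip_loss(image_encoder(torch.randn(8, 2048)), text_encoder(torch.randn(8, 768)))
```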
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 17:47:23 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Nov 2022 13:21:44 GMT"
},
{
"version": "v3",
"created": "Tue, 23 May 2023 01:28:21 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Yang",
"An",
""
],
[
"Pan",
"Junshu",
""
],
[
"Lin",
"Junyang",
""
],
[
"Men",
"Rui",
""
],
[
"Zhang",
"Yichang",
""
],
[
"Zhou",
"Jingren",
""
],
[
"Zhou",
"Chang",
""
]
] |
new_dataset
| 0.999791 |
2211.07615
|
Sagar Gubbi Venkatesh
|
Sagar Gubbi Venkatesh, Partha Talukdar, Srini Narayanan
|
UGIF: UI Grounded Instruction Following
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Smartphone users often find it difficult to navigate myriad menus to perform
common tasks such as "How to block calls from unknown numbers?". Currently,
help documents with step-by-step instructions are manually written to aid the
user. The user experience can be further enhanced by grounding the instructions
in the help document to the UI and overlaying a tutorial on the phone UI. To
build such tutorials, several natural language processing components including
retrieval, parsing, and grounding are necessary, but there isn't any relevant
dataset for such a task. Thus, we introduce UGIF-DataSet, a multi-lingual,
multi-modal UI grounded dataset for step-by-step task completion on the
smartphone containing 4,184 tasks across 8 languages. As an initial approach to
this problem, we propose retrieving the relevant instruction steps based on the
user's query and parsing the steps using Large Language Models (LLMs) to
generate macros that can be executed on-device. The instruction steps are often
available only in English, so the challenge includes cross-modal, cross-lingual
retrieval of English how-to pages from user queries in many languages and
mapping English instruction steps to UI in a potentially different language. We
compare the performance of different LLMs including PaLM and GPT-3 and find
that the end-to-end task completion rate is 48% for English UI but the
performance drops to 32% for other languages. We analyze the common failure
modes of existing models on this task and point out areas for improvement.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 18:36:19 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 16:08:10 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Venkatesh",
"Sagar Gubbi",
""
],
[
"Talukdar",
"Partha",
""
],
[
"Narayanan",
"Srini",
""
]
] |
new_dataset
| 0.999532 |
2211.11180
|
Yiqin Wang
|
Yiqin Wang, Yuanbo Li, Yi Chen, Ziming Yu, Chong Han
|
300 GHz Wideband Channel Measurement and Analysis in a Lobby
|
6 pages, 6 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Terahertz (0.1-10 THz) band has been envisioned as one of the promising
spectrum bands to support ultra-broadband sixth-generation (6G) and beyond
communications. In this paper, a wideband channel measurement campaign in a
500-square-meter indoor lobby at 306-321 GHz is presented. The measurement
system consists of a vector network analyzer (VNA)-based channel sounder, and a
directional antenna equipped at the receiver to resolve multi-path components
(MPCs) in the angular domain. In particular, 21 positions and 3780 channel
impulse responses (CIRs) are measured in the lobby, including the line-of-sight
(LoS), non-line-of-sight (NLoS) and obstructed-line-of-sight (OLoS) cases. The
multi-path characteristics are summarized as follows. First, the main
scatterers in the lobby include the glass, the pillar, and the LED screen.
Second, best direction and omni-directional path losses are analyzed. Compared
with the close-in path loss model, the optimal path loss offset in the
alpha-beta path loss model exceeds 86 dB in the LoS case, and accordingly, the
exponent decreases to 1.57 and below. Third, more than 10 clusters are observed
in OLoS and NLoS cases, compared to 2.17 clusters on average in the LoS case.
Fourth, the average power dispersion of MPCs is smaller in both temporal and
angular domains in the LoS case, compared with the NLoS and OLoS counterparts.
Finally, in contrast to hallway scenarios measured in previous works at the
same frequency band, the lobby which is larger in dimension and square in
shape, features larger path losses and smaller delay and angular spreads.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 04:51:06 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 07:16:37 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Wang",
"Yiqin",
""
],
[
"Li",
"Yuanbo",
""
],
[
"Chen",
"Yi",
""
],
[
"Yu",
"Ziming",
""
],
[
"Han",
"Chong",
""
]
] |
new_dataset
| 0.999755 |
2212.10474
|
Jonas Belouadi
|
Jonas Belouadi, Steffen Eger
|
ByGPT5: End-to-End Style-conditioned Poetry Generation with Token-free
Language Models
|
Accepted at ACL 2023 (main track)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art poetry generation systems are often complex. They either
consist of task-specific model pipelines, incorporate prior knowledge in the
form of manually created constraints, or both. In contrast, end-to-end models
would not suffer from the overhead of having to model prior knowledge and could
learn the nuances of poetry from data alone, reducing the degree of human
supervision required. In this work, we investigate end-to-end poetry generation
conditioned on styles such as rhyme, meter, and alliteration. We identify and
address lack of training data and mismatching tokenization algorithms as
possible limitations of past attempts. In particular, we successfully pre-train
ByGPT5, a new token-free decoder-only language model, and fine-tune it on a
large custom corpus of English and German quatrains annotated with our styles.
We show that ByGPT5 outperforms other models such as mT5, ByT5, GPT-2 and
ChatGPT, while also being more parameter efficient and performing favorably
compared to humans. In addition, we analyze its runtime performance and
demonstrate that it is not prone to memorization. We make our code, models, and
datasets publicly available.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 17:49:49 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 21:15:06 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Belouadi",
"Jonas",
""
],
[
"Eger",
"Steffen",
""
]
] |
new_dataset
| 0.991945 |
2302.05899
|
Efstratios Chatzoglou
|
Efstratios Chatzoglou and Vyron Kampourakis and Georgios Kambourakis
|
Bl0ck: Paralyzing 802.11 connections through Block Ack frames
| null | null |
10.48550/arXiv.2302.05899
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Although Wi-Fi is on the eve of its seventh generation, security concerns
regarding this omnipresent technology remain in the spotlight of the research
community. This work introduces two new denial of service attacks against
contemporary Wi-Fi 5 and 6 networks. Differently from similar works in the
literature, which focus on 802.11 management frames, the introduced assaults
exploit control frames. Both attacks target the central element of any
infrastructure-based 802.11 network, i.e., the access point (AP), and result in
depriving the associated stations of any service. We demonstrate that, at the
very least, the attacks affect a great mass of off-the-shelf AP implementations
by different renowned vendors, and that they can be mounted with inexpensive
equipment, little effort, and a low level of expertise. With reference to the
latest standard, namely 802.11-2020, we elaborate on the root cause of the
respective vulnerabilities, pinpointing shortcomings. Following a coordinated
vulnerability disclosure process, our findings have been promptly communicated
to each affected AP vendor, already receiving positive feedback as well as a -
currently reserved - common vulnerabilities and exposures (CVE) ID, namely
CVE-2022-32666.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 12:33:48 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 13:15:01 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Chatzoglou",
"Efstratios",
""
],
[
"Kampourakis",
"Vyron",
""
],
[
"Kambourakis",
"Georgios",
""
]
] |
new_dataset
| 0.982775 |
2303.16281
|
Queenie Luo
|
Queenie Luo, Michael J. Puett, Michael D. Smith
|
A Perspectival Mirror of the Elephant: Investigating Language Bias on
Google, ChatGPT, Wikipedia, and YouTube
| null | null | null | null |
cs.CY cs.AI cs.CL cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contrary to Google Search's mission of delivering information from "many
angles so you can form your own understanding of the world," we find that
Google and its most prominent returned results - Wikipedia and YouTube - simply
reflect a narrow set of cultural stereotypes tied to the search language for
complex topics like "Buddhism," "Liberalism," "colonization," "Iran" and
"America." Simply stated, they present, to varying degrees, distinct
information across the same search in different languages, a phenomenon we call
'language bias.' This paper presents evidence and analysis of language bias and
discusses its larger social implications. Instead of presenting a global
picture of a complex topic, our online searches and emerging tools like ChatGPT
turn us into the proverbial blind person touching a small portion of an
elephant, ignorant of the existence of other cultural perspectives. Piecing
together a genuine depiction of the elephant is a challenging and important
endeavor that will require collaborative efforts from scholars in both the
humanities and technology.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 19:49:58 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 07:14:46 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Luo",
"Queenie",
""
],
[
"Puett",
"Michael J.",
""
],
[
"Smith",
"Michael D.",
""
]
] |
new_dataset
| 0.999368 |
2303.17870
|
Jian Ma
|
Jian Ma, Mingjun Zhao, Chen Chen, Ruichen Wang, Di Niu, Haonan Lu,
Xiaodong Lin
|
GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures
in Text-to-Image Generation
|
24 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent breakthroughs in the field of language-guided image generation have
yielded impressive achievements, enabling the creation of high-quality and
diverse images based on user instructions. Although the synthesis performance
is fascinating, one significant limitation of current image generation models
is their insufficient ability to generate text coherently within images,
particularly for complex glyph structures like Chinese characters. To address
this problem, we introduce GlyphDraw, a general learning framework aiming to
endow image generation models with the capacity to generate images coherently
embedded with text for any specific language. We first carefully design the
construction strategy for the image-text dataset, then build our model
specifically on a diffusion-based image generator and carefully modify the
network structure to allow the model to learn drawing language characters with
the help of glyph and position information. Furthermore, we maintain the
model's open-domain image synthesis capability by preventing catastrophic
forgetting through parameter-efficient fine-tuning techniques. Extensive
qualitative and quantitative experiments demonstrate that our method not only
produces accurate language characters as in prompts, but also seamlessly blends
the generated text into the background. Please refer to our project page:
https://1073521013.github.io/glyph-draw.github.io/
|
[
{
"version": "v1",
"created": "Fri, 31 Mar 2023 08:06:33 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 04:07:00 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Ma",
"Jian",
""
],
[
"Zhao",
"Mingjun",
""
],
[
"Chen",
"Chen",
""
],
[
"Wang",
"Ruichen",
""
],
[
"Niu",
"Di",
""
],
[
"Lu",
"Haonan",
""
],
[
"Lin",
"Xiaodong",
""
]
] |
new_dataset
| 0.997633 |
2304.06287
|
Chen Yang
|
Chen Yang, Peihao Li, Zanwei Zhou, Shanxin Yuan, Bingbing Liu,
Xiaokang Yang, Weichao Qiu, Wei Shen
|
NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry
Scaffolds
|
10 pages, 7 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present NeRFVS, a novel neural radiance fields (NeRF) based method to
enable free navigation in a room. NeRF achieves impressive performance in
rendering images for novel views similar to the input views, while struggling
with novel views that differ significantly from the training views. To
address this issue, we utilize the holistic priors, including pseudo depth maps
and view coverage information, from neural reconstruction to guide the learning
of implicit neural representations of 3D indoor scenes. Concretely, an
off-the-shelf neural reconstruction method is leveraged to generate a geometry
scaffold. Then, two loss functions based on the holistic priors are proposed to
improve the learning of NeRF: 1) A robust depth loss that can tolerate the
error of the pseudo depth map to guide the geometry learning of NeRF; 2) A
variance loss to regularize the variance of implicit neural representations to
reduce the geometry and color ambiguity in the learning procedure. These two
loss functions are modulated during NeRF optimization according to the view
coverage information to reduce the negative influence brought by the view
coverage imbalance. Extensive results demonstrate that our NeRFVS outperforms
state-of-the-art view synthesis methods quantitatively and qualitatively on
indoor scenes, achieving high-fidelity free navigation results.
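The exact loss forms are not given in this abstract; as hedged illustrations
only, the robust depth term could apply a robust penalty rho (e.g. Huber)
against the pseudo depth, and the variance term could penalize the spread of
implicit predictions along each ray. Both forms below are assumptions:

```latex
% Hypothetical robust depth loss (rho = e.g. a Huber penalty) over rays r:
\mathcal{L}_{\mathrm{depth}} = \sum_{\mathbf{r}}
  \rho\big(\hat{D}(\mathbf{r}) - D_{\mathrm{pseudo}}(\mathbf{r})\big)
% Hypothetical variance loss over samples t along each ray:
\mathcal{L}_{\mathrm{var}} = \sum_{\mathbf{r}}
  \mathrm{Var}_{t}\big[f_\theta(\mathbf{r}(t))\big]
```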
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 06:40:08 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 12:49:17 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Yang",
"Chen",
""
],
[
"Li",
"Peihao",
""
],
[
"Zhou",
"Zanwei",
""
],
[
"Yuan",
"Shanxin",
""
],
[
"Liu",
"Bingbing",
""
],
[
"Yang",
"Xiaokang",
""
],
[
"Qiu",
"Weichao",
""
],
[
"Shen",
"Wei",
""
]
] |
new_dataset
| 0.950759 |
2305.06849
|
Yujia Qin
|
Yujia Qin, Zihan Cai, Dian Jin, Lan Yan, Shihao Liang, Kunlun Zhu,
Yankai Lin, Xu Han, Ning Ding, Huadong Wang, Ruobing Xie, Fanchao Qi, Zhiyuan
Liu, Maosong Sun, and Jie Zhou
|
WebCPM: Interactive Web Search for Chinese Long-form Question Answering
|
ACL 2023, main conference
| null | null | null |
cs.CL cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Long-form question answering (LFQA) aims at answering complex, open-ended
questions with detailed, paragraph-length responses. The de facto paradigm of
LFQA necessitates two procedures: information retrieval, which searches for
relevant supporting facts, and information synthesis, which integrates these
facts into a coherent answer. In this paper, we introduce WebCPM, the first
Chinese LFQA dataset. One unique feature of WebCPM is that its information
retrieval is based on interactive web search, which engages with a search
engine in real time. Following WebGPT, we develop a web search interface. We
recruit annotators to search for relevant information using our interface and
then answer questions. Meanwhile, the web search behaviors of our annotators
would be recorded. In total, we collect 5,500 high-quality question-answer
pairs, together with 14,315 supporting facts and 121,330 web search actions. We
fine-tune pre-trained language models to imitate human behaviors for web search
and to generate answers based on the collected facts. Our LFQA pipeline, built
on these fine-tuned models, generates answers that are no worse than
human-written ones in 32.5% and 47.5% of the cases on our dataset and DuReader,
respectively.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 14:47:29 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 13:15:10 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Qin",
"Yujia",
""
],
[
"Cai",
"Zihan",
""
],
[
"Jin",
"Dian",
""
],
[
"Yan",
"Lan",
""
],
[
"Liang",
"Shihao",
""
],
[
"Zhu",
"Kunlun",
""
],
[
"Lin",
"Yankai",
""
],
[
"Han",
"Xu",
""
],
[
"Ding",
"Ning",
""
],
[
"Wang",
"Huadong",
""
],
[
"Xie",
"Ruobing",
""
],
[
"Qi",
"Fanchao",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
],
[
"Zhou",
"Jie",
""
]
] |
new_dataset
| 0.999742 |
2305.07507
|
Ilias Chalkidis
|
Ilias Chalkidis, Nicolas Garneau, Catalina Goanta, Daniel Martin Katz,
Anders S{\o}gaard
|
LeXFiles and LegalLAMA: Facilitating English Multinational Legal
Language Model Development
|
9 pages, long paper at ACL 2023 proceedings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this work, we conduct a detailed analysis on the performance of
legal-oriented pre-trained language models (PLMs). We examine the interplay
between their original objective, acquired knowledge, and legal language
understanding capacities which we define as the upstream, probing, and
downstream performance, respectively. We consider not only the models' size but
also the pre-training corpora used as important dimensions in our study. To
this end, we release a multinational English legal corpus (LeXFiles) and a
legal knowledge probing benchmark (LegalLAMA) to facilitate training and
detailed analysis of legal-oriented PLMs. We release two new legal PLMs trained
on LeXFiles and evaluate them alongside others on LegalLAMA and LexGLUE. We
find that probing performance strongly correlates with upstream performance in
related legal topics. On the other hand, downstream performance is mainly
driven by the model's size and prior legal knowledge which can be estimated by
upstream and probing performance. Based on these findings, we can conclude that
both dimensions are important for those seeking the development of
domain-specific PLMs.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 14:21:38 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 18:20:54 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Chalkidis",
"Ilias",
""
],
[
"Garneau",
"Nicolas",
""
],
[
"Goanta",
"Catalina",
""
],
[
"Katz",
"Daniel Martin",
""
],
[
"Søgaard",
"Anders",
""
]
] |
new_dataset
| 0.999588 |
2305.07664
|
Jeyakodi G
|
G. Jeyakodi, Trisha Agarwal, P. Shanthi Bala
|
mAedesID: Android Application for Aedes Mosquito Species Identification
using Convolutional Neural Network
|
11 pages, 13 figures. This paper was presented at the International
Conference on Knowledge Discoveries on Statistical Innovations and Recent
Advances in Optimization (ICON-KSRAO) on 29th and 30th December 2022. Only the
abstract is printed in the conference proceedings
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vector-Borne Disease (VBD) is an infectious disease transmitted through the
pathogenic female Aedes mosquito to humans and animals. It is important to
control dengue disease by reducing the spread of Aedes mosquito vectors.
Community awareness plays a crucial role in ensuring the success of Aedes
control programmes and encourages communities to take an active part.
Identifying the species of mosquito will help to recognize the mosquito density
in the locality and to intensify mosquito control efforts in particular areas.
This will help in avoiding Aedes breeding sites around residential areas and
reduce adult mosquitoes. To serve this purpose, an Android application is
developed to identify Aedes species and help the community contribute to
mosquito control events. Several Android applications have been developed to
identify species like birds, plant species, and Anopheles mosquito species. In
this work, a user-friendly mobile application, mAedesID, is developed for
identifying the Aedes mosquito species using a deep learning Convolutional
Neural Network (CNN) algorithm, which is best suited for species image
classification and achieves better accuracy for voluminous images. The mobile
application can be downloaded from the URL https://tinyurl.com/mAedesID.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 14:20:13 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 05:18:06 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Jeyakodi",
"G.",
""
],
[
"Agarwal",
"Trisha",
""
],
[
"Bala",
"P. Shanthi",
""
]
] |
new_dataset
| 0.997588 |
2305.09758
|
Yaman Kumar Singla
|
Aanisha Bhattacharya, Yaman K Singla, Balaji Krishnamurthy, Rajiv Ratn
Shah, Changyou Chen
|
A Video Is Worth 4096 Tokens: Verbalize Story Videos To Understand Them
In Zero Shot
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimedia content, such as advertisements and story videos, exhibit a rich
blend of creativity and multiple modalities. They incorporate elements like
text, visuals, audio, and storytelling techniques, employing devices like
emotions, symbolism, and slogans to convey meaning. While previous research in
multimedia understanding has focused mainly on videos with specific actions
like cooking, there is a dearth of large annotated training datasets, hindering
the development of supervised learning models with satisfactory performance for
real-world applications. However, the rise of large language models (LLMs) has
witnessed remarkable zero-shot performance in various natural language
processing (NLP) tasks, such as emotion classification, question-answering, and
topic classification. To bridge this performance gap in multimedia
understanding, we propose verbalizing story videos to generate their
descriptions in natural language and then performing video-understanding tasks
on the generated story as opposed to the original video. Through extensive
experiments on five video-understanding tasks, we demonstrate that our method,
despite being zero-shot, achieves significantly better results than supervised
baselines for video understanding. Further, alleviating a lack of story
understanding benchmarks, we publicly release the first dataset on a crucial
task in computational social science, persuasion strategy identification.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 19:13:11 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 03:58:13 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Bhattacharya",
"Aanisha",
""
],
[
"Singla",
"Yaman K",
""
],
[
"Krishnamurthy",
"Balaji",
""
],
[
"Shah",
"Rajiv Ratn",
""
],
[
"Chen",
"Changyou",
""
]
] |
new_dataset
| 0.961039 |
2305.10358
|
David Noever
|
Forrest McKee and David Noever
|
NUANCE: Near Ultrasound Attack On Networked Communication Environments
| null | null | null | null |
cs.CR cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This study investigates a primary inaudible attack vector on Amazon Alexa
voice services using near ultrasound trojans and focuses on characterizing the
attack surface and examining the practical implications of issuing inaudible
voice commands. The research maps each attack vector to a tactic or technique
from the MITRE ATT&CK matrix, covering enterprise, mobile, and Industrial
Control System (ICS) frameworks. The experiment involved generating and
surveying fifty near-ultrasonic audios to assess the attacks' effectiveness,
with unprocessed commands having a 100% success rate and processed ones
achieving a 58% overall success rate. This systematic approach stimulates
previously unaddressed attack surfaces, ensuring comprehensive detection and
attack design while pairing each ATT&CK Identifier with a tested defensive
method, providing attack and defense tactics for prompt-response options. The
main findings reveal that the attack method employs Single Upper Sideband
Amplitude Modulation (SUSBAM) to generate near-ultrasonic audio from audible
sources, transforming spoken commands into a frequency range beyond human-adult
hearing. By eliminating the lower sideband, the design achieves a 6 kHz minimum
from 16-22 kHz while remaining inaudible after transformation. The research
investigates the one-to-many attack surface where a single device
simultaneously triggers multiple actions or devices. Additionally, the study
demonstrates the reversibility or demodulation of the inaudible signal,
suggesting potential alerting methods and the possibility of embedding secret
messages like audio steganography.
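A minimal numpy/scipy sketch of upper-sideband modulation, which shifts an
audible message above the adult hearing range while suppressing the lower
sideband via the analytic (Hilbert) signal; the carrier frequency, sample rate,
and test tone are illustrative assumptions, not the paper's exact SUSBAM chain:

```python
import numpy as np
from scipy.signal import hilbert

fs = 48_000                     # sample rate (Hz); must exceed 2x max frequency
t = np.arange(fs) / fs          # one second of samples
message = np.sin(2 * np.pi * 1_000 * t)  # stand-in for a spoken command

fc = 16_000                     # near-ultrasound carrier (Hz)
analytic = hilbert(message)     # m(t) + j * Hilbert{m}(t)
# Upper sideband only: s(t) = m(t)cos(2*pi*fc*t) - m_hat(t)sin(2*pi*fc*t)
usb = np.real(analytic * np.exp(2j * np.pi * fc * t))
```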
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 23:28:46 GMT"
},
{
"version": "v2",
"created": "Mon, 22 May 2023 23:32:11 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"McKee",
"Forrest",
""
],
[
"Noever",
"David",
""
]
] |
new_dataset
| 0.980702 |
2305.12146
|
Ziyin Zhang
|
Zhaokun Jiang and Ziyin Zhang
|
Hedges in Bidirectional Translations of Publicity-Oriented Documents
|
fixed typesetting issues
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Hedges are widely studied across registers and disciplines, yet research on
the translation of hedges in political texts is extremely limited. This
contrastive study is dedicated to investigating whether there is a diachronic
change in the frequencies of hedging devices in the target texts, to what
extent the changing frequencies of translated hedges through years are
attributed to the source texts, and what translation strategies are adopted to
deal with them. For the purposes of this research, two types of official
political texts and their translations from China and the United Nations were
collected to form three sub-corpora. Results show that hedges tend to appear
more frequently in English political texts, be it original English or
translated English. In addition, directionality seems to play an important role
in influencing both the frequencies and translation strategies regarding the
use of hedges. A noticeable diachronic increase of hedging devices is also
observed in our corpus.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 09:19:39 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 01:42:06 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Jiang",
"Zhaokun",
""
],
[
"Zhang",
"Ziyin",
""
]
] |
new_dataset
| 0.996571 |
2305.12433
|
Yaohua Zang
|
Yaohua Zang, Gang Bao
|
ParticleWNN: a Novel Neural Networks Framework for Solving Partial
Differential Equations
| null | null | null | null |
cs.LG cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks (DNNs) have been widely used to solve partial
differential equations (PDEs) in recent years. In this work, a novel deep
learning-based framework named Particle Weak-form based Neural Networks
(ParticleWNN) is developed for solving PDEs in the weak form. In this
framework, the trial space is chosen as the space of DNNs, and the test space
is constructed by functions compactly supported in extremely small regions
whose centers are particles. To train the neural networks, an R-adaptive
strategy is designed to adaptively modify the radius of regions during
training. The ParticleWNN inherits the advantages of weak/variational
formulation, such as requiring less regularity of the solution and a small
number of quadrature points for computing the integrals. Moreover, due to the
special construction of the test functions, the ParticleWNN allows local
training of networks, parallel implementation, and integral calculations only
in extremely small regions. The framework is particularly desirable for solving
problems with high-dimensional and complex domains. The efficiency and accuracy
of the ParticleWNN are demonstrated with several numerical examples. The
numerical results show clear advantages of the ParticleWNN over the
state-of-the-art methods.
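As a concrete illustration of the weak form described above, take the Poisson
problem (-Delta u = f) with a DNN trial function u_theta and test functions v_k
compactly supported in small balls B(x_k, r_k) centered at particles x_k; the
Poisson choice is an example, not a restriction of the method:

```latex
\int_{B(x_k, r_k)} \nabla u_\theta \cdot \nabla v_k \,\mathrm{d}x
  - \int_{B(x_k, r_k)} f \, v_k \,\mathrm{d}x = 0,
\qquad k = 1, \dots, K
```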
|
[
{
"version": "v1",
"created": "Sun, 21 May 2023 11:22:48 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 01:51:03 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Zang",
"Yaohua",
""
],
[
"Bao",
"Gang",
""
]
] |
new_dataset
| 0.999209 |
2305.12542
|
Zachary Yang
|
Zachary Yang, Yasmine Maricar, MohammadReza Davari, Nicolas
Grenon-Godbout, Reihaneh Rabbany
|
ToxBuster: In-game Chat Toxicity Buster with BERT
|
11 pages, 3 figures
| null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Detecting toxicity in online spaces is challenging and an ever more pressing
problem given the increase in social media and gaming consumption. We introduce
ToxBuster, a simple and scalable model trained on a relatively large dataset of
194k lines of game chat from Rainbow Six Siege and For Honor, carefully
annotated for different kinds of toxicity. Compared to the existing
state-of-the-art, ToxBuster achieves 82.95% (+7) in precision and 83.56% (+57)
in recall. This improvement is obtained by leveraging past chat history and
metadata. We also study the implications for real-time and post-game
moderation, as well as the model's transferability from one game to another.
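A minimal sketch with Hugging Face transformers of packing past chat history
and the current line as a sentence pair for a BERT classifier; the checkpoint,
label set, and input packing are assumptions, as the paper's exact
featurization (including metadata) is not reproduced here:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # toxic vs. non-toxic, as an example

history = "player1: gg ez [SEP] player2: stop feeding"  # past chat lines
current = "player3: you are all trash"

# Sentence-pair encoding lets the model attend over the conversational context.
inputs = tokenizer(history, current, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
```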
|
[
{
"version": "v1",
"created": "Sun, 21 May 2023 18:53:26 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Yang",
"Zachary",
""
],
[
"Maricar",
"Yasmine",
""
],
[
"Davari",
"MohammadReza",
""
],
[
"Grenon-Godbout",
"Nicolas",
""
],
[
"Rabbany",
"Reihaneh",
""
]
] |
new_dataset
| 0.999797 |
2305.12972
|
Hanting Chen
|
Hanting Chen, Yunhe Wang, Jianyuan Guo, Dacheng Tao
|
VanillaNet: the Power of Minimalism in Deep Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
At the heart of foundation models is the philosophy of "more is different",
exemplified by the astonishing success in computer vision and natural language
processing. However, the challenges of optimization and inherent complexity of
transformer models call for a paradigm shift towards simplicity. In this study,
we introduce VanillaNet, a neural network architecture that embraces elegance
in design. By avoiding high depth, shortcuts, and intricate operations like
self-attention, VanillaNet is refreshingly concise yet remarkably powerful.
Each layer is carefully crafted to be compact and straightforward, with
nonlinear activation functions pruned after training to restore the original
architecture. VanillaNet overcomes the challenges of inherent complexity,
making it ideal for resource-constrained environments. Its easy-to-understand
and highly simplified architecture opens new possibilities for efficient
deployment. Extensive experimentation demonstrates that VanillaNet delivers
performance on par with renowned deep neural networks and vision transformers,
showcasing the power of minimalism in deep learning. This visionary journey of
VanillaNet has significant potential to redefine the landscape and challenge
the status quo of foundation model, setting a new path for elegant and
effective model design. Pre-trained models and codes are available at
https://github.com/huawei-noah/VanillaNet and
https://gitee.com/mindspore/models/tree/master/research/cv/vanillanet.
|
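A minimal PyTorch sketch of a VanillaNet-style plain network, assuming only the description above: shallow, shortcut-free, no self-attention, with nonlinearities that can be swapped to identity ("pruned") after training. Layer counts and widths are illustrative; the authors' exact architecture is in the linked repositories.

import torch.nn as nn

class PlainBlock(nn.Module):
    def __init__(self, cin, cout, act=True):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 1)   # 1x1 conv, no residual shortcut
        self.bn = nn.BatchNorm2d(cout)
        self.pool = nn.MaxPool2d(2)
        # set act=False after training to mimic pruning the nonlinearity
        self.act = nn.ReLU() if act else nn.Identity()

    def forward(self, x):
        return self.pool(self.act(self.bn(self.conv(x))))

net = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=4), nn.BatchNorm2d(64), nn.ReLU(),  # stem
    PlainBlock(64, 128), PlainBlock(128, 256), PlainBlock(256, 512),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 1000),
)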
[
{
"version": "v1",
"created": "Mon, 22 May 2023 12:27:27 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 12:51:30 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Chen",
"Hanting",
""
],
[
"Wang",
"Yunhe",
""
],
[
"Guo",
"Jianyuan",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.969731 |
2305.12987
|
Magnus Sahlgren
|
Ariel Ekgren, Amaru Cuba Gyllensten, Felix Stollenwerk, Joey \"Ohman,
Tim Isbister, Evangelia Gogoulou, Fredrik Carlsson, Alice Heiman, Judit
Casademont, Magnus Sahlgren
|
GPT-SW3: An Autoregressive Language Model for the Nordic Languages
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper details the process of developing the first native large
generative language model for the Nordic languages, GPT-SW3. We cover all parts
of the development process, from data collection and processing, training
configuration and instruction finetuning, to evaluation and considerations for
release strategies. We hope that this paper can serve as a guide and reference
for other researchers who undertake the development of large generative models
for smaller languages.
|
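A hedged usage sketch for an autoregressive checkpoint like GPT-SW3 via the transformers API. The repository id "AI-Sweden-Models/gpt-sw3-126m" is an assumption (release names and access terms may differ), and the Swedish prompt is arbitrary.

from transformers import AutoModelForCausalLM, AutoTokenizer

name = "AI-Sweden-Models/gpt-sw3-126m"  # assumed repo id; access may be gated
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

ids = tok("Träd är fina för att", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tok.decode(out[0]))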
[
{
"version": "v1",
"created": "Mon, 22 May 2023 12:47:48 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 06:59:16 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Ekgren",
"Ariel",
""
],
[
"Gyllensten",
"Amaru Cuba",
""
],
[
"Stollenwerk",
"Felix",
""
],
[
"Öhman",
"Joey",
""
],
[
"Isbister",
"Tim",
""
],
[
"Gogoulou",
"Evangelia",
""
],
[
"Carlsson",
"Fredrik",
""
],
[
"Heiman",
"Alice",
""
],
[
"Casademont",
"Judit",
""
],
[
"Sahlgren",
"Magnus",
""
]
] |
new_dataset
| 0.964872 |
2305.13351
|
Thijs Havinga
|
Thijs Havinga, Xianjun Jiao, Wei Liu and Ingrid Moerman
|
Accelerating FPGA-Based Wi-Fi Transceiver Design and Prototyping by
High-Level Synthesis
|
7 pages, extended version of poster accepted at FCCM 2023
| null | null | null |
cs.AR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Field-Programmable Gate Array (FPGA)-based Software-Defined Radio (SDR) is
well-suited for experimenting with advanced wireless communication systems, as
it allows the architecture to be altered promptly while obtaining high performance.
However, programming the FPGA using a Hardware Description Language (HDL) is a
time-consuming task for FPGA developers and difficult for software developers,
which limits the potential of SDR. High-Level Synthesis (HLS) tools aid the
designers by allowing them to program on a higher layer of abstraction.
However, if not carefully designed, it may lead to a degradation in computing
performance or a significant increase in resource utilization. This work shows
that it is feasible to design modern Orthogonal Frequency Division Multiplex
(OFDM) baseband processing modules like channel estimation and equalization
using HLS without sacrificing performance and to integrate them in an HDL
design to form a fully-operational FPGA-based Wi-Fi (IEEE 802.11a/g/n)
transceiver. Starting from no HLS experience, a design with minor overhead in
terms of latency and resource utilization as compared to the HDL approach was
created in less than one month. We show the readability of the sequential logic
as coded in HLS, and discuss the lessons learned from the approach taken and
the benefits it brings for further design and experimentation. The FPGA design
generated by HLS was verified to be bit-true with its MATLAB implementation in
simulation. Furthermore, we show its practical performance when deployed on a
System-on-Chip (SoC)-based SDR using a professional wireless connectivity
tester.
|
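Per subcarrier, the channel-estimation and equalization modules mentioned above reduce to a least-squares estimate from a known training symbol and a one-tap divide. A NumPy sketch of that textbook algorithm (not the authors' HLS code) for a 64-subcarrier OFDM symbol:

import numpy as np

rng = np.random.default_rng(0)
n_sc = 64                                                # subcarriers
ltf = np.sign(rng.standard_normal(n_sc))                 # known training symbol
h = (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)

rx_ltf = h * ltf                                         # received training symbol
h_est = rx_ltf / ltf                                     # LS channel estimate

data = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), n_sc)  # QPSK
noise = 0.01 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
rx_data = h * data + noise
eq = rx_data / h_est                                     # one-tap equalization
print(np.mean(np.abs(eq - data)))                        # close to 0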
[
{
"version": "v1",
"created": "Tue, 23 May 2023 15:09:59 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Havinga",
"Thijs",
""
],
[
"Jiao",
"Xianjun",
""
],
[
"Liu",
"Wei",
""
],
[
"Moerman",
"Ingrid",
""
]
] |
new_dataset
| 0.999542 |
2305.13353
|
Dongwei Pan
|
Dongwei Pan, Long Zhuo, Jingtan Piao, Huiwen Luo, Wei Cheng, Yuxin
Wang, Siming Fan, Shengqi Liu, Lei Yang, Bo Dai, Ziwei Liu, Chen Change Loy,
Chen Qian, Wayne Wu, Dahua Lin, Kwan-Yee Lin
|
RenderMe-360: A Large Digital Asset Library and Benchmarks Towards
High-fidelity Head Avatars
|
Technical Report; Project Page: 36; Github Link:
https://github.com/RenderMe-360/RenderMe-360
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthesizing high-fidelity head avatars is a central problem for computer
vision and graphics. While head avatar synthesis algorithms have advanced
rapidly, the best ones still face great obstacles in real-world scenarios. One
of the vital causes is inadequate datasets -- 1) current public datasets can
only support researchers in exploring high-fidelity head avatars in one or two
task directions; 2) these datasets usually contain digital head assets with
limited data volume, and narrow distribution over different attributes. In this
paper, we present RenderMe-360, a comprehensive 4D human head dataset to drive
advances in head avatar research. It contains massive data assets, with 243+
million complete head frames, and over 800k video sequences from 500 different
identities captured by synchronized multi-view cameras at 30 FPS. It is a
large-scale digital library for head avatars with three key attributes: 1) High
Fidelity: all subjects are captured by 60 synchronized, high-resolution 2K
cameras in 360 degrees. 2) High Diversity: The collected subjects vary from
different ages, eras, ethnicities, and cultures, providing abundant materials
with distinctive styles in appearance and geometry. Moreover, each subject is
asked to perform various motions, such as expressions and head rotations, which
further extend the richness of assets. 3) Rich Annotations: we provide
annotations with different granularities: cameras' parameters, matting, scan,
2D/3D facial landmarks, FLAME fitting, and text description.
Based on the dataset, we build a comprehensive benchmark for head avatar
research, with 16 state-of-the-art methods performed on five main tasks: novel
view synthesis, novel expression synthesis, hair rendering, hair editing, and
talking head generation. Our experiments uncover the strengths and weaknesses
of current methods. RenderMe-360 opens the door for future exploration in head
avatars.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 17:54:01 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Pan",
"Dongwei",
""
],
[
"Zhuo",
"Long",
""
],
[
"Piao",
"Jingtan",
""
],
[
"Luo",
"Huiwen",
""
],
[
"Cheng",
"Wei",
""
],
[
"Wang",
"Yuxin",
""
],
[
"Fan",
"Siming",
""
],
[
"Liu",
"Shengqi",
""
],
[
"Yang",
"Lei",
""
],
[
"Dai",
"Bo",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Qian",
"Chen",
""
],
[
"Wu",
"Wayne",
""
],
[
"Lin",
"Dahua",
""
],
[
"Lin",
"Kwan-Yee",
""
]
] |
new_dataset
| 0.991675 |
2305.13391
|
Kyoungmin Han
|
Kyoungmin Han, Minsik Lee
|
EnSiam: Self-Supervised Learning With Ensemble Representations
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recently, contrastive self-supervised learning, where the proximity of
representations is determined based on the identities of samples, has made
remarkable progress in unsupervised representation learning. SimSiam is a
well-known example in this area, known for its simplicity yet powerful
performance. However, it is known to be sensitive to changes in training
configurations, such as hyperparameters and augmentation settings, due to its
structural characteristics. To address this issue, we focus on the similarity
between contrastive learning and the teacher-student framework in knowledge
distillation. Inspired by the ensemble-based knowledge distillation approach,
the proposed method, EnSiam, aims to improve the contrastive learning procedure
using ensemble representations. This yields stable pseudo-labels, leading to
better performance. Experiments demonstrate that EnSiam outperforms
previous state-of-the-art methods in most cases, including the experiments on
ImageNet, which shows that EnSiam is capable of learning high-quality
representations.
|
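A hedged sketch of the ensemble-representation idea described above: average the stop-gradient representations of several augmented views into a stable target for a SimSiam-style negative-cosine loss. EnSiam's actual ensembling details differ; this only illustrates the mechanism, with toy encoder and predictor networks standing in for the real ones.

import torch
import torch.nn.functional as F

def ensiam_style_loss(encoder, predictor, views):
    # views: list of differently augmented batches of the same images
    zs = [encoder(v) for v in views]
    target = torch.stack(zs).mean(0).detach()   # ensembled stop-gradient target
    loss = sum(-F.cosine_similarity(predictor(z), target, dim=-1).mean()
               for z in zs)
    return loss / len(views)

enc = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 64))
pred = torch.nn.Linear(64, 64)
views = [torch.randn(8, 32) for _ in range(4)]  # stand-ins for augmented views
print(ensiam_style_loss(enc, pred, views))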
[
{
"version": "v1",
"created": "Mon, 22 May 2023 18:09:55 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Han",
"Kyoungmin",
""
],
[
"Lee",
"Minsik",
""
]
] |
new_dataset
| 0.988497 |
2305.13395
|
Karel D'Oosterlinck
|
Karel D'Oosterlinck, Fran\c{c}ois Remy, Johannes Deleu, Thomas
Demeester, Chris Develder, Klim Zaporojets, Aneiss Ghodsi, Simon Ellershaw,
Jack Collins, Christopher Potts
|
BioDEX: Large-Scale Biomedical Adverse Drug Event Extraction for
Real-World Pharmacovigilance
|
28 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Timely and accurate extraction of Adverse Drug Events (ADE) from biomedical
literature is paramount for public safety, but involves slow and costly manual
labor. We set out to improve drug safety monitoring (pharmacovigilance, PV)
through the use of Natural Language Processing (NLP). We introduce BioDEX, a
large-scale resource for Biomedical adverse Drug Event Extraction, rooted in
the historical output of drug safety reporting in the U.S. BioDEX consists of
65k abstracts and 19k full-text biomedical papers with 256k associated
document-level safety reports created by medical experts. The core features of
these reports include the reported weight, age, and biological sex of a
patient, a set of drugs taken by the patient, the drug dosages, the reactions
experienced, and whether the reaction was life-threatening. In this work, we
consider the task of predicting the core information of the report given its
originating paper. We estimate human performance to be 72.0% F1, whereas our
best model achieves 62.3% F1, indicating significant headroom on this task. We
also begin to explore ways in which these models could help professional PV
reviewers. Our code and data are available: https://github.com/KarelDO/BioDEX.
|
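To make the reported F1 figures concrete, here is a hedged sketch of a set-based report-level precision/recall/F1 over predicted core fields. The field names and matching rules are assumptions; the repository linked above defines the actual metric.

def report_f1(pred: dict, gold: dict) -> float:
    # treat each report as a set of (field, value) pairs
    pred_items = {(k, v) for k, vs in pred.items() for v in vs}
    gold_items = {(k, v) for k, vs in gold.items() for v in vs}
    if not pred_items or not gold_items:
        return 0.0
    tp = len(pred_items & gold_items)
    p, r = tp / len(pred_items), tp / len(gold_items)
    return 2 * p * r / (p + r) if p + r else 0.0

gold = {"drugs": ["metformin"], "reactions": ["nausea", "headache"], "serious": ["yes"]}
pred = {"drugs": ["metformin"], "reactions": ["nausea"], "serious": ["yes"]}
print(round(report_f1(pred, gold), 3))  # 0.857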
[
{
"version": "v1",
"created": "Mon, 22 May 2023 18:15:57 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"D'Oosterlinck",
"Karel",
""
],
[
"Remy",
"François",
""
],
[
"Deleu",
"Johannes",
""
],
[
"Demeester",
"Thomas",
""
],
[
"Develder",
"Chris",
""
],
[
"Zaporojets",
"Klim",
""
],
[
"Ghodsi",
"Aneiss",
""
],
[
"Ellershaw",
"Simon",
""
],
[
"Collins",
"Jack",
""
],
[
"Potts",
"Christopher",
""
]
] |
new_dataset
| 0.999101 |
2305.13396
|
Chris Doyle
|
Chris Doyle, Sarah Shader, Michelle Lau, Megumi Sano, Daniel L. K.
Yamins and Nick Haber
|
Developmental Curiosity and Social Interaction in Virtual Agents
|
6 pages, 5 figures, 2 tables; accepted to CogSci 2023 with full paper
publication in the proceedings
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infants explore their complex physical and social environment in an organized
way. To gain insight into what intrinsic motivations may help structure this
exploration, we create a virtual infant agent and place it in a
developmentally-inspired 3D environment with no external rewards. The
environment has a virtual caregiver agent with the capability to interact
contingently with the infant agent in ways that resemble play. We test
intrinsic reward functions that are similar to motivations that have been
proposed to drive exploration in humans: surprise, uncertainty, novelty, and
learning progress. These generic reward functions lead the infant agent to
explore its environment and discover the contingencies that are embedded into
the caregiver agent. The reward functions that are proxies for novelty and
uncertainty are the most successful in generating diverse experiences and
activating the environment contingencies. We also find that learning a world
model in the presence of an attentive caregiver helps the infant agent learn
how to predict scenarios with challenging social and physical dynamics. Taken
together, our findings provide insight into how curiosity-like intrinsic
rewards and contingent social interaction lead to dynamic social behavior and
the creation of a robust predictive world model.
|
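A hedged sketch of two of the intrinsic rewards named above, computed from a learned forward model: "surprise" as one-step prediction error and "uncertainty" as disagreement within a model ensemble. The paper's exact reward definitions and world-model architecture differ; the toy networks below are placeholders.

import torch

def surprise(model, s, a, s_next):
    # one-step prediction error of a forward model
    return (model(torch.cat([s, a], -1)) - s_next).pow(2).mean(-1)

def uncertainty(ensemble, s, a):
    # disagreement (variance) across an ensemble of forward models
    preds = torch.stack([m(torch.cat([s, a], -1)) for m in ensemble])
    return preds.var(0).mean(-1)

make = lambda: torch.nn.Sequential(torch.nn.Linear(12, 32), torch.nn.ReLU(),
                                   torch.nn.Linear(32, 8))
ensemble = [make() for _ in range(5)]
s, a, s_next = torch.randn(4, 8), torch.randn(4, 4), torch.randn(4, 8)
print(surprise(ensemble[0], s, a, s_next), uncertainty(ensemble, s, a))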
[
{
"version": "v1",
"created": "Mon, 22 May 2023 18:17:07 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Doyle",
"Chris",
""
],
[
"Shader",
"Sarah",
""
],
[
"Lau",
"Michelle",
""
],
[
"Sano",
"Megumi",
""
],
[
"Yamins",
"Daniel L. K.",
""
],
[
"Haber",
"Nick",
""
]
] |
new_dataset
| 0.967774 |
2305.13418
|
Aditya Arun
|
William Hunter, Aditya Arun, Dinesh Bharadia
|
WiROS: WiFi sensing toolbox for robotics
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many recent works have explored using WiFi-based sensing to improve SLAM,
robot manipulation, or exploration. Moreover, widespread availability makes
WiFi the most advantageous RF signal to leverage. But WiFi sensors lack an
accurate, tractable, and versatile toolbox, which hinders their widespread
adoption in robots' sensor stacks.
We develop WiROS to address this immediate need, furnishing many WiFi-related
measurements as easy-to-consume ROS topics. Specifically, WiROS is a
plug-and-play WiFi sensing toolbox providing access to coarse-grained WiFi
signal strength (RSSI), fine-grained WiFi channel state information (CSI), and
other MAC-layer information (device addresses, packet IDs, or frequency-channel
information). Additionally, WiROS open-sources state-of-the-art algorithms to
calibrate and process WiFi measurements to furnish accurate bearing information
for received WiFi signals. The open-sourced repository is:
https://github.com/ucsdwcsng/WiROS
|
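A hedged sketch of consuming WiROS output from rospy. The subscribe pattern is standard ROS, but the topic name "/wiros/rssi" and the Float32 message type are hypothetical placeholders; the linked repository defines the actual topics and richer CSI message types.

import rospy
from std_msgs.msg import Float32  # placeholder; WiROS defines richer CSI messages

def on_rssi(msg):
    rospy.loginfo("RSSI: %.1f dBm", msg.data)

rospy.init_node("wifi_listener")
rospy.Subscriber("/wiros/rssi", Float32, on_rssi)  # hypothetical topic name
rospy.spin()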
[
{
"version": "v1",
"created": "Mon, 22 May 2023 19:07:14 GMT"
}
] | 2023-05-24T00:00:00 |
[
[
"Hunter",
"William",
""
],
[
"Arun",
"Aditya",
""
],
[
"Bharadia",
"Dinesh",
""
]
] |
new_dataset
| 0.992929 |