id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.09288
|
Milan Straka
|
David Kube\v{s}a, Milan Straka
|
DaMuEL: A Large Multilingual Dataset for Entity Linking
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present DaMuEL, a large Multilingual Dataset for Entity Linking containing
data in 53 languages. DaMuEL consists of two components: a knowledge base that
contains language-agnostic information about entities, including their claims
from Wikidata and named entity types (PER, ORG, LOC, EVENT, BRAND, WORK_OF_ART,
MANUFACTURED); and Wikipedia texts with entity mentions linked to the knowledge
base, along with language-specific text from Wikidata such as labels, aliases,
and descriptions, stored separately for each language. The Wikidata QID is used
as a persistent, language-agnostic identifier, enabling the combination of the
knowledge base with language-specific texts and information for each entity.
In the Wikipedia documents, only a single mention of each entity is deliberately
annotated; we further automatically detect all mentions of named entities
linked from each document. The dataset contains 27.9M named entities in the
knowledge base and 12.3G tokens from Wikipedia texts. The dataset is published
under the CC BY-SA license at https://hdl.handle.net/11234/1-5047.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:15:52 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Kubeša",
"David",
""
],
[
"Straka",
"Milan",
""
]
] |
new_dataset
| 0.999801 |
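The DaMuEL record above describes combining a language-agnostic knowledge base with per-language texts through the Wikidata QID. A minimal Python sketch of such a QID-keyed join follows; the entries, fields, and QIDs are made-up placeholders, not taken from the released dataset.

```python
# Minimal sketch (not the official DaMuEL loader): combining the
# language-agnostic knowledge base with language-specific texts by
# joining on the Wikidata QID. The record layouts below are hypothetical.

knowledge_base = {
    "Q1085": {"type": "LOC", "claims": {"P17": "Q213"}},      # example location entity
    "Q11739": {"type": "PER", "claims": {"P106": "Q36180"}},  # example person entity
}

czech_texts = {
    "Q1085": {"label": "Praha", "description": "hlavní město Česka"},
}
english_texts = {
    "Q1085": {"label": "Prague", "description": "capital of the Czech Republic"},
}

def merge(kb, per_language):
    """Attach per-language labels/descriptions to the language-agnostic entries."""
    merged = {}
    for qid, entry in kb.items():
        merged[qid] = dict(entry)
        for lang, texts in per_language.items():
            if qid in texts:
                merged[qid][lang] = texts[qid]
    return merged

combined = merge(knowledge_base, {"cs": czech_texts, "en": english_texts})
print(combined["Q1085"]["cs"]["label"])  # -> "Praha"
```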
2306.09298
|
Leonhard Horstmeyer
|
Leonhard Horstmeyer
|
Lakat: An open and permissionless architecture for continuous
integration academic publishing
|
23 pages, 5 figures, 1 table
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we present three contributions to the field of academic
publishing. Firstly, we introduce Lakat, a novel base layer for a publishing
system that fosters collaboration, pluralism and permissionless participation.
Drawing inspiration from the philosophy of Imre Lakatos, Lakat is designed as a
peer-to-peer process- and conflict-oriented system that supports continuous
integration across multiple branches. This architecture provides a robust
foundation for the integration of existing reputation systems and incentive
structures or the development of new ones. Secondly, we propose a new consensus
mechanism, called Proof of Review, which ensures the integrity and quality of
the content while promoting active participation from the community. Lastly, we
present Lignification, a new finality gadget specifically designed for
branched, permissionless systems. Lignification provides a deterministic way to
find the consensual state in these systems, ensuring the system's robustness
and reliability in handling complex scenarios where multiple contributors may
be proposing changes simultaneously. Together, these contributions aim to
provide a convenient starting point to tackle some of the issues in traditional
paper-formatted publishing of research output. By prioritizing collaboration,
process-orientation, and pluralism, Lakat aims to improve the way research is
conducted and disseminated and ultimately hopes to contribute to a healthier
and more productive academic culture.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:27:16 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Horstmeyer",
"Leonhard",
""
]
] |
new_dataset
| 0.999454 |
2306.09327
|
Daniel McKee
|
Daniel McKee, Justin Salamon, Josef Sivic, Bryan Russell
|
Language-Guided Music Recommendation for Video via Prompt Analogies
|
CVPR 2023 (Highlight paper). Project page:
https://www.danielbmckee.com/language-guided-music-for-video
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method to recommend music for an input video while allowing a
user to guide music selection with free-form natural language. A key challenge
of this problem setting is that existing music video datasets provide the
needed (video, music) training pairs, but lack text descriptions of the music.
This work addresses this challenge with the following three contributions.
First, we propose a text-synthesis approach that relies on an analogy-based
prompting procedure to generate natural language music descriptions from a
large-scale language model (BLOOM-176B) given pre-trained music tagger outputs
and a small number of human text descriptions. Second, we use these synthesized
music descriptions to train a new trimodal model, which fuses text and video
input representations to query music samples. For training, we introduce a text
dropout regularization mechanism which we show is critical to model
performance. Our model design allows for the retrieved music audio to agree
with the two input modalities by matching visual style depicted in the video
and musical genre, mood, or instrumentation described in the natural language
query. Third, to evaluate our approach, we collect a testing dataset for our
problem by annotating a subset of 4k clips from the YT8M-MusicVideo dataset
with natural language music descriptions which we make publicly available. We
show that our approach can match or exceed the performance of prior methods on
video-to-music retrieval while significantly improving retrieval accuracy when
using text guidance.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:58:01 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"McKee",
"Daniel",
""
],
[
"Salamon",
"Justin",
""
],
[
"Sivic",
"Josef",
""
],
[
"Russell",
"Bryan",
""
]
] |
new_dataset
| 0.997645 |
2306.09329
|
Nikos Kolotouros
|
Nikos Kolotouros, Thiemo Alldieck, Andrei Zanfir, Eduard Gabriel
Bazavan, Mihai Fieraru, Cristian Sminchisescu
|
DreamHuman: Animatable 3D Avatars from Text
|
Project website at https://dream-human.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present DreamHuman, a method to generate realistic animatable 3D human
avatar models solely from textual descriptions. Recent text-to-3D methods have
made considerable strides in generation, but are still lacking in important
aspects. Control and often spatial resolution remain limited, existing methods
produce fixed rather than animated 3D human models, and anthropometric
consistency for complex structures like people remains a challenge. DreamHuman
connects large text-to-image synthesis models, neural radiance fields, and
statistical human body models in a novel modeling and optimization framework.
This makes it possible to generate dynamic 3D human avatars with high-quality
textures and learned, instance-specific, surface deformations. We demonstrate
that our method is capable of generating a wide variety of animatable, realistic
3D human models from text. Our 3D models have diverse appearance, clothing,
skin tones and body shapes, and significantly outperform both generic
text-to-3D approaches and previous text-based 3D avatar generators in visual
fidelity. For more results and animations please check our website at
https://dream-human.github.io.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:58:21 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Kolotouros",
"Nikos",
""
],
[
"Alldieck",
"Thiemo",
""
],
[
"Zanfir",
"Andrei",
""
],
[
"Bazavan",
"Eduard Gabriel",
""
],
[
"Fieraru",
"Mihai",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] |
new_dataset
| 0.991011 |
2306.09337
|
Lea M\"uller
|
Lea M\"uller, Vickie Ye, Georgios Pavlakos, Michael Black, Angjoo
Kanazawa
|
Generative Proxemics: A Prior for 3D Social Interaction from Images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Social interaction is a fundamental aspect of human behavior and
communication. The way individuals position themselves in relation to others,
also known as proxemics, conveys social cues and affects the dynamics of social
interaction. We present a novel approach that learns a 3D proxemics prior of
two people in close social interaction. Since collecting a large 3D dataset of
interacting people is a challenge, we rely on 2D image collections where social
interactions are abundant. We achieve this by reconstructing pseudo-ground
truth 3D meshes of interacting people from images with an optimization approach
using existing ground-truth contact maps. We then model the proxemics using a
novel denoising diffusion model called BUDDI that learns the joint distribution
of two people in close social interaction directly in the SMPL-X parameter
space. Sampling from our generative proxemics model produces realistic 3D human
interactions, which we validate through a user study. Additionally, we
introduce a new optimization method that uses the diffusion prior to
reconstruct two people in close proximity from a single image without any
contact annotation. Our approach recovers more accurate and plausible 3D social
interactions from noisy initial estimates and outperforms state-of-the-art
methods. See our project site for code, data, and model:
muelea.github.io/buddi.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:59:20 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Müller",
"Lea",
""
],
[
"Ye",
"Vickie",
""
],
[
"Pavlakos",
"Georgios",
""
],
[
"Black",
"Michael",
""
],
[
"Kanazawa",
"Angjoo",
""
]
] |
new_dataset
| 0.999128 |
2306.09343
|
Rose Wang
|
Rose E. Wang, Pawan Wirawarn, Noah Goodman, Dorottya Demszky
|
SIGHT: A Large Annotated Dataset on Student Insights Gathered from
Higher Education Transcripts
|
First two authors contributed equally. In the Proceedings of
Innovative Use of NLP for Building Educational Applications 2023. The code
and data are open-sourced here: https://github.com/rosewang2008/sight
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Lectures are a learning experience for both students and teachers. Students
learn from teachers about the subject material, while teachers learn from
students about how to refine their instruction. However, online student
feedback is unstructured and abundant, making it challenging for teachers to
learn and improve. We take a step towards tackling this challenge. First, we
contribute a dataset for studying this problem: SIGHT is a large dataset of 288
math lecture transcripts and 15,784 comments collected from the Massachusetts
Institute of Technology OpenCourseWare (MIT OCW) YouTube channel. Second, we
develop a rubric for categorizing feedback types using qualitative analysis.
Qualitative analysis methods are powerful in uncovering domain-specific
insights; however, they are costly to apply to large data sources. To overcome
this challenge, we propose a set of best practices for using large language
models (LLMs) to cheaply classify the comments at scale. We observe a striking
correlation between the model's and humans' annotation: Categories with
consistent human annotations (>$0.9$ inter-rater reliability, IRR) also display
higher human-model agreement (>$0.7$), while categories with less consistent
human annotations ($0.7$-$0.8$ IRR) correspondingly demonstrate lower
human-model agreement ($0.3$-$0.5$). These techniques uncover useful student
feedback from thousands of comments, costing around $\$0.002$ per comment. We
conclude by discussing exciting future directions on using online student
feedback and improving automated annotation techniques for qualitative
research.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:59:47 GMT"
}
] | 2023-06-16T00:00:00 |
[
[
"Wang",
"Rose E.",
""
],
[
"Wirawarn",
"Pawan",
""
],
[
"Goodman",
"Noah",
""
],
[
"Demszky",
"Dorottya",
""
]
] |
new_dataset
| 0.999606 |
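The SIGHT abstract above reports per-category human-model agreement alongside inter-rater reliability. A minimal sketch of one way to compute such an agreement score (Cohen's kappa via scikit-learn) follows; the label arrays are invented examples, not SIGHT annotations, and the paper may use a different agreement statistic.

```python
# Minimal sketch, assuming binary per-category annotations: comparing LLM
# labels against human labels with Cohen's kappa. The arrays are toy data.
from sklearn.metrics import cohen_kappa_score

human_labels = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # human annotation for one category
model_labels = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]   # LLM annotation for the same comments

kappa = cohen_kappa_score(human_labels, model_labels)
print(f"human-model agreement (Cohen's kappa): {kappa:.2f}")
```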
2009.00433
|
Giorgio Grani
|
Valerio Agasucci, Giorgio Grani, Leonardo Lamorgese
|
Solving the single-track train scheduling problem via Deep Reinforcement
Learning
|
Graph neural network added. Comparison with other methods added. 24
pages, 5 figures (1 b&w)
|
Journal of Rail Transport Planning & Management, 26, p.100394
(2023)
|
10.1016/j.jrtpm.2023.100394
| null |
cs.AI math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Every day, railways experience disturbances and disruptions, both on the
network and the fleet side, that affect the stability of rail traffic. Induced
delays propagate through the network, which leads to a mismatch in demand and
offer for goods and passengers, and, in turn, to a loss in service quality. In
these cases, it is the duty of human traffic controllers, the so-called
dispatchers, to do their best to minimize the impact on traffic. However,
dispatchers inevitably have a limited depth of perception of the knock-on
effect of their decisions, particularly how they affect areas of the network
that are outside their direct control. In recent years, much work in Decision
Science has been devoted to developing methods to solve the problem
automatically and support the dispatchers in this challenging task. This paper
investigates Machine Learning-based methods for tackling this problem,
proposing two different Deep Q-Learning methods (Decentralized and Centralized).
Numerical results show the superiority of these techniques with respect to the
classical linear Q-Learning based on matrices. Moreover, the Centralized
approach is compared with a MILP formulation showing interesting results. The
experiments are inspired by data provided by a U.S. Class 1 railroad.
|
[
{
"version": "v1",
"created": "Tue, 1 Sep 2020 14:03:56 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 08:01:42 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Agasucci",
"Valerio",
""
],
[
"Grani",
"Giorgio",
""
],
[
"Lamorgese",
"Leonardo",
""
]
] |
new_dataset
| 0.965744 |
2012.01955
|
Gustavo Marfia
|
Lorenzo Stacchio, Alessia Angeli, Giuseppe Lisanti, Daniela Calanca,
Gustavo Marfia
|
IMAGO: A family photo album dataset for a socio-historical analysis of
the twentieth century
| null | null |
10.1145/3507918
| null |
cs.CV cs.CY cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although one of the most popular practices in photography since the end of
the 19th century, an increase in scholarly interest in family photo albums
dates back to the early 1980s. Such collections of photos may reveal
sociological and historical insights regarding specific cultures and times.
They are, however, in most cases scattered among private homes and only
available on paper or photographic film, thus making their analysis by
academics such as historians, social-cultural anthropologists and cultural
theorists very cumbersome. In this paper, we analyze the IMAGO dataset
including photos belonging to family albums assembled at the University of
Bologna's Rimini campus since 2004. Following a deep learning-based approach,
the IMAGO dataset has offered the opportunity of experimenting with photos
taken between 1845 and 2009, with the goals of assessing the dates
and the socio-historical contexts of the images, without use of any other
sources of information. Exceeding our initial expectations, such analysis has
revealed its merit not only in terms of the performance of the approach adopted
in this work, but also in terms of the foreseeable implications and use for the
benefit of socio-historical research. To the best of our knowledge, this is the
first work that moves along this path in the literature.
|
[
{
"version": "v1",
"created": "Thu, 3 Dec 2020 14:28:58 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Stacchio",
"Lorenzo",
""
],
[
"Angeli",
"Alessia",
""
],
[
"Lisanti",
"Giuseppe",
""
],
[
"Calanca",
"Daniela",
""
],
[
"Marfia",
"Gustavo",
""
]
] |
new_dataset
| 0.999904 |
2104.05596
|
Sumanth Doddapaneni
|
Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank
Jobanputra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee,
Mahalakshmi J, Divyanshu Kakwani, Navneet Kumar, Aswin Pradeep, Srihari
Nagaraj, Kumar Deepak, Vivek Raghavan, Anoop Kunchukuttan, Pratyush Kumar,
Mitesh Shantadevi Khapra
|
Samanantar: The Largest Publicly Available Parallel Corpora Collection
for 11 Indic Languages
|
Accepted to the Transactions of the Association for Computational
Linguistics (TACL)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Samanantar, the largest publicly available parallel corpora
collection for Indic languages. The collection contains a total of 49.7 million
sentence pairs between English and 11 Indic languages (from two language
families). Specifically, we compile 12.4 million sentence pairs from existing,
publicly-available parallel corpora, and additionally mine 37.4 million
sentence pairs from the web, resulting in a 4x increase. We mine the parallel
sentences from the web by combining many corpora, tools, and methods: (a)
web-crawled monolingual corpora, (b) document OCR for extracting sentences from
scanned documents, (c) multilingual representation models for aligning
sentences, and (d) approximate nearest neighbor search for searching in a large
collection of sentences. Human evaluation of samples from the newly mined
corpora validates the high quality of the parallel sentences across 11
languages. Further, we extract 83.4 million sentence pairs between all 55 Indic
language pairs from the English-centric parallel corpus using English as the
pivot language. We trained multilingual NMT models spanning all these languages
on Samanantar, which outperform existing models and baselines on publicly
available benchmarks, such as FLORES, establishing the utility of Samanantar.
Our data and models are available publicly at
https://ai4bharat.iitm.ac.in/samanantar and we hope they will help advance
research in NMT and multilingual NLP for Indic languages.
|
[
{
"version": "v1",
"created": "Mon, 12 Apr 2021 16:18:20 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Apr 2021 16:24:26 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Nov 2021 04:54:38 GMT"
},
{
"version": "v4",
"created": "Mon, 12 Jun 2023 18:23:36 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Ramesh",
"Gowtham",
""
],
[
"Doddapaneni",
"Sumanth",
""
],
[
"Bheemaraj",
"Aravinth",
""
],
[
"Jobanputra",
"Mayank",
""
],
[
"AK",
"Raghavan",
""
],
[
"Sharma",
"Ajitesh",
""
],
[
"Sahoo",
"Sujit",
""
],
[
"Diddee",
"Harshita",
""
],
[
"J",
"Mahalakshmi",
""
],
[
"Kakwani",
"Divyanshu",
""
],
[
"Kumar",
"Navneet",
""
],
[
"Pradeep",
"Aswin",
""
],
[
"Nagaraj",
"Srihari",
""
],
[
"Deepak",
"Kumar",
""
],
[
"Raghavan",
"Vivek",
""
],
[
"Kunchukuttan",
"Anoop",
""
],
[
"Kumar",
"Pratyush",
""
],
[
"Khapra",
"Mitesh Shantadevi",
""
]
] |
new_dataset
| 0.999854 |
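The Samanantar abstract above describes mining parallel sentences by combining multilingual sentence representations with approximate nearest-neighbour search. The sketch below illustrates that idea in miniature, with random vectors standing in for a real multilingual encoder and brute-force cosine search standing in for a true ANN index; the threshold is arbitrary, not the one used for Samanantar.

```python
# Minimal sketch of embedding-based parallel sentence mining: pair each source
# sentence with its nearest-neighbour target sentence when cosine similarity
# clears a threshold. Random embeddings stand in for a real encoder.
import numpy as np

rng = np.random.default_rng(0)
en_emb = rng.normal(size=(5, 32))    # 5 English sentence embeddings (toy)
hi_emb = rng.normal(size=(7, 32))    # 7 Hindi sentence embeddings (toy)

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

en_emb, hi_emb = normalize(en_emb), normalize(hi_emb)
sims = en_emb @ hi_emb.T             # cosine similarities of normalized vectors

threshold = 0.2                      # arbitrary illustrative value
for i, row in enumerate(sims):
    j = int(np.argmax(row))
    if row[j] >= threshold:
        print(f"en[{i}] <-> hi[{j}]  (cosine={row[j]:.2f})")
```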
2107.10492
|
Aditya Gopalan
|
Aditya Gopalan, Venkatesh Saligrama and Braghadeesh Lakshminarayanan
|
Bandit Quickest Changepoint Detection
|
Some typos fixed in the NeurIPS 2021 version
| null | null | null |
cs.LG cs.IT math.IT stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Many industrial and security applications employ a suite of sensors for
detecting abrupt changes in temporal behavior patterns. These abrupt changes
typically manifest locally, rendering only a small subset of sensors
informative. Continuous monitoring of every sensor can be expensive due to
resource constraints, and serves as a motivation for the bandit quickest
changepoint detection problem, where sensing actions (or sensors) are
sequentially chosen, and only measurements corresponding to chosen actions are
observed. We derive an information-theoretic lower bound on the detection delay
for a general class of finitely parameterized probability distributions. We
then propose a computationally efficient online sensing scheme, which
seamlessly balances the need for exploration of different sensing options with
exploitation of querying informative actions. We derive expected delay bounds
for the proposed scheme and show that these bounds match our
information-theoretic lower bounds at low false alarm rates, establishing
optimality of the proposed method. We then perform a number of experiments on
synthetic and real datasets demonstrating the effectiveness of our proposed
method.
|
[
{
"version": "v1",
"created": "Thu, 22 Jul 2021 07:25:35 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Nov 2021 11:06:49 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Jun 2023 05:39:46 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Gopalan",
"Aditya",
""
],
[
"Saligrama",
"Venkatesh",
""
],
[
"Lakshminarayanan",
"Braghadeesh",
""
]
] |
new_dataset
| 0.991474 |
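The abstract above frames bandit quickest changepoint detection: sensing actions are chosen sequentially and only the chosen sensor's measurement is observed. The sketch below is an illustration of that problem setting only, not the sensing scheme proposed in the paper: a round-robin policy feeds one Gaussian CUSUM statistic per sensor, and a change is declared when any statistic crosses a threshold. All parameters are arbitrary.

```python
# Minimal sketch of quickest change detection with one sensed observation per
# round; per-sensor CUSUM statistics, round-robin sensing, unit-variance Gaussians.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, horizon, change_time = 4, 300, 150
pre_mean, post_mean, shifted_sensor = 0.0, 1.0, 2     # only sensor 2 changes

cusum = np.zeros(n_sensors)                            # one CUSUM statistic per sensor
threshold = 8.0
for t in range(horizon):
    a = t % n_sensors                                  # round-robin sensing action
    mean = post_mean if (t >= change_time and a == shifted_sensor) else pre_mean
    x = rng.normal(mean, 1.0)
    # log-likelihood ratio of post-change vs pre-change Gaussian (unit variance)
    llr = (post_mean - pre_mean) * (x - (post_mean + pre_mean) / 2.0)
    cusum[a] = max(0.0, cusum[a] + llr)
    if cusum[a] > threshold:
        print(f"change declared at t={t} on sensor {a} (true change at t={change_time})")
        break
```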
2109.08010
|
Stephane Gaiffas Pr
|
St\'ephane Ga\"iffas and Ibrahim Merad and Yiyang Yu
|
WildWood: a new Random Forest algorithm
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce WildWood (WW), a new ensemble algorithm for supervised learning
of Random Forest (RF) type. While standard RF algorithms use bootstrap
out-of-bag samples to compute out-of-bag scores, WW uses these samples to
produce improved predictions given by an aggregation of the predictions of all
possible subtrees of each fully grown tree in the forest. This is achieved by
aggregation with exponential weights computed over out-of-bag samples, which are
computed exactly and very efficiently thanks to an algorithm called context
tree weighting. This improvement, combined with a histogram strategy to
accelerate split finding, makes WW fast and competitive compared with other
well-established ensemble methods, such as standard RF and extreme gradient
boosting algorithms.
|
[
{
"version": "v1",
"created": "Thu, 16 Sep 2021 14:36:56 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 09:57:23 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Gaïffas",
"Stéphane",
""
],
[
"Merad",
"Ibrahim",
""
],
[
"Yu",
"Yiyang",
""
]
] |
new_dataset
| 0.950555 |
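The WildWood abstract above centers on aggregating predictions with exponential weights computed over out-of-bag samples. The following sketch shows the exponential-weights aggregation idea on a toy problem; the real algorithm aggregates all subtrees of each tree via context tree weighting, which this sketch does not attempt.

```python
# Minimal sketch of aggregation with exponential weights over held-out samples:
# each predictor is weighted by exp(-eta * held-out loss) and predictions are
# averaged with those weights. Numbers are toy values.
import numpy as np

y_oob = np.array([1.0, 0.0, 1.0, 1.0, 0.0])          # held-out (out-of-bag) targets
preds_oob = np.array([                                 # per-predictor held-out predictions
    [0.9, 0.2, 0.8, 0.7, 0.1],
    [0.6, 0.5, 0.6, 0.6, 0.4],
    [0.2, 0.8, 0.3, 0.4, 0.9],
])
losses = ((preds_oob - y_oob) ** 2).mean(axis=1)       # quadratic held-out loss

eta = 4.0                                              # temperature (arbitrary here)
w = np.exp(-eta * losses)
w /= w.sum()

x_new_preds = np.array([0.85, 0.55, 0.30])             # predictors' outputs on a new point
print("weights:", np.round(w, 3), "aggregated prediction:", float(w @ x_new_preds))
```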
2207.00413
|
Latif U. Khan
|
Latif U. Khan, Zhu Han, Dusit Niyato, Mohsen Guizani, and Choong Seon
Hong
|
Metaverse for Wireless Systems: Vision, Enablers, Architecture, and
Future Directions
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, significant research efforts have been initiated to enable the
next-generation, namely, the sixth-generation (6G) wireless systems. In this
article, we present a vision of metaverse towards effectively enabling the
development of 6G wireless systems. A metaverse will use virtual representation
(e.g., digital twin), digital avatars, and interactive experience technologies
(e.g., extended reality) to assist analyses, optimizations, and operations of
various wireless applications. Specifically, the metaverse can offer virtual
wireless system operations through the digital twin that allows network
designers, mobile developers, and telecommunications engineers to monitor,
observe, analyze, and simulate their solutions collaboratively and
virtually. We first introduce a general architecture for metaverse-based
wireless systems. We discuss key driving applications, design trends, and key
enablers of metaverse-based wireless systems. Finally, we present several open
challenges and their potential solutions.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 19:46:49 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 20:41:44 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Khan",
"Latif U.",
""
],
[
"Han",
"Zhu",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Guizani",
"Mohsen",
""
],
[
"Hong",
"Choong Seon",
""
]
] |
new_dataset
| 0.994769 |
2210.15009
|
Fran\c{c}ois Th\'eberge
|
Bogumi{\l} Kami\'nski, Pawe{\l} Pra{\l}at, Fran\c{c}ois Th\'eberge
|
Hypergraph Artificial Benchmark for Community Detection (h-ABCD)
|
23 pages, 6 figures, 7 tables
| null | null | null |
cs.SI cs.LG math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
The Artificial Benchmark for Community Detection (ABCD) graph is a recently
introduced random graph model with community structure and power-law
distribution for both degrees and community sizes. The model generates graphs
with similar properties as the well-known LFR one, and its main parameter can
be tuned to mimic its counterpart in the LFR model, the mixing parameter. In
this paper, we introduce a hypergraph counterpart of the ABCD model, h-ABCD,
which produces random hypergraphs with power-law distributions of ground-truth
community sizes and degrees. As in the original ABCD, the new model
h-ABCD can produce hypergraphs with various levels of noise. More importantly,
the model is flexible and can mimic any desired level of homogeneity of
hyperedges that fall into one community. As a result, it can be used as a
suitable, synthetic playground for analyzing and tuning hypergraph community
detection algorithms.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 20:06:56 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Mar 2023 19:44:06 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Jun 2023 19:02:30 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Kamiński",
"Bogumił",
""
],
[
"Prałat",
"Paweł",
""
],
[
"Théberge",
"François",
""
]
] |
new_dataset
| 0.998789 |
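The h-ABCD abstract above relies on power-law distributions for degrees and ground-truth community sizes. Below is a minimal sketch of drawing community sizes from a truncated power-law; the exponent and size bounds are illustrative, not the model's defaults.

```python
# Minimal sketch of one ingredient of ABCD-style generators: sampling
# community sizes from a truncated discrete power-law.
import numpy as np

def truncated_powerlaw(gamma, lo, hi, n, rng):
    """Sample n integers in [lo, hi] with P(k) proportional to k^(-gamma)."""
    ks = np.arange(lo, hi + 1)
    p = ks.astype(float) ** (-gamma)
    p /= p.sum()
    return rng.choice(ks, size=n, p=p)

rng = np.random.default_rng(42)
community_sizes = truncated_powerlaw(gamma=2.5, lo=10, hi=200, n=30, rng=rng)
print(sorted(community_sizes, reverse=True)[:10])
```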
2210.15668
|
Prabhat Kumar
|
Prabhat Kumar, Andrew Nonaka, Revathi Jambunathan, Girish Pahwa,
Sayeef Salahuddin, Zhi Yao
|
FerroX : A GPU-accelerated, 3D Phase-Field Simulation Framework for
Modeling Ferroelectric Devices
| null | null |
10.1016/j.cpc.2023.108757
| null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
We present a massively parallel, 3D phase-field simulation framework for
modeling scalable logic devices based on ferroelectric materials. We
self-consistently solve the time-dependent Ginzburg Landau (TDGL) equation for
ferroelectric polarization, Poisson equation for electric potential, and charge
equation for carrier densities in semiconductor regions. The algorithm is
implemented using the AMReX software framework, which provides effective
scalability on manycore and GPU-based supercomputing architectures. We
demonstrate the performance of the algorithm with excellent scaling results on
NERSC multicore and GPU systems, with a significant (15x) speedup on the GPU
using a node-by-node comparison. We further demonstrate the applicability of
the code in simulations of ferroelectric domain-wall induced negative
capacitance (NC) effect in Metal-Ferroelectric-Insulator-Metal (MFIM) and
Metal-Ferroelectric-Insulator-Semiconductor-Metal (MFISM) devices. The charge
(Q) vs. applied voltage (V) responses for these structures clearly indicate
stabilized negative capacitance.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 18:00:36 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 02:05:49 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Kumar",
"Prabhat",
""
],
[
"Nonaka",
"Andrew",
""
],
[
"Jambunathan",
"Revathi",
""
],
[
"Pahwa",
"Girish",
""
],
[
"Salahuddin",
"Sayeef",
""
],
[
"Yao",
"Zhi",
""
]
] |
new_dataset
| 0.998784 |
2212.05171
|
Le Xue
|
Le Xue, Mingfei Gao, Chen Xing, Roberto Mart\'in-Mart\'in, Jiajun Wu,
Caiming Xiong, Ran Xu, Juan Carlos Niebles, Silvio Savarese
|
ULIP: Learning a Unified Representation of Language, Images, and Point
Clouds for 3D Understanding
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The recognition capabilities of current state-of-the-art 3D models are
limited by datasets with a small number of annotated data and a pre-defined set
of categories. In its 2D counterpart, recent advances have shown that similar
problems can be significantly alleviated by employing knowledge from other
modalities, such as language. Inspired by this, leveraging multimodal
information for 3D modality could be promising to improve 3D understanding
under the restricted data regime, but this line of research is not well
studied. Therefore, we introduce ULIP to learn a unified representation of
images, texts, and 3D point clouds by pre-training with object triplets from
the three modalities. To overcome the shortage of training triplets, ULIP
leverages a pre-trained vision-language model that has already learned a common
visual and textual space by training with massive image-text pairs. Then, ULIP
learns a 3D representation space aligned with the common image-text space,
using a small number of automatically synthesized triplets. ULIP is agnostic to
3D backbone networks and can easily be integrated into any 3D architecture.
Experiments show that ULIP effectively improves the performance of multiple
recent 3D backbones by simply pre-training them on ShapeNet55 using our
framework, achieving state-of-the-art performance in both standard 3D
classification and zero-shot 3D classification on ModelNet40 and ScanObjectNN.
ULIP also improves the performance of PointMLP by around 3% in 3D
classification on ScanObjectNN, and outperforms PointCLIP by 28.8% on top-1
accuracy for zero-shot 3D classification on ModelNet40. Our code and
pre-trained models are released at https://github.com/salesforce/ULIP.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 01:34:47 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 14:09:23 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Apr 2023 13:20:23 GMT"
},
{
"version": "v4",
"created": "Mon, 12 Jun 2023 19:30:52 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Xue",
"Le",
""
],
[
"Gao",
"Mingfei",
""
],
[
"Xing",
"Chen",
""
],
[
"Martín-Martín",
"Roberto",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Xu",
"Ran",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Savarese",
"Silvio",
""
]
] |
new_dataset
| 0.998063 |
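The ULIP abstract above describes aligning a 3D point-cloud embedding space with a frozen image-text space using object triplets. The sketch below shows a contrastive (InfoNCE-style) objective of the kind commonly used for such alignment, with random vectors standing in for real encoders; it illustrates the objective family, not ULIP's actual training code.

```python
# Minimal sketch of contrastive alignment between a trainable 3D embedding
# space and a frozen text embedding space; matched pairs lie on the diagonal.
import numpy as np

rng = np.random.default_rng(0)
batch, dim, temperature = 4, 16, 0.07
pc_emb = rng.normal(size=(batch, dim))      # 3D point-cloud embeddings (trainable side)
txt_emb = rng.normal(size=(batch, dim))     # frozen text embeddings of the same objects

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

logits = normalize(pc_emb) @ normalize(txt_emb).T / temperature
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))         # InfoNCE-style loss on matched pairs
print(f"contrastive loss on this toy batch: {loss:.3f}")
```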
2212.06746
|
Kourosh Shoele
|
Brian Van Stratum and Kourosh Shoele and Jonathan E. Clark
|
Pacific Lamprey Inspired Climbing
| null | null |
10.1088/1748-3190/acd671
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Snakes and their bio-inspired robot counterparts have demonstrated locomotion
on a wide range of terrains. However, dynamic vertical climbing is one
locomotion strategy that has received little attention in the existing snake
robotics literature. We demonstrate a new scansorial gait and robot inspired by
the locomotion of the Pacific Lamprey. This new gait allows a robot to steer
while climbing on flat, near-vertical surfaces. A reduced-order model is
developed and used to explore the relationship between body actuation and
vertical and lateral motions of the robot. Trident, the new wall climbing
lamprey-inspired robot, demonstrates dynamic climbing on flat vertical surfaces
with a peak net vertical stride displacement of 4.1 cm per step. Actuating at
1.3 Hz, Trident attains a vertical climbing speed of 4.8 cm/s (0.09 Bl/s) at
specific resistance of 8.3. Trident can also traverse laterally at 9 cm/s (0.17
Bl/s). Moreover, Trident is able to make 14\% longer strides than the Pacific
Lamprey when climbing vertically. The computational and experimental results
demonstrate that a lamprey-inspired climbing gait coupled with appropriate
attachment is a useful climbing strategy for snake robots climbing near
vertical surfaces with limited push points.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 17:28:00 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Van Stratum",
"Brian",
""
],
[
"Shoele",
"Kourosh",
""
],
[
"Clark",
"Jonathan E.",
""
]
] |
new_dataset
| 0.999712 |
2301.13591
|
Bo Han
|
Bo Han, Yitong Fu, Yixuan Shen
|
Zero3D: Semantic-Driven Multi-Category 3D Shape Generation
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic-driven 3D shape generation aims to generate 3D objects conditioned
on text. Previous works face problems with single-category generation,
low-frequency 3D details, and requiring a large number of paired datasets for
training. To tackle these challenges, we propose a multi-category conditional
diffusion model. Specifically, 1) to alleviate the lack of large-scale paired
data, we bridge text, 2D images and 3D shapes based on the pre-trained CLIP
model; 2) to obtain multi-category 3D shape features, we apply a conditional
flow model to generate a 3D shape vector conditioned on the CLIP embedding; and
3) to generate multi-category 3D shapes, we employ a
hidden-layer diffusion model conditioned on the multi-category shape vector,
which greatly reduces the training time and memory consumption.
|
[
{
"version": "v1",
"created": "Tue, 31 Jan 2023 12:43:54 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 14:43:43 GMT"
},
{
"version": "v3",
"created": "Mon, 15 May 2023 06:43:01 GMT"
},
{
"version": "v4",
"created": "Tue, 13 Jun 2023 02:21:59 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Han",
"Bo",
""
],
[
"Fu",
"Yitong",
""
],
[
"Shen",
"Yixuan",
""
]
] |
new_dataset
| 0.990705 |
2302.02276
|
Hanzhou Wu
|
Qiyun Liu, Zhiguang Yang and Hanzhou Wu
|
JPEG Steganalysis Based on Steganographic Feature Enhancement and Graph
Attention Learning
|
https://scholar.google.com/citations?user=IdiF7M0AAAAJ&hl=en
|
Journal of Electronic Imaging 2023
| null | null |
cs.MM cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The purpose of image steganalysis is to determine whether the carrier image
contains hidden information or not. Since JPEG is the most commonly used image
format over social networks, steganalysis in JPEG images is also the most
urgently needed to be explored. However, in order to detect whether secret
information is hidden within JPEG images, the majority of existing algorithms
are designed in conjunction with the popular computer vision related networks,
without considering the key characteristics that appear in image steganalysis. It
is crucial that the steganographic signal, as an extremely weak signal, can be
enhanced during its representation learning process. Motivated by this insight,
in this paper, we introduce a novel representation learning algorithm for JPEG
steganalysis that mainly consists of a graph attention learning module and
a feature enhancement module. The graph attention learning module is designed
to avoid global feature loss caused by the local feature learning of
convolutional neural network and reliance on depth stacking to extend the
perceptual domain. The feature enhancement module is applied to prevent the
stacking of convolutional layers from weakening the steganographic information.
In addition, pretraining as a way to initialize the network weights with a
large-scale dataset is utilized to enhance the ability of the network to
extract discriminative features. We advocate pretraining with ALASKA2 for the
model trained with BOSSBase+BOWS2. The experimental results indicate that the
proposed algorithm outperforms previous arts in terms of detection accuracy,
which has verified the superiority and applicability of the proposed work.
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 01:42:19 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Liu",
"Qiyun",
""
],
[
"Yang",
"Zhiguang",
""
],
[
"Wu",
"Hanzhou",
""
]
] |
new_dataset
| 0.998474 |
2303.03004
|
M Saiful Bari
|
Mohammad Abdullah Matin Khan, M Saiful Bari, Xuan Long Do, Weishi
Wang, Md Rizwan Parvez, Shafiq Joty
|
xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code
Understanding, Generation, Translation and Retrieval
|
Data & Code available at https://github.com/ntunlp/xCodeEval
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
AI systems that can create codes as solutions to problems or assist
developers in writing codes can increase productivity and make programming more
accessible. Recently, pre-trained large language models have shown impressive
abilities in generating codes from natural language descriptions, repairing
buggy codes, translating codes between languages, and retrieving relevant code
segments. However, the evaluation of these models has often been performed in a
scattered way on only one or two specific tasks, in a few languages, at a
partial granularity (e.g., function) level, and in many cases without proper
training data. Even more concerning is that in most cases the evaluation of
generated codes has been done in terms of mere lexical overlap with a reference
code rather than actual execution. We introduce xCodeEval, the largest
executable multilingual multitask benchmark to date consisting of 25M
document-level coding examples (16.5B tokens) from about 7.5K unique problems
covering up to 11 programming languages with execution-level parallelism. It
features a total of seven tasks involving code understanding, generation,
translation and retrieval. xCodeEval adopts an execution-based evaluation and
offers a multilingual code execution engine, ExecEval that supports unit test
based execution in all the 11 languages. To address the challenge of balancing
the distributions of text-code samples over multiple attributes in
validation/test sets, we further propose a novel data splitting and a data
selection schema based on the geometric mean and graph-theoretic principle.
Experimental results on all the tasks and languages show xCodeEval is a
promising yet challenging benchmark as per the current advancements in language
models.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 10:08:51 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 05:27:18 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Jun 2023 11:29:45 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Khan",
"Mohammad Abdullah Matin",
""
],
[
"Bari",
"M Saiful",
""
],
[
"Do",
"Xuan Long",
""
],
[
"Wang",
"Weishi",
""
],
[
"Parvez",
"Md Rizwan",
""
],
[
"Joty",
"Shafiq",
""
]
] |
new_dataset
| 0.999865 |
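The xCodeEval abstract above emphasizes execution-based evaluation rather than lexical overlap with a reference. The sketch below runs a candidate Python solution in a subprocess against input/output unit tests and judges it by exact output match; it only illustrates the idea and is not the multilingual, sandboxed ExecEval engine.

```python
# Minimal sketch of execution-based evaluation: run a candidate solution on
# each unit test's stdin and compare stdout with the expected answer.
import subprocess, sys, tempfile, os

candidate = "a, b = map(int, input().split())\nprint(a + b)\n"
unit_tests = [("1 2", "3"), ("10 -4", "6")]

def run_tests(source: str, tests) -> bool:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        for stdin, expected in tests:
            out = subprocess.run([sys.executable, path], input=stdin,
                                 capture_output=True, text=True, timeout=5)
            if out.stdout.strip() != expected:
                return False
        return True
    finally:
        os.unlink(path)

print("all tests passed:", run_tests(candidate, unit_tests))
```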
2303.03315
|
Antoine Gu\'edon
|
Antoine Gu\'edon, Tom Monnier, Pascal Monasse and Vincent Lepetit
|
MACARONS: Mapping And Coverage Anticipation with RGB Online
Self-Supervision
|
To appear at CVPR 2023. Project Webpage:
https://imagine.enpc.fr/~guedona/MACARONS/
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a method that simultaneously learns to explore new large
environments and to reconstruct them in 3D from color images only. This is
closely related to the Next Best View problem (NBV), where one has to identify
where to move the camera next to improve the coverage of an unknown scene.
However, most of the current NBV methods rely on depth sensors, need 3D
supervision and/or do not scale to large scenes. Our method requires only a
color camera and no 3D supervision. It simultaneously learns in a
self-supervised fashion to predict a "volume occupancy field" from color images
and, from this field, to predict the NBV. Thanks to this approach, our method
performs well on new scenes as it is not biased towards any training 3D data.
We demonstrate this on a recent dataset made of various 3D scenes and show it
performs even better than recent methods requiring a depth sensor, which is not
a realistic assumption for outdoor scenes captured with a flying drone.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 17:38:03 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 16:16:16 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Guédon",
"Antoine",
""
],
[
"Monnier",
"Tom",
""
],
[
"Monasse",
"Pascal",
""
],
[
"Lepetit",
"Vincent",
""
]
] |
new_dataset
| 0.993912 |
2303.16778
|
Nazmus Sakib
|
Nazmus Sakib, G. M. Shahariar, Md. Mohsinul Kabir, Md. Kamrul Hasan
and Hasan Mahmud
|
Assorted, Archetypal and Annotated Two Million (3A2M) Cooking Recipes
Dataset based on Active Learning
| null |
International Conference on Machine Intelligence and Emerging
Technologies. MIET 2022. Lecture Notes of the Institute for Computer
Sciences, Social Informatics and Telecommunications Engineering, vol 491, pp
188-203, Springer, Cham
|
10.1007/978-3-031-34622-4_15
| null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cooking recipes allow individuals to exchange culinary ideas and provide food
preparation instructions. Due to a lack of adequate labeled data, categorizing
raw recipes found online to the appropriate food genres is a challenging task
in this domain. Utilizing the knowledge of domain experts to categorize recipes
could be a solution. In this study, we present a novel dataset of two million
culinary recipes labeled in respective categories leveraging the knowledge of
food experts and an active learning technique. To construct the dataset, we
collect the recipes from the RecipeNLG dataset. Then, we employ three human
experts whose trustworthiness score is higher than 86.667% to categorize 300K
recipes by their Named Entity Recognition (NER) and assign each to one of the nine
categories: bakery, drinks, non-veg, vegetables, fast food, cereals, meals,
sides and fusion. Finally, we categorize the remaining 1900K recipes using
an Active Learning method with a blend of Query-by-Committee and Human In The Loop
(HITL) approaches. There are more than two million recipes in our dataset, each
of which is categorized and has a confidence score linked with it. For the 9
genres, the Fleiss Kappa score of this massive dataset is roughly 0.56026. We
believe that the research community can use this dataset to perform various
machine learning tasks such as recipe genre classification, recipe generation
of a specific genre, new recipe creation, etc. The dataset can also be used to
train and evaluate the performance of various NLP tasks such as named entity
recognition, part-of-speech tagging, semantic role labeling, and so on. The
dataset will be available upon publication: https://tinyurl.com/3zu4778y.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 07:53:18 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Sakib",
"Nazmus",
""
],
[
"Shahariar",
"G. M.",
""
],
[
"Kabir",
"Md. Mohsinul",
""
],
[
"Hasan",
"Md. Kamrul",
""
],
[
"Mahmud",
"Hasan",
""
]
] |
new_dataset
| 0.999839 |
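The 3A2M abstract above reports a Fleiss' kappa of roughly 0.56 across the nine genres. The sketch below computes Fleiss' kappa from a small made-up rating matrix (5 items, 3 annotators, 3 categories), not from the actual 3A2M annotations.

```python
# Minimal sketch of Fleiss' kappa: counts[i, j] is the number of raters who
# assigned item i to category j, with an equal number of raters per item.
import numpy as np

def fleiss_kappa(counts):
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]            # assumes equal raters per item
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    P_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

toy = np.array([[3, 0, 0],
                [2, 1, 0],
                [0, 3, 0],
                [1, 1, 1],
                [0, 0, 3]])
print(f"Fleiss' kappa: {fleiss_kappa(toy):.3f}")
```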
2304.09252
|
Md Hasibul Amin
|
Md Hasibul Amin, Mohammed E. Elbtity and Ramtin Zand
|
IMAC-Sim: A Circuit-level Simulator For In-Memory Analog Computing
Architectures
| null |
Proceedings of the Great Lakes Symposium on VLSI 2023 (GLSVLSI
'23), Association for Computing Machinery, New York, NY, USA, 659-664
|
10.1145/3583781.3590264
| null |
cs.ET cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increased attention to memristive-based in-memory analog computing
(IMAC) architectures as an alternative for energy-hungry computer systems for
machine learning applications, a tool that enables exploring their device- and
circuit-level design space can significantly boost the research and development
in this area. Thus, in this paper, we develop IMAC-Sim, a circuit-level
simulator for the design space exploration of IMAC architectures. IMAC-Sim is a
Python-based simulation framework, which creates the SPICE netlist of the IMAC
circuit based on various device- and circuit-level hyperparameters selected by
the user, and automatically evaluates the accuracy, power consumption, and
latency of the developed circuit using a user-specified dataset. Moreover,
IMAC-Sim simulates the interconnect parasitic resistance and capacitance in the
IMAC architectures and is also equipped with horizontal and vertical
partitioning techniques to surmount these reliability challenges. IMAC-Sim is a
flexible tool that supports a broad range of device- and circuit-level
hyperparameters. In this paper, we perform controlled experiments to exhibit
some of the important capabilities of the IMAC-Sim, while the entirety of its
features is available for researchers via an open-source tool.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 19:22:34 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Amin",
"Md Hasibul",
""
],
[
"Elbtity",
"Mohammed E.",
""
],
[
"Zand",
"Ramtin",
""
]
] |
new_dataset
| 0.998695 |
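The IMAC-Sim abstract above describes generating SPICE netlists of IMAC circuits from user-selected device- and circuit-level hyperparameters. The sketch below emits a tiny, purely illustrative crossbar netlist from a few parameters; the device models, parasitics, and netlist structure of the real tool are far more detailed.

```python
# Minimal sketch of parameter-to-netlist generation: a toy memristive crossbar
# with per-cell resistances and a row-line interconnect parasitic. Illustrative only.
def crossbar_netlist(rows: int, cols: int, r_on: float, r_wire: float) -> str:
    lines = [f"* {rows}x{cols} crossbar (illustrative)"]
    for i in range(rows):
        for j in range(cols):
            # memristive cell between row line i and column line j
            lines.append(f"Rcell_{i}_{j} row{i} col{j} {r_on}")
        # interconnect parasitic resistance along the row line
        lines.append(f"Rwire_row_{i} row{i} row{i}_end {r_wire}")
    lines.append(".end")
    return "\n".join(lines)

print(crossbar_netlist(rows=2, cols=2, r_on=10e3, r_wire=5.0))
```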
2305.04790
|
Tao Gong
|
Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian
Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, Kai Chen
|
MultiModal-GPT: A Vision and Language Model for Dialogue with Humans
|
10 pages, 8 figures
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a vision and language model named MultiModal-GPT to conduct
multi-round dialogue with humans. MultiModal-GPT can follow various
instructions from humans, such as generating a detailed caption, counting the
number of objects of interest, and answering general questions from users.
MultiModal-GPT is parameter-efficiently fine-tuned from OpenFlamingo, with
Low-rank Adapter (LoRA) added both in the cross-attention part and the
self-attention part of the language model. We first construct instruction
templates with vision and language data for multi-modality instruction tuning
to make the model understand and follow human instructions. We find the quality
of training data is vital for dialogue performance: even a small amount of data
containing short answers can lead the model to respond tersely to any
instruction. To further enhance MultiModal-GPT's ability to chat with humans,
we utilize language-only instruction-following data to train
the MultiModal-GPT jointly. The joint training of language-only and
visual-language instructions with the \emph{same} instruction template
effectively improves dialogue performance. Various demos show the ability of
continuous dialogue of MultiModal-GPT with humans. Code, dataset, and demo are
at https://github.com/open-mmlab/Multimodal-GPT
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 15:45:42 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 11:41:53 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Jun 2023 13:31:12 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Gong",
"Tao",
""
],
[
"Lyu",
"Chengqi",
""
],
[
"Zhang",
"Shilong",
""
],
[
"Wang",
"Yudong",
""
],
[
"Zheng",
"Miao",
""
],
[
"Zhao",
"Qian",
""
],
[
"Liu",
"Kuikun",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Luo",
"Ping",
""
],
[
"Chen",
"Kai",
""
]
] |
new_dataset
| 0.998371 |
2305.07244
|
Prasad Talasila
|
Prasad Talasila, Cl\'audio Gomes, Peter H{\o}gh Mikkelsen, Santiago
Gil Arboleda, Eduard Kamburjan, Peter Gorm Larsen
|
Digital Twin as a Service (DTaaS): A Platform for Digital Twin
Developers and Users
|
8 pages, 6 figures. Accepted at Digital Twin 2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Establishing digital twins is a non-trivial endeavour especially when users
face significant challenges in creating them from scratch. Ready availability
of reusable models, data and tool assets can help with the creation and use of
digital twins. A number of digital twin frameworks exist to facilitate creation
and use of digital twins. In this paper we propose a digital twin framework to
author digital twin assets, create digital twins from reusable assets and make
the digital twins available as a service to other users. The proposed framework
automates the management of reusable assets, storage, provision of compute
infrastructure, communication and monitoring tasks. The users operate at the
level of digital twins and delegate the rest of the work to the digital twin as a
service framework.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 04:34:30 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 08:59:12 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Talasila",
"Prasad",
""
],
[
"Gomes",
"Cláudio",
""
],
[
"Mikkelsen",
"Peter Høgh",
""
],
[
"Arboleda",
"Santiago Gil",
""
],
[
"Kamburjan",
"Eduard",
""
],
[
"Larsen",
"Peter Gorm",
""
]
] |
new_dataset
| 0.971025 |
2305.10855
|
Lei Cui
|
Jingye Chen, Yupan Huang, Tengchao Lv, Lei Cui, Qifeng Chen, Furu Wei
|
TextDiffuser: Diffusion Models as Text Painters
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Diffusion models have gained increasing attention for their impressive
generation abilities but currently struggle with rendering accurate and
coherent text. To address this issue, we introduce TextDiffuser, focusing on
generating images with visually appealing text that is coherent with
backgrounds. TextDiffuser consists of two stages: first, a Transformer model
generates the layout of keywords extracted from text prompts, and then
diffusion models generate images conditioned on the text prompt and the
generated layout. Additionally, we contribute the first large-scale text images
dataset with OCR annotations, MARIO-10M, containing 10 million image-text pairs
with text recognition, detection, and character-level segmentation annotations.
We further collect the MARIO-Eval benchmark to serve as a comprehensive tool
for evaluating text rendering quality. Through experiments and user studies, we
show that TextDiffuser is flexible and controllable to create high-quality text
images using text prompts alone or together with text template images, and
conduct text inpainting to reconstruct incomplete images with text. The code,
model, and dataset will be available at \url{https://aka.ms/textdiffuser}.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 10:16:19 GMT"
},
{
"version": "v2",
"created": "Wed, 24 May 2023 17:57:19 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 05:55:26 GMT"
},
{
"version": "v4",
"created": "Tue, 13 Jun 2023 11:13:22 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Chen",
"Jingye",
""
],
[
"Huang",
"Yupan",
""
],
[
"Lv",
"Tengchao",
""
],
[
"Cui",
"Lei",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Wei",
"Furu",
""
]
] |
new_dataset
| 0.999634 |
2305.13193
|
Bela Gipp
|
Ankit Satpute and Andr\'e Greiner-Petter and Moritz Schubotz and
Norman Meuschke and Akiko Aizawa and Olaf Teschke and Bela Gipp
|
TEIMMA: The First Content Reuse Annotator for Text, Images, and Math
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This demo paper presents the first tool to annotate the reuse of text,
images, and mathematical formulae in a document pair -- TEIMMA. Annotating
content reuse is particularly useful to develop plagiarism detection
algorithms. Real-world content reuse is often obfuscated, which makes it
challenging to identify such cases. TEIMMA allows entering the obfuscation type
to enable novel classifications for confirmed cases of plagiarism. It enables
recording different reuse types for text, images, and mathematical formulae in
HTML and supports users by visualizing the content reuse in a document pair
using similarity detection methods for text and math.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 16:24:59 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 16:43:15 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Satpute",
"Ankit",
""
],
[
"Greiner-Petter",
"André",
""
],
[
"Schubotz",
"Moritz",
""
],
[
"Meuschke",
"Norman",
""
],
[
"Aizawa",
"Akiko",
""
],
[
"Teschke",
"Olaf",
""
],
[
"Gipp",
"Bela",
""
]
] |
new_dataset
| 0.995051 |
2305.14839
|
Yunshui Li
|
Yunshui Li, Binyuan Hui, ZhiChao Yin, Min Yang, Fei Huang and Yongbin
Li
|
PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and
Compositional Experts
|
ACL 2023
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perceiving multi-modal information and fulfilling dialogues with humans is a
long-term goal of artificial intelligence. Pre-training is commonly regarded as
an effective approach for multi-modal dialogue. However, due to the limited
availability of multi-modal dialogue data, there is still scarce research on
multi-modal dialogue pre-training. Yet another intriguing challenge emerges
from the encompassing nature of multi-modal dialogue, which involves various
modalities and tasks. Moreover, new forms of tasks may arise at unpredictable
points in the future. Hence, it is essential for designed multi-modal dialogue
models to possess sufficient flexibility to adapt to such scenarios. This paper
proposes \textbf{PaCE}, a unified, structured, compositional multi-modal
dialogue pre-training framework. It utilizes a combination of several
fundamental experts to accommodate multiple dialogue-related tasks and can be
pre-trained using limited dialogue and extensive non-dialogue multi-modal data.
Furthermore, we propose a progressive training method where old experts from
the past can assist new experts, facilitating the expansion of their
capabilities. Experimental results demonstrate that PaCE achieves
state-of-the-art results on eight multi-modal dialog benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 07:43:29 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 06:31:46 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Li",
"Yunshui",
""
],
[
"Hui",
"Binyuan",
""
],
[
"Yin",
"ZhiChao",
""
],
[
"Yang",
"Min",
""
],
[
"Huang",
"Fei",
""
],
[
"Li",
"Yongbin",
""
]
] |
new_dataset
| 0.979269 |
2306.02887
|
Gabriel B\'en\'edict
|
Gabriel B\'en\'edict, Ruqing Zhang, Donald Metzler
|
Gen-IR @ SIGIR 2023: The First Workshop on Generative Information
Retrieval
|
Accepted SIGIR 23 workshop
| null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Generative information retrieval (IR) has experienced substantial growth
across multiple research communities (e.g., information retrieval, computer
vision, natural language processing, and machine learning), and has been highly
visible in the popular press. Theoretical, empirical, and actual user-facing
products have been released that retrieve documents (via generation) or
directly generate answers given an input request. We would like to investigate
whether end-to-end generative models are just another trend or, as some claim,
a paradigm change for IR. This necessitates new metrics, theoretical grounding,
evaluation methods, task definitions, models, user interfaces, etc. The goal of
this workshop (https://coda.io/@sigir/gen-ir) is to focus on previously
explored Generative IR techniques like document retrieval and direct Grounded
Answer Generation, while also offering a venue for the discussion and
exploration of how Generative IR can be applied to new domains like
recommendation systems, summarization, etc. The format of the workshop is
interactive, including roundtable and keynote sessions and tends to avoid the
one-sided dialogue of a mini-conference.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 13:56:36 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 15:20:13 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Bénédict",
"Gabriel",
""
],
[
"Zhang",
"Ruqing",
""
],
[
"Metzler",
"Donald",
""
]
] |
new_dataset
| 0.954308 |
2306.03092
|
Zhaoshuo Li
|
Zhaoshuo Li, Thomas M\"uller, Alex Evans, Russell H. Taylor, Mathias
Unberath, Ming-Yu Liu, Chen-Hsuan Lin
|
Neuralangelo: High-Fidelity Neural Surface Reconstruction
|
CVPR 2023, project page:
https://research.nvidia.com/labs/dir/neuralangelo
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural surface reconstruction has been shown to be powerful for recovering
dense 3D surfaces via image-based neural rendering. However, current methods
struggle to recover detailed structures of real-world scenes. To address the
issue, we present Neuralangelo, which combines the representation power of
multi-resolution 3D hash grids with neural surface rendering. Two key
ingredients enable our approach: (1) numerical gradients for computing
higher-order derivatives as a smoothing operation and (2) coarse-to-fine
optimization on the hash grids controlling different levels of details. Even
without auxiliary inputs such as depth, Neuralangelo can effectively recover
dense 3D surface structures from multi-view images with fidelity significantly
surpassing previous methods, enabling detailed large-scale scene reconstruction
from RGB video captures.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 17:59:57 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 20:50:07 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Li",
"Zhaoshuo",
""
],
[
"Müller",
"Thomas",
""
],
[
"Evans",
"Alex",
""
],
[
"Taylor",
"Russell H.",
""
],
[
"Unberath",
"Mathias",
""
],
[
"Liu",
"Ming-Yu",
""
],
[
"Lin",
"Chen-Hsuan",
""
]
] |
new_dataset
| 0.999237 |
2306.06362
|
Xiaqing Pan
|
Xiaqing Pan, Nicholas Charron, Yongqian Yang, Scott Peters, Thomas
Whelan, Chen Kong, Omkar Parkhi, Richard Newcombe, Carl Yuheng Ren
|
Aria Digital Twin: A New Benchmark Dataset for Egocentric 3D Machine
Perception
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce the Aria Digital Twin (ADT) - an egocentric dataset captured
using Aria glasses with extensive object, environment, and human level ground
truth. This ADT release contains 200 sequences of real-world activities
conducted by Aria wearers in two real indoor scenes with 398 object instances
(324 stationary and 74 dynamic). Each sequence consists of: a) raw data of two
monochrome camera streams, one RGB camera stream, two IMU streams; b) complete
sensor calibration; c) ground truth data including continuous
6-degree-of-freedom (6DoF) poses of the Aria devices, object 6DoF poses, 3D eye
gaze vectors, 3D human poses, 2D image segmentations, image depth maps; and d)
photo-realistic synthetic renderings. To the best of our knowledge, there is no
existing egocentric dataset with a level of accuracy, photo-realism and
comprehensiveness comparable to ADT. By contributing ADT to the research
community, our mission is to set a new standard for evaluation in the
egocentric machine perception domain, which includes very challenging research
problems such as 3D object detection and tracking, scene reconstruction and
understanding, sim-to-real learning, human pose prediction - while also
inspiring new machine perception tasks for augmented reality (AR) applications.
To kick-start exploration of the ADT research use cases, we evaluated several
existing state-of-the-art methods for object detection, segmentation and image
translation tasks that demonstrate the usefulness of ADT as a benchmarking
dataset.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 06:46:32 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 06:38:47 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Pan",
"Xiaqing",
""
],
[
"Charron",
"Nicholas",
""
],
[
"Yang",
"Yongqian",
""
],
[
"Peters",
"Scott",
""
],
[
"Whelan",
"Thomas",
""
],
[
"Kong",
"Chen",
""
],
[
"Parkhi",
"Omkar",
""
],
[
"Newcombe",
"Richard",
""
],
[
"Ren",
"Carl Yuheng",
""
]
] |
new_dataset
| 0.999807 |
2306.06452
|
Bilash Saha
|
Bilash Saha, Md Saiful Islam, Abm Kamrul Riad, Sharaban Tahora,
Hossain Shahriar, Sweta Sneha
|
BlockTheFall: Wearable Device-based Fall Detection Framework Powered by
Machine Learning and Blockchain for Elderly Care
|
Accepted to publish in The 1st IEEE International Workshop on Digital
and Public Health
| null | null | null |
cs.CY cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Falls among the elderly are a major health concern, frequently resulting in
serious injuries and a reduced quality of life. In this paper, we propose
"BlockTheFall," a wearable device-based fall detection framework which detects
falls in real time by using sensor data from wearable devices. To accurately
identify patterns and detect falls, the collected sensor data is analyzed using
machine learning algorithms. To ensure data integrity and security, the
framework stores and verifies fall event data using blockchain technology. The
proposed framework aims to provide an efficient and dependable solution for
fall detection with improved emergency response, and elderly individuals'
overall well-being. Further experiments and evaluations are being carried out
to validate the effectiveness and feasibility of the proposed framework, which
has shown promising results in distinguishing genuine falls from simulated
falls. By providing timely and accurate fall detection and response, this
framework has the potential to substantially boost the quality of elderly care.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 14:18:44 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Saha",
"Bilash",
""
],
[
"Islam",
"Md Saiful",
""
],
[
"Riad",
"Abm Kamrul",
""
],
[
"Tahora",
"Sharaban",
""
],
[
"Shahriar",
"Hossain",
""
],
[
"Sneha",
"Sweta",
""
]
] |
new_dataset
| 0.999326 |
2306.07201
|
Guian Fang
|
Ziyang Ma, Mengsha Liu, Guian Fang, Ying Shen
|
LTCR: Long-Text Chinese Rumor Detection Dataset
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
False information can spread quickly on social media, negatively influencing
the citizens' behaviors and responses to social events. To better detect fake
news, especially long texts, which are harder to detect completely, a
Long-Text Chinese Rumor detection dataset named LTCR is proposed. The LTCR
dataset provides a valuable resource for accurately detecting misinformation,
especially in the context of complex fake news related to COVID-19. The dataset
consists of 1,729 and 500 pieces of real and fake news, respectively. The
average lengths of real and fake news are approximately 230 and 152 characters.
We also propose the Salience-aware Fake News Detection Model, which
achieves the highest accuracy (95.85%), fake news recall (90.91%) and F-score
(90.60%) on the dataset. (https://github.com/Enderfga/DoubleCheck)
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 16:03:36 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 08:08:18 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Ma",
"Ziyang",
""
],
[
"Liu",
"Mengsha",
""
],
[
"Fang",
"Guian",
""
],
[
"Shen",
"Ying",
""
]
] |
new_dataset
| 0.999833 |
2306.07265
|
Shilong Liu
|
Tianhe Ren, Shilong Liu, Feng Li, Hao Zhang, Ailing Zeng, Jie Yang,
Xingyu Liao, Ding Jia, Hongyang Li, He Cao, Jianan Wang, Zhaoyang Zeng,
Xianbiao Qi, Yuhui Yuan, Jianwei Yang, Lei Zhang
|
detrex: Benchmarking Detection Transformers
|
project link: https://github.com/IDEA-Research/detrex
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The DEtection TRansformer (DETR) algorithm has received considerable
attention in the research community and is gradually emerging as a mainstream
approach for object detection and other perception tasks. However, the current
field lacks a unified and comprehensive benchmark specifically tailored for
DETR-based models. To address this issue, we develop a unified, highly modular,
and lightweight codebase called detrex, which supports a majority of the
mainstream DETR-based instance recognition algorithms, covering various
fundamental tasks, including object detection, segmentation, and pose
estimation. We conduct extensive experiments under detrex and perform a
comprehensive benchmark for DETR-based models. Moreover, we enhance the
performance of detection transformers through the refinement of training
hyper-parameters, providing strong baselines for supported algorithms. We hope
that detrex could offer research communities a standardized and unified
platform to evaluate and compare different DETR-based models while fostering a
deeper understanding and driving advancements in DETR-based instance
recognition. Our code is available at https://github.com/IDEA-Research/detrex.
The project is currently being actively developed. We encourage the community
to use detrex codebase for further development and contributions.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 17:52:11 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 17:53:15 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Ren",
"Tianhe",
""
],
[
"Liu",
"Shilong",
""
],
[
"Li",
"Feng",
""
],
[
"Zhang",
"Hao",
""
],
[
"Zeng",
"Ailing",
""
],
[
"Yang",
"Jie",
""
],
[
"Liao",
"Xingyu",
""
],
[
"Jia",
"Ding",
""
],
[
"Li",
"Hongyang",
""
],
[
"Cao",
"He",
""
],
[
"Wang",
"Jianan",
""
],
[
"Zeng",
"Zhaoyang",
""
],
[
"Qi",
"Xianbiao",
""
],
[
"Yuan",
"Yuhui",
""
],
[
"Yang",
"Jianwei",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.99952 |
2306.07298
|
Shruti Bhargava
|
Shruti Bhargava, Anand Dhoot, Ing-Marie Jonsson, Hoang Long Nguyen,
Alkesh Patel, Hong Yu, Vincent Renkens
|
Referring to Screen Texts with Voice Assistants
|
7 pages, Accepted to ACL Industry Track 2023
| null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Voice assistants help users make phone calls, send messages, create events,
navigate, and do a lot more. However, assistants have limited capacity to
understand their users' context. In this work, we aim to take a step in this
direction. Our work dives into a new experience for users to refer to phone
numbers, addresses, email addresses, URLs, and dates on their phone screens.
Our focus lies in reference understanding, which becomes particularly
interesting when multiple similar texts are present on screen, similar to
visual grounding. We collect a dataset and propose a lightweight
general-purpose model for this novel experience. Due to the high cost of
consuming pixels directly, our system is designed to rely on the extracted text
from the UI. Our model is modular, thus offering flexibility, improved
interpretability, and efficient runtime memory utilization.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 22:43:16 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Bhargava",
"Shruti",
""
],
[
"Dhoot",
"Anand",
""
],
[
"Jonsson",
"Ing-Marie",
""
],
[
"Nguyen",
"Hoang Long",
""
],
[
"Patel",
"Alkesh",
""
],
[
"Yu",
"Hong",
""
],
[
"Renkens",
"Vincent",
""
]
] |
new_dataset
| 0.999686 |
2306.07349
|
Jonathan Lorraine
|
Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, Chen-Hsuan Lin, Towaki
Takikawa, Nicholas Sharp, Tsung-Yi Lin, Ming-Yu Liu, Sanja Fidler, James
Lucas
|
ATT3D: Amortized Text-to-3D Object Synthesis
|
22 pages, 20 figures
| null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-3D modelling has seen exciting progress by combining generative
text-to-image models with image-to-3D methods like Neural Radiance Fields.
DreamFusion recently achieved high-quality results but requires a lengthy,
per-prompt optimization to create 3D objects. To address this, we amortize
optimization over text prompts by training on many prompts simultaneously with
a unified model, instead of separately. With this, we share computation across
a prompt set, training in less time than per-prompt optimization. Our framework
- Amortized text-to-3D (ATT3D) - enables knowledge-sharing between prompts to
generalize to unseen setups and smooth interpolations between text for novel
assets and simple animations.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 17:59:10 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Lorraine",
"Jonathan",
""
],
[
"Xie",
"Kevin",
""
],
[
"Zeng",
"Xiaohui",
""
],
[
"Lin",
"Chen-Hsuan",
""
],
[
"Takikawa",
"Towaki",
""
],
[
"Sharp",
"Nicholas",
""
],
[
"Lin",
"Tsung-Yi",
""
],
[
"Liu",
"Ming-Yu",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Lucas",
"James",
""
]
] |
new_dataset
| 0.990691 |
2306.07373
|
Iker De La Iglesia
|
Iker de la Iglesia and Aitziber Atutxa and Koldo Gojenola and Ander
Barrena
|
EriBERTa: A Bilingual Pre-Trained Language Model for Clinical Natural
Language Processing
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The utilization of clinical reports for various secondary purposes, including
health research and treatment monitoring, is crucial for enhancing patient
care. Natural Language Processing (NLP) tools have emerged as valuable assets
for extracting and processing relevant information from these reports. However,
the availability of specialized language models for the clinical domain in
Spanish has been limited.
In this paper, we introduce EriBERTa, a bilingual domain-specific language
model pre-trained on extensive medical and clinical corpora. We demonstrate
that EriBERTa outperforms previous Spanish language models in the clinical
domain, showcasing its superior capabilities in understanding medical texts and
extracting meaningful information. Moreover, EriBERTa exhibits promising
transfer learning abilities, allowing for knowledge transfer from one language
to another. This aspect is particularly beneficial given the scarcity of
Spanish clinical data.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 18:56:25 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"de la Iglesia",
"Iker",
""
],
[
"Atutxa",
"Aitziber",
""
],
[
"Gojenola",
"Koldo",
""
],
[
"Barrena",
"Ander",
""
]
] |
new_dataset
| 0.995615 |
2306.07399
|
Stefanie Wuhrer
|
Matthieu Armando, Laurence Boissieux, Edmond Boyer, Jean-Sebastien
Franco, Martin Humenberger, Christophe Legras, Vincent Leroy, Mathieu Marsot,
Julien Pansiot, Sergi Pujades, Rim Rekik, Gregory Rogez, Anilkumar Swamy,
Stefanie Wuhrer
|
4DHumanOutfit: a multi-subject 4D dataset of human motion sequences in
varying outfits exhibiting large displacements
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents 4DHumanOutfit, a new dataset of densely sampled
spatio-temporal 4D human motion data of different actors, outfits and motions.
The dataset is designed to contain different actors wearing different outfits
while performing different motions in each outfit. In this way, the dataset can
be seen as a cube of data containing 4D motion sequences along 3 axes with
identity, outfit and motion. This rich dataset has numerous potential
applications for the processing and creation of digital humans, e.g. augmented
reality, avatar creation and virtual try-on. 4DHumanOutfit is released for
research purposes at https://kinovis.inria.fr/4dhumanoutfit/. In addition to
image data and 4D reconstructions, the dataset includes reference solutions for
each axis. We present independent baselines along each axis that demonstrate
the value of these reference solutions for evaluation tasks.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 19:59:27 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Armando",
"Matthieu",
""
],
[
"Boissieux",
"Laurence",
""
],
[
"Boyer",
"Edmond",
""
],
[
"Franco",
"Jean-Sebastien",
""
],
[
"Humenberger",
"Martin",
""
],
[
"Legras",
"Christophe",
""
],
[
"Leroy",
"Vincent",
""
],
[
"Marsot",
"Mathieu",
""
],
[
"Pansiot",
"Julien",
""
],
[
"Pujades",
"Sergi",
""
],
[
"Rekik",
"Rim",
""
],
[
"Rogez",
"Gregory",
""
],
[
"Swamy",
"Anilkumar",
""
],
[
"Wuhrer",
"Stefanie",
""
]
] |
new_dataset
| 0.999894 |
2306.07426
|
Vukosi Marivate
|
Andani Madodonga, Vukosi Marivate, Matthew Adendorff
|
Izindaba-Tindzaba: Machine learning news categorisation for Long and
Short Text for isiZulu and Siswati
|
Accepted for Third workshop on Resources for African Indigenous
Languages (RAIL)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Local/Native South African languages are classified as low-resource
languages. As such, it is essential to build the resources for these languages
so that they can benefit from advances in the field of natural language
processing. In this work, the focus was to create annotated news datasets for
the isiZulu and Siswati native languages based on news topic classification
tasks and present the findings from these baseline classification models. Due
to the shortage of data for these native South African languages, the datasets
that were created were augmented and oversampled to increase data size and
overcome class imbalance. In total, four different
classification models were used, namely Logistic Regression, Naive Bayes,
XGBoost and LSTM. These models were trained on three different word embeddings
namely Bag-Of-Words, TFIDF and Word2vec. The results of this study showed that
XGBoost, Logistic Regression and LSTM, trained from Word2vec performed better
than the other combinations.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 21:02:12 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Madodonga",
"Andani",
""
],
[
"Marivate",
"Vukosi",
""
],
[
"Adendorff",
"Matthew",
""
]
] |
new_dataset
| 0.99967 |
2306.07429
|
Mar Canet Sola
|
Varvara Guljajeva and Mar Canet Sol\`a and Isaac Joseph Clarke
|
Explaining CLIP through Co-Creative Drawings and Interaction
| null | null | null | null |
cs.AI cs.CV cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper analyses a visual archive of drawings produced by an interactive
robotic art installation where audience members narrated their dreams into a
system powered by the CLIPdraw deep learning (DL) model that interpreted and
transformed their dreams into images. The resulting archive of prompt-image
pairs was examined and clustered based on concept representation accuracy. As
a result of the analysis, the paper proposes four groupings for describing and
explaining CLIP-generated results: clear concept, text-to-text as image,
indeterminacy and confusion, and lost in translation. This article offers a
glimpse into a collection of dreams interpreted, mediated and given form by
Artificial Intelligence (AI), showcasing oftentimes unexpected, visually
compelling or, indeed, the dream-like output of the system, with the emphasis
on processes and results of translations between languages, sign-systems and
various modules of the installation. In the end, the paper argues that the
proposed clusters support a better understanding of the neural model.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 21:15:25 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Guljajeva",
"Varvara",
""
],
[
"Solà",
"Mar Canet",
""
],
[
"Clarke",
"Isaac Joseph",
""
]
] |
new_dataset
| 0.954259 |
2306.07476
|
Haoran Xie
|
Zhengyu Huang, Haoran Xie, Tsukasa Fukusato, Kazunori Miyata
|
AniFaceDrawing: Anime Portrait Exploration during Your Sketching
|
11 pages, 13 figures. SIGGRAPH 2023 Conference Track. Project
webpage: http://www.jaist.ac.jp/~xie/AniFaceDrawing.html
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we focus on how artificial intelligence (AI) can be used to
assist users in the creation of anime portraits, that is, converting rough
sketches into anime portraits during their sketching process. The input is a
sequence of incomplete freehand sketches that are gradually refined stroke by
stroke, while the output is a sequence of high-quality anime portraits that
correspond to the input sketches as guidance. Although recent GANs can generate
high-quality images, it is a challenging problem to maintain the high quality
of generated images from sketches with a low degree of completion due to
ill-posed problems in conditional image generation. Even with the latest
sketch-to-image (S2I) technology, it is still difficult to create high-quality
images from incomplete rough sketches for anime portraits, since the anime
style tends to be more abstract than the realistic style. To address this issue, we
adopt a latent space exploration of StyleGAN with a two-stage training
strategy. We consider the input strokes of a freehand sketch to correspond to
edge information-related attributes in the latent structural code of StyleGAN,
and term the matching between strokes and these attributes stroke-level
disentanglement. In the first stage, we trained an image encoder with the
pre-trained StyleGAN model as a teacher encoder. In the second stage, we
simulated the drawing process of the generated images without any additional
data (labels) and trained the sketch encoder for incomplete progressive
sketches to generate high-quality portrait images with feature alignment to the
disentangled representations in the teacher encoder. We verified the proposed
progressive S2I system with both qualitative and quantitative evaluations and
achieved high-quality anime portraits from incomplete progressive sketches. Our
user study proved its effectiveness in art creation assistance for the anime
style.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 00:43:47 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Huang",
"Zhengyu",
""
],
[
"Xie",
"Haoran",
""
],
[
"Fukusato",
"Tsukasa",
""
],
[
"Miyata",
"Kazunori",
""
]
] |
new_dataset
| 0.998809 |
2306.07495
|
Yuqing Yang
|
Yuqing Yang, Chao Wang, Yue Zhang, Zhiqiang Lin
|
SoK: Decoding the Super App Enigma: The Security Mechanisms, Threats,
and Trade-offs in OS-alike Apps
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The super app paradigm, exemplified by platforms such as WeChat and AliPay,
has revolutionized the mobile app landscape by enabling third-party developers
to deploy add-ons within these apps. These add-ons, known as miniapps, leverage
user data hosted by the super app platforms to provide a wide range of
services, such as shopping and gaming. With the rise of miniapps, super apps
have transformed into "operating systems", offering encapsulated APIs to
miniapp developers as well as in-app miniapp stores for users to explore and
download miniapps. In this paper, we provide the first systematic study to
consolidate the current state of knowledge in this field from the security
perspective: the security measures, threats, and trade-offs of this paradigm.
Specifically, we summarize 13 security mechanisms and 10 security threats in
super app platforms, followed by a root cause analysis revealing that the
security assumptions may still be violated due to issues in underlying systems,
implementation of isolation, and vetting. Additionally, we also systematize
open problems and trade-offs that need to be addressed by future works to help
enhance the security and privacy of this new paradigm.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 02:13:10 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Yang",
"Yuqing",
""
],
[
"Wang",
"Chao",
""
],
[
"Zhang",
"Yue",
""
],
[
"Lin",
"Zhiqiang",
""
]
] |
new_dataset
| 0.98507 |
2306.07503
|
Lin Ma
|
Lin Ma and Conan Liu and Tiefeng Ma and Shuangzhe Liu
|
PaVa: a novel Path-based Valley-seeking clustering algorithm
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clustering methods are being applied to a wider range of scenarios involving
more complex datasets, where the shapes of clusters tend to be arbitrary. In
this paper, we propose a novel Path-based Valley-seeking clustering algorithm
for arbitrarily shaped clusters. This work aims to seek the valleys among
clusters and then individually extract clusters. Three vital techniques are
used in this algorithm. First, path distance (minmax distance) is employed to
transform the irregular boundaries among clusters, that is, density valleys,
into perfect spherical shells. Second, a suitable density measurement,
$k$-distance, is employed to adjust the Minimum Spanning Tree, from which
a robust minmax distance is calculated. Third, we seek the transformed density
valleys by determining their centers and radii. These techniques bring three
corresponding benefits. First, the clusters are wrapped in spherical shells
after the distance transformation, making the extraction process efficient even
with clusters of arbitrary shape. Second, the adjusted Minimum Spanning Tree
enhances the robustness of the minmax distance under different kinds of noise.
Last, the number of clusters does not need to be specified or decided manually
due to the individual extraction process. After
applying the proposed algorithm to several commonly used synthetic datasets,
the results indicate that the Path-based Valley-seeking algorithm is accurate
and efficient. The algorithm is based on the dissimilarity of objects, so it
can be applied to a wide range of fields. Its performance on real-world
datasets illustrates its versatility.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 02:29:34 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Ma",
"Lin",
""
],
[
"Liu",
"Conan",
""
],
[
"Ma",
"Tiefeng",
""
],
[
"Liu",
"Shuangzhe",
""
]
] |
new_dataset
| 0.999479 |
2306.07536
|
Kush Bhatia
|
Kush Bhatia, Avanika Narayan, Christopher De Sa, Christopher R\'e
|
TART: A plug-and-play Transformer module for task-agnostic reasoning
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) exhibit in-context learning abilities which
enable the same model to perform several tasks without any task-specific
training. In contrast, traditional adaptation approaches, such as fine-tuning,
modify the underlying models for each specific task. In-context learning,
however, consistently underperforms task-specific tuning approaches even when
presented with the same examples. While most existing approaches (e.g., prompt
engineering) focus on the LLM's learned representations to patch this
performance gap, our analysis actually reveals that LLM representations contain
sufficient information to make good predictions. As such, we focus on the LLM's
reasoning abilities and demonstrate that this performance gap exists due to
their inability to perform simple probabilistic reasoning tasks. This raises an
intriguing question: Are LLMs actually capable of learning how to reason in a
task-agnostic manner? We answer this in the affirmative and propose TART, which
generically improves an LLM's reasoning abilities using a synthetically trained
Transformer-based reasoning module. TART trains this reasoning module in a
task-agnostic manner using only synthetic logistic regression tasks and
composes it with an arbitrary real-world pre-trained model without any
additional training. With a single inference module, TART improves performance
across different model families (GPT-Neo, Pythia, BLOOM), model sizes (100M -
6B), tasks (14 NLP binary classification tasks), and even across different
modalities (audio and vision). Additionally, on the RAFT Benchmark, TART
improves GPT-Neo (125M)'s performance such that it outperforms BLOOM (176B),
and is within 4% of GPT-3 (175B). Our code and models are available at
https://github.com/HazyResearch/TART .
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 04:37:00 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Bhatia",
"Kush",
""
],
[
"Narayan",
"Avanika",
""
],
[
"De Sa",
"Christopher",
""
],
[
"Ré",
"Christopher",
""
]
] |
new_dataset
| 0.997567 |
2306.07775
|
Maximilian Muschalik
|
Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer,
Eyke H\"ullermeier
|
iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios
|
This preprint has not undergone peer review or any post-submission
improvements or corrections
| null | null | null |
cs.LG cs.AI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Post-hoc explanation techniques such as the well-established partial
dependence plot (PDP), which investigates feature dependencies, are used in
explainable artificial intelligence (XAI) to understand black-box machine
learning models. While many real-world applications require dynamic models that
constantly adapt over time and react to changes in the underlying distribution,
XAI, so far, has primarily considered static learning environments, where
models are trained in a batch mode and remain unchanged. We thus propose a
novel model-agnostic XAI framework called incremental PDP (iPDP) that extends
the PDP to extract time-dependent feature effects in non-stationary learning
environments. We formally analyze iPDP and show that it approximates a
time-dependent variant of the PDP that properly reacts to real and virtual
concept drift. The time-sensitivity of iPDP is controlled by a single smoothing
parameter, which directly corresponds to the variance and the approximation
error of iPDP in a static learning environment. We illustrate the efficacy of
iPDP by showcasing an example application for drift detection and conducting
multiple experiments on real-world and synthetic data sets and streams.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 13:56:56 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Muschalik",
"Maximilian",
""
],
[
"Fumagalli",
"Fabian",
""
],
[
"Jagtani",
"Rohit",
""
],
[
"Hammer",
"Barbara",
""
],
[
"Hüllermeier",
"Eyke",
""
]
] |
new_dataset
| 0.978795 |
2306.07818
|
Kihyuk Hong
|
Kihyuk Hong, Yuhang Li, Ambuj Tewari
|
A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement
Learning
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Offline constrained reinforcement learning (RL) aims to learn a policy that
maximizes the expected cumulative reward subject to constraints on expected
value of cost functions using an existing dataset. In this paper, we propose
Primal-Dual-Critic Algorithm (PDCA), a novel algorithm for offline constrained
RL with general function approximation. PDCA runs a primal-dual algorithm on
the Lagrangian function estimated by critics. The primal player employs a
no-regret policy optimization oracle to maximize the Lagrangian estimate given
any choices of the critics and the dual player. The dual player employs a
no-regret online linear optimization oracle to minimize the Lagrangian estimate
given any choices of the critics and the primal player. We show that PDCA can
successfully find a near saddle point of the Lagrangian, which is nearly
optimal for the constrained RL problem. Unlike previous work that requires
concentrability and strong Bellman completeness assumptions, PDCA only requires
concentrability and value function/marginalized importance weight realizability
assumptions.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 14:50:03 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Hong",
"Kihyuk",
""
],
[
"Li",
"Yuhang",
""
],
[
"Tewari",
"Ambuj",
""
]
] |
new_dataset
| 0.997917 |
2306.07842
|
Guangtao Lyu
|
Guangtao Lyu, Anna Zhu
|
PSSTRNet: Progressive Segmentation-guided Scene Text Removal Network
|
Accepted by ICME2022
|
2022 IEEE International Conference on Multimedia and Expo (ICME)
|
10.1109/ICME52920.2022.9859792
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Scene text removal (STR) is a challenging task due to the complex text fonts,
colors, sizes, and background textures in scene images. However, most previous
methods learn both text location and background inpainting implicitly within a
single network, which weakens the text localization mechanism and yields a lossy
background. To tackle these problems, we propose a simple Progressive
Segmentation-guided Scene Text Removal Network (PSSTRNet) to remove the text in
the image iteratively. It contains two decoder branches, a text segmentation
branch, and a text removal branch, with a shared encoder. The text segmentation
branch generates text mask maps as the guidance for the regional removal
branch. In each iteration, the original image, previous text removal result,
and text mask are input to the network to extract the rest part of the text
segments and cleaner text removal result. To get a more accurate text mask map,
an update module is developed to merge the mask map in the current and previous
stages. The final text removal result is obtained by adaptive fusion of results
from all previous stages. A sufficient number of experiments and ablation
studies conducted on the real and synthetic public datasets demonstrate our
proposed method achieves state-of-the-art performance. The source code of our
work is available at:
\href{https://github.com/GuangtaoLyu/PSSTRNet}{https://github.com/GuangtaoLyu/PSSTRNet.}
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 15:20:37 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Lyu",
"Guangtao",
""
],
[
"Zhu",
"Anna",
""
]
] |
new_dataset
| 0.999724 |
2306.07845
|
R\u{a}zvan-Alexandru Sm\u{a}du
|
Sebastian-Vasile Echim, R\u{a}zvan-Alexandru Sm\u{a}du, Andrei-Marius
Avram, Dumitru-Clementin Cercel, Florin Pop
|
Adversarial Capsule Networks for Romanian Satire Detection and Sentiment
Analysis
|
15 pages, 3 figures, Accepted by NLDB 2023
| null |
10.1007/978-3-031-35320-8_31
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Satire detection and sentiment analysis are intensively explored natural
language processing (NLP) tasks that study the identification of the satirical
tone in texts and the extraction of sentiments in relation to their targets.
In languages with fewer research resources, an alternative is to produce
artificial examples based on character-level adversarial processes to overcome
dataset size limitations. Such samples are proven to act as a regularization
method, thus improving the robustness of models. In this work, we improve the
well-known NLP models (i.e., Convolutional Neural Networks, Long Short-Term
Memory (LSTM), Bidirectional LSTM, Gated Recurrent Units (GRUs), and
Bidirectional GRUs) with adversarial training and capsule networks. The
fine-tuned models are used for satire detection and sentiment analysis tasks in
the Romanian language. The proposed framework outperforms the existing methods
for the two tasks, achieving up to 99.08% accuracy, thus confirming the
improvements added by the capsule layers and the adversarial training in NLP
approaches.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 15:23:44 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Echim",
"Sebastian-Vasile",
""
],
[
"Smădu",
"Răzvan-Alexandru",
""
],
[
"Avram",
"Andrei-Marius",
""
],
[
"Cercel",
"Dumitru-Clementin",
""
],
[
"Pop",
"Florin",
""
]
] |
new_dataset
| 0.998986 |
2306.07934
|
Mehran Kazemi
|
Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva
Imbrasaite, Deepak Ramachandran
|
BoardgameQA: A Dataset for Natural Language Reasoning with Contradictory
Information
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automated reasoning with unstructured natural text is a key requirement for
many potential applications of NLP and for developing robust AI systems.
Recently, Language Models (LMs) have demonstrated complex reasoning capacities
even without any finetuning. However, existing evaluation for automated
reasoning assumes access to a consistent and coherent set of information over
which models reason. When reasoning in the real-world, the available
information is frequently inconsistent or contradictory, and therefore models
need to be equipped with a strategy to resolve such conflicts when they arise.
One widely-applicable way of resolving conflicts is to impose preferences over
information sources (e.g., based on source credibility or information recency)
and adopt the source with higher preference. In this paper, we formulate the
problem of reasoning with contradictory information guided by preferences over
sources as the classical problem of defeasible reasoning, and develop a dataset
called BoardgameQA for measuring the reasoning capacity of LMs in this setting.
BoardgameQA also incorporates reasoning with implicit background knowledge, to
better reflect reasoning problems in downstream applications. We benchmark
various LMs on BoardgameQA and the results reveal a significant gap in the
reasoning capacity of state-of-the-art LMs on this problem, showing that
reasoning with conflicting information does not surface out-of-the-box in LMs.
While performance can be improved with finetuning, it nevertheless remains
poor.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 17:39:20 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Kazemi",
"Mehran",
""
],
[
"Yuan",
"Quan",
""
],
[
"Bhatia",
"Deepti",
""
],
[
"Kim",
"Najoung",
""
],
[
"Xu",
"Xin",
""
],
[
"Imbrasaite",
"Vaiva",
""
],
[
"Ramachandran",
"Deepak",
""
]
] |
new_dataset
| 0.999785 |
2306.07971
|
Abdelrahman Shaker
|
Omkar Thawkar, Abdelrahman Shaker, Sahal Shaji Mullappilly, Hisham
Cholakkal, Rao Muhammad Anwer, Salman Khan, Jorma Laaksonen, Fahad Shahbaz
Khan
|
XrayGPT: Chest Radiographs Summarization using Medical Vision-Language
Models
|
Technical report
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The latest breakthroughs in large vision-language models, such as Bard and
GPT-4, have showcased extraordinary abilities in performing a wide range of
tasks. Such models are trained on massive datasets comprising billions of
public image-text pairs with diverse tasks. However, their performance on
task-specific domains, such as radiology, is still under-investigated and
potentially limited due to a lack of sophistication in understanding biomedical
images. On the other hand, conversational medical models have exhibited
remarkable success but have mainly focused on text-based analysis. In this
paper, we introduce XrayGPT, a novel conversational medical vision-language
model that can analyze and answer open-ended questions about chest radiographs.
Specifically, we align a medical visual encoder (MedClip) with a fine-tuned
large language model (Vicuna), using a simple linear transformation. This
alignment enables our model to possess exceptional visual conversation
abilities, grounded in a deep understanding of radiographs and medical domain
knowledge. To enhance the performance of LLMs in the medical context, we
generate ~217k interactive and high-quality summaries from free-text radiology
reports. These summaries serve to enhance the performance of LLMs through the
fine-tuning process. Our approach opens up new avenues of research for
advancing the automated analysis of chest radiographs. Our open-source demos,
models, and instruction sets are available at:
https://github.com/mbzuai-oryx/XrayGPT.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 17:59:59 GMT"
}
] | 2023-06-14T00:00:00 |
[
[
"Thawkar",
"Omkar",
""
],
[
"Shaker",
"Abdelrahman",
""
],
[
"Mullappilly",
"Sahal Shaji",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Khan",
"Salman",
""
],
[
"Laaksonen",
"Jorma",
""
],
[
"Khan",
"Fahad Shahbaz",
""
]
] |
new_dataset
| 0.974651 |
2012.13188
|
Yalda Foroutan
|
Yalda Foroutan, Ahmad Kalhor, Saeid Mohammadi Nejati, Samad Sheikhaei
|
Control of Computer Pointer Using Hand Gesture Recognition in Motion
Pictures
|
9 pages, 6 figures, 2 tables
| null | null | null |
cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a user interface designed to enable computer cursor
control through hand detection and gesture classification. A comprehensive hand
dataset comprising 6720 image samples was collected, encompassing four distinct
classes: fist, palm, pointing to the left, and pointing to the right. The
images were captured from 15 individuals in various settings, including simple
backgrounds with different perspectives and lighting conditions. A
convolutional neural network (CNN) was trained on this dataset to accurately
predict labels for each captured image and measure their similarity. The system
incorporates defined commands for cursor movement, left-click, and right-click
actions. Experimental results indicate that the proposed algorithm achieves a
remarkable accuracy of 91.88% and demonstrates its potential applicability
across diverse backgrounds.
|
[
{
"version": "v1",
"created": "Thu, 24 Dec 2020 10:24:51 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 21:49:33 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Foroutan",
"Yalda",
""
],
[
"Kalhor",
"Ahmad",
""
],
[
"Nejati",
"Saeid Mohammadi",
""
],
[
"Sheikhaei",
"Samad",
""
]
] |
new_dataset
| 0.999728 |
2103.09900
|
Nodari Vakhania
|
Nodari Vakhania
|
Compact enumeration for scheduling one machine
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop an algorithmic framework that finds an optimal solution by
enumerating some feasible solutions, whose number is bounded by a specially
derived Variable Parameter (VP) with a favorable asymptotic behavior. We build
a VP algorithm for a strongly $\mathsf{NP}$-hard single-machine scheduling
problem. The target VP $\nu$ is the number of jobs with some special
properties, the so-called emerging jobs. At phase 1 a partial solution
including $n-\nu$ non-emerging jobs is constructed in a low degree polynomial
time. At phase 2 less than $\nu!$ permutations of the $\nu$ emerging jobs are
considered, each of them being incorporated into the partial schedule of phase
1. Based on an earlier conducted experimental study, in practice, $\nu/n$
varied from $1/4$ for small problem instances to $1/10$ for the largest tested
instances. We illustrate how the proposed method can be used to build a
polynomial-time approximation scheme (PTAS) with the worst-case time complexity
$O(\kappa!\kappa k n \log n)$, where $\kappa$, $\kappa<\nu< n$, is a VP and the
corresponding approximation factor is $1+1/k$, with $k\kappa<n$. This is better
than the time complexity of the earlier known approximation schemes. Using an
intuitive probabilistic model, we give more realistic bounds on the running
time of the VP algorithm and the PTAS, which are far below the worst-case
bounds $\nu!$ and $\kappa!$.
|
[
{
"version": "v1",
"created": "Wed, 17 Mar 2021 20:50:10 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jun 2023 22:58:50 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Vakhania",
"Nodari",
""
]
] |
new_dataset
| 0.997301 |
2103.13725
|
Haipeng Li
|
Haipeng Li and Kunming Luo and Shuaicheng Liu
|
GyroFlow: Gyroscope-Guided Unsupervised Optical Flow Learning
| null |
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
|
10.1109/ICCV48922.2021.01263
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing optical flow methods are erroneous in challenging scenes, such as
fog, rain, and night, because the basic optical flow assumptions such as
brightness and gradient constancy are broken. To address this problem, we
present an unsupervised learning approach that fuses gyroscope into optical
flow learning. Specifically, we first convert gyroscope readings into motion
fields named gyro field. Second, we design a self-guided fusion module to fuse
the background motion extracted from the gyro field with the optical flow and
guide the network to focus on motion details. To the best of our knowledge,
this is the first deep learning-based framework that fuses gyroscope data and
image content for optical flow learning. To validate our method, we propose a
new dataset that covers regular and challenging scenes. Experiments show that
our method outperforms the state-of-art methods in both regular and challenging
scenes. Code and dataset are available at
https://github.com/megvii-research/GyroFlow.
|
[
{
"version": "v1",
"created": "Thu, 25 Mar 2021 10:14:57 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Aug 2021 07:46:31 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Li",
"Haipeng",
""
],
[
"Luo",
"Kunming",
""
],
[
"Liu",
"Shuaicheng",
""
]
] |
new_dataset
| 0.996552 |
2112.04720
|
Juyeop Kim
|
Juyeop Kim, Jun-Ho Choi, Soobeom Jang, Jong-Seok Lee
|
Amicable Aid: Perturbing Images to Improve Classification Performance
|
5 pages
| null |
10.1109/ICASSP49357.2023.10095024
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While adversarial perturbation of images to attack deep image classification
models poses serious security concerns in practice, this paper suggests a novel
paradigm where the concept of image perturbation can benefit classification
performance, which we call amicable aid. We show that by taking the opposite
search direction of perturbation, an image can be modified to yield higher
classification confidence and even a misclassified image can be made correctly
classified. This can also be achieved with a large amount of perturbation by
which the image is made unrecognizable by human eyes. The mechanism of the
amicable aid is explained in the viewpoint of the underlying natural image
manifold. Furthermore, we investigate the universal amicable aid, i.e., a fixed
perturbation can be applied to multiple images to improve their classification
results. While it is challenging to find such perturbations, we show that
making the decision boundary as perpendicular to the image manifold as possible
via training with modified data is effective to obtain a model for which
universal amicable perturbations are more easily found.
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 06:16:08 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Feb 2023 16:21:41 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Mar 2023 17:07:21 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Kim",
"Juyeop",
""
],
[
"Choi",
"Jun-Ho",
""
],
[
"Jang",
"Soobeom",
""
],
[
"Lee",
"Jong-Seok",
""
]
] |
new_dataset
| 0.96802 |
2204.04280
|
Jan Bok
|
Jan Bok, Ji\v{r}\'i Fiala, Nikola Jedli\v{c}kov\'a, Jan Kratochv\'il,
Pawe{\l} Rz\k{a}\.zewski
|
List covering of regular multigraphs with semi-edges
|
full version, submited to a journal
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In line with the recent development in topological graph theory, we are
considering undirected graphs that are allowed to contain {\em multiple edges},
{\em loops}, and {\em semi-edges}. A graph is called {\em simple} if it
contains no semi-edges, no loops, and no multiple edges. A graph covering
projection, also known as a locally bijective homomorphism, is a mapping
between vertices and edges of two graphs which preserves incidences and which
is a local bijection on the edge-neighborhood of every vertex. This notion
stems from topological graph theory, but has also found applications in
combinatorics and theoretical computer science.
It has been known that for every fixed simple regular graph $H$ of valency
greater than 2, deciding if an input graph covers $H$ is NP-complete. Graphs
with semi-edges have been considered in this context only recently and only
partial results on the complexity of covering such graphs are known so far. In
this paper we consider the list version of the problem, called
\textsc{List-$H$-Cover}, where the vertices and edges of the input graph come
with lists of admissible targets. Our main result reads that the
\textsc{List-$H$-Cover} problem is NP-complete for every regular graph $H$ of
valency greater than 2 which contains at least one semi-simple vertex (i.e., a
vertex which is incident with no loops, with no multiple edges and with at most
one semi-edge). Using this result, we show the NP-complete/polynomial-time
dichotomy for the computational complexity of \textsc{List-$H$-Cover} for cubic graphs.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 20:23:21 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Jun 2023 13:19:08 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Bok",
"Jan",
""
],
[
"Fiala",
"Jiří",
""
],
[
"Jedličková",
"Nikola",
""
],
[
"Kratochvíl",
"Jan",
""
],
[
"Rzążewski",
"Paweł",
""
]
] |
new_dataset
| 0.985282 |
2206.09759
|
Lei Deng
|
Ming Li, Lei Deng, Yunghsiang S. Han
|
An Input-Queueing TSN Switching Architecture to Achieve Zero Packet Loss
for Timely Traffic
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Zero packet loss with bounded latency is necessary for many applications,
such as industrial control networks, automotive Ethernet, and aircraft
communication systems. Traditional networks cannot meet the such strict
requirement, and thus Time-Sensitive Networking (TSN) emerges. TSN is a set of
standards proposed by IEEE 802 for providing deterministic connectivity in
terms of low packet loss, low packet delay variation, and guaranteed packet
transport. However, to our knowledge, few existing TSN solutions can
deterministically achieve zero packet loss with bounded latency. This paper
fills this gap by proposing a novel input-queueing TSN switching
architecture, under which we design a TDMA-like scheduling policy (called
M-TDMA) along with a sufficient condition and an EDF-like scheduling policy
(called M-EDF) along with a different sufficient condition to achieve zero
packet loss with bounded latency.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 13:08:30 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 15:36:27 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Oct 2022 15:14:21 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Nov 2022 13:14:34 GMT"
},
{
"version": "v5",
"created": "Thu, 18 May 2023 14:12:36 GMT"
},
{
"version": "v6",
"created": "Sun, 11 Jun 2023 07:56:41 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Li",
"Ming",
""
],
[
"Deng",
"Lei",
""
],
[
"Han",
"Yunghsiang S.",
""
]
] |
new_dataset
| 0.98511 |
2207.02621
|
Jiahui Zhang
|
Jiahui Zhang and Fangneng Zhan and Rongliang Wu and Yingchen Yu and
Wenqing Zhang and Bai Song and Xiaoqin Zhang and Shijian Lu
|
VMRF: View Matching Neural Radiance Fields
|
This paper has been accepted to ACM MM 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Radiance Fields (NeRF) have demonstrated very impressive performance
in novel view synthesis via implicitly modelling 3D representations from
multi-view 2D images. However, most existing studies train NeRF models with
either reasonable camera pose initialization or manually-crafted camera pose
distributions which are often unavailable or hard to acquire in various
real-world data. We design VMRF, an innovative view matching NeRF that enables
effective NeRF training without requiring prior knowledge in camera poses or
camera pose distributions. VMRF introduces a view matching scheme, which
exploits unbalanced optimal transport to produce a feature transport plan for
mapping a rendered image with randomly initialized camera pose to the
corresponding real image. With the feature transport plan as the guidance, a
novel pose calibration technique is designed which rectifies the initially
randomized camera poses by predicting relative pose transformations between the
pair of rendered and real images. Extensive experiments over a number of
synthetic and real datasets show that the proposed VMRF outperforms the
state-of-the-art qualitatively and quantitatively by large margins.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 12:26:40 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jun 2023 18:40:36 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Zhang",
"Jiahui",
""
],
[
"Zhan",
"Fangneng",
""
],
[
"Wu",
"Rongliang",
""
],
[
"Yu",
"Yingchen",
""
],
[
"Zhang",
"Wenqing",
""
],
[
"Song",
"Bai",
""
],
[
"Zhang",
"Xiaoqin",
""
],
[
"Lu",
"Shijian",
""
]
] |
new_dataset
| 0.99925 |
2207.08891
|
Anrin Chakraborti
|
Anrin Chakraborti, Darius Suciu, Radu Sion
|
Wink: Deniable Secure Messaging
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
End-to-end encrypted (E2EE) messaging is an essential first step in providing
message confidentiality. Unfortunately, all security guarantees of end-to-end
encryption are lost when keys or plaintext are disclosed, either due to device
compromise or (sometimes lawful) coercion by powerful adversaries. This work
introduces Wink, the first plausibly-deniable messaging system protecting
message confidentiality from partial device compromise and compelled key
disclosure. Wink can surreptitiously inject hidden messages in standard random
coins (e.g., salts, IVs) used by existing E2EE protocols. It does so as part of
legitimate secure cryptographic functionality deployed inside the
widely-available trusted execution environment (TEE) TrustZone. This results in
hidden communication using virtually unchanged existing E2EE messaging apps, as
well as strong plausible deniability. Wink has been demonstrated with multiple
existing E2EE applications (including Telegram and Signal) with minimal
(external) instrumentation, negligible overheads, and crucially, without
changing on-wire message formats.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 19:01:28 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Jun 2023 04:57:35 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Chakraborti",
"Anrin",
""
],
[
"Suciu",
"Darius",
""
],
[
"Sion",
"Radu",
""
]
] |
new_dataset
| 0.98279 |
2210.12453
|
Yuchen Shi
|
Yuchen Shi, Congying Han, Tiande Guo
|
NeuroPrim: An Attention-based Model for Solving NP-hard Spanning Tree
Problems
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spanning tree problems with specialized constraints can be difficult to solve
in real-world scenarios, often requiring intricate algorithmic design and
exponential time. Recently, there has been growing interest in end-to-end deep
neural networks for solving routing problems. However, such methods typically
produce sequences of vertices, which makes it difficult to apply them to
general combinatorial optimization problems where the solution set consists of
edges, as in various spanning tree problems. In this paper, we propose
NeuroPrim, a novel framework for solving various spanning tree problems by
defining a Markov Decision Process (MDP) for general combinatorial optimization
problems on graphs. Our approach reduces the action and state space using
Prim's algorithm and trains the resulting model using REINFORCE. We apply our
framework to three difficult problems on Euclidean space: the
Degree-constrained Minimum Spanning Tree (DCMST) problem, the Minimum Routing
Cost Spanning Tree (MRCST) problem, and the Steiner Tree Problem in graphs
(STP). Experimental results on literature instances demonstrate that our model
outperforms strong heuristics and achieves small optimality gaps on instances
with up to 250 vertices. Additionally, we find that our model has strong generalization
ability, with no significant degradation observed on problem instances as large
as 1000 vertices. Our results suggest that our framework can be effective for solving a
wide range of combinatorial optimization problems beyond spanning tree
problems.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 13:49:29 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Jun 2023 16:21:03 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Shi",
"Yuchen",
""
],
[
"Han",
"Congying",
""
],
[
"Guo",
"Tiande",
""
]
] |
new_dataset
| 0.999098 |
2210.14549
|
Jon-Lark Kim
|
Jon-Lark Kim
|
Binary optimal linear codes with various hull dimensions and
entanglement-assisted QECC
|
27 pages
|
Comp. Appl. Math. 42, 114 (2023)
|
10.1007/s40314-023-02268-z
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The hull of a linear code $C$ is the intersection of $C$ with its dual. To
the best of our knowledge, there are very few constructions of binary linear
codes with the hull dimension $\ge 2$ except for self-orthogonal codes. We
propose a building-up construction to obtain plenty of binary $[n+2, k+1]$
codes with hull dimension $\ell, \ell +1$, or $\ell +2$ from a given binary
$[n,k]$ code with hull dimension $\ell$. In particular, with respect to hull
dimensions 1 and 2, we construct all binary optimal $[n, k]$ codes of lengths
up to 13. With respect to hull dimensions 3, 4, and 5, we construct all binary
optimal $[n,k]$ codes of lengths up to 12 and the best possible minimum
distances of $[13,k]$ codes for $3 \le k \le 10$. As an application, we apply
our binary optimal codes with a given hull dimension to construct several
entanglement-assisted quantum error-correcting codes(EAQECC) with the best
known parameters.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 08:11:05 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Kim",
"Jon-Lark",
""
]
] |
new_dataset
| 0.996638 |
2212.10773
|
Ying Shen
|
Zhiyang Xu, Ying Shen, Lifu Huang
|
MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction
Tuning
|
ACL 2023, dataset url: https://github.com/VT-NLP/MultiInstruct
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instruction tuning, a new learning paradigm that fine-tunes pre-trained
language models on tasks specified through instructions, has shown promising
zero-shot performance on various natural language processing tasks. However, it
has yet to be explored for vision and multimodal tasks. In this work, we
introduce MULTIINSTRUCT, the first multimodal instruction tuning benchmark
dataset that consists of 62 diverse multimodal tasks in a unified seq-to-seq
format covering 10 broad categories. The tasks are derived from 21 existing
open-source datasets and each task is equipped with 5 expert-written
instructions. We take OFA as the base pre-trained model for multimodal
instruction tuning, and to further improve its zero-shot performance, we
explore multiple transfer learning strategies to leverage the large-scale
NATURAL INSTRUCTIONS dataset. Experimental results demonstrate strong zero-shot
performance on various unseen multimodal tasks and the benefit of transfer
learning from a text-only instruction dataset. We also design a new evaluation
metric, Sensitivity, to evaluate how sensitive the model is to the variety of
instructions. Our results indicate that fine-tuning the model on a diverse set
of tasks and instructions leads to a reduced sensitivity to variations in
instructions for each task.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 05:17:06 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 23:05:26 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Jun 2023 18:33:21 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Xu",
"Zhiyang",
""
],
[
"Shen",
"Ying",
""
],
[
"Huang",
"Lifu",
""
]
] |
new_dataset
| 0.999762 |
2212.14597
|
Marcin Plata
|
Piotr Kawa, Marcin Plata, Piotr Syga
|
Defense Against Adversarial Attacks on Audio DeepFake Detection
|
Accepted to INTERSPEECH 2023
| null | null | null |
cs.SD cs.CR cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio DeepFakes (DF) are artificially generated utterances created using deep
learning, with the primary aim of fooling the listeners in a highly convincing
manner. Their quality is sufficient to pose a severe threat in terms of
security and privacy, including the reliability of news or defamation. Multiple
neural network-based methods to detect generated speech have been proposed to
prevent the threats. In this work, we cover the topic of adversarial attacks,
which decrease the performance of detectors by adding superficial (difficult to
spot by a human) changes to input data. Our contribution consists of evaluating
the robustness of 3 detection architectures against adversarial attacks in two
scenarios (white-box and using transferability) and later enhancing it by using
adversarial training performed with our novel adaptive training. Moreover, one of
the investigated architectures is RawNet3, which, to the best of our knowledge,
we adapted for the first time to DeepFake detection.
|
[
{
"version": "v1",
"created": "Fri, 30 Dec 2022 08:41:06 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Jun 2023 18:48:55 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Kawa",
"Piotr",
""
],
[
"Plata",
"Marcin",
""
],
[
"Syga",
"Piotr",
""
]
] |
new_dataset
| 0.988418 |
2301.05119
|
Francesco Pierri
|
Francesco Pierri, Geng Liu, Stefano Ceri
|
ITA-ELECTION-2022: A multi-platform dataset of social media
conversations around the 2022 Italian general election
|
4 pages, 3 figures, 2 tables
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Online social media play a major role in shaping public discourse and
opinion, especially during political events. We present the first public
multi-platform dataset of Italian-language political conversations, focused on
the 2022 Italian general election taking place on September 25th. Leveraging
public APIs and a keyword-based search, we collected millions of posts
published by users, pages and groups on Facebook, Instagram and Twitter, along
with metadata of TikTok and YouTube videos shared on these platforms, over a
period of four months. We augmented the dataset with a collection of political
ads sponsored on Meta platforms, and a list of social media handles associated
with political representatives. Our data resource will allow researchers and
academics to further our understanding of the role of social media in the
democratic process.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 16:19:08 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 10:17:54 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Pierri",
"Francesco",
""
],
[
"Liu",
"Geng",
""
],
[
"Ceri",
"Stefano",
""
]
] |
new_dataset
| 0.999833 |
2303.05203
|
Xiuyu Yang
|
Xiuyu Yang, Zhuangyan Zhang, Haikuo Du, Sui Yang, Fengping Sun, Yanbo
Liu, Ling Pei, Wenchao Xu, Weiqi Sun, Zhengyu Li
|
RMMDet: Road-Side Multitype and Multigroup Sensor Detection System for
Autonomous Driving
| null | null | null | null |
cs.RO cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving has now made great strides thanks to artificial
intelligence, and numerous advanced methods have been proposed for vehicle-end
target detection, including single-sensor and multi-sensor detection methods.
However, the complexity and diversity of real traffic situations necessitate an
examination of how to use these methods in real road conditions. In this paper,
we propose RMMDet, a road-side multitype and multigroup sensor detection system
for autonomous driving. We use a ROS-based virtual environment to simulate
real-world conditions, in particular the physical and functional construction
of the sensors. Then we implement multi-type sensor detection and multi-group
sensor fusion in this environment, including camera-radar and camera-lidar
detection based on result-level fusion. We produce local datasets and a real
sand-table field, and conduct various experiments. Furthermore, we link a
multi-agent collaborative scheduling system to the fusion detection system.
Hence, the whole roadside detection system is formed by roadside perception,
fusion detection, and scheduling planning. Through the experiments, it can be
seen that the RMMDet system we built plays an important role in vehicle-road
collaboration and its optimization. The code and supplementary materials can be
found at: https://github.com/OrangeSodahub/RMMDet
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 12:13:39 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 03:19:20 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Jun 2023 01:07:03 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Yang",
"Xiuyu",
""
],
[
"Zhang",
"Zhuangyan",
""
],
[
"Du",
"Haikuo",
""
],
[
"Yang",
"Sui",
""
],
[
"Sun",
"Fengping",
""
],
[
"Liu",
"Yanbo",
""
],
[
"Pei",
"Ling",
""
],
[
"Xu",
"Wenchao",
""
],
[
"Sun",
"Weiqi",
""
],
[
"Li",
"Zhengyu",
""
]
] |
new_dataset
| 0.999481 |
2303.11366
|
Noah Shinn
|
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik
Narasimhan, Shunyu Yao
|
Reflexion: Language Agents with Verbal Reinforcement Learning
|
v3 contains additional citations
| null | null | null |
cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have been increasingly used to interact with
external environments (e.g., games, compilers, APIs) as goal-driven agents.
However, it remains challenging for these language agents to quickly and
efficiently learn from trial-and-error as traditional reinforcement learning
methods require extensive training samples and expensive model fine-tuning. We
propose Reflexion, a novel framework to reinforce language agents not by
updating weights, but instead through linguistic feedback. Concretely,
Reflexion agents verbally reflect on task feedback signals, then maintain their
own reflective text in an episodic memory buffer to induce better
decision-making in subsequent trials. Reflexion is flexible enough to
incorporate various types (scalar values or free-form language) and sources
(external or internally simulated) of feedback signals, and obtains significant
improvements over a baseline agent across diverse tasks (sequential
decision-making, coding, language reasoning). For example, Reflexion achieves a
91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous
state-of-the-art GPT-4 that achieves 80%. We also conduct ablation and analysis
studies using different feedback signals, feedback incorporation methods, and
agent types, and provide insights into how they affect performance.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 18:08:50 GMT"
},
{
"version": "v2",
"created": "Sun, 21 May 2023 06:20:36 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Jun 2023 04:32:30 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Shinn",
"Noah",
""
],
[
"Cassano",
"Federico",
""
],
[
"Labash",
"Beck",
""
],
[
"Gopinath",
"Ashwin",
""
],
[
"Narasimhan",
"Karthik",
""
],
[
"Yao",
"Shunyu",
""
]
] |
new_dataset
| 0.997161 |
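The Reflexion record above describes a trial loop in which an agent acts, receives task feedback, verbally reflects on that feedback, and stores the reflection in an episodic memory buffer that conditions the next attempt. The Python sketch below is a minimal, hypothetical illustration of that loop; the act, evaluate, and reflect callables stand in for LLM-backed components and are assumptions, not the paper's actual API.

# Minimal sketch of a Reflexion-style trial loop (illustrative only).
# `act`, `evaluate`, and `reflect` are hypothetical callables that would be
# backed by an LLM in a real system; they are not taken from the paper's code.
def reflexion_loop(task, act, evaluate, reflect, max_trials=5):
    memory = []  # episodic buffer of verbal self-reflections
    for _ in range(max_trials):
        # Condition the attempt on the task and all prior reflections.
        attempt = act(task, memory)
        success, feedback = evaluate(task, attempt)
        if success:
            return attempt, memory
        # Turn scalar or free-form feedback into a verbal reflection and
        # store it for the next trial.
        memory.append(reflect(task, attempt, feedback))
    return None, memory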
2303.13310
|
Jannis Vamvas
|
Jannis Vamvas and Johannes Gra\"en and Rico Sennrich
|
SwissBERT: The Multilingual Language Model for Switzerland
|
SwissText 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present SwissBERT, a masked language model created specifically for
processing Switzerland-related text. SwissBERT is a pre-trained model that we
adapted to news articles written in the national languages of Switzerland --
German, French, Italian, and Romansh. We evaluate SwissBERT on natural language
understanding tasks related to Switzerland and find that it tends to outperform
previous models on these tasks, especially when processing contemporary news
and/or Romansh Grischun. Since SwissBERT uses language adapters, it may be
extended to Swiss German dialects in future work. The model and our open-source
code are publicly released at https://github.com/ZurichNLP/swissbert.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 14:44:47 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 08:49:53 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Vamvas",
"Jannis",
""
],
[
"Graën",
"Johannes",
""
],
[
"Sennrich",
"Rico",
""
]
] |
new_dataset
| 0.999645 |
2304.06858
|
Mohammad Reza Zarei
|
Mohammad Reza Zarei, Michael Christensen, Sarah Everts and Majid
Komeili
|
Vax-Culture: A Dataset for Studying Vaccine Discourse on Twitter
| null | null | null | null |
cs.SI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vaccine hesitancy continues to be a main challenge for public health
officials during the COVID-19 pandemic. As this hesitancy undermines vaccine
campaigns, many researchers have sought to identify its root causes, finding
that the increasing volume of anti-vaccine misinformation on social media
platforms is a key element of this problem. We explored Twitter as a source of
misleading content with the goal of extracting overlapping cultural and
political beliefs that motivate the spread of vaccine misinformation. To do
this, we have collected a data set of vaccine-related Tweets and annotated them
with the help of a team of annotators with a background in communications and
journalism. Ultimately we hope this can lead to effective and targeted public
health communication strategies for reaching individuals with anti-vaccine
beliefs. Moreover, this information helps with developing Machine Learning
models to automatically detect vaccine misinformation posts and combat their
negative impacts. In this paper, we present Vax-Culture, a novel Twitter
COVID-19 dataset consisting of 6373 vaccine-related tweets accompanied by an
extensive set of human-provided annotations including vaccine-hesitancy stance,
indication of any misinformation in tweets, the entities criticized and
supported in each tweet and the communicated message of each tweet. Moreover,
we define five baseline tasks including four classification and one sequence
generation tasks, and report the results of a set of recent transformer-based
models for them. The dataset and code are publicly available at
https://github.com/mrzarei5/Vax-Culture.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 23:04:30 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2023 16:51:08 GMT"
},
{
"version": "v3",
"created": "Sun, 11 Jun 2023 22:11:10 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Zarei",
"Mohammad Reza",
""
],
[
"Christensen",
"Michael",
""
],
[
"Everts",
"Sarah",
""
],
[
"Komeili",
"Majid",
""
]
] |
new_dataset
| 0.999774 |
2304.06939
|
Wanrong Zhu
|
Wanrong Zhu and Jack Hessel and Anas Awadalla and Samir Yitzhak Gadre
and Jesse Dodge and Alex Fang and Youngjae Yu and Ludwig Schmidt and William
Yang Wang and Yejin Choi
|
Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with
Text
|
Project homepage: https://github.com/allenai/mmc4
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In-context vision and language models like Flamingo support arbitrarily
interleaved sequences of images and text as input. This format not only enables
few-shot learning via interleaving independent supervised (image, text)
examples, but also more complex prompts involving interaction between images,
e.g., "What do image A and image B have in common?" To support this interface,
pretraining occurs over web corpora that similarly contain interleaved
images+text. To date, however, large-scale data of this form have not been
publicly available.
We release Multimodal C4, an augmentation of the popular text-only C4 corpus
with images interleaved. We use a linear assignment algorithm to place images
into longer bodies of text using CLIP features, a process that we show
outperforms alternatives. Multimodal C4 spans everyday topics like cooking,
travel, technology, etc. A manual inspection of a random sample of documents
shows that a vast majority (88%) of images are topically relevant, and that
linear assignment frequently selects individual sentences specifically
well-aligned with each image (80%). After filtering NSFW images, ads, etc., the
resulting corpus consists of 101.2M documents with 571M images interleaved in
43B English tokens.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 06:17:46 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 21:49:58 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Zhu",
"Wanrong",
""
],
[
"Hessel",
"Jack",
""
],
[
"Awadalla",
"Anas",
""
],
[
"Gadre",
"Samir Yitzhak",
""
],
[
"Dodge",
"Jesse",
""
],
[
"Fang",
"Alex",
""
],
[
"Yu",
"Youngjae",
""
],
[
"Schmidt",
"Ludwig",
""
],
[
"Wang",
"William Yang",
""
],
[
"Choi",
"Yejin",
""
]
] |
new_dataset
| 0.988746 |
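The Multimodal C4 record above states that images are placed into documents with a linear assignment over CLIP features. The sketch below illustrates only that assignment step using SciPy's linear-assignment solver; the similarity matrix, its construction, and all filtering steps of the actual mmc4 pipeline are assumptions rather than details reproduced from the paper.

# Illustrative image-to-sentence assignment via linear assignment over a
# CLIP-style similarity matrix. `similarity[i, j]` is assumed to be the cosine
# similarity between image i and sentence j from a CLIP-like model.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_images_to_sentences(similarity: np.ndarray):
    # linear_sum_assignment minimizes total cost, so negate to maximize similarity.
    img_idx, sent_idx = linear_sum_assignment(-similarity)
    return list(zip(img_idx.tolist(), sent_idx.tolist()))

# Toy example: 2 images, 4 candidate sentences.
sim = np.array([[0.1, 0.7, 0.2, 0.3],
                [0.6, 0.2, 0.1, 0.4]])
print(assign_images_to_sentences(sim))  # [(0, 1), (1, 0)]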
2304.09349
|
Jinjie Mai
|
Jinjie Mai, Jun Chen, Bing Li, Guocheng Qian, Mohamed Elhoseiny,
Bernard Ghanem
|
LLM as A Robotic Brain: Unifying Egocentric Memory and Control
|
This early project is now integrated to: Mindstorms in Natural
Language-Based Societies of Mind, arXiv:2305.17066
| null | null | null |
cs.AI cs.CL cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Embodied AI focuses on the study and development of intelligent systems that
possess a physical or virtual embodiment (i.e. robots) and are able to
dynamically interact with their environment. Memory and control are the two
essential parts of an embodied system and usually require separate frameworks
to model each of them. In this paper, we propose a novel and generalizable
framework called LLM-Brain: using Large-scale Language Model as a robotic brain
to unify egocentric memory and control. The LLM-Brain framework integrates
multiple multimodal language models for robotic tasks, utilizing a zero-shot
learning approach. All components within LLM-Brain communicate using natural
language in closed-loop multi-round dialogues that encompass perception,
planning, control, and memory. The core of the system is an embodied LLM to
maintain egocentric memory and control the robot. We demonstrate LLM-Brain by
examining two downstream tasks: active exploration and embodied question
answering. The active exploration tasks require the robot to extensively
explore an unknown environment within a limited number of actions. Meanwhile,
the embodied question answering tasks necessitate that the robot answers
questions based on observations acquired during prior explorations.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 00:08:48 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2023 21:56:41 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Apr 2023 19:36:30 GMT"
},
{
"version": "v4",
"created": "Mon, 12 Jun 2023 14:07:42 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Mai",
"Jinjie",
""
],
[
"Chen",
"Jun",
""
],
[
"Li",
"Bing",
""
],
[
"Qian",
"Guocheng",
""
],
[
"Elhoseiny",
"Mohamed",
""
],
[
"Ghanem",
"Bernard",
""
]
] |
new_dataset
| 0.992448 |
2304.10440
|
Huijie Wang
|
Huijie Wang, Tianyu Li, Yang Li, Li Chen, Chonghao Sima, Zhenbo Liu,
Yuting Wang, Shengyin Jiang, Peijin Jia, Bangjun Wang, Feng Wen, Hang Xu,
Ping Luo, Junchi Yan, Wei Zhang, Hongyang Li
|
OpenLane-V2: A Topology Reasoning Benchmark for Scene Understanding in
Autonomous Driving
|
OpenLane-V2 dataset: https://github.com/OpenDriveLab/OpenLane-V2
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Accurately depicting complex traffic scenes is vital for autonomous vehicles to
make accurate judgments. However, existing benchmarks
tend to oversimplify the scene by solely focusing on lane perception tasks.
Observing that human drivers rely on both lanes and traffic signals to operate
their vehicles safely, we present OpenLane-V2, the first dataset on topology
reasoning for traffic scene structure. The objective of the presented dataset
is to advance research in understanding the structure of road scenes by
examining the relationship between perceived entities, such as traffic elements
and lanes. Leveraging existing datasets, OpenLane-V2 consists of 2,000
annotated road scenes that describe traffic elements and their correlation to
the lanes. It comprises three primary sub-tasks, including the 3D lane
detection inherited from OpenLane, accompanied by corresponding metrics to
evaluate the model's performance. We evaluate various state-of-the-art methods,
and present their quantitative and qualitative results on OpenLane-V2 to
indicate future avenues for investigating topology reasoning in traffic scenes.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 16:31:22 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Jun 2023 17:22:09 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Wang",
"Huijie",
""
],
[
"Li",
"Tianyu",
""
],
[
"Li",
"Yang",
""
],
[
"Chen",
"Li",
""
],
[
"Sima",
"Chonghao",
""
],
[
"Liu",
"Zhenbo",
""
],
[
"Wang",
"Yuting",
""
],
[
"Jiang",
"Shengyin",
""
],
[
"Jia",
"Peijin",
""
],
[
"Wang",
"Bangjun",
""
],
[
"Wen",
"Feng",
""
],
[
"Xu",
"Hang",
""
],
[
"Luo",
"Ping",
""
],
[
"Yan",
"Junchi",
""
],
[
"Zhang",
"Wei",
""
],
[
"Li",
"Hongyang",
""
]
] |
new_dataset
| 0.999823 |
2304.13417
|
Petra van den Bos
|
Petra van den Bos and Marielle Stoelinga
|
With a little help from your friends: semi-cooperative games via Joker
moves
|
Extended version with appendix
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper coins the notion of Joker games where Player 2 is not strictly
adversarial: Player 1 gets help from Player 2 by playing a Joker. We formalize
these games as cost games, and study their theoretical properties. Finally, we
illustrate their use in model-based testing.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 09:56:02 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Apr 2023 07:38:34 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Jun 2023 14:31:09 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Bos",
"Petra van den",
""
],
[
"Stoelinga",
"Marielle",
""
]
] |
new_dataset
| 0.998011 |
2304.13620
|
Raian Rahman
|
Raian Rahman, Rizvi Hasan, Abdullah Al Farhad, Md Tahmid Rahman
Laskar, Md. Hamjajul Ashmafee, Abu Raihan Mostofa Kamal
|
ChartSumm: A Comprehensive Benchmark for Automatic Chart Summarization
of Long and Short Summaries
|
Accepted as a long paper at the Canadian AI 2023
| null |
10.21428/594757db.0b1f96f6
| null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic chart-to-text summarization is an effective tool for visually
impaired people, while also providing the user with precise insights into
tabular data in natural language. A large and well-structured dataset is always
a key component of data-driven models. In this paper, we propose ChartSumm: a
large-scale benchmark dataset consisting of a total of 84,363 charts along with
their metadata and descriptions, covering a wide range of topics and chart
types, to generate short and long summaries. Extensive experiments with strong
baseline models show that even though these models generate fluent and
informative summaries, achieving decent scores in various automatic evaluation
metrics, they often face issues such as hallucination, missing important data
points, and incorrect explanation of
complex trends in the charts. We also investigated the potential of expanding
ChartSumm to other languages using automated translation tools. These make our
dataset a challenging benchmark for future research.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 15:25:24 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Apr 2023 17:22:08 GMT"
},
{
"version": "v3",
"created": "Sun, 11 Jun 2023 04:07:27 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Rahman",
"Raian",
""
],
[
"Hasan",
"Rizvi",
""
],
[
"Farhad",
"Abdullah Al",
""
],
[
"Laskar",
"Md Tahmid Rahman",
""
],
[
"Ashmafee",
"Md. Hamjajul",
""
],
[
"Kamal",
"Abu Raihan Mostofa",
""
]
] |
new_dataset
| 0.999728 |
2305.01210
|
Jiawei Liu
|
Jiawei Liu and Chunqiu Steven Xia and Yuyao Wang and Lingming Zhang
|
Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of
Large Language Models for Code Generation
| null | null | null | null |
cs.SE cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Program synthesis has been long studied with recent approaches focused on
directly using the power of Large Language Models (LLMs) to generate code.
Programming benchmarks, with curated synthesis problems and test-cases, are
used to measure the performance of various LLMs on code synthesis. However,
these test-cases can be limited in both quantity and quality for fully
assessing the functional correctness of the generated code. Such limitation in
the existing benchmarks begs the following question: In the era of LLMs, is the
code generated really correct? To answer this, we propose EvalPlus -- a code
synthesis benchmarking framework to rigorously evaluate the functional
correctness of LLM-synthesized code. EvalPlus augments a given evaluation
dataset with large amounts of test-cases newly produced by an automatic test
input generator, powered by both LLM- and mutation-based strategies. While
EvalPlus is general, we extend the test-cases of the popular HUMANEVAL
benchmark by 81x to build HUMANEVAL+. Our extensive evaluation across 19
popular LLMs (e.g., GPT-4 and ChatGPT) demonstrates that HUMANEVAL+ is able to
catch significant amounts of previously undetected wrong code synthesized by
LLMs, reducing the pass@k by 13.6-15.3% on average. Our work not only indicates
that prior popular code synthesis evaluation results do not accurately reflect
the true performance of LLMs for code synthesis, but also opens up a new
direction to improve such programming benchmarks through automated testing.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 05:46:48 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 06:49:51 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Liu",
"Jiawei",
""
],
[
"Xia",
"Chunqiu Steven",
""
],
[
"Wang",
"Yuyao",
""
],
[
"Zhang",
"Lingming",
""
]
] |
new_dataset
| 0.994418 |
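The EvalPlus record above reports drops in pass@k once HUMANEVAL+ adds test-cases. For reference, the sketch below computes the standard unbiased pass@k estimator (Chen et al., 2021) from n generated samples of which c pass all tests; the exact evaluation harness used by EvalPlus may differ in implementation details.

# Unbiased pass@k estimator: pass@k = 1 - C(n-c, k) / C(n, k),
# where n = samples per problem and c = samples passing all tests.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:  # every size-k subset then contains at least one passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples per problem, 3 of them correct.
print(round(pass_at_k(20, 3, 1), 3))   # 0.15
print(round(pass_at_k(20, 3, 10), 3))  # 0.895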
2305.08528
|
Philipp Allgeuer
|
Matthias Kerzel, Philipp Allgeuer, Erik Strahl, Nicolas Frick,
Jan-Gerrit Habekost, Manfred Eppe and Stefan Wermter
|
NICOL: A Neuro-inspired Collaborative Semi-humanoid Robot that Bridges
Social Interaction and Reliable Manipulation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic platforms that can efficiently collaborate with humans in physical
tasks constitute a major goal in robotics. However, many existing robotic
platforms are either designed for social interaction or industrial object
manipulation tasks. The design of collaborative robots seldom emphasizes both
their social interaction and physical collaboration abilities. To bridge this
gap, we present the novel semi-humanoid NICOL, the Neuro-Inspired COLlaborator.
NICOL is a large, newly designed, scaled-up version of its well-evaluated
predecessor, the Neuro-Inspired COmpanion (NICO). NICOL adopts NICO's head and
facial expression display, and extends its manipulation abilities in terms of
precision, object size and workspace size. To introduce and evaluate NICOL, we
first develop and extend different neural and hybrid neuro-genetic visuomotor
approaches initially developed for the NICO to the larger NICOL and its more
complex kinematics. Furthermore, we present a novel neuro-genetic approach that
improves the grasp-accuracy of the NICOL to over 99%, outperforming the
state-of-the-art IK solvers KDL, TRACK-IK and BIO-IK. In addition, we introduce
the social interaction capabilities of NICOL, including its auditory and visual
capabilities as well as its face and emotion generation capabilities. Overall,
this article presents for the first time the humanoid robot NICOL and, thereby,
with the neuro-genetic approaches, contributes to the integration of social
robotics and neural visuomotor learning for humanoid robots.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 10:37:36 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 11:43:55 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Kerzel",
"Matthias",
""
],
[
"Allgeuer",
"Philipp",
""
],
[
"Strahl",
"Erik",
""
],
[
"Frick",
"Nicolas",
""
],
[
"Habekost",
"Jan-Gerrit",
""
],
[
"Eppe",
"Manfred",
""
],
[
"Wermter",
"Stefan",
""
]
] |
new_dataset
| 0.999596 |
2305.19512
|
Yiwei Lyu
|
Yiwei Lyu, Tiange Luo, Jiacheng Shi, Todd C. Hollon, Honglak Lee
|
Fine-grained Text Style Transfer with Diffusion-Based Language Models
|
Accepted at Repl4NLP workshop at ACL 2023
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Diffusion probabilistic models have shown great success in generating
high-quality images controllably, and researchers have tried to bring this
controllability into the text generation domain. Previous works on
diffusion-based language models have shown that they can be trained without
external knowledge (such as pre-trained weights) and still achieve stable
performance and controllability. In this paper, we trained a diffusion-based
model on the StylePTB dataset, the standard benchmark for fine-grained text
style transfer. The tasks in StylePTB require much more refined control over
the output text
compared to tasks evaluated in previous works, and our model was able to
achieve state-of-the-art performance on StylePTB on both individual and
compositional transfers. Moreover, our model, trained on limited data from
StylePTB without external knowledge, outperforms previous works that utilized
pretrained weights, embeddings, and external grammar parsers, and this may
indicate that diffusion-based language models have great potential under
low-resource settings.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 02:51:26 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 02:13:16 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Lyu",
"Yiwei",
""
],
[
"Luo",
"Tiange",
""
],
[
"Shi",
"Jiacheng",
""
],
[
"Hollon",
"Todd C.",
""
],
[
"Lee",
"Honglak",
""
]
] |
new_dataset
| 0.988529 |
2306.01272
|
Hossein Aboutalebi
|
Hossein Aboutalebi, Dayou Mao, Carol Xu, Alexander Wong
|
DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery
and Data Poisoning Detection
| null | null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The tremendous recent advances in generative artificial intelligence
techniques have led to significant successes and promise in a wide range of
different applications ranging from conversational agents and textual content
generation to voice and visual synthesis. Amid the rise in generative AI and
its increasing widespread adoption, there has been significant growing concern
over the use of generative AI for malicious purposes. In the realm of visual
content synthesis using generative AI, key areas of significant concern have
been image forgery (e.g., generation of images containing or derived from
copyright content), and data poisoning (i.e., generation of adversarially
contaminated images). Motivated to address these key concerns to encourage
responsible generative AI, we introduce the DeepfakeArt Challenge, a
large-scale challenge benchmark dataset designed specifically to aid in the
building of machine learning algorithms for generative AI art forgery and data
poisoning detection. Comprising over 32,000 records across a variety of
generative forgery and data poisoning techniques, each entry consists of a pair
of images that are either forgeries / adversarially contaminated or not. Each
of the generated images in the DeepfakeArt Challenge benchmark dataset has been
quality checked in a comprehensive manner. The DeepfakeArt Challenge is a core
part of GenAI4Good, a global open source initiative for accelerating machine
learning for promoting responsible creation and deployment of generative AI for
good.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 05:11:27 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jun 2023 03:08:24 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Aboutalebi",
"Hossein",
""
],
[
"Mao",
"Dayou",
""
],
[
"Xu",
"Carol",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.999626 |
2306.01704
|
Sarah Ostadabbas
|
Yedi Luo, Xiangyu Bai, Le Jiang, Aniket Gupta, Eric Mortin, Hanumant
Singh, Sarah Ostadabbas
|
Temporal-controlled Frame Swap for Generating High-Fidelity Stereo
Driving Data for Autonomy Analysis
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a novel approach, TeFS (Temporal-controlled Frame Swap),
to generate synthetic stereo driving data for visual simultaneous localization
and mapping (vSLAM) tasks. TeFS is designed to overcome the lack of native
stereo vision support in commercial driving simulators, and we demonstrate its
effectiveness using Grand Theft Auto V (GTA V), a high-budget open-world video
game engine. We introduce GTAV-TeFS, the first large-scale GTA V stereo-driving
dataset, containing over 88,000 high-resolution stereo RGB image pairs, along
with temporal information, GPS coordinates, camera poses, and full-resolution
dense depth maps. GTAV-TeFS offers several advantages over other synthetic
stereo datasets and enables the evaluation and enhancement of state-of-the-art
stereo vSLAM models under GTA V's environment. We validate the quality of the
stereo data collected using TeFS by conducting a comparative analysis with the
conventional dual-viewport data using an open-source simulator. We also
benchmark various vSLAM models using the challenging-case comparison groups
included in GTAV-TeFS, revealing the distinct advantages and limitations
inherent to each model. The goal of our work is to bring more high-fidelity
stereo data from commercial-grade game simulators into the research domain and
push the boundary of vSLAM models.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 17:27:46 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 16:55:10 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Luo",
"Yedi",
""
],
[
"Bai",
"Xiangyu",
""
],
[
"Jiang",
"Le",
""
],
[
"Gupta",
"Aniket",
""
],
[
"Mortin",
"Eric",
""
],
  [
    "Singh",
    "Hanumant",
    ""
  ],
  [
    "Ostadabbas",
    "Sarah",
    ""
  ]
] |
new_dataset
| 0.999583 |
2306.02858
|
Hang Zhang
|
Hang Zhang, Xin Li, Lidong Bing
|
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video
Understanding
|
Technical Report; Code, Pretrained Model, and Dataset:
https://github.com/DAMO-NLP-SG/Video-LLaMA
| null | null | null |
cs.CL cs.CV cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present Video-LLaMA, a multi-modal framework that empowers Large Language
Models (LLMs) with the capability of understanding both visual and auditory
content in the video. Video-LLaMA bootstraps cross-modal training from the
frozen pre-trained visual & audio encoders and the frozen LLMs. Unlike previous
vision-LLMs that focus on static image comprehensions such as MiniGPT-4 and
LLaVA, Video-LLaMA mainly tackles two challenges in video understanding: (1)
capturing the temporal changes in visual scenes, (2) integrating audio-visual
signals. To counter the first challenge, we propose a Video Q-former to
assemble the pre-trained image encoder into our video encoder and introduce a
video-to-text generation task to learn video-language correspondence. For the
second challenge, we leverage ImageBind, a universal embedding model aligning
multiple modalities as the pre-trained audio encoder, and introduce an Audio
Q-former on top of ImageBind to learn reasonable auditory query embeddings for
the LLM module. To align the output of both visual & audio encoders with LLM's
embedding space, we train Video-LLaMA on massive video/image-caption pairs as
well as visual-instruction-tuning datasets of moderate size but higher
quality. We found Video-LLaMA showcases the ability to perceive and comprehend
video content, generating meaningful responses that are grounded in the visual
and auditory information presented in the videos. This highlights the potential
of Video-LLaMA as a promising prototype for audio-visual AI assistants.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 13:17:27 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 12:28:37 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Jun 2023 02:28:57 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Zhang",
"Hang",
""
],
[
"Li",
"Xin",
""
],
[
"Bing",
"Lidong",
""
]
] |
new_dataset
| 0.993297 |
2306.03906
|
Kenjiro Tadakuma
|
Josephine Galipon, Shoya Shimizu, Kenjiro Tadakuma
|
Biological Organisms as End Effectors
|
13 pages, 9 figures, 1 graphical abstract
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In robotics, an end effector is a device at the end of a robotic arm that is
designed to physically interact with objects in the environment or with the
environment itself. Effectively, it serves as the hand of the robot, carrying
out tasks on behalf of humans. But could we turn this concept on its head and
consider using living organisms themselves as end effectors? This paper
introduces a novel idea of using whole living organisms as end effectors for
robotics. We showcase this by demonstrating that pill bugs and chitons -- types
of small, harmless creatures -- can be utilized as functional grippers.
Crucially, this method does not harm these creatures, enabling their release
back into nature after use. How this concept may be expanded to other organisms
and applications is also discussed.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 17:59:29 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 15:22:02 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Galipon",
"Josephine",
""
],
[
"Shimizu",
"Shoya",
""
],
[
"Tadakuma",
"Kenjiro",
""
]
] |
new_dataset
| 0.964787 |
2306.04717
|
Chunyi Li
|
Chunyi Li, Zicheng Zhang, Haoning Wu, Wei Sun, Xiongkuo Min, Xiaohong
Liu, Guangtao Zhai, Weisi Lin
|
AGIQA-3K: An Open Database for AI-Generated Image Quality Assessment
|
12 pages, 11 figures
| null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the rapid advancements of the text-to-image generative model,
AI-generated images (AGIs) have been widely applied to entertainment,
education, social media, etc. However, considering the large quality variance
among different AGIs, there is an urgent need for quality models that are
consistent with human subjective ratings. To address this issue, we extensively
considered various popular AGI models, generated AGIs with different prompts
and model parameters, and collected subjective scores for perceptual quality
and text-to-image alignment, thus building the most comprehensive AGI
subjective quality database AGIQA-3K so far. Furthermore, we conduct a
benchmark experiment on this database to evaluate the consistency between the
current Image Quality Assessment (IQA) model and human perception, while
proposing StairReward that significantly improves the assessment performance of
subjective text-to-image alignment. We believe that the fine-grained subjective
scores in AGIQA-3K will inspire subsequent AGI quality models to fit human
subjective perception mechanisms at both perception and alignment levels and to
optimize the generation result of future AGI models. The database is released
on https://github.com/lcysyzxdxc/AGIQA-3k-Database.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 18:28:21 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 16:42:59 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Li",
"Chunyi",
""
],
[
"Zhang",
"Zicheng",
""
],
[
"Wu",
"Haoning",
""
],
[
"Sun",
"Wei",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Liu",
"Xiaohong",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Lin",
"Weisi",
""
]
] |
new_dataset
| 0.988076 |
2306.05923
|
Francesco Marchiori
|
Emad Efatinasab, Francesco Marchiori, Denis Donadel, Alessandro
Brighente, Mauro Conti
|
GAN-CAN: A Novel Attack to Behavior-Based Driver Authentication Systems
|
16 pages, 6 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
For many years, car keys have been the sole mean of authentication in
vehicles. Whether the access control process is physical or wireless,
entrusting the ownership of a vehicle to a single token is prone to stealing
attempts. For this reason, many researchers started developing behavior-based
authentication systems. By collecting data in a moving vehicle, Deep Learning
(DL) models can recognize patterns in the data and identify drivers based on
their driving behavior. This can be used as an anti-theft system, as a thief
would exhibit a different driving style compared to the vehicle owner's.
However, the assumption that an attacker cannot replicate the legitimate
driver's behavior fails under certain conditions.
In this paper, we propose GAN-CAN, the first attack capable of fooling
state-of-the-art behavior-based driver authentication systems in a vehicle.
Based on the adversary's knowledge, we propose different GAN-CAN
implementations. Our attack leverages the lack of security in the Controller
Area Network (CAN) to inject suitably designed time-series data to mimic the
legitimate driver. Our design of the malicious time series results from the
combination of different Generative Adversarial Networks (GANs) and our study
on the safety importance of the injected values during the attack. We tested
GAN-CAN in an improved version of the most efficient driver behavior-based
authentication model in the literature. We prove that our attack can fool it
with an attack success rate of up to 0.99. We show how an attacker, without
prior knowledge of the authentication system, can steal a car by deploying
GAN-CAN in an off-the-shelf system in under 22 minutes.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 14:33:26 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2023 07:21:35 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Efatinasab",
"Emad",
""
],
[
"Marchiori",
"Francesco",
""
],
[
"Donadel",
"Denis",
""
],
[
"Brighente",
"Alessandro",
""
],
[
"Conti",
"Mauro",
""
]
] |
new_dataset
| 0.95781 |
2306.06108
|
Youssef Elmougy
|
Youssef Elmougy and Ling Liu
|
Demystifying Fraudulent Transactions and Illicit Nodes in the Bitcoin
Network for Financial Forensics
| null | null |
10.1145/3580305.3599803
| null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchain provides the unique and accountable channel for financial
forensics by mining its open and immutable transaction data. A recent surge has
been witnessed in training machine learning models with cryptocurrency
transaction data for anomaly detection, such as money laundering and other
fraudulent activities. This paper presents a holistic applied data science
approach to fraud detection in the Bitcoin network with two original
contributions. First, we contribute the Elliptic++ dataset, which extends the
Elliptic transaction dataset to include over 822k Bitcoin wallet addresses
(nodes), each with 56 features, and 1.27M temporal interactions. This enables
both the detection of fraudulent transactions and the detection of illicit
addresses (actors) in the Bitcoin network by leveraging four types of graph
data: (i) the transaction-to-transaction graph, representing the money flow in
the Bitcoin network, (ii) the address-to-address interaction graph, capturing
the types of transaction flows between Bitcoin addresses, (iii) the
address-transaction graph, representing the bi-directional money flow between
addresses and transactions (BTC flow from input address to one or more
transactions and BTC flow from a transaction to one or more output addresses),
and (iv) the user entity graph, capturing clusters of Bitcoin addresses
representing unique Bitcoin users. Second, we perform fraud detection tasks on
all four graphs by using diverse machine learning algorithms. We show that
adding enhanced features from the address-to-address and the
address-transaction graphs not only assists in effectively detecting both
illicit transactions and illicit addresses, but also assists in gaining
in-depth understanding of the root cause of money laundering vulnerabilities in
cryptocurrency transactions and the strategies for fraud detection and
prevention. Released at github.com/git-disl/EllipticPlusPlus.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 18:36:54 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Elmougy",
"Youssef",
""
],
[
"Liu",
"Ling",
""
]
] |
new_dataset
| 0.990934 |
2306.06142
|
Nathan Doumeche
|
Yvenn Amara-Ouali (EDF R&D), Yannig Goude (EDF R&D), Nathan Doum\`eche
(SU, EDF R&D), Pascal Veyret (EDF R&D), Alexis Thomas, Daniel Hebenstreit (TU
Graz), Thomas Wedenig (TU Graz), Arthur Satouf, Aymeric Jan, Yannick Deleuze
(VeRI), Paul Berhaut, S\'ebastien Treguer, Tiphaine Phe-Neau
|
Forecasting Electric Vehicle Charging Station Occupancy: Smarter
Mobility Data Challenge
| null | null | null | null |
cs.DB stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transport sector is a major contributor to greenhouse gas emissions in
Europe. Shifting to electric vehicles (EVs) powered by a low-carbon energy mix
would reduce carbon emissions. However, to support the development of electric
mobility, a better understanding of EV charging behaviours and more accurate
forecasting models are needed. To fill that gap, the Smarter Mobility Data
Challenge has focused on the development of forecasting models to predict EV
charging station occupancy. This challenge involved analysing a dataset of 91
charging stations across four geographical areas over seven months in
2020-2021. The forecasts were evaluated at three levels of aggregation
(individual stations, areas and global) to capture the inherent hierarchical
structure of the data. The results highlight the potential of hierarchical
forecasting approaches to accurately predict EV charging station occupancy,
providing valuable insights for energy providers and EV users alike. This open
dataset addresses many real-world challenges associated with time series, such
as missing values, non-stationarity and spatio-temporal correlations. Access to
the dataset, code and benchmarks are available at
https://gitlab.com/smarter-mobility-data-challenge/tutorials to foster future
research.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 07:22:18 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Amara-Ouali",
"Yvenn",
"",
"EDF R&D"
],
[
"Goude",
"Yannig",
"",
"EDF R&D"
],
[
"Doumèche",
"Nathan",
"",
"SU, EDF R&D"
],
[
"Veyret",
"Pascal",
"",
"EDF R&D"
],
[
"Thomas",
"Alexis",
"",
"TU\n Graz"
],
[
"Hebenstreit",
"Daniel",
"",
"TU\n Graz"
],
[
"Wedenig",
"Thomas",
"",
"TU Graz"
],
[
"Satouf",
"Arthur",
"",
"VeRI"
],
[
"Jan",
"Aymeric",
"",
"VeRI"
],
[
"Deleuze",
"Yannick",
"",
"VeRI"
],
[
"Berhaut",
"Paul",
""
],
[
"Treguer",
"Sébastien",
""
],
[
"Phe-Neau",
"Tiphaine",
""
]
] |
new_dataset
| 0.997087 |
2306.06147
|
Labib Chowdhury
|
Md. Ekramul Islam, Labib Chowdhury, Faisal Ahamed Khan, Shazzad
Hossain, Sourave Hossain, Mohammad Mamun Or Rashid, Nabeel Mohammed and
Mohammad Ruhul Amin
|
SentiGOLD: A Large Bangla Gold Standard Multi-Domain Sentiment Analysis
Dataset and its Evaluation
|
Accepted in KDD 2023 Applied Data Science Track; 12 pages, 14 figures
| null |
10.1145/3580305.3599904
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This study introduces SentiGOLD, a Bangla multi-domain sentiment analysis
dataset. Comprising 70,000 samples, it was created from diverse sources and
annotated by a gender-balanced team of linguists. SentiGOLD adheres to
established linguistic conventions agreed upon by the Government of Bangladesh
and a Bangla linguistics committee. Unlike English and other languages, Bangla
lacks standard sentiment analysis datasets due to the absence of a national
linguistics framework. The dataset incorporates data from online video
comments, social media posts, blogs, news, and other sources while maintaining
domain and class distribution rigorously. It spans 30 domains (e.g., politics,
entertainment, sports) and includes 5 sentiment classes (strongly negative,
weakly negative, neutral, weakly positive, and strongly positive). The
annotation scheme,
approved by the national linguistics committee, ensures a robust Inter
Annotator Agreement (IAA) with a Fleiss' kappa score of 0.88. Intra- and
cross-dataset evaluation protocols are applied to establish a standard
classification system. Cross-dataset evaluation on the noisy SentNoB dataset
presents a challenging test scenario. Additionally, zero-shot experiments
demonstrate the generalizability of SentiGOLD. The top model achieves a macro
f1 score of 0.62 (intra-dataset) across 5 classes, setting a benchmark, and
0.61 (cross-dataset from SentNoB) across 3 classes, comparable to the
state-of-the-art. The fine-tuned sentiment analysis model can be accessed at
https://sentiment.bangla.gov.bd.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 12:07:10 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Islam",
"Md. Ekramul",
""
],
[
"Chowdhury",
"Labib",
""
],
[
"Khan",
"Faisal Ahamed",
""
],
[
"Hossain",
"Shazzad",
""
],
[
"Hossain",
"Sourave",
""
],
[
"Rashid",
"Mohammad Mamun Or",
""
],
[
"Mohammed",
"Nabeel",
""
],
[
"Amin",
"Mohammad Ruhul",
""
]
] |
new_dataset
| 0.99982 |
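The SentiGOLD record above cites a Fleiss' kappa of 0.88 for inter-annotator agreement. As a reference for that statistic, the sketch below implements the standard textbook formula; it is not code from the paper, and it assumes every item was labeled by the same number of annotators.

# Fleiss' kappa from a count matrix of shape (items, categories), where
# counts[i, j] is the number of annotators assigning item i to category j.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    counts = np.asarray(counts, dtype=float)
    n_items = counts.shape[0]
    n_raters = counts.sum(axis=1)[0]  # assumes equal raters per item
    p_j = counts.sum(axis=0) / (n_items * n_raters)  # category proportions
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1.0 - p_e)

# Toy example: 3 items, 2 categories, 4 annotators per item.
print(round(fleiss_kappa(np.array([[4, 0], [2, 2], [0, 4]])), 3))  # 0.556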
2306.06189
|
Ali Hatamizadeh
|
Ali Hatamizadeh, Greg Heinrich, Hongxu Yin, Andrew Tao, Jose M.
Alvarez, Jan Kautz, Pavlo Molchanov
|
FasterViT: Fast Vision Transformers with Hierarchical Attention
|
Tech report
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We design a new family of hybrid CNN-ViT neural networks, named FasterViT,
with a focus on high image throughput for computer vision (CV) applications.
FasterViT combines the benefits of fast local representation learning in CNNs
and global modeling properties in ViT. Our newly introduced Hierarchical
Attention (HAT) approach decomposes global self-attention with quadratic
complexity into a multi-level attention with reduced computational costs. We
benefit from efficient window-based self-attention. Each window has access to
dedicated carrier tokens that participate in local and global representation
learning. At a high level, global self-attentions enable the efficient
cross-window communication at lower costs. FasterViT achieves a SOTA
Pareto-front in terms of accuracy vs. image throughput. We have extensively
validated its effectiveness on various CV tasks including classification,
object detection and segmentation. We also show that HAT can be used as a
plug-and-play module for existing networks and enhance them. We further
demonstrate significantly faster and more accurate performance than competitive
counterparts for images with high resolution. Code is available at
https://github.com/NVlabs/FasterViT.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 18:41:37 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Hatamizadeh",
"Ali",
""
],
[
"Heinrich",
"Greg",
""
],
[
"Yin",
"Hongxu",
""
],
[
"Tao",
"Andrew",
""
],
[
"Alvarez",
"Jose M.",
""
],
[
"Kautz",
"Jan",
""
],
[
"Molchanov",
"Pavlo",
""
]
] |
new_dataset
| 0.994213 |
2306.06191
|
Anthony Cintron Roman
|
Anthony Cintron Roman, Kevin Xu, Arfon Smith, Jehu Torres Vega, Caleb
Robinson, Juan M Lavista Ferres
|
Open Data on GitHub: Unlocking the Potential of AI
|
In submission to NeurIPS 2023 Track Datasets and Benchmarks
| null | null | null |
cs.LG cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
GitHub is the world's largest platform for collaborative software
development, with over 100 million users. GitHub is also used extensively for
open data collaboration, hosting more than 800 million open data files,
totaling 142 terabytes of data. This study highlights the potential of open
data on GitHub and demonstrates how it can accelerate AI research. We analyze
the existing landscape of open data on GitHub and the patterns of how users
share datasets. Our findings show that GitHub is one of the largest hosts of
open data in the world and has experienced an accelerated growth of open data
assets over the past four years. By examining the open data landscape on
GitHub, we aim to empower users and organizations to leverage existing open
datasets and improve their discoverability -- ultimately contributing to the
ongoing AI revolution to help address complex societal issues. We release the
three datasets that we have collected to support this analysis as open datasets
at https://github.com/github/open-data-on-github.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 18:43:26 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Roman",
"Anthony Cintron",
""
],
[
"Xu",
"Kevin",
""
],
[
"Smith",
"Arfon",
""
],
[
"Vega",
"Jehu Torres",
""
],
[
"Robinson",
"Caleb",
""
],
[
"Ferres",
"Juan M Lavista",
""
]
] |
new_dataset
| 0.988918 |
2306.06203
|
Qing Su
|
Qing Su, Anton Netchaev, Hai Li, and Shihao Ji
|
FLSL: Feature-level Self-supervised Learning
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Current self-supervised learning (SSL) methods (e.g., SimCLR, DINO, VICReg,
MOCOv3) primarily target representations at the instance level and do not
generalize well to dense prediction tasks, such as object detection and
segmentation. Towards aligning SSL with dense predictions, this paper
demonstrates for the first time the underlying mean-shift clustering process of
Vision Transformers (ViT), which aligns well with natural image semantics
(e.g., a world of objects and stuffs). By employing transformer for joint
embedding and clustering, we propose a two-level feature clustering SSL method,
coined Feature-Level Self-supervised Learning (FLSL). We present the formal
definition of the FLSL problem and construct the objectives from the mean-shift
and k-means perspectives. We show that FLSL promotes remarkable semantic
cluster representations and learns an embedding scheme amenable to intra-view
and inter-view feature clustering. Experiments show that FLSL yields
significant improvements in dense prediction tasks, achieving 44.9 (+2.8)% AP
and 46.5% AP in object detection, as well as 40.8 (+2.3)% AP and 42.1% AP in
instance segmentation on MS-COCO, using Mask R-CNN with ViT-S/16 and ViT-S/8 as
backbone, respectively. FLSL consistently outperforms existing SSL methods
across additional benchmarks, including UAV object detection on UAVDT, and
video instance segmentation on DAVIS 2017. We conclude by presenting
visualization and various ablation studies to better understand the success
of FLSL.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 19:10:51 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Su",
"Qing",
""
],
[
"Netchaev",
"Anton",
""
],
[
"Li",
"Hai",
""
],
[
"Ji",
"Shihao",
""
]
] |
new_dataset
| 0.96925 |
2306.06205
|
Judit Acs
|
Judit Acs, Endre Hamerlik, Roy Schwartz, Noah A. Smith, Andras Kornai
|
Morphosyntactic probing of multilingual BERT models
|
to appear in the Journal of Natural Language Engineering
| null |
10.1017/S1351324923000190
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce an extensive dataset for multilingual probing of morphological
information in language models (247 tasks across 42 languages from 10
families), each consisting of a sentence with a target word and a morphological
tag as the desired label, derived from the Universal Dependencies treebanks. We
find that pre-trained Transformer models (mBERT and XLM-RoBERTa) learn features
that attain strong performance across these tasks. We then apply two methods to
locate, for each probing task, where the disambiguating information resides in
the input. The first is a new perturbation method that masks various parts of
context; the second is the classical method of Shapley values. The most
intriguing finding that emerges is a strong tendency for the preceding context
to hold more information relevant to the prediction than the following context.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 19:15:20 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Acs",
"Judit",
""
],
[
"Hamerlik",
"Endre",
""
],
[
"Schwartz",
"Roy",
""
],
[
"Smith",
"Noah A.",
""
],
[
"Kornai",
"Andras",
""
]
] |
new_dataset
| 0.998412 |
2306.06212
|
Ian Huang
|
Ian Huang, Vrishab Krishna, Omoruyi Atekha, Leonidas Guibas
|
Aladdin: Zero-Shot Hallucination of Stylized 3D Assets from Abstract
Scene Descriptions
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
What constitutes the "vibe" of a particular scene? What should one find in "a
busy, dirty city street", "an idyllic countryside", or "a crime scene in an
abandoned living room"? The translation from abstract scene descriptions to
stylized scene elements cannot be done with any generality by extant systems
trained on rigid and limited indoor datasets. In this paper, we propose to
leverage the knowledge captured by foundation models to accomplish this
translation. We present a system that can serve as a tool to generate stylized
assets for 3D scenes described by a short phrase, without the need to enumerate
the objects to be found within the scene or give instructions on their
appearance. Additionally, it is robust to open-world concepts in a way that
traditional methods trained on limited data are not, affording more creative
freedom to the 3D artist. Our system demonstrates this using a foundation model
"team" composed of a large language model, a vision-language model and several
image diffusion models, which communicate using an interpretable and
user-editable intermediate representation, thus allowing for more versatile and
controllable stylized asset generation for 3D artists. We introduce novel
metrics for this task, and show through human evaluations that in 91% of the
cases, our system outputs are judged more faithful to the semantics of the
input scene description than the baseline, thus highlighting the potential of
this approach to radically accelerate the 3D content creation process for 3D
artists.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 19:24:39 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Huang",
"Ian",
""
],
[
"Krishna",
"Vrishab",
""
],
[
"Atekha",
"Omoruyi",
""
],
[
"Guibas",
"Leonidas",
""
]
] |
new_dataset
| 0.99941 |
2306.06228
|
Robert Joyce
|
Robert J. Joyce, Tirth Patel, Charles Nicholas, Edward Raff
|
AVScan2Vec: Feature Learning on Antivirus Scan Data for Production-Scale
Malware Corpora
| null | null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When investigating a malicious file, searching for related files is a common
task that malware analysts must perform. Given that production malware corpora
may contain over a billion files and consume petabytes of storage, many feature
extraction and similarity search approaches are computationally infeasible. Our
work explores the potential of antivirus (AV) scan data as a scalable source of
features for malware. This is possible because AV scan reports are widely
available through services such as VirusTotal and are ~100x smaller than the
average malware sample. An AV scan report is abundant with information and can
indicate a malicious file's family, behavior, target
operating system, and many other characteristics. We introduce AVScan2Vec, a
language model trained to comprehend the semantics of AV scan data. AVScan2Vec
ingests AV scan data for a malicious file and outputs a meaningful vector
representation. AVScan2Vec vectors are ~3 to 85x smaller than popular
alternatives in use today, enabling faster vector comparisons and lower memory
usage. By incorporating Dynamic Continuous Indexing, we show that
nearest-neighbor queries on AVScan2Vec vectors can scale to even the largest
malware production datasets. We also demonstrate that AVScan2Vec vectors are
superior to other leading malware feature vector representations across nearly
all classification, clustering, and nearest-neighbor lookup algorithms that we
evaluated.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 19:53:40 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Joyce",
"Robert J.",
""
],
[
"Patel",
"Tirth",
""
],
[
"Nicholas",
"Charles",
""
],
[
"Raff",
"Edward",
""
]
] |
new_dataset
| 0.99675 |
2306.06261
|
Zhixuan Zhou
|
Zhixuan Zhou, Tanusree Sharma, Luke Emano, Sauvik Das, Yang Wang
|
Iterative Design of An Accessible Crypto Wallet for Blind Users
|
19th Symposium on Usable Privacy and Security
| null | null | null |
cs.CR cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crypto wallets are a key touch-point for cryptocurrency use. People use
crypto wallets to make transactions, manage crypto assets, and interact with
decentralized apps (dApps). However, as is often the case with emergent
technologies, little attention has been paid to understanding and improving
accessibility barriers in crypto wallet software. We present a series of user
studies that explored how both blind and sighted individuals use MetaMask, one
of the most popular non-custodial crypto wallets. We uncovered inter-related
accessibility, learnability, and security issues with MetaMask. We also report
on an iterative redesign of MetaMask to make it more accessible for blind
users. This process involved multiple evaluations with 44 novice crypto wallet
users, including 20 sighted users, 23 blind users, and one user with low
vision. Our study results show notable improvements for accessibility after two
rounds of design iterations. Based on the results, we discuss design
implications for creating more accessible and secure crypto wallets for blind
users.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 21:18:26 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Zhou",
"Zhixuan",
""
],
[
"Sharma",
"Tanusree",
""
],
[
"Emano",
"Luke",
""
],
[
"Das",
"Sauvik",
""
],
[
"Wang",
"Yang",
""
]
] |
new_dataset
| 0.963529 |
2306.06269
|
Conrad M Albrecht
|
Wenlu Sun, Yao Sun, Chenying Liu, Conrad M Albrecht
|
DeepLCZChange: A Remote Sensing Deep Learning Model Architecture for
Urban Climate Resilience
|
accepted for publication in 2023 IGARSS conference
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Urban land use structures impact local climate conditions of metropolitan
areas. To shed light on the mechanism of local climate with respect to urban land use, we
present a novel, data-driven deep learning architecture and pipeline,
DeepLCZChange, to correlate airborne LiDAR data statistics with the Landsat 8
satellite's surface temperature product. A proof-of-concept numerical
experiment utilizes corresponding remote sensing data for the city of New York
to verify the cooling effect of urban forests.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 21:42:29 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Sun",
"Wenlu",
""
],
[
"Sun",
"Yao",
""
],
[
"Liu",
"Chenying",
""
],
[
"Albrecht",
"Conrad M",
""
]
] |
new_dataset
| 0.977575 |
2306.06272
|
Shiwali Mohan
|
Shiwali Mohan, Wiktor Piotrowski, Roni Stern, Sachin Grover, Sookyung
Kim, Jacob Le, Johan De Kleer
|
A Domain-Independent Agent Architecture for Adaptive Operation in
Evolving Open Worlds
|
Under review in Artificial Intelligence Journal - Open World Learning
track
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Model-based reasoning agents are ill-equipped to act in novel situations in
which their model of the environment no longer sufficiently represents the
world. We propose HYDRA - a framework for designing model-based agents
operating in mixed discrete-continuous worlds, that can autonomously detect
when the environment has evolved from its canonical setup, understand how it
has evolved, and adapt the agents' models to perform effectively. HYDRA is
based upon PDDL+, a rich modeling language for planning in mixed,
discrete-continuous environments. It augments the planning module with visual
reasoning, task selection, and action execution modules for closed-loop
interaction with complex environments. HYDRA implements a novel meta-reasoning
process that enables the agent to monitor its own behavior from a variety of
aspects. The process employs a diverse set of computational methods to maintain
expectations about the agent's own behavior in an environment. Divergences from
those expectations are useful in detecting when the environment has evolved and
identifying opportunities to adapt the underlying models. HYDRA builds upon
ideas from diagnosis and repair and uses a heuristics-guided search over model
changes so that the agent's models remain competent in novel conditions. The HYDRA
framework has been used to implement novelty-aware agents for three diverse
domains - CartPole++ (a higher dimension variant of a classic control problem),
Science Birds (an IJCAI competition problem), and PogoStick (a specific problem
domain in Minecraft). We report empirical observations from these domains to
demonstrate the efficacy of various components in the novelty meta-reasoning
process.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 21:54:13 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Mohan",
"Shiwali",
""
],
[
"Piotrowski",
"Wiktor",
""
],
[
"Stern",
"Roni",
""
],
[
"Grover",
"Sachin",
""
],
[
"Kim",
"Sookyung",
""
],
[
"Le",
"Jacob",
""
],
[
"De Kleer",
"Johan",
""
]
] |
new_dataset
| 0.975843 |
2306.06294
|
Jiong Yang
|
Jiong Yang, Arijit Shaw, Teodora Baluta, Mate Soos, and Kuldeep S.
Meel
|
Explaining SAT Solving Using Causal Reasoning
|
17 pages, 3 figures, to be published in SAT23
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The past three decades have witnessed notable success in designing efficient
SAT solvers, with modern solvers capable of solving industrial benchmarks
containing millions of variables in just a few seconds. The success of modern
SAT solvers owes to the widely-used CDCL algorithm, which lacks comprehensive
theoretical investigation. Furthermore, it has been observed that CDCL solvers
still struggle to deal with specific classes of benchmarks comprising only
hundreds of variables, which contrasts with their widespread use in real-world
applications. Consequently, there is an urgent need to uncover the inner
workings of these seemingly weak yet powerful black boxes.
In this paper, we present a first step towards this goal by introducing an
approach called CausalSAT, which employs causal reasoning to gain insights into
the functioning of modern SAT solvers. CausalSAT initially generates
observational data from the execution of SAT solvers and learns a structured
graph representing the causal relationships between the components of a SAT
solver. Subsequently, given a query such as whether a clause with a low
literal block distance (LBD) has higher clause utility, CausalSAT calculates the
causal effect of LBD on clause utility and provides an answer to the question.
We use CausalSAT to quantitatively verify hypotheses previously regarded as
"rules of thumb" or empirical findings such as the query above. Moreover,
CausalSAT can address previously unexplored questions, like which branching
heuristic leads to greater clause utility in order to study the relationship
between branching and clause management. Experimental evaluations using
practical benchmarks demonstrate that CausalSAT effectively fits the data,
verifies four "rules of thumb", and provides answers to three questions closely
related to implementing modern solvers.
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 22:53:16 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Yang",
"Jiong",
""
],
[
"Shaw",
"Arijit",
""
],
[
"Baluta",
"Teodora",
""
],
[
"Soos",
"Mate",
""
],
[
"Meel",
"Kuldeep S.",
""
]
] |
new_dataset
| 0.962225 |
2306.06322
|
Abdelhamid Haouhat
|
Abdelhamid Haouhat, Slimane Bellaouar, Attia Nehar, Hadda Cherroun
|
Towards Arabic Multimodal Dataset for Sentiment Analysis
|
8 pages
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal Sentiment Analysis (MSA) has recently become a central research
direction for many real-world applications. This proliferation is due to the
fact that opinions are central to almost all human activities and are key
influencers of our behaviors. In addition, the recent deployment of Deep
Learning-based (DL) models has proven their high efficiency for a wide range of
Western languages. In contrast, Arabic DL-based multimodal sentiment analysis
(MSA) is still in its infancy due, mainly, to the lack of standard
datasets. In this paper, our investigation is twofold. First, we design a
pipeline that helps build our Arabic Multimodal dataset, leveraging both
state-of-the-art transformers and feature extraction tools together with word
alignment techniques. Thereafter, we validate our dataset using a
state-of-the-art transformer-based model for multimodality. Despite
the small size of the resulting dataset, experiments show that Arabic
multimodality is very promising.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 00:13:09 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Haouhat",
"Abdelhamid",
""
],
[
"Bellaouar",
"Slimane",
""
],
[
"Nehar",
"Attia",
""
],
[
"Cherroun",
"Hadda",
""
]
] |
new_dataset
| 0.997098 |
2306.06406
|
Xiaoyang Hao
|
Xiaoyang Hao (1 and 2), Han Li (1), Jun Cheng (2), Lei Wang (2) ((1)
Southern University of Science and Technology, (2) Shenzhen Institute of
Advanced Technology, Chinese Academy of Sciences)
|
D3L: Decomposition of 3D Rotation and Lift from 2D Joint to 3D for Human
Mesh Recovery
|
11 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing methods for 3D human mesh recovery always directly estimate SMPL
parameters, which involve both joint rotations and shape parameters. However,
these methods present rotation semantic ambiguity, rotation error accumulation,
and shape estimation overfitting, which also lead to errors in the estimated
pose. Additionally, these methods have not efficiently leveraged the
advancements in another hot topic, human pose estimation. To address these
issues, we propose a novel approach, Decomposition of 3D Rotation and Lift from
2D Joint to 3D mesh (D3L). We disentangle 3D joint rotation into bone direction
and bone twist direction so that the human mesh recovery task is broken down
into estimation of pose, twist, and shape, which can be handled independently.
Then we design a 2D-to-3D lifting network for estimating twist direction and 3D
joint position from 2D joint position sequences and introduce a nonlinear
optimization method for fitting shape parameters and bone directions. Our
approach can leverage human pose estimation methods, and avoid pose errors
introduced by shape estimation overfitting. We conduct experiments on the
Human3.6M dataset and demonstrate improved performance compared to existing
methods by a large margin.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 10:41:54 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Hao",
"Xiaoyang",
"",
"1 and 2"
],
[
"Li",
"Han",
""
],
[
"Cheng",
"Jun",
""
],
[
"Wang",
"Lei",
""
]
] |
new_dataset
| 0.999081 |
2306.06410
|
Xize Cheng
|
Xize Cheng, Tao Jin, Linjun Li, Wang Lin, Xinyu Duan and Zhou Zhao
|
OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality
Alignment
|
Accepted to ACL2023 (Oral)
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Speech Recognition builds a bridge between multimedia streams (audio-only,
visual-only, or audio-visual) and the corresponding text transcription.
However, training a model for a specific new domain is often hampered by the
lack of new-domain utterances, especially labeled visual utterances. To
overcome this restriction, we attempt to achieve
zero-shot modality transfer by maintaining the multi-modality alignment in
phoneme space learned with unlabeled multimedia utterances in the high resource
domain during the pre-training \cite{shi2022learning}, and propose a training
system Open-modality Speech Recognition (\textbf{OpenSR}) that enables the
models trained on a single modality (e.g., audio-only) applicable to more
modalities (e.g., visual-only and audio-visual). Furthermore, we employ a
cluster-based prompt tuning strategy to handle the domain shift for the
scenarios with only common words in the new domain utterances. We demonstrate
that OpenSR enables modality transfer from one to any in three different
settings (zero-, few- and full-shot), and achieves highly competitive zero-shot
performance compared to the existing few-shot and full-shot lip-reading
methods. To the best of our knowledge, OpenSR achieves the state-of-the-art
performance of word error rate in LRS2 on audio-visual speech recognition and
lip-reading with 2.7\% and 25.0\%, respectively. The code and demo are
available at https://github.com/Exgc/OpenSR.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 11:04:10 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Cheng",
"Xize",
""
],
[
"Jin",
"Tao",
""
],
[
"Li",
"Linjun",
""
],
[
"Lin",
"Wang",
""
],
[
"Duan",
"Xinyu",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.999052 |
2306.06448
|
Bilash Saha
|
Bilash Saha, Sharaban Tahora, Abdul Barek, Hossain Shahriar
|
HIPAAChecker: The Comprehensive Solution for HIPAA Compliance in Android
mHealth Apps
|
Accepted to publish in The 17th IEEE International Workshop on
Security, Trust, and Privacy for Software Applications
| null | null | null |
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proliferation of mobile health technology, or mHealth apps, has made
safeguarding personal health records paramount.
These digital platforms afford individuals the ability to effortlessly monitor
and manage their health-related issues, as well as store, share, and access
their medical records and treatment information. As the utilization of mHealth
apps becomes increasingly widespread, it is imperative to ensure that protected
health information (PHI) is effectively and securely transmitted, received,
created, and maintained in accordance with the regulations set forth by the
Health Insurance Portability and Accountability Act (HIPAA). However, it is
unfortunate to note that many mobile app developers, including those of mHealth
apps, are not fully cognizant of the HIPAA security and privacy guidelines.
This presents a unique opportunity for research to develop an analytical
framework that can aid developers in maintaining a secure and HIPAA-compliant
source code, while also raising awareness among consumers about the privacy and
security of sensitive health information. The plan is to develop a framework
which will serve as the foundation for developing an integrated development
environment (IDE) plugin for mHealth app developers and a web-based interface
for mHealth app consumers. This will help developers identify and address HIPAA
compliance issues during the development process and provide consumers with a
tool to evaluate the privacy and security of mHealth apps before downloading
and using them. The goal is to encourage the development of secure and
compliant mHealth apps that safeguard personal health information.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 14:03:59 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Saha",
"Bilash",
""
],
[
"Tahora",
"Sharaban",
""
],
[
"Barek",
"Abdul",
""
],
[
"Shahriar",
"Hossain",
""
]
] |
new_dataset
| 0.999127 |
2306.06455
|
Zhe Chen
|
Zhe Chen, Jiaoyang Li, Daniel Harabor, Peter J. Stuckey
|
Scalable Rail Planning and Replanning with Soft Deadlines
| null | null | null | null |
cs.RO cs.MA
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Flatland Challenge, which was first held in 2019 and reported in NeurIPS
2020, is designed to answer the question: How to efficiently manage dense
traffic on complex rail networks? Considering the significance of punctuality
in real-world railway network operation and the fact that fast passenger trains
share the network with slow freight trains, Flatland version 3 introduces
trains with different speeds and scheduling time windows. This paper introduces
the Flatland 3 problem definitions and extends an award-winning MAPF-based
software, which won the NeurIPS 2020 competition, to efficiently solve Flatland
3 problems. The resulting system won the Flatland 3 competition. We designed a
new priority ordering for initial planning, a new neighbourhood selection
strategy for efficient solution quality improvement with Multi-Agent Path
Finding via Large Neighborhood Search (MAPF-LNS), and use MAPF-LNS to
partially replan the trains affected by malfunctions.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 14:41:05 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Chen",
"Zhe",
""
],
[
"Li",
"Jiaoyang",
""
],
[
"Harabor",
"Daniel",
""
],
[
"Stuckey",
"Peter J.",
""
]
] |
new_dataset
| 0.983963 |
2306.06468
|
Tiancheng Jin
|
Tiancheng Jin, Jianjun Zhao
|
ScaffML: A Quantum Behavioral Interface Specification Language for
Scaffold
|
This paper will be appeared in the proceedings of the 2023 IEEE
International Conference on Quantum Software (QSW 2023), July 2-8, 2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ensuring the correctness of quantum programs is crucial for quantum software
quality assurance. Although various effective verification methods exist for
classical programs, they cannot be applied to quantum programs due to the
fundamental differences in their execution logic, such as quantum superposition
and entanglement. This calls for new methods to verify the correctness of
quantum programs. In this paper, we present a behavioral interface
specification language (BISL) called ScaffML for the quantum programming
language Scaffold. ScaffML allows the specification of pre- and post-conditions
for Scaffold modules and enables the mixing of assertions with Scaffold code,
thereby facilitating debugging and verification of quantum programs. This paper
discusses the goals and overall approach of ScaffML and describes the basic
features of the language through examples. ScaffML provides an easy-to-use
specification language for quantum programmers, supporting static analysis,
run-time checking, and formal verification of Scaffold programs. Finally, we
present several instances to illustrate the workflow and functionalities of
ScaffML.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 15:44:45 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Jin",
"Tiancheng",
""
],
[
"Zhao",
"Jianjun",
""
]
] |
new_dataset
| 0.999757 |
2306.06493
|
Chetan Singh Thakur
|
Adithya Krishna, Srikanth Rohit Nudurupati, Chandana D G, Pritesh
Dwivedi, Andr\'e van Schaik, Mahesh Mehendale and Chetan Singh Thakur
|
RAMAN: A Re-configurable and Sparse tinyML Accelerator for Inference on
Edge
| null | null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep Neural Network (DNN) based inference at the edge is challenging as these
compute and data-intensive algorithms need to be implemented at low cost and
low power while meeting the latency constraints of the target applications.
Sparsity, in both activations and weights inherent to DNNs, is a key knob to
leverage. In this paper, we present RAMAN, a Re-configurable and spArse tinyML
Accelerator for infereNce on edge, architected to exploit sparsity to
reduce area (storage), power, and latency. RAMAN can be configured to
support a wide range of DNN topologies - consisting of different convolution
layer types and a range of layer parameters (feature-map size and the number of
channels). RAMAN can also be configured to support accuracy vs power/latency
tradeoffs using techniques deployed at compile-time and run-time. We present
the salient features of the architecture, provide implementation results and
compare the same with the state-of-the-art. RAMAN employs novel dataflow
inspired by Gustavson's algorithm that has optimal input activation (IA) and
output activation (OA) reuse to minimize memory access and the overall data
movement cost. The dataflow allows RAMAN to locally reduce the partial sum
(Psum) within a processing element array to eliminate the Psum writeback
traffic. Additionally, we suggest a method to reduce peak activation memory by
overlapping IA and OA on the same memory space, which can reduce storage
requirements by up to 50%. RAMAN was implemented on a low-power and
resource-constrained Efinix Ti60 FPGA with 37.2K LUTs and 8.6K register
utilization. RAMAN processes all layers of the MobileNetV1 model at 98.47
GOp/s/W and the DS-CNN model at 79.68 GOp/s/W by leveraging both weight and
activation sparsity.
|
[
{
"version": "v1",
"created": "Sat, 10 Jun 2023 17:25:58 GMT"
}
] | 2023-06-13T00:00:00 |
[
[
"Krishna",
"Adithya",
""
],
[
"Nudurupati",
"Srikanth Rohit",
""
],
[
"G",
"Chandana D",
""
],
[
"Dwivedi",
"Pritesh",
""
],
[
"van Schaik",
"André",
""
],
[
"Mehendale",
"Mahesh",
""
],
[
"Thakur",
"Chetan Singh",
""
]
] |
new_dataset
| 0.963713 |