Schema: id, submitter, authors, title, comments, journal-ref, doi, report-no, categories, license, and abstract are strings (submitter, comments, journal-ref, doi, and report-no may be null); license takes 9 distinct values and prediction a single value; versions and authors_parsed are lists; update_date is a timestamp; probability is a float64 between 0.95 and 1.

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
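The fields above amount to a simple record schema: arXiv metadata plus a classifier's prediction and its probability. As a minimal, hedged sketch of how records with this schema could be inspected programmatically, assuming (hypothetically) that they are published as a Hugging Face dataset, the repository path below is a placeholder and not part of the original document:

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual dataset path if one exists.
ds = load_dataset("user/arxiv-prediction-records", split="train")

# Keep only high-confidence predictions and print a few schema fields.
confident = ds.filter(lambda r: float(r["probability"]) >= 0.99)
for record in confident.select(range(min(5, len(confident)))):
    print(record["id"], record["update_date"], record["prediction"], record["probability"])
```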
2306.04890
|
Denizalp Goktas
|
Denizalp Goktas and Jiayi Zhao and Amy Greenwald
|
T\^atonnement in Homothetic Fisher Markets
|
33 pages, 2 figures, appeared at EC'23
| null | null | null |
cs.GT econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A prevalent theme in the economics and computation literature is to identify
natural price-adjustment processes by which sellers and buyers in a market can
discover equilibrium prices. An example of such a process is t\^atonnement, an
auction-like algorithm first proposed in 1874 by French economist Walras in
which sellers adjust prices based on the Marshallian demands of buyers. A dual
concept in consumer theory is a buyer's Hicksian demand. In this paper, we
identify the maximum of the absolute value of the elasticity of the Hicksian
demand as an economic parameter sufficient to capture and explain a range of
convergent and non-convergent t\^atonnement behaviors in a broad class of
markets. In particular, we prove the convergence of t\^atonnement at a rate of
$O((1+\varepsilon^2)/T)$, in homothetic Fisher markets with bounded price
elasticity of Hicksian demand, i.e., Fisher markets in which consumers have
preferences represented by homogeneous utility functions and the price
elasticity of their Hicksian demand is bounded, where $\varepsilon \geq 0$ is
the maximum absolute value of the price elasticity of Hicksian demand across
all buyers. Our result not only generalizes known convergence results for CES
Fisher markets, but extends them to mixed nested CES markets and Fisher markets
with continuous, possibly non-concave, homogeneous utility functions. Our
convergence rate covers the full spectrum of nested CES utilities, including
Leontief and linear utilities, unifying previously existing disparate
convergence and non-convergence results. In particular, for $\varepsilon = 0$,
i.e., Leontief markets, we recover the best-known convergence rate of $O(1/T)$,
and as $\varepsilon \to \infty$, e.g., linear Fisher markets, we obtain
non-convergent behavior, as expected.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 02:38:15 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Goktas",
"Denizalp",
""
],
[
"Zhao",
"Jiayi",
""
],
[
"Greenwald",
"Amy",
""
]
] |
new_dataset
| 0.993728 |
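For context on the record above: tâtonnement, as described in the abstract, adjusts prices in the direction of excess demand. The sketch below is a generic, textbook-style discrete-time tâtonnement loop, not the paper's algorithm; the demand function, step size, and unit supply are illustrative assumptions.

```python
import numpy as np

def tatonnement(excess_demand, p0, step=0.05, iters=2000, floor=1e-6):
    """Generic price-adjustment loop: nudge each price in the direction of its
    excess demand and keep prices positive. A textbook sketch only."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        p = np.maximum(floor, p + step * excess_demand(p))
    return p

# Toy Fisher-style example: two goods, demand d_i = share_i * budget / p_i, unit supply.
budget, shares = 1.0, np.array([0.6, 0.4])
prices = tatonnement(lambda p: shares * budget / p - 1.0, p0=[1.0, 1.0])
# prices approaches [0.6, 0.4], where demand equals the unit supply of each good.
```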
2306.04892
|
Sridhar Chimalakonda
|
Mir Sameed Ali, Nikhil Manjunath, Sridhar Chimalakonda
|
X-COBOL: A Dataset of COBOL Repositories
|
5 pages
| null | null | null |
cs.SE cs.PL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite being proposed as early as 1959, COBOL (Common Business-Oriented
Language) still predominantly acts as an integral part of the majority of
operations of several financial, banking, and governmental organizations. To
support the inevitable modernization and maintenance of legacy systems written
in COBOL, it is essential for organizations, researchers, and developers to
understand the nature and source code of COBOL programs. However, to the best
of our knowledge, no existing dataset provides data on COBOL software projects,
motivating the need for such a dataset. Thus, to aid empirical
research on comprehending COBOL in open-source repositories, we constructed a
dataset of 84 COBOL repositories mined from GitHub, containing rich metadata on
the development cycle of the projects. We envision that researchers can utilize
our dataset to study COBOL projects' evolution, code properties and develop
tools to support their development. Our dataset also provides 1255 COBOL files
present inside the mined repositories. The dataset and artifacts are available
at https://doi.org/10.5281/zenodo.7968845.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 02:42:09 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Ali",
"Mir Sameed",
""
],
[
"Manjunath",
"Nikhil",
""
],
[
"Chimalakonda",
"Sridhar",
""
]
] |
new_dataset
| 0.999906 |
2306.04926
|
Yousuf Khan
|
Yousuf A. Khan, Clarisse Hokia, Jennifer Xu, Ben Ehlert
|
covLLM: Large Language Models for COVID-19 Biomedical Literature
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The COVID-19 pandemic led to 1.1 million deaths in the United States, despite
the explosion of coronavirus research. These new findings are slow to translate
to clinical interventions, leading to poorer patient outcomes and unnecessary
deaths. One reason is that clinicians, overwhelmed by patients, struggle to
keep pace with the rate of new coronavirus literature. A potential solution is
developing a tool for evaluating coronavirus literature using large language
models (LLMs) -- neural networks that are deployed for natural language
processing. LLMs can be used to summarize and extract user-specified
information. The greater availability and advancement of LLMs and pre-processed
coronavirus literature databases provide the opportunity to assist clinicians
in evaluating coronavirus literature through a coronavirus literature specific
LLM (covLLM), a tool that directly takes an inputted research article and a
user query to return an answer. Using the COVID-19 Open Research Dataset
(CORD-19), we produced two datasets: (1) synCovid, which uses a combination of
handwritten prompts and synthetic prompts generated using OpenAI, and (2) real
abstracts, which contains abstract and title pairs. covLLM was trained with
LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca
and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real
abstract datasets. These models were evaluated by two human evaluators and
ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract
pairs datasets performs competitively with ChatGPT and outperforms covLLM
trained primarily using the Alpaca dataset.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 04:08:32 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Khan",
"Yousuf A.",
""
],
[
"Hokia",
"Clarisse",
""
],
[
"Xu",
"Jennifer",
""
],
[
"Ehlert",
"Ben",
""
]
] |
new_dataset
| 0.992584 |
2306.04932
|
Chaoyang Song
|
Xiaobo Liu, Fang Wan, Sheng Ge, Haokun Wang, Haoran Sun, and Chaoyang
Song
|
Jigsaw-based Benchmarking for Learning Robotic Manipulation
|
7 pages, 7 figures, accepted to 2023 IEEE International Conference on
Advanced Robotics and Mechatronics (ICARM)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Benchmarking provides experimental evidence of the scientific baseline to
enhance the progression of fundamental research, which is also applicable to
robotics. In this paper, we propose a method to benchmark metrics of robotic
manipulation, which addresses the spatial-temporal reasoning skills for robot
learning with the jigsaw game. In particular, our approach exploits a simple
set of jigsaw pieces by designing a structured protocol, which can be highly
customizable according to a wide range of task specifications. Researchers can
selectively adopt the proposed protocol to benchmark their research outputs at
a comparable scale across functional, task, and system levels of detail. The
purpose is to provide a potential look-up table for learning-based robot
manipulation, commonly available in other engineering disciplines, to
facilitate the adoption of robotics through calculated, empirical, and
systematic experimental evidence.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 04:29:27 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Liu",
"Xiaobo",
""
],
[
"Wan",
"Fang",
""
],
[
"Ge",
"Sheng",
""
],
[
"Wang",
"Haokun",
""
],
[
"Sun",
"Haoran",
""
],
[
"Song",
"Chaoyang",
""
]
] |
new_dataset
| 0.962512 |
2306.04948
|
Wei-Yao Wang
|
Wei-Yao Wang, Yung-Chang Huang, Tsi-Ui Ik, Wen-Chih Peng
|
ShuttleSet: A Human-Annotated Stroke-Level Singles Dataset for Badminton
Tactical Analysis
|
KDD 2023. Project page: https://github.com/wywyWang/CoachAI-Projects
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With the recent progress in sports analytics, deep learning approaches have
demonstrated the effectiveness of mining insights into players' tactics for
improving performance quality and fan engagement. This is attributed to the
availability of public ground-truth datasets. While there are a few available
datasets for turn-based sports for action detection, these datasets severely
lack structured source data and stroke-level records since these require
high-cost labeling efforts from domain experts and are hard to detect using
automatic techniques. Consequently, the development of artificial intelligence
approaches is significantly hindered when existing models are applied to more
challenging structured turn-based sequences. In this paper, we present
ShuttleSet, the largest publicly-available badminton singles dataset with
annotated stroke-level records. It contains 104 sets, 3,685 rallies, and 36,492
strokes in 44 matches between 2018 and 2021 with 27 top-ranking men's singles
and women's singles players. ShuttleSet is manually annotated with a
computer-aided labeling tool to increase the labeling efficiency and
effectiveness of selecting the shot type with a choice of 18 distinct classes,
the corresponding hitting locations, and the locations of both players at each
stroke. In the experiments, we provide multiple benchmarks (i.e., stroke
influence, stroke forecasting, and movement forecasting) with baselines to
illustrate the practicability of using ShuttleSet for turn-based analytics,
which is expected to stimulate both academic and sports communities. Over the
past two years, a visualization platform has been deployed to illustrate the
variability of analysis cases from ShuttleSet for coaches to delve into
players' tactical preferences with human-interactive interfaces, which was also
used by national badminton teams during multiple international high-ranking
matches.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 05:41:42 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Wang",
"Wei-Yao",
""
],
[
"Huang",
"Yung-Chang",
""
],
[
"Ik",
"Tsi-Ui",
""
],
[
"Peng",
"Wen-Chih",
""
]
] |
new_dataset
| 0.999585 |
2306.04962
|
Meng Liu
|
Meng Liu, Ke Liang, Yue Liu, Siwei Wang, Sihang Zhou, Xinwang Liu
|
arXiv4TGC: Large-Scale Datasets for Temporal Graph Clustering
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal graph clustering (TGC) is a crucial task in temporal graph learning.
Its focus is on node clustering on temporal graphs, and it offers greater
flexibility for large-scale graph structures due to the mechanism of temporal
graph methods. However, the development of TGC is currently constrained by a
significant problem: the lack of suitable and reliable large-scale temporal
graph datasets to evaluate clustering performance. In other words, most
existing temporal graph datasets are small, and even large-scale
datasets contain only a limited number of available node labels. This makes
evaluating models for large-scale temporal graph clustering challenging. To
address this challenge, we build arXiv4TGC, a set of novel academic datasets
(including arXivAI, arXivCS, arXivMath, arXivPhy, and arXivLarge) for
large-scale temporal graph clustering. In particular, the largest dataset,
arXivLarge, contains 1.3 million labeled available nodes and 10 million
temporal edges. We further compare the clustering performance with typical
temporal graph learning models on both previous classic temporal graph datasets
and the new datasets proposed in this paper. The clustering performance on
arXiv4TGC differentiates models more clearly, resulting in higher clustering
confidence, and is thus more suitable for large-scale temporal graph
clustering. The arXiv4TGC datasets are publicly available at:
https://github.com/MGitHubL/arXiv4TGC.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 06:37:04 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Liu",
"Meng",
""
],
[
"Liang",
"Ke",
""
],
[
"Liu",
"Yue",
""
],
[
"Wang",
"Siwei",
""
],
[
"Zhou",
"Sihang",
""
],
[
"Liu",
"Xinwang",
""
]
] |
new_dataset
| 0.997939 |
2306.04988
|
Jianfei Guo
|
Jianfei Guo, Nianchen Deng, Xinyang Li, Yeqi Bai, Botian Shi, Chiyu
Wang, Chenjing Ding, Dongliang Wang, Yikang Li
|
StreetSurf: Extending Multi-view Implicit Surface Reconstruction to
Street Views
|
https://ventusff.github.io/streetsurf_web/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel multi-view implicit surface reconstruction technique,
termed StreetSurf, that is readily applicable to street view images in
widely-used autonomous driving datasets, such as Waymo-perception sequences,
without necessarily requiring LiDAR data. As neural rendering research expands
rapidly, its integration into street views has started to draw interest.
Existing approaches on street views either mainly focus on novel view synthesis
with little exploration of the scene geometry, or rely heavily on dense LiDAR
data when investigating reconstruction. Neither of them investigates multi-view
implicit surface reconstruction, especially under settings without LiDAR data.
Our method extends prior object-centric neural surface reconstruction
techniques to address the unique challenges posed by the unbounded street views
that are captured with non-object-centric, long and narrow camera trajectories.
We delimit the unbounded space into three parts, close-range, distant-view and
sky, with aligned cuboid boundaries, and adapt cuboid/hyper-cuboid hash-grids
along with road-surface initialization scheme for finer and disentangled
representation. To further address the geometric errors arising from
textureless regions and insufficient viewing angles, we adopt geometric priors
that are estimated using general purpose monocular models. Coupled with our
implementation of efficient and fine-grained multi-stage ray marching strategy,
we achieve state of the art reconstruction quality in both geometry and
appearance within only one to two hours of training time with a single RTX3090
GPU for each street view sequence. Furthermore, we demonstrate that the
reconstructed implicit surfaces have rich potential for various downstream
tasks, including ray tracing and LiDAR simulation.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 07:19:27 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Guo",
"Jianfei",
""
],
[
"Deng",
"Nianchen",
""
],
[
"Li",
"Xinyang",
""
],
[
"Bai",
"Yeqi",
""
],
[
"Shi",
"Botian",
""
],
[
"Wang",
"Chiyu",
""
],
[
"Ding",
"Chenjing",
""
],
[
"Wang",
"Dongliang",
""
],
[
"Li",
"Yikang",
""
]
] |
new_dataset
| 0.999813 |
2306.05007
|
Jian Liu Mr.
|
Jian Liu, Peilun Li, Raymond Cheng, N. Asokan, Dawn Song
|
Parallel and Asynchronous Smart Contract Execution
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Today's blockchains suffer from low throughput and high latency, which
impedes their widespread adoption of more complex applications like smart
contracts. In this paper, we propose a novel paradigm for smart contract
execution. It distinguishes between consensus nodes and execution nodes:
different groups of execution nodes can execute transactions in parallel;
meanwhile, consensus nodes can asynchronously order transactions and process
execution results. Moreover, it requires no coordination among execution nodes
and can effectively prevent livelocks. We show two ways of applying this
paradigm to blockchains. First, we show how we can make Ethereum support
parallel and asynchronous contract execution \emph{without hard-forks}. Then,
we propose a new public, permissionless blockchain. Our benchmark shows that,
with a fast consensus layer, it can provide a high throughput even for complex
transactions like Cryptokitties gene mixing. It can also protect simple
transactions from being starved by complex transactions.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 07:56:45 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Liu",
"Jian",
""
],
[
"Li",
"Peilun",
""
],
[
"Raymond~Cheng",
"",
""
],
[
"Asokan",
"N.",
""
],
[
"Song",
"Dawn",
""
]
] |
new_dataset
| 0.986239 |
2306.05045
|
Helena Liz L\'opez
|
Helena Liz-L\'opez, Javier Huertas-Tato, Jorge P\'erez-Aracil, Carlos
Casanova-Mateo, Julia Sanz-Justo, David Camacho
|
Spain on Fire: A novel wildfire risk assessment model based on image
satellite processing and atmospheric information
| null | null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Each year, wildfires destroy larger areas of Spain, threatening numerous
ecosystems. Humans cause 90% of them (negligence or provoked) and the behaviour
of individuals is unpredictable. However, atmospheric and environmental
variables affect the spread of wildfires, and they can be analysed by using
deep learning. In order to mitigate the damage of these events we proposed the
novel Wildfire Assessment Model (WAM). Our aim is to anticipate the economic
and ecological impact of a wildfire, assisting managers in resource allocation and
decision making for dangerous regions in Spain, Castilla y Le\'on and
Andaluc\'ia. The WAM uses a residual-style convolutional network architecture
to perform regression over atmospheric variables and the greenness index,
computing necessary resources, the control and extinction time, and the
expected burnt surface area. It is first pre-trained with self-supervision over
100,000 examples of unlabelled data with a masked patch prediction objective
and fine-tuned using 311 samples of wildfires. The pretraining allows the model
to understand situations, outclassing baselines with a 1.4%, 3.7% and 9%
improvement estimating human, heavy and aerial resources; 21% and 10.2% in
expected extinction and control time; and 18.8% in expected burnt area. Using
the WAM we provide an example assessment map of Castilla y Le\'on, visualizing
the expected resources over an entire region.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 08:55:16 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Liz-López",
"Helena",
""
],
[
"Huertas-Tato",
"Javier",
""
],
[
"Pérez-Aracil",
"Jorge",
""
],
[
"Casanova-Mateo",
"Carlos",
""
],
[
"Sanz-Justo",
"Julia",
""
],
[
"Camacho",
"David",
""
]
] |
new_dataset
| 0.998751 |
2306.05076
|
Amr Keleg
|
Amr Keleg and Walid Magdy
|
DLAMA: A Framework for Curating Culturally Diverse Facts for Probing the
Knowledge of Pretrained Language Models
|
Accepted to ACL 2023 (Findings)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A few benchmarking datasets have been released to evaluate the factual
knowledge of pretrained language models. These benchmarks (e.g., LAMA, and
ParaRel) are mainly developed in English and later are translated to form new
multilingual versions (e.g., mLAMA, and mParaRel). Results on these
multilingual benchmarks suggest that using English prompts to recall the facts
from multilingual models usually yields significantly better and more
consistent performance than using non-English prompts. Our analysis shows that
mLAMA is biased toward facts from Western countries, which might affect the
fairness of probing models. We propose a new framework for curating factual
triples from Wikidata that are culturally diverse. A new benchmark DLAMA-v1 is
built of factual triples from three pairs of contrasting cultures having a
total of 78,259 triples from 20 relation predicates. The three pairs comprise
facts representing the (Arab and Western), (Asian and Western), and (South
American and Western) countries respectively. Results on this more balanced benchmark
(DLAMA-v1) show that mBERT performs better on Western facts than
non-Western ones, while monolingual Arabic, English, and Korean models tend to
perform better on their culturally proximate facts. Moreover, both monolingual
and multilingual models tend to make a prediction that is culturally or
geographically relevant to the correct label, even if the prediction is wrong.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 09:59:48 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Keleg",
"Amr",
""
],
[
"Magdy",
"Walid",
""
]
] |
new_dataset
| 0.997636 |
2306.05111
|
Alessandro Saviolo
|
Alessandro Saviolo, Jeffrey Mao, Roshan Balu T M B, Vivek
Radhakrishnan, and Giuseppe Loianno
|
AutoCharge: Autonomous Charging for Perpetual Quadrotor Missions
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Battery endurance represents a key challenge for long-term autonomy and
long-range operations, especially in the case of aerial robots. In this paper,
we propose AutoCharge, an autonomous charging solution for quadrotors that
combines a portable ground station with a flexible, lightweight charging tether
and is capable of universal, highly efficient, and robust charging. We design
and manufacture a pair of circular magnetic connectors to ensure a precise
orientation-agnostic electrical connection between the ground station and the
charging tether. Moreover, we supply the ground station with an electromagnet
that largely increases the tolerance to localization and control errors during
the docking maneuver, while still guaranteeing smooth un-docking once the
charging process is completed. We demonstrate AutoCharge in a perpetual 10-hour
quadrotor flight experiment and show that the docking and un-docking
performance is solidly repeatable, enabling perpetual quadrotor flight
missions.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 11:19:55 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Saviolo",
"Alessandro",
""
],
[
"Mao",
"Jeffrey",
""
],
[
"B",
"Roshan Balu T M",
""
],
[
"Radhakrishnan",
"Vivek",
""
],
[
"Loianno",
"Giuseppe",
""
]
] |
new_dataset
| 0.999348 |
2306.05119
|
Mingqi Gao
|
Mingqi Gao, Xiaojun Wan, Jia Su, Zhefeng Wang, Baoxing Huai
|
Reference Matters: Benchmarking Factual Error Correction for Dialogue
Summarization with Fine-grained Evaluation Framework
|
Accepted to ACL 2023 Main Conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Factuality is important to dialogue summarization. Factual error correction
(FEC) of model-generated summaries is one way to improve factuality. Current
FEC evaluation that relies on factuality metrics is not reliable and detailed
enough. To address this problem, we are the first to manually annotate a FEC
dataset for dialogue summarization containing 4000 items and propose FERRANTI,
a fine-grained evaluation framework based on reference correction that
automatically evaluates the performance of FEC models on different error
categories. Using this evaluation framework, we conduct sufficient experiments
with FEC approaches under a variety of settings and find the best training
modes and significant differences in the performance of the existing approaches
on different factual error categories.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 11:41:39 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Gao",
"Mingqi",
""
],
[
"Wan",
"Xiaojun",
""
],
[
"Su",
"Jia",
""
],
[
"Wang",
"Zhefeng",
""
],
[
"Huai",
"Baoxing",
""
]
] |
new_dataset
| 0.996499 |
2306.05144
|
Spyros Kondylatos
|
Spyros Kondylatos, Ioannis Prapas, Gustau Camps-Valls, Ioannis
Papoutsis
|
Mesogeos: A multi-purpose dataset for data-driven wildfire modeling in
the Mediterranean
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Mesogeos, a large-scale multi-purpose dataset for wildfire
modeling in the Mediterranean. Mesogeos integrates variables representing
wildfire drivers (meteorology, vegetation, human activity) and historical
records of wildfire ignitions and burned areas for 17 years (2006-2022). It is
designed as a cloud-friendly spatio-temporal dataset, namely a datacube,
harmonizing all variables in a grid of 1km x 1km x 1-day resolution. The
datacube structure offers opportunities to assess machine learning (ML) usage
in various wildfire modeling tasks. We extract two ML-ready datasets that
establish distinct tracks to demonstrate this potential: (1) short-term
wildfire danger forecasting and (2) final burned area estimation given the
point of ignition. We define appropriate metrics and baselines to evaluate the
performance of models in each track. By publishing the datacube, along with the
code to create the ML datasets and models, we encourage the community to foster
the implementation of additional tracks for mitigating the increasing threat of
wildfires in the Mediterranean.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 12:11:16 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Kondylatos",
"Spyros",
""
],
[
"Prapas",
"Ioannis",
""
],
[
"Camps-Valls",
"Gustau",
""
],
[
"Papoutsis",
"Ioannis",
""
]
] |
new_dataset
| 0.999816 |
2306.05179
|
Wenxuan Zhang
|
Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia,
Lidong Bing
|
M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining
Large Language Models
| null | null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite the existence of various benchmarks for evaluating natural language
processing models, we argue that human exams are a more suitable means of
evaluating general intelligence for large language models (LLMs), as they
inherently demand a much wider range of abilities such as language
understanding, domain knowledge, and problem-solving skills. To this end, we
introduce M3Exam, a novel benchmark sourced from real and official human exam
questions for evaluating LLMs in a multilingual, multimodal, and multilevel
context. M3Exam exhibits three unique characteristics: (1) multilingualism,
encompassing questions from multiple countries that require strong multilingual
proficiency and cultural knowledge; (2) multimodality, accounting for the
multimodal nature of many exam questions to test the model's multimodal
understanding capability; and (3) multilevel structure, featuring exams from
three critical educational periods to comprehensively assess a model's
proficiency at different levels. In total, M3Exam contains 12,317 questions in
9 diverse languages with three educational levels, where about 23\% of the
questions require processing images for successful solving. We assess the
performance of top-performing LLMs on M3Exam and find that current models,
including GPT-4, still struggle with multilingual text, particularly in
low-resource and non-Latin script languages. Multimodal LLMs also perform
poorly with complex multimodal questions. We believe that M3Exam can be a
valuable resource for comprehensively evaluating LLMs by examining their
multilingual and multimodal abilities and tracking their development. Data and
evaluation code are available at \url{https://github.com/DAMO-NLP-SG/M3Exam}.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 13:21:29 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Zhang",
"Wenxuan",
""
],
[
"Aljunied",
"Sharifah Mahani",
""
],
[
"Gao",
"Chang",
""
],
[
"Chia",
"Yew Ken",
""
],
[
"Bing",
"Lidong",
""
]
] |
new_dataset
| 0.999776 |
2306.05228
|
William Seymour
|
William Seymour, Xiao Zhan, Mark Cote, Jose Such
|
Who are CUIs Really For? Representation and Accessibility in the
Conversational User Interface Literature
|
To appear in the Proceedings of the 2023 ACM conference on
Conversational User Interfaces (CUI 23)
| null |
10.1145/3571884.3603760
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The theme for CUI 2023 is 'designing for inclusive conversation', but who are
CUIs really designed for? The field has its roots in computer science, which
has a long acknowledged diversity problem. Inspired by studies mapping out the
diversity of the CHI and voice assistant literature, we set out to investigate
how these issues have (or have not) shaped the CUI literature. To do this we
reviewed the 46 full-length research papers that have been published at CUI
since its inception in 2019. After detailing the eight papers that engage with
accessibility, social interaction, and performance of gender, we show that 90%
of papers published at CUI with user studies recruit participants from Europe
and North America (or do not specify). To complement existing work in the
community towards diversity we discuss the factors that have contributed to the
current status quo, and offer some initial suggestions as to how we as a CUI
community can continue to improve. We hope that this will form the beginning of
a wider discussion at the conference.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 14:25:22 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Seymour",
"William",
""
],
[
"Zhan",
"Xiao",
""
],
[
"Cote",
"Mark",
""
],
[
"Such",
"Jose",
""
]
] |
new_dataset
| 0.996789 |
2306.05246
|
Qiujie Dong
|
Qiujie Dong, Rui Xu, Xiaoran Gong, Zixiong Wang, Shuangmin Chen,
Shiqing Xin, Changhe Tu
|
Mesh-MLP: An all-MLP Architecture for Mesh Classification and Semantic
Segmentation
|
8 pages, 6 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of geometric deep learning techniques, many
mesh-based convolutional operators have been proposed to bridge irregular mesh
structures and popular backbone networks. In this paper, we show that while
convolutions are helpful, a simple architecture based exclusively on
multi-layer perceptrons (MLPs) is competent enough to deal with mesh
classification and semantic segmentation. Our new network architecture, named
Mesh-MLP, takes mesh vertices equipped with the heat kernel signature (HKS) and
dihedral angles as the input, replaces the convolution module of a ResNet with
Multi-layer Perceptron (MLP), and utilizes layer normalization (LN) to perform
the normalization of the layers. The all-MLP architecture operates in an
end-to-end fashion and does not include a pooling module. Extensive
experimental results on the mesh classification/segmentation tasks validate the
effectiveness of the all-MLP architecture.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 14:44:57 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Dong",
"Qiujie",
""
],
[
"Xu",
"Rui",
""
],
[
"Gong",
"Xiaoran",
""
],
[
"Wang",
"Zixiong",
""
],
[
"Chen",
"Shuangmin",
""
],
[
"Xin",
"Shiqing",
""
],
[
"Tu",
"Changhe",
""
]
] |
new_dataset
| 0.990079 |
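The Mesh-MLP record above describes replacing a ResNet's convolution module with MLPs and layer normalization over per-vertex features (HKS and dihedral angles). The block below is a generic residual-MLP sketch in PyTorch of that kind of building block; the widths, activation, and feature layout are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualMLPBlock(nn.Module):
    """Residual block built from layer normalization and a two-layer MLP,
    applied independently to each vertex's feature vector."""
    def __init__(self, dim: int = 256, hidden: int = 512):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_vertices, dim) per-vertex features, e.g. HKS + dihedral-angle descriptors.
        return x + self.mlp(self.norm(x))

# Example: 1000 vertices with 256-dimensional features.
features = torch.randn(1000, 256)
out = ResidualMLPBlock()(features)
```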
2306.05262
|
Hyunseo Kim
|
Hyunseo Kim, Hye Jung Yoon, Minji Kim, Dong-Sig Han, and Byoung-Tak
Zhang
|
EXOT: Exit-aware Object Tracker for Safe Robotic Manipulation of Moving
Object
|
2023 IEEE International Conference on Robotics and Automation (ICRA)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current robotic hand manipulation narrowly operates with objects in
predictable positions in limited environments. Thus, when the location of the
target object deviates severely from the expected location, a robot sometimes
responds in an unexpected way, especially when it operates with a human. For
safe robot operation, we propose the EXit-aware Object Tracker (EXOT) on a
robot hand camera that recognizes an object's absence during manipulation. The
robot decides whether to proceed by examining the tracker's bounding box output
containing the target object. We adopt an out-of-distribution classifier for
more accurate object recognition since trackers can mistrack a background as a
target object. To the best of our knowledge, our method is the first approach
of applying an out-of-distribution classification technique to a tracker
output. We evaluate our method on the first-person video benchmark dataset,
TREK-150, and on the custom dataset, RMOT-223, that we collect from the UR5e
robot. Then we test our tracker on the UR5e robot in real-time with a
conveyor-belt sushi task, to examine the tracker's ability to track target
dishes and to determine the exit status. Our tracker shows 38% higher
exit-aware performance than a baseline method. The dataset and the code will be
released at https://github.com/hskAlena/EXOT.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 15:03:47 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Kim",
"Hyunseo",
""
],
[
"Yoon",
"Hye Jung",
""
],
[
"Kim",
"Minji",
""
],
[
"Han",
"Dong-Sig",
""
],
[
"Zhang",
"Byoung-Tak",
""
]
] |
new_dataset
| 0.990877 |
2306.05366
|
Nelson Vadori
|
Nelson Vadori and Rahul Savani
|
Ordinal Potential-based Player Rating
| null | null | null | null |
cs.GT cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A two-player symmetric zero-sum game is transitive if for any pure strategies
$x$, $y$, $z$, if $x$ is better than $y$, and $y$ is better than $z$, then $x$
is better than $z$. It was recently observed that the Elo rating fails at
preserving transitive relations among strategies and therefore cannot correctly
extract the transitive component of a game. Our first contribution is to show
that the Elo rating actually does preserve transitivity when computed in the
right space. Precisely, using a suitable invertible mapping $\varphi$, we first
apply $\varphi$ to the game, then compute Elo ratings, then go back to the
original space by applying $\varphi^{-1}$. We provide a characterization of
transitive games as a weak variant of ordinal potential games with additively
separable potential functions. Leveraging this insight, we introduce the
concept of transitivity order, the minimum number of invertible mappings
required to transform the payoff of a transitive game into (differences of) its
potential function. The transitivity order is a tool to classify transitive
games, with Elo games being an example of transitive games of order one. Most
real-world games have both transitive and non-transitive (cyclic) components,
and we use our analysis of transitivity to extract the transitive (potential)
component of an arbitrary game. We link transitivity to the known concept of
sign-rank: transitive games have sign-rank two; arbitrary games may have higher
sign-rank. Using a neural network-based architecture, we learn a decomposition
of an arbitrary game into transitive and cyclic components that prioritises
capturing the sign pattern of the game. In particular, a transitive game always
has just one component in its decomposition, the potential component. We
provide a comprehensive evaluation of our methodology using both toy examples
and empirical data from real-world games.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 17:08:52 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Vadori",
"Nelson",
""
],
[
"Savani",
"Rahul",
""
]
] |
new_dataset
| 0.979233 |
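The record above concerns player ratings; its abstract shows that the Elo rating preserves transitivity when computed after an invertible mapping $\varphi$. The sketch below covers only the standard Elo update (the K-factor and 400-point logistic scale are the conventional defaults); the paper's transform-then-rate construction is not reproduced here.

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """Standard Elo update. score_a is 1.0 for a win by player A, 0.5 for a draw,
    and 0.0 for a loss; the update is zero-sum between the two players."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Example: a 1600-rated player beats a 1500-rated player; the winner gains
# fewer points than an upset would have yielded, since the win was expected.
r_winner, r_loser = elo_update(1600.0, 1500.0, score_a=1.0)
```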
2306.05376
|
Akash Awasthi
|
Akash Awasthi, Son Ly, Jaer Nizam, Samira Zare, Videet Mehta, Safwan
Ahmed, Keshav Shah, Ramakrishna Nemani, Saurabh Prasad, Hien Van Nguyen
|
Anomaly Detection in Satellite Videos using Diffusion Models
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The definition of anomaly detection is the identification of an unexpected
event. Real-time detection of extreme events such as wildfires, cyclones, or
floods using satellite data has become crucial for disaster management.
Although several earth-observing satellites provide information about
disasters, satellites in geostationary orbit provide data at intervals as
frequent as every minute, effectively creating a video from space. There are
many techniques that have been proposed to identify anomalies in surveillance
videos; however, the available datasets do not exhibit such dynamic behavior, so we
discuss an anomaly-detection framework that can work on very high-frequency datasets to
find very fast-moving anomalies. In this work, we present a diffusion model
which does not need any motion component to capture the fast-moving anomalies
and outperforms the other baseline methods.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 19:17:39 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Awasthi",
"Akash",
""
],
[
"Ly",
"Son",
""
],
[
"Nizam",
"Jaer",
""
],
[
"Zare",
"Samira",
""
],
[
"Mehta",
"Videet",
""
],
[
"Ahmed",
"Safwan",
""
],
[
"Shah",
"Keshav",
""
],
[
"Nemani",
"Ramakrishna",
""
],
[
"Prasad",
"Saurabh",
""
],
[
"Van Nguyen",
"Hien",
""
]
] |
new_dataset
| 0.98317 |
2306.05381
|
Xianda Chen
|
Xianda Chen, Meixin Zhu, Kehua Chen, Pengqin Wang, Hongliang Lu, Hui
Zhong, Xu Han, Yinhai Wang
|
FollowNet: A Comprehensive Benchmark for Car-Following Behavior Modeling
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Car-following is a control process in which a following vehicle (FV) adjusts
its acceleration to keep a safe distance from the lead vehicle (LV). Recently,
there has been a boom in data-driven models that enable more accurate
modeling of car-following through real-world driving datasets. Although there
are several public datasets available, their formats are not always consistent,
making it challenging to determine the state-of-the-art models and how well a
new model performs compared to existing ones. In contrast, research fields such
as image recognition and object detection have benchmark datasets like
ImageNet, Microsoft COCO, and KITTI. To address this gap and promote the
development of microscopic traffic flow modeling, we establish a public
benchmark dataset for car-following behavior modeling. The benchmark consists
of more than 80K car-following events extracted from five public driving
datasets using the same criteria. These events cover diverse situations
including different road types, various weather conditions, and mixed traffic
flows with autonomous vehicles. Moreover, to give an overview of current
progress in car-following modeling, we implemented and tested representative
baseline models with the benchmark. Results show that the deep deterministic
policy gradient (DDPG) based model performs competitively with a lower MSE for
spacing compared to traditional intelligent driver model (IDM) and
Gazis-Herman-Rothery (GHR) models, and a smaller collision rate compared to
fully connected neural network (NN) and long short-term memory (LSTM) models in
most datasets. The established benchmark will provide researchers with
consistent data formats and metrics for cross-comparing different car-following
models, promoting the development of more accurate models. We open-source our
dataset and implementation code in
https://github.com/HKUST-DRIVE-AI-LAB/FollowNet.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 08:59:26 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Chen",
"Xianda",
""
],
[
"Zhu",
"Meixin",
""
],
[
"Chen",
"Kehua",
""
],
[
"Wang",
"Pengqin",
""
],
[
"Lu",
"Hongliang",
""
],
[
"Zhong",
"Hui",
""
],
[
"Han",
"Xu",
""
],
[
"Wang",
"Yinhai",
""
]
] |
new_dataset
| 0.97674 |
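The FollowNet record above benchmarks car-following models against classical baselines, including the Intelligent Driver Model (IDM). The sketch below implements only that textbook baseline; the parameter values are common illustrative defaults, not those used in the benchmark.

```python
import math

def idm_acceleration(v, v_lead, gap, v0=30.0, T=1.5, a_max=1.5, b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model: acceleration of the following vehicle given its
    speed v, the lead vehicle's speed v_lead, and the bumper-to-bumper gap (SI units)."""
    dv = v - v_lead  # closing speed (positive when approaching the leader)
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Example: following at 25 m/s, leader at 22 m/s, 60 m gap -> mild braking.
acc = idm_acceleration(v=25.0, v_lead=22.0, gap=60.0)
```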
2306.05390
|
Dongdong Chen
|
Qinhong Yang and Dongdong Chen and Zhentao Tan and Qiankun Liu and Qi
Chu and Jianmin Bao and Lu Yuan and Gang Hua and Nenghai Yu
|
HQ-50K: A Large-scale, High-quality Dataset for Image Restoration
|
Dataset and code will be available at
https://github.com/littleYaang/HQ-50K
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a new large-scale image restoration dataset, called
HQ-50K, which contains 50,000 high-quality images with rich texture details and
semantic diversity. We analyze existing image restoration datasets from five
different perspectives, including data scale, resolution, compression rates,
texture details, and semantic coverage. However, we find that all of these
datasets are deficient in some aspects. In contrast, HQ-50K considers all of
these five aspects during the data curation process and meets all requirements.
We also present a new Degradation-Aware Mixture of Expert (DAMoE) model, which
enables a single model to handle multiple corruption types and unknown levels.
Our extensive experiments demonstrate that HQ-50K consistently improves the
performance on various image restoration tasks, such as super-resolution,
denoising, dejpeg, and deraining. Furthermore, our proposed DAMoE, trained on
our dataset, outperforms existing state-of-the-art unified models designed for
multiple restoration tasks and levels. The dataset and code are available at
\url{https://github.com/littleYaang/HQ-50K}.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 17:44:21 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Yang",
"Qinhong",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Tan",
"Zhentao",
""
],
[
"Liu",
"Qiankun",
""
],
[
"Chu",
"Qi",
""
],
[
"Bao",
"Jianmin",
""
],
[
"Yuan",
"Lu",
""
],
[
"Hua",
"Gang",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.999875 |
2306.05401
|
Ori Press
|
Ori Press, Steffen Schneider, Matthias K\"ummerer, Matthias Bethge
|
RDumb: A simple approach that questions our progress in continual
test-time adaptation
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Test-Time Adaptation (TTA) allows updating pretrained models to changing
data distributions at deployment time. While early work tested these algorithms
for individual fixed distribution shifts, recent work proposed and applied
methods for continual adaptation over long timescales. To examine the reported
progress in the field, we propose the Continuously Changing Corruptions (CCC)
benchmark to measure asymptotic performance of TTA techniques. We find that
eventually all but one of the state-of-the-art methods collapse and perform worse than
a non-adapting model, including models specifically proposed to be robust to
performance collapse. In addition, we introduce a simple baseline, "RDumb",
that periodically resets the model to its pretrained state. RDumb performs
better than or on par with the previously proposed state of the art in all
considered benchmarks. Our results show that previous TTA approaches are
neither effective at regularizing adaptation to avoid collapse nor able to
outperform a simplistic resetting strategy.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 17:52:34 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Press",
"Ori",
""
],
[
"Schneider",
"Steffen",
""
],
[
"Kümmerer",
"Matthias",
""
],
[
"Bethge",
"Matthias",
""
]
] |
new_dataset
| 0.996431 |
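RDumb, per the record above, periodically resets the adapted model to its pretrained weights. The loop below is a generic PyTorch-style sketch of that periodic-reset idea; the reset interval, the adaptation step, and the data stream are illustrative assumptions, not the authors' implementation.

```python
import copy

def reset_based_tta(model, adapt_step, stream, reset_every=1000):
    """Continual test-time adaptation with periodic resets: every `reset_every`
    batches, restore the pretrained weights to avoid long-run collapse."""
    pretrained_state = copy.deepcopy(model.state_dict())
    for i, batch in enumerate(stream):
        if i > 0 and i % reset_every == 0:
            model.load_state_dict(pretrained_state)  # reset to the pretrained model
        adapt_step(model, batch)  # one TTA update, e.g. entropy minimization on the batch
```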
2306.05407
|
Paul-Edouard Sarlin
|
Paul-Edouard Sarlin, Eduard Trulls, Marc Pollefeys, Jan Hosang, Simon
Lynen
|
SNAP: Self-Supervised Neural Maps for Visual Positioning and Semantic
Understanding
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic 2D maps are commonly used by humans and machines for navigation
purposes, whether it's walking or driving. However, these maps have
limitations: they lack detail, often contain inaccuracies, and are difficult to
create and maintain, especially in an automated fashion. Can we use raw imagery
to automatically create better maps that can be easily interpreted by both
humans and machines? We introduce SNAP, a deep network that learns rich neural
2D maps from ground-level and overhead images. We train our model to align
neural maps estimated from different inputs, supervised only with camera poses
over tens of millions of StreetView images. SNAP can resolve the location of
challenging image queries beyond the reach of traditional methods,
outperforming the state of the art in localization by a large margin. Moreover,
our neural maps encode not only geometry and appearance but also high-level
semantics, discovered without explicit supervision. This enables effective
pre-training for data-efficient semantic scene understanding, with the
potential to unlock cost-efficient creation of more detailed maps.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 17:54:47 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Sarlin",
"Paul-Edouard",
""
],
[
"Trulls",
"Eduard",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Hosang",
"Jan",
""
],
[
"Lynen",
"Simon",
""
]
] |
new_dataset
| 0.996372 |
2306.05410
|
Zezhou Cheng
|
Zezhou Cheng, Carlos Esteves, Varun Jampani, Abhishek Kar, Subhransu
Maji, Ameesh Makadia
|
LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs
|
Project website: https://people.cs.umass.edu/~zezhoucheng/lu-nerf/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A critical obstacle preventing NeRF models from being deployed broadly in the
wild is their reliance on accurate camera poses. Consequently, there is growing
interest in extending NeRF models to jointly optimize camera poses and scene
representation, which offers an alternative to off-the-shelf SfM pipelines
which have well-understood failure modes. Existing approaches for unposed NeRF
operate under limited assumptions, such as a prior pose distribution or coarse
pose initialization, making them less effective in a general setting. In this
work, we propose a novel approach, LU-NeRF, that jointly estimates camera poses
and neural radiance fields with relaxed assumptions on pose configuration. Our
approach operates in a local-to-global manner, where we first optimize over
local subsets of the data, dubbed mini-scenes. LU-NeRF estimates local pose and
geometry for this challenging few-shot task. The mini-scene poses are brought
into a global reference frame through a robust pose synchronization step, where
a final global optimization of pose and scene can be performed. We show our
LU-NeRF pipeline outperforms prior attempts at unposed NeRF without making
restrictive assumptions on the pose prior. This allows us to operate in the
general SE(3) pose setting, unlike the baselines. Our results also indicate our
model can be complementary to feature-based SfM pipelines as it compares
favorably to COLMAP on low-texture and low-resolution images.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 17:56:22 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Cheng",
"Zezhou",
""
],
[
"Esteves",
"Carlos",
""
],
[
"Jampani",
"Varun",
""
],
[
"Kar",
"Abhishek",
""
],
[
"Maji",
"Subhransu",
""
],
[
"Makadia",
"Ameesh",
""
]
] |
new_dataset
| 0.991861 |
2306.05411
|
Duy-Kien Nguyen
|
Duy-Kien Nguyen, Vaibhav Aggarwal, Yanghao Li, Martin R. Oswald,
Alexander Kirillov, Cees G. M. Snoek, Xinlei Chen
|
R-MAE: Regions Meet Masked Autoencoders
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Vision-specific concepts such as "region" have played a key role in extending
general machine learning frameworks to tasks like object detection. Given the
success of region-based detectors for supervised learning and the progress of
intra-image methods for contrastive learning, we explore the use of regions for
reconstructive pre-training. Starting from Masked Autoencoding (MAE) both as a
baseline and an inspiration, we propose a parallel pre-text task tailored to
address the one-to-many mapping between images and regions. Since such regions
can be generated in an unsupervised way, our approach (R-MAE) inherits the wide
applicability from MAE, while being more "region-aware". We conduct thorough
analyses during the development of R-MAE, and converge on a variant that is
both effective and efficient (1.3% overhead over MAE). Moreover, it shows
consistent quantitative improvements when generalized to various pre-training
data and downstream detection and segmentation benchmarks. Finally, we provide
extensive qualitative visualizations to enhance the understanding of R-MAE's
behaviour and potential. Code will be made available at
https://github.com/facebookresearch/r-mae.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 17:56:46 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Nguyen",
"Duy-Kien",
""
],
[
"Aggarwal",
"Vaibhav",
""
],
[
"Li",
"Yanghao",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Kirillov",
"Alexander",
""
],
[
"Snoek",
"Cees G. M.",
""
],
[
"Chen",
"Xinlei",
""
]
] |
new_dataset
| 0.977944 |
2306.05419
|
M. Esat Kalfaoglu
|
M. Esat Kalfaoglu, Halil Ibrahim Ozturk, Ozsel Kilinc, Alptekin
Temizel
|
TopoMask: Instance-Mask-Based Formulation for the Road Topology Problem
via Transformer-Based Architecture
|
4th in OLS and 2nd in the F1-score in OpenLane Topology Challenge
2023
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Driving scene understanding task involves detecting static elements such as
lanes, traffic signs, and traffic lights, and their relationships with each
other. To facilitate the development of comprehensive scene understanding
solutions using multiple camera views, a new dataset called Road Genome
(OpenLane-V2) has been released. This dataset allows for the exploration of
complex road connections and situations where lane markings may be absent.
Instead of using traditional lane markings, the lanes in this dataset are
represented by centerlines, which offer a more suitable representation of lanes
and their connections. In this study, we have introduced a new approach called
TopoMask for predicting centerlines in road topology. Unlike existing
approaches in the literature that rely on keypoints or parametric methods,
TopoMask utilizes an instance-mask based formulation with a transformer-based
architecture and, in order to enrich the mask instances with flow information,
a direction label representation is proposed. TopoMask ranked 4th in the
OpenLane-V2 Score (OLS) and 2nd in the F1 score of centerline prediction
in the OpenLane Topology Challenge 2023. In comparison to the current
state-of-the-art method, TopoNet, the proposed method has achieved similar
performance in Frechet-based lane detection and outperformed TopoNet in
Chamfer-based lane detection without utilizing its scene graph neural network.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 17:58:57 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Kalfaoglu",
"M. Esat",
""
],
[
"Ozturk",
"Halil Ibrahim",
""
],
[
"Kilinc",
"Ozsel",
""
],
[
"Temizel",
"Alptekin",
""
]
] |
new_dataset
| 0.999842 |
2306.05424
|
Muhammad Maaz Mr
|
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
|
Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and
Language Models
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Conversation agents fueled by Large Language Models (LLMs) are providing a
new way to interact with visual data. While there have been initial attempts
for image-based conversation models, this work addresses the underexplored
field of video-based conversation by introducing Video-ChatGPT. It is a
multimodal model that merges a video-adapted visual encoder with a LLM. The
model is capable of understanding and generating human-like conversations about
videos. We introduce a new dataset of 100,000 video-instruction pairs used to
train Video-ChatGPT, acquired via a manual and semi-automated pipeline that is
easily scalable and robust to label noise. We also develop a quantitative
evaluation framework for video-based dialogue models to objectively analyse the
strengths and weaknesses of proposed models. Our code, models, instruction-sets
and demo are released at https://github.com/mbzuai-oryx/Video-ChatGPT.
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 17:59:56 GMT"
}
] | 2023-06-09T00:00:00 |
[
[
"Maaz",
"Muhammad",
""
],
[
"Rasheed",
"Hanoona",
""
],
[
"Khan",
"Salman",
""
],
[
"Khan",
"Fahad Shahbaz",
""
]
] |
new_dataset
| 0.97139 |
2105.02611
|
Martin Zimmermann
|
Shibashis Guha, Isma\"el Jecker, Karoliina Lehtinen, Martin Zimmermann
|
A Bit of Nondeterminism Makes Pushdown Automata Expressive and Succinct
| null | null | null | null |
cs.FL cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We study the expressiveness and succinctness of history-deterministic
pushdown automata (HD-PDA) over finite words, that is, pushdown automata whose
nondeterminism can be resolved based on the run constructed so far, but
independently of the remainder of the input word. These are also known as
good-for-games pushdown automata. We prove that HD-PDA recognise more languages
than deterministic PDA (DPDA) but not all context-free languages (CFL). This
class is orthogonal to unambiguous CFL. We further show that HD-PDA can be
exponentially more succinct than DPDA, while PDA can be double-exponentially
more succinct than HD-PDA. We also study HDness in visibly pushdown automata
(VPA), which enjoy better closure properties than PDA, and for which we show
that deciding HDness is ExpTime-complete. HD-VPA can be exponentially more
succinct than deterministic VPA, while VPA can be exponentially more succinct
than HD-VPA. Both of these lower bounds are tight. We then compare HD-PDA with
PDA for which composition with games is well-behaved, i.e. good-for-games
automata. We show that these two notions coincide, but only if we consider
potentially infinitely branching games. Finally, we study the complexity of
resolving nondeterminism in HD-PDA. Every HD-PDA has a positional resolver, a
function that resolves nondeterminism and that depends only on the current
configuration. Pushdown transducers are sufficient to implement the resolvers
of HD-VPA, but not those of HD-PDA. HD-PDA with finite-state resolvers are
determinisable.
|
[
{
"version": "v1",
"created": "Thu, 6 May 2021 12:36:26 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Oct 2022 06:14:37 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 12:39:52 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Guha",
"Shibashis",
""
],
[
"Jecker",
"Ismaël",
""
],
[
"Lehtinen",
"Karoliina",
""
],
[
"Zimmermann",
"Martin",
""
]
] |
new_dataset
| 0.998732 |
2109.10333
|
Michael Lampis
|
Michael Lampis and Valia Mitsou
|
Fine-grained Meta-Theorems for Vertex Integrity
|
Presented in ISAAC 2021
| null | null | null |
cs.CC cs.DS cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vertex Integrity is a graph measure which sits squarely between two more
well-studied notions, namely vertex cover and tree-depth, and that has recently
gained attention as a structural graph parameter. In this paper we investigate
the algorithmic trade-offs involved with this parameter from the point of view
of algorithmic meta-theorems for First-Order (FO) and Monadic Second Order
(MSO) logic. Our positive results are the following: (i) given a graph $G$ of
vertex integrity $k$ and an FO formula $\phi$ with $q$ quantifiers, deciding if
$G$ satisfies $\phi$ can be done in time $2^{O(k^2q+q\log q)}+n^{O(1)}$; (ii)
for MSO formulas with $q$ quantifiers, the same can be done in time
$2^{2^{O(k^2+kq)}}+n^{O(1)}$. Both results are obtained using kernelization
arguments, which pre-process the input to sizes $2^{O(k^2)}q$ and
$2^{O(k^2+kq)}$ respectively.
The complexities of our meta-theorems are significantly better than the
corresponding meta-theorems for tree-depth, which involve towers of
exponentials. However, they are worse than the roughly $2^{O(kq)}$ and
$2^{2^{O(k+q)}}$ complexities known for corresponding meta-theorems for vertex
cover. To explain this deterioration we present two formula constructions which
lead to fine-grained complexity lower bounds and establish that the dependence
of our meta-theorems on $k$ is best possible. More precisely, we show that it
is not possible to decide FO formulas with $q$ quantifiers in time
$2^{o(k^2q)}$, and that there exists a constant-size MSO formula which cannot
be decided in time $2^{2^{o(k^2)}}$, both under the ETH. Hence, the quadratic
blow-up in the dependence on $k$ is unavoidable and vertex integrity has a
complexity for FO and MSO logic which is truly intermediate between vertex
cover and tree-depth.
|
[
{
"version": "v1",
"created": "Tue, 21 Sep 2021 17:32:27 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 14:38:41 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Lampis",
"Michael",
""
],
[
"Mitsou",
"Valia",
""
]
] |
new_dataset
| 0.999359 |
2202.13889
|
Efi Fogel
|
Nir Goren, Efi Fogel, and Dan Halperin
|
CGAL Made More Accessible
|
57 pages
| null | null | null |
cs.CG cs.MS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce bindings that enable the convenient, efficient, and reliable use
of software modules of CGAL (Computational Geometry Algorithm Library), which
are written in C++, from within code written in Python. There are different
tools that facilitate the creation of such bindings. We present a short study
that compares three main tools and leads to our tool of choice. The
implementation of algorithms and data structures in computational geometry
presents tremendous difficulties, such as obtaining robust software despite the
use of (inexact) floating point arithmetic, found in standard hardware, and
meticulous handling of all degenerate cases, which typically are in abundance.
The code of CGAL extensively uses function and class templates in order to
handle these difficulties, which implies that the programmer has to make many
choices that are resolved during compile time (of the C++ modules). While
bindings take effect at run time (of the Python code), the type of the C++
objects that are bound must be known when the bindings are generated, that is,
when they are compiled. The types of the bound objects are instances
(instantiated types) of C++ function and class templates. The number of object
types that can potentially be bound, in implementations of generic
computational-geometry algorithms, is enormous; thus, the generation of the
bindings for all these types in advance is practically impossible. Often there
are several choices to make, resulting in a prohibitively large number of
combinations. We present a system that rapidly generates bindings for desired
object types according to user prescriptions, which enables the convenient use
of any subset of bound object types concurrently. The introduction of the
bindings made them easily accessible to newcomers and practitioners in
non-computing fields, as we report in the paper.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 15:38:24 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 12:10:19 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Goren",
"Nir",
""
],
[
"Fogel",
"Efi",
""
],
[
"Halperin",
"Dan",
""
]
] |
new_dataset
| 0.998396 |
2205.13225
|
Minsu Kim
|
Haeyeon Kim, Minsu Kim, Federico Berto, Joungho Kim, Jinkyoo Park
|
DevFormer: A Symmetric Transformer for Context-Aware Device Placement
|
International Conference on Machine Learning (ICML) 2023. Extended
version of NeurIPS 2022 Offline RL Workshop "Collaborative symmetricity
exploitation for offline learning of hardware design solver"
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present DevFormer, a novel transformer-based architecture
for addressing the complex and computationally demanding problem of hardware
design optimization. Despite the demonstrated efficacy of transformers in
domains including natural language processing and computer vision, their use in
hardware design has been limited by the scarcity of offline data. Our approach
addresses this limitation by introducing strong inductive biases such as
relative positional embeddings and action-permutation symmetricity that
effectively capture the hardware context and enable efficient design
optimization with limited offline data. We apply DevFormer to the problem of
decoupling capacitor placement and show that it outperforms state-of-the-art
methods in both simulated and real hardware, leading to improved performance
while reducing the number of components by more than $30\%$. Finally, we show
that our approach achieves promising results in other offline contextual
learning-based combinatorial optimization tasks.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 08:36:35 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 06:38:30 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 07:01:45 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Kim",
"Haeyeon",
""
],
[
"Kim",
"Minsu",
""
],
[
"Berto",
"Federico",
""
],
[
"Kim",
"Joungho",
""
],
[
"Park",
"Jinkyoo",
""
]
] |
new_dataset
| 0.991062 |
2211.02480
|
Shitao Li
|
Shitao Li, Minjia Shi, Jon-Lark Kim
|
Characterization and construction of optimal binary linear codes with
one-dimensional hull
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The hull of a linear code over finite fields is the intersection of the code
and its dual, and linear codes with small hulls have applications in
computational complexity and information protection. Linear codes with the
smallest hull are LCD codes, which have been widely studied. Recently, several
papers have been devoted to relating LCD codes over finite fields of size greater
than 3 to linear codes with one-dimensional or higher-dimensional hull.
Therefore, an interesting and non-trivial problem is to study binary linear
codes with one-dimensional hull with connection to binary LCD codes. The
objective of this paper is to study some properties of binary linear codes with
one-dimensional hull, and establish their relation with binary LCD codes. Some
interesting inequalities are thus obtained. Using such a characterization, we
study the largest minimum distance $d_{one}(n,k)$ among all binary linear
$[n,k]$ codes with one-dimensional hull. We determine the largest minimum
distances $d_{one}(n,n-k)$ for $k\leq 5$ and $d_{one}(n,k)$ for $k\leq 4$ or
$14\leq n\leq 24$. We partially determine the exact value of $d_{one}(n,k)$ for
$k=5$ or $25\leq n\leq 30$.
|
[
{
"version": "v1",
"created": "Fri, 4 Nov 2022 14:15:20 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Nov 2022 09:32:25 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2023 11:21:55 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Li",
"Shitao",
""
],
[
"Shi",
"Minjia",
""
],
[
"Kim",
"Jon-Lark",
""
]
] |
new_dataset
| 0.999725 |
2212.05861
|
Weihong Ren
|
Weihong Ren, Denglu Wu, Hui Cao, Bowen Chen, Yuhang Shi, Weibo Jiang
and Honghai Liu
|
CountingMOT: Joint Counting, Detection and Re-Identification for
Multiple Object Tracking
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent trend in multiple object tracking (MOT) is jointly solving
detection and tracking, where object detection and appearance feature (or
motion) are learned simultaneously. Despite competitive performance, in crowded
scenes, joint detection and tracking usually fail to find accurate object
associations due to missed or false detections. In this paper, we jointly model
counting, detection and re-identification in an end-to-end framework, named
CountingMOT, tailored for crowded scenes. By imposing mutual object-count
constraints between detection and counting, the CountingMOT tries to find a
balance between object detection and crowd density map estimation, which can
help it to recover missed detections or reject false detections. Our approach
is an attempt to bridge the gaps among object detection, counting, and
re-identification. This is in contrast to prior MOT methods that either ignore
the crowd density and thus are prone to failure in crowded scenes, or depend on
local correlations to build a graphical relationship for matching targets. The
proposed MOT tracker can perform online and real-time tracking, and achieves
state-of-the-art results on the public benchmarks MOT16 (MOTA of 79.7%), MOT17
(MOTA of 81.3%) and MOT20 (MOTA of 78.9%).
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 12:53:58 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 07:14:26 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Ren",
"Weihong",
""
],
[
"Wu",
"Denglu",
""
],
[
"Cao",
"Hui",
""
],
[
"Chen",
"Bowen",
""
],
[
"Shi",
"Yuhang",
""
],
[
"Jiang",
"Weibo",
""
],
[
"Liu",
"Honghai",
""
]
] |
new_dataset
| 0.992734 |
2302.09569
|
Enrique Dehaerne
|
MinJin Hwang, Bappaditya Dey, Enrique Dehaerne, Sandip Halder,
Young-han Shin
|
SEMI-PointRend: Improved Semiconductor Wafer Defect Classification and
Segmentation as Rendering
|
7 pages, 6 figures, 5 tables. To be published by SPIE in the
proceedings of Metrology, Inspection, and Process Control XXXVII
|
Proc. SPIE 12496, Metrology, Inspection, and Process Control
XXXVII, 1249608 (27 April 2023)
|
10.1117/12.2657555
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this study, we applied the PointRend (Point-based Rendering) method to
semiconductor defect segmentation. PointRend is an iterative segmentation
algorithm inspired by image rendering in computer graphics that can generate
high-resolution segmentation masks. It can also be flexibly integrated into
common instance segmentation meta-architectures such as Mask-RCNN and semantic
meta-architectures such as FCN.
We implemented a model, termed as SEMI-PointRend, to generate precise
segmentation masks by applying the PointRend neural network module. In this
paper, we focus on comparing the defect segmentation predictions of
SEMI-PointRend and Mask-RCNN for various defect types (line-collapse, single
bridge, thin bridge, multi bridge non-horizontal). We show that SEMI-PointRend
can outperform Mask R-CNN by up to 18.8% in terms of segmentation mean average
precision.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 13:12:28 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Hwang",
"MinJin",
""
],
[
"Dey",
"Bappaditya",
""
],
[
"Dehaerne",
"Enrique",
""
],
[
"Halder",
"Sandip",
""
],
[
"Shin",
"Young-han",
""
]
] |
new_dataset
| 0.998792 |
2303.01586
|
Qiaozi Gao
|
Qiaozi Gao, Govind Thattai, Suhaila Shakiah, Xiaofeng Gao, Shreyas
Pansare, Vasu Sharma, Gaurav Sukhatme, Hangjie Shi, Bofei Yang, Desheng
Zheng, Lucy Hu, Karthika Arumugam, Shui Hu, Matthew Wen, Dinakar Guthy,
Cadence Chung, Rohan Khanna, Osman Ipek, Leslie Ball, Kate Bland, Heather
Rocker, Yadunandana Rao, Michael Johnston, Reza Ghanadan, Arindam Mandal,
Dilek Hakkani Tur, Prem Natarajan
|
Alexa Arena: A User-Centric Interactive Platform for Embodied AI
| null | null | null | null |
cs.HC cs.AI cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce Alexa Arena, a user-centric simulation platform for Embodied AI
(EAI) research. Alexa Arena provides a variety of multi-room layouts and
interactable objects, for the creation of human-robot interaction (HRI)
missions. With user-friendly graphics and control mechanisms, Alexa Arena
supports the development of gamified robotic tasks readily accessible to
general human users, thus opening a new venue for high-efficiency HRI data
collection and EAI system evaluation. Along with the platform, we introduce a
dialog-enabled instruction-following benchmark and provide baseline results for
it. We make Alexa Arena publicly available to facilitate research in building
generalizable and assistive embodied agents.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 21:22:00 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 08:54:46 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Gao",
"Qiaozi",
""
],
[
"Thattai",
"Govind",
""
],
[
"Shakiah",
"Suhaila",
""
],
[
"Gao",
"Xiaofeng",
""
],
[
"Pansare",
"Shreyas",
""
],
[
"Sharma",
"Vasu",
""
],
[
"Sukhatme",
"Gaurav",
""
],
[
"Shi",
"Hangjie",
""
],
[
"Yang",
"Bofei",
""
],
[
"Zheng",
"Desheng",
""
],
[
"Hu",
"Lucy",
""
],
[
"Arumugam",
"Karthika",
""
],
[
"Hu",
"Shui",
""
],
[
"Wen",
"Matthew",
""
],
[
"Guthy",
"Dinakar",
""
],
[
"Chung",
"Cadence",
""
],
[
"Khanna",
"Rohan",
""
],
[
"Ipek",
"Osman",
""
],
[
"Ball",
"Leslie",
""
],
[
"Bland",
"Kate",
""
],
[
"Rocker",
"Heather",
""
],
[
"Rao",
"Yadunandana",
""
],
[
"Johnston",
"Michael",
""
],
[
"Ghanadan",
"Reza",
""
],
[
"Mandal",
"Arindam",
""
],
[
"Tur",
"Dilek Hakkani",
""
],
[
"Natarajan",
"Prem",
""
]
] |
new_dataset
| 0.998601 |
2303.02230
|
Susie Xi Rao
|
Peter Egger, Susie Xi Rao, Sebastiano Papini
|
Building Floorspace in China: A Dataset and Learning Pipeline
| null | null | null | null |
cs.CV cs.AI econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper provides a first milestone in measuring the floorspace of
buildings (that is, building footprint and height) for 40 major Chinese cities.
The intent is to maximize city coverage and, eventually, to provide longitudinal
data. Doing so requires building on imagery of a medium-fine granularity, as
larger cross sections of cities and longer time series for them
are only available in such format. We use a multi-task object segmenter
approach to learn the building footprint and height in the same framework in
parallel: (1) we determine the surface area covered by buildings (the
square footage of occupied land); (2) we estimate floorspace from multi-image
representations of buildings viewed from various angles to determine the height of
buildings. We use Sentinel-1 and -2 satellite images as our main data source.
The benefits of these data are their large cross-sectional and longitudinal
scope plus their unrestricted accessibility. We provide a detailed description
of our data, algorithms, and evaluations. In addition, we analyze the quality
of reference data and their role for measuring the building floorspace with
minimal error. We conduct extensive quantitative and qualitative analyses with
Shenzhen as a case study using our multi-task learner. Finally, we conduct
correlation studies between our results (on both pixel and aggregated urban
area levels) and nightlight data to gauge the merits of our approach in
studying urban development. Our data and codebase are publicly accessible under
https://gitlab.ethz.ch/raox/urban-satellite-public-v2.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 21:45:36 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 21:08:52 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Egger",
"Peter",
""
],
[
"Rao",
"Susie Xi",
""
],
[
"Papini",
"Sebastiano",
""
]
] |
new_dataset
| 0.99988 |
2305.09948
|
Kentaro Takemoto
|
Kentaro Takemoto, Moyuru Yamada, Tomotake Sasaki, Hisanao Akima
|
HICO-DET-SG and V-COCO-SG: New Data Splits for Evaluating the Systematic
Generalization Performance of Human-Object Interaction Detection Models
|
19 pages, 3 figures, 4 tables
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human-Object Interaction (HOI) detection is a task to localize humans and
objects in an image and predict the interactions in human-object pairs. In
real-world scenarios, HOI detection models are required to generalize
systematically, i.e., to novel combinations of objects and interactions,
because the training data are expected to cover only a limited portion of
all possible combinations. However, to our knowledge, no open benchmarks or
previous work exist for evaluating the systematic generalization performance of
HOI detection models. To address this issue, we created two new sets of HOI
detection data splits named HICO-DET-SG and V-COCO-SG based on the HICO-DET and
V-COCO datasets, respectively. When evaluated on the new data splits, the
representative HOI detection models performed much more poorly than when
evaluated on the original splits. This reveals that systematic generalization
is a challenging goal in HOI detection. By analyzing the evaluation results, we
also gain insights for improving the systematic generalization performance and
identify four possible future research directions. We hope that our new data
splits and presented analysis will encourage further research on systematic
generalization in HOI detection.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 05:03:46 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 05:36:42 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 00:52:04 GMT"
},
{
"version": "v4",
"created": "Wed, 7 Jun 2023 06:53:07 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Takemoto",
"Kentaro",
""
],
[
"Yamada",
"Moyuru",
""
],
[
"Sasaki",
"Tomotake",
""
],
[
"Akima",
"Hisanao",
""
]
] |
new_dataset
| 0.981152 |
2305.16636
|
Vijay Viswanathan
|
Vijay Viswanathan, Luyu Gao, Tongshuang Wu, Pengfei Liu and Graham
Neubig
|
DataFinder: Scientific Dataset Recommendation from Natural Language
Descriptions
|
To appear at ACL 2023. Code published at
https://github.com/viswavi/datafinder
| null | null | null |
cs.IR cs.CL cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
Modern machine learning relies on datasets to develop and validate research
ideas. Given the growth of publicly available data, finding the right dataset
to use is increasingly difficult. Any research question imposes explicit and
implicit constraints on how well a given dataset will enable researchers to
answer this question, such as dataset size, modality, and domain. We
operationalize the task of recommending datasets given a short natural language
description of a research idea, to help people find relevant datasets for their
needs. Dataset recommendation poses unique challenges as an information
retrieval problem; datasets are hard to directly index for search and there are
no corpora readily available for this task. To facilitate this task, we build
the DataFinder Dataset which consists of a larger automatically-constructed
training set (17.5K queries) and a smaller expert-annotated evaluation set (392
queries). Using this data, we compare various information retrieval algorithms
on our test set and present a superior bi-encoder retriever for text-based
dataset recommendation. This system, trained on the DataFinder Dataset, finds
more relevant search results than existing third-party dataset search engines.
To encourage progress on dataset recommendation, we release our dataset and
models to the public.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 05:22:36 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 03:08:27 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Viswanathan",
"Vijay",
""
],
[
"Gao",
"Luyu",
""
],
[
"Wu",
"Tongshuang",
""
],
[
"Liu",
"Pengfei",
""
],
[
"Neubig",
"Graham",
""
]
] |
new_dataset
| 0.999842 |
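The DataFinder record above describes a bi-encoder retriever that ranks dataset descriptions against a natural-language description of a research idea. The Python sketch below illustrates that retrieval pattern only; the encoder checkpoint ("all-MiniLM-L6-v2") and the toy corpus are assumptions, not the trained DataFinder model or its released data.

```python
# Minimal sketch of bi-encoder dataset retrieval: embed a research-idea query and
# candidate dataset descriptions, then rank by cosine similarity. The model name
# and the toy corpus are illustrative assumptions; the actual DataFinder retriever
# is trained on the DataFinder Dataset released by the authors.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any bi-encoder checkpoint

dataset_descriptions = {
    "SQuAD": "Reading comprehension dataset with questions on Wikipedia articles.",
    "LibriSpeech": "1000 hours of read English speech for automatic speech recognition.",
    "COCO": "Images with object detection, segmentation, and captioning annotations.",
}

def recommend(query, top_k=2):
    names = list(dataset_descriptions)
    corpus_emb = encoder.encode([dataset_descriptions[n] for n in names], convert_to_tensor=True)
    query_emb = encoder.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, corpus_emb)[0]          # one similarity per candidate
    ranked = sorted(zip(names, scores.tolist()), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    print(recommend("I want to train a model that answers questions about short passages."))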
2305.16718
|
V\'it Novotn\'y
|
V\'it Novotn\'y, Krist\'yna Luger, Michal \v{S}tef\'anik, Tereza
Vrabcov\'a, Ale\v{s} Hor\'ak
|
People and Places of Historical Europe: Bootstrapping Annotation
Pipeline and a New Corpus of Named Entities in Late Medieval Texts
|
To appear in the Findings of the Association for Computational
Linguistics: ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although pre-trained named entity recognition (NER) models are highly
accurate on modern corpora, they underperform on historical texts due to
differences in language and OCR errors. In this work, we develop a new NER corpus
of 3.6M sentences from late medieval charters written mainly in Czech, Latin,
and German.
We show that we can start with a list of known historical figures and
locations and an unannotated corpus of historical texts, and use information
retrieval techniques to automatically bootstrap a NER-annotated corpus. Using
our corpus, we train a NER model that achieves entity-level Precision of
72.81-93.98% with 58.14-81.77% Recall on a manually-annotated test dataset.
Furthermore, we show that using a weighted loss function helps to combat class
imbalance in token classification tasks. To make it easy for others to
reproduce and build upon our work, we publicly release our corpus, models, and
experimental code.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 08:05:01 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 20:42:10 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Novotný",
"Vít",
""
],
[
"Luger",
"Kristýna",
""
],
[
"Štefánik",
"Michal",
""
],
[
"Vrabcová",
"Tereza",
""
],
[
"Horák",
"Aleš",
""
]
] |
new_dataset
| 0.973333 |
2305.18226
|
Christoforos Vasilatos
|
Christoforos Vasilatos, Manaar Alam, Talal Rahwan, Yasir Zaki and
Michail Maniatakos
|
HowkGPT: Investigating the Detection of ChatGPT-generated University
Student Homework through Context-Aware Perplexity Analysis
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the use of Large Language Models (LLMs) in text generation tasks
proliferates, concerns arise over their potential to compromise academic
integrity. The education sector currently tussles with distinguishing
student-authored homework assignments from AI-generated ones. This paper
addresses the challenge by introducing HowkGPT, designed to identify homework
assignments generated by AI. HowkGPT is built upon a dataset of academic
assignments and accompanying metadata [17] and employs a pretrained LLM to
compute perplexity scores for student-authored and ChatGPT-generated responses.
These scores then assist in establishing a threshold for discerning the origin
of a submitted assignment. Given the specificity and contextual nature of
academic work, HowkGPT further refines its analysis by defining
category-specific thresholds derived from the metadata, enhancing the precision
of the detection. This study emphasizes the critical need for effective
strategies to uphold academic integrity amidst the growing influence of LLMs
and provides an approach to ensuring fair and accurate grading in educational
institutions.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 11:07:25 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 11:43:44 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Vasilatos",
"Christoforos",
""
],
[
"Alam",
"Manaar",
""
],
[
"Rahwan",
"Talal",
""
],
[
"Zaki",
"Yasir",
""
],
[
"Maniatakos",
"Michail",
""
]
] |
new_dataset
| 0.981797 |
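The HowkGPT record above rests on computing perplexity with a pretrained LLM and comparing it against category-specific thresholds. The sketch below shows the general perplexity-thresholding idea under stated assumptions (GPT-2 as the scoring model, a placeholder threshold); it is not the authors' pipeline, whose thresholds are calibrated per assignment category from metadata.

```python
# Minimal sketch: perplexity-based screening with a pretrained causal LM.
# The model name and threshold are illustrative assumptions, not HowkGPT's values.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "gpt2"  # assumption: any pretrained causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text):
    """Return the perplexity of `text` under the pretrained language model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # cross-entropy over next tokens
    return math.exp(out.loss.item())

def flag_as_machine_generated(text, threshold=30.0):
    """Low perplexity suggests LLM-generated text. The threshold here is a placeholder;
    in practice it must be calibrated per category on labeled student vs. ChatGPT answers."""
    return perplexity(text) < threshold

if __name__ == "__main__":
    print(flag_as_machine_generated("The mitochondria is the powerhouse of the cell."))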
2305.18829
|
Chen Min
|
Chen Min, Xinli Xu, Fuyang Li, Shubin Si, Hanzhang Xue, Weizhong
Jiang, Zhichao Zhang, Jimei Li, Dawei Zhao, Liang Xiao, Jiaolong Xu, Yiming
Nie, Bin Dai
|
Occ-BEV: Multi-Camera Unified Pre-training via 3D Scene Reconstruction
|
8 pages, 5 figures
| null | null | null |
cs.CV cs.MM cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-camera 3D perception has emerged as a prominent research field in
autonomous driving, offering a viable and cost-effective alternative to
LiDAR-based solutions. However, existing multi-camera algorithms primarily rely
on monocular image pre-training, which overlooks the spatial and temporal
correlations among different camera views. To address this limitation, we
propose the first multi-camera unified pre-training framework called Occ-BEV,
which involves initially reconstructing the 3D scene as the foundational stage
and subsequently fine-tuning the model on downstream tasks. Specifically, a 3D
decoder is designed for leveraging Bird's Eye View (BEV) features from
multi-view images to predict the 3D geometric occupancy to enable the model to
capture a more comprehensive understanding of the 3D environment. A significant
benefit of Occ-BEV is its capability of utilizing a considerable volume of
unlabeled image-LiDAR pairs for pre-training purposes. The proposed
multi-camera unified pre-training framework demonstrates promising results in
key tasks such as multi-camera 3D object detection and surrounding semantic
scene completion. When compared to monocular pre-training methods on the
nuScenes dataset, Occ-BEV shows a significant improvement of about 2.0% in mAP
and 2.0% in NDS for multi-camera 3D object detection, as well as a 3% increase
in mIoU for surrounding semantic scene completion. Codes are publicly available
at https://github.com/chaytonmin/Occ-BEV.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 08:23:06 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 07:53:51 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Min",
"Chen",
""
],
[
"Xu",
"Xinli",
""
],
[
"Li",
"Fuyang",
""
],
[
"Si",
"Shubin",
""
],
[
"Xue",
"Hanzhang",
""
],
[
"Jiang",
"Weizhong",
""
],
[
"Zhang",
"Zhichao",
""
],
[
"Li",
"Jimei",
""
],
[
"Zhao",
"Dawei",
""
],
[
"Xiao",
"Liang",
""
],
[
"Xu",
"Jiaolong",
""
],
[
"Nie",
"Yiming",
""
],
[
"Dai",
"Bin",
""
]
] |
new_dataset
| 0.99595 |
2306.02349
|
Momchil Hardalov
|
Momchil Hardalov, Pepa Atanasova, Todor Mihaylov, Galia Angelova,
Kiril Simov, Petya Osenova, Ves Stoyanov, Ivan Koychev, Preslav Nakov,
Dragomir Radev
|
bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark
|
Accepted to ACL 2023 (Main Conference)
|
ACL 2023
| null | null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present bgGLUE (Bulgarian General Language Understanding Evaluation), a
benchmark for evaluating language models on Natural Language Understanding
(NLU) tasks in Bulgarian. Our benchmark includes NLU tasks targeting a variety
of NLP problems (e.g., natural language inference, fact-checking, named entity
recognition, sentiment analysis, question answering, etc.) and machine learning
tasks (sequence labeling, document-level classification, and regression). We
run the first systematic evaluation of pre-trained language models for
Bulgarian, comparing and contrasting results across the nine tasks in the
benchmark. The evaluation results show strong performance on sequence labeling
tasks, but there is a lot of room for improvement for tasks that require more
complex reasoning. We make bgGLUE publicly available together with the
fine-tuning and the evaluation code, as well as a public leaderboard at
https://bgglue.github.io/, and we hope that it will enable further advancements
in developing NLU models for Bulgarian.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 12:54:00 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 03:57:51 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Hardalov",
"Momchil",
""
],
[
"Atanasova",
"Pepa",
""
],
[
"Mihaylov",
"Todor",
""
],
[
"Angelova",
"Galia",
""
],
[
"Simov",
"Kiril",
""
],
[
"Osenova",
"Petya",
""
],
[
"Stoyanov",
"Ves",
""
],
[
"Koychev",
"Ivan",
""
],
[
"Nakov",
"Preslav",
""
],
[
"Radev",
"Dragomir",
""
]
] |
new_dataset
| 0.999512 |
2306.03360
|
Minting Pan
|
Minting Pan, Yitao Zheng, Wendong Zhang, Yunbo Wang, Xiaokang Yang
|
Vid2Act: Activate Offline Videos for Visual RL
| null | null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pretraining RL models on offline video datasets is a promising way to improve
their training efficiency in online tasks, but it is challenging due to the inherent
mismatch in tasks, dynamics, and behaviors across domains. A recent model, APV,
sidesteps the accompanying action records in offline datasets and instead
focuses on pretraining a task-irrelevant, action-free world model within the
source domains. We present Vid2Act, a model-based RL method that learns to
transfer valuable action-conditioned dynamics and potentially useful action
demonstrations from offline to online settings. The main idea is to use the
world models not only as simulators for behavior learning but also as tools to
measure the domain relevance for both dynamics representation transfer and
policy transfer. Specifically, we train the world models to generate a set of
time-varying task similarities using a domain-selective knowledge distillation
loss. These similarities serve two purposes: (i) adaptively transferring the
most useful source knowledge to facilitate dynamics learning, and (ii) learning
to replay the most relevant source actions to guide the target policy. We
demonstrate the advantages of Vid2Act over the action-free visual RL
pretraining method in both Meta-World and DeepMind Control Suite.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 02:24:41 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 11:39:52 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Pan",
"Minting",
""
],
[
"Zheng",
"Yitao",
""
],
[
"Zhang",
"Wendong",
""
],
[
"Wang",
"Yunbo",
""
],
[
"Yang",
"Xiaokang",
""
]
] |
new_dataset
| 0.988951 |
2306.03457
|
Chen Tang
|
Tyler Loakman, Chen Tang and Chenghua Lin
|
TwistList: Resources and Baselines for Tongue Twister Generation
| null |
ACL 2023
| null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous work in phonetically-grounded language generation has mainly focused
on domains such as lyrics and poetry. In this paper, we present work on the
generation of tongue twisters - a form of language that is required to be
phonetically conditioned to maximise sound overlap, whilst maintaining semantic
consistency with an input topic, and still being grammatically correct. We
present \textbf{TwistList}, a large annotated dataset of tongue twisters,
consisting of 2.1K+ human-authored examples. We additionally present several
benchmark systems (referred to as TwisterMisters) for the proposed task of
tongue twister generation, including models that both do and do not require
training on in-domain data. We present the results of automatic and human
evaluation to demonstrate the performance of existing mainstream pre-trained
models in this task with limited (or no) task specific training and data, and
no explicit phonetic knowledge. We find that the task of tongue twister
generation is challenging for models under these conditions, yet some models
are still capable of generating acceptable examples of this language type.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 07:20:51 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 05:24:25 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Loakman",
"Tyler",
""
],
[
"Tang",
"Chen",
""
],
[
"Lin",
"Chenghua",
""
]
] |
new_dataset
| 0.998838 |
2306.03646
|
Miki Okamura
|
Miki Okamura, Naruya Kondo, Tatsuki Fushimi, Maki Sakamoto, and Yoichi
Ochiai
|
Dance Generation by Sound Symbolic Words
| null | null | null | null |
cs.LG cs.HC cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This study introduces a novel approach to generate dance motions using
onomatopoeia as input, with the aim of enhancing creativity and diversity in
dance generation. Unlike text and music, onomatopoeia conveys rhythm and
meaning through abstract word expressions without constraints on expression and
without need for specialized knowledge. We adapt the AI Choreographer framework
and employ the Sakamoto system, a feature extraction method for onomatopoeia
focusing on phonemes and syllables. Additionally, we present a new dataset of
40 onomatopoeia-dance motion pairs collected through a user survey. Our results
demonstrate that the proposed method enables more intuitive dance generation
and can create dance motions using sound-symbolic words from a variety of
languages, including those without onomatopoeia. This highlights the potential
for diverse dance creation across different languages and cultures, accessible
to a wider audience. Qualitative samples from our model can be found at:
https://sites.google.com/view/onomatopoeia-dance/home/.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 13:00:47 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Okamura",
"Miki",
""
],
[
"Kondo",
"Naruya",
""
],
[
"Fushimi",
"Tatsuki",
""
],
[
"Sakamoto",
"Maki",
""
],
[
"Ochiai",
"Yoichi",
""
]
] |
new_dataset
| 0.985146 |
2306.03932
|
Zaid Khan
|
Zaid Khan, Vijay Kumar BG, Samuel Schulter, Xiang Yu, Yun Fu, Manmohan
Chandraker
|
Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA
Tasks? A: Self-Train on Unlabeled Images!
|
CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Finetuning a large vision language model (VLM) on a target dataset after
large scale pretraining is a dominant paradigm in visual question answering
(VQA). Datasets for specialized tasks such as knowledge-based VQA or VQA in
non-natural-image domains are orders of magnitude smaller than those for
general-purpose VQA. While collecting additional labels for specialized tasks
or domains can be challenging, unlabeled images are often available. We
introduce SelTDA (Self-Taught Data Augmentation), a strategy for finetuning
large VLMs on small-scale VQA datasets. SelTDA uses the VLM and target dataset
to build a teacher model that can generate question-answer pseudolabels
directly conditioned on an image alone, allowing us to pseudolabel unlabeled
images. SelTDA then finetunes the initial VLM on the original dataset augmented
with freshly pseudolabeled images. We describe a series of experiments showing
that our self-taught data augmentation increases robustness to adversarially
searched questions, counterfactual examples and rephrasings, improves domain
generalization, and results in greater retention of numerical reasoning skills.
The proposed strategy requires no additional annotations or architectural
modifications, and is compatible with any modern encoder-decoder multimodal
transformer. Code available at https://github.com/codezakh/SelTDA.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 18:00:47 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Khan",
"Zaid",
""
],
[
"BG",
"Vijay Kumar",
""
],
[
"Schulter",
"Samuel",
""
],
[
"Yu",
"Xiang",
""
],
[
"Fu",
"Yun",
""
],
[
"Chandraker",
"Manmohan",
""
]
] |
new_dataset
| 0.952126 |
2306.03940
|
Akhil Arora
|
Akhil Arora, Robert West, Martin Gerlach
|
Orphan Articles: The Dark Matter of Wikipedia
| null | null | null | null |
cs.SI cs.CY cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
With 60M articles in more than 300 language versions, Wikipedia is the
largest platform for open and freely accessible knowledge. While the available
content has been growing continuously at a rate of around 200K new articles
each month, very little attention has been paid to the accessibility of the
content. One crucial aspect of accessibility is the integration of hyperlinks
into the network so the articles are visible to readers navigating Wikipedia.
In order to understand this phenomenon, we conduct the first systematic study
of orphan articles, which are articles without any incoming links from other
Wikipedia articles, across 319 different language versions of Wikipedia. We
find that a surprisingly large extent of content, roughly 15\% (8.8M) of all
articles, is de facto invisible to readers navigating Wikipedia, and thus,
rightfully term orphan articles as the dark matter of Wikipedia. We also
provide causal evidence through a quasi-experiment that adding new incoming
links to orphans (de-orphanization) leads to a statistically significant
increase of their visibility in terms of the number of pageviews. We further
highlight the challenges faced by editors for de-orphanizing articles,
demonstrate the need to support them in addressing this issue, and provide
potential solutions for developing automated tools based on cross-lingual
approaches. Overall, our work not only unravels a key limitation in the link
structure of Wikipedia and quantitatively assesses its impact, but also
provides a new perspective on the challenges of maintenance associated with
content creation at scale in Wikipedia.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 18:04:33 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Arora",
"Akhil",
""
],
[
"West",
"Robert",
""
],
[
"Gerlach",
"Martin",
""
]
] |
new_dataset
| 0.996667 |
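The orphan-article record above defines orphans as articles with no incoming links from other articles. The following minimal sketch shows how such articles can be found from a wikilink edge list; the tiny graph is a made-up example, whereas the study itself operates on the link tables of 319 Wikipedia language editions.

```python
# Minimal sketch: identify orphan articles (zero incoming wikilinks) from an edge list.
from collections import defaultdict

def find_orphans(articles, links):
    """`links` is an iterable of (source, target) wikilink pairs between articles."""
    indegree = defaultdict(int)
    for src, dst in links:
        if src != dst:            # a self-link does not de-orphanize an article
            indegree[dst] += 1
    return sorted(a for a in articles if indegree[a] == 0)

if __name__ == "__main__":
    articles = ["A", "B", "C", "D"]
    links = [("A", "B"), ("B", "C"), ("C", "B")]
    print(find_orphans(articles, links))  # -> ['A', 'D']
```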
2306.03942
|
Yucheng Jin
|
Shuwei Li, Yucheng Jin, Pin-Lun Hsu, Ya-Sin Luo
|
NFT.mine: An xDeepFM-based Recommender System for Non-fungible Token
(NFT) Buyers
|
6 pages, 8 figures, 2 tables
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-fungible token (NFT) is a tradable unit of data stored on the blockchain
which can be associated with some digital asset as a certification of
ownership. The past several years have witnessed the exponential growth of the
NFT market. In 2021, the NFT market reached its peak with more than $40 billion
in trades. Despite the booming NFT market, most NFT-related studies focus on its
technical aspects, such as standards, protocols, and security, while our study
aims at developing a pioneering recommender system for NFT buyers. In this
paper, we introduce an extreme deep factorization machine (xDeepFM)-based
recommender system, NFT.mine, which achieves real-time data collection, data
cleaning, feature extraction, training, and inference. We used data from
OpenSea, the most influential NFT trading platform, to evaluate the performance
of NFT.mine. As a result, experiments showed that, compared to traditional
models such as logistic regression, naive Bayes, and random forest, NFT.mine
outperforms them with higher AUC and lower cross-entropy loss, and it outputs
personalized recommendations for NFT buyers.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 18:07:45 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Li",
"Shuwei",
""
],
[
"Jin",
"Yucheng",
""
],
[
"Hsu",
"Pin-Lun",
""
],
[
"Luo",
"Ya-Sin",
""
]
] |
new_dataset
| 0.995906 |
2306.04032
|
Zhihao Yang
|
Zhihao Yang, Wenyi Lian, Siyuan Lai
|
BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens
Metadata Embedding
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bokeh effect is an optical phenomenon that offers a pleasant visual
experience, typically generated by high-end cameras with wide aperture lenses.
The task of bokeh effect transformation aims to produce a desired effect in one
set of lenses and apertures based on another combination. Current models are
limited in their ability to render a specific set of bokeh effects, primarily
transformations from sharp to blur. In this paper, we propose a novel universal
method for embedding lens metadata into the model and introduce a loss
calculation method using alpha masks from the newly released Bokeh Effect
Transformation Dataset (BETD) [3]. Based on the above techniques, we propose the
BokehOrNot model, which is capable of producing both blur-to-sharp and
sharp-to-blur bokeh effects with various combinations of lenses and aperture
sizes. Our proposed model outperforms current leading bokeh rendering and image
restoration models and renders visually natural bokeh effects. Our code is
available at: https://github.com/indicator0/bokehornot.
|
[
{
"version": "v1",
"created": "Tue, 6 Jun 2023 21:49:56 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Yang",
"Zhihao",
""
],
[
"Lian",
"Wenyi",
""
],
[
"Lai",
"Siyuan",
""
]
] |
new_dataset
| 0.996037 |
2306.04085
|
Yusen Zhang
|
Yusen Zhang, Jun Wang, Zhiguo Wang, Rui Zhang
|
XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages
and Meaning Representations
|
ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple
natural languages (NLs) into meaning representations (MRs) such as SQL, lambda
calculus, and logic forms. However, existing CLSP models are separately
proposed and evaluated on datasets of limited tasks and applications, impeding
a comprehensive and unified evaluation of CLSP on a diverse range of NLs and
MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual
semantic parsing featured with 22 natural languages and 8 meaning
representations by examining and selecting 9 existing datasets to cover 5 tasks
and 164 domains. We use XSemPLR to conduct a comprehensive benchmark study on a
wide range of multilingual language models including encoder-based models
(mBERT, XLM-R), encoder-decoder models (mBART, mT5), and decoder-based models
(Codex, BLOOM). We design 6 experiment settings covering various lingual
combinations (monolingual, multilingual, cross-lingual) and numbers of learning
samples (full dataset, few-shot, and zero-shot). Our experiments show that
encoder-decoder models (mT5) achieve the highest performance compared with
other popular models, and multilingual training can further improve the average
performance. Notably, multilingual large language models (e.g., BLOOM) are
still inadequate to perform CLSP tasks. We also find that the performance gap
between monolingual training and cross-lingual transfer learning is still
significant for multilingual models, though it can be mitigated by
cross-lingual few-shot training. Our dataset and code are available at
https://github.com/psunlpgroup/XSemPLR.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 01:09:37 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Zhang",
"Yusen",
""
],
[
"Wang",
"Jun",
""
],
[
"Wang",
"Zhiguo",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.999687 |
2306.04143
|
Takahiro Fukumori
|
Takahiro Fukumori, Taito Ishida and Yoichi Yamashita
|
RISC: A Corpus for Shout Type Classification and Shout Intensity
Prediction
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The detection of shouted speech is crucial in audio surveillance and
monitoring. Although it is desirable for a security system to be able to
identify emergencies, existing corpora provide only a binary label (i.e.,
shouted or normal) for each speech sample, making it difficult to predict the
shout intensity. Furthermore, most corpora comprise only utterances typical of
hazardous situations, meaning that classifiers cannot learn to discriminate
such utterances from shouts typical of less hazardous situations, such as
cheers. Thus, this paper presents a novel research source, the RItsumeikan
Shout Corpus (RISC), which contains a wide variety of shouted speech
samples collected in recording experiments. Each shouted speech sample in RISC
has a shout type and is also assigned shout intensity ratings via a
crowdsourcing service. We also present a comprehensive performance comparison
among deep learning approaches for speech type classification tasks and a shout
intensity prediction task. The results show that feature learning based on the
spectral and cepstral domains achieves high performance, no matter which
network architecture is used. The results also demonstrate that shout type
classification and intensity prediction are still challenging tasks, and RISC
is expected to contribute to further development in this research area.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 04:30:02 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Fukumori",
"Takahiro",
""
],
[
"Ishida",
"Taito",
""
],
[
"Yamashita",
"Yoichi",
""
]
] |
new_dataset
| 0.998869 |
2306.04144
|
Liyue Chen
|
Liyue Chen, Di Chai, Leye Wang
|
UCTB: An Urban Computing Tool Box for Spatiotemporal Crowd Flow
Prediction
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatiotemporal crowd flow prediction is one of the key technologies in smart
cities. Currently, there are two major pain points that plague related research
and practitioners. Firstly, crowd flow is related to multiple domain knowledge
factors; however, due to the diversity of application scenarios, it is
difficult for subsequent work to make reasonable and comprehensive use of
domain knowledge. Secondly, with the development of deep learning technology,
the implementation of relevant techniques has become increasingly complex;
reproducing advanced models has become a time-consuming and increasingly
cumbersome task. To address these issues, we design and implement a
spatiotemporal crowd flow prediction toolbox called UCTB (Urban Computing Tool
Box), which integrates multiple spatiotemporal domain knowledge and
state-of-the-art models simultaneously. The relevant code and supporting
documents have been open-sourced at https://github.com/uctb/UCTB.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 04:36:21 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Chen",
"Liyue",
""
],
[
"Chai",
"Di",
""
],
[
"Wang",
"Leye",
""
]
] |
new_dataset
| 0.978568 |
2306.04148
|
Chandan Misra
|
Chandan Misra and Swarup Chattopadhyay
|
SANGEET: A XML based Open Dataset for Research in Hindustani Sangeet
| null | null | null | null |
cs.SD cs.IR cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
It is very important to access a rich music dataset that is useful in a wide
variety of applications. Currently, available datasets mostly focus on storing
vocal or instrumental recording data and ignore the requirements of visual
representation and retrieval. This paper attempts to build an
XML-based public dataset, called SANGEET, that stores comprehensive information
of Hindustani Sangeet (North Indian Classical Music) compositions written by
famous musicologist Pt. Vishnu Narayan Bhatkhande. SANGEET preserves all the
required information of any given composition including metadata, structural,
notational, rhythmic, and melodic information in a standardized way for easy
and efficient storage and extraction of musical information. The dataset is
intended to provide the ground truth information for music information research
tasks, thereby supporting several data-driven analysis from a machine learning
perspective. We present the usefulness of the dataset by demonstrating its
application on music information retrieval using XQuery, visualization through
Omenad rendering system. Finally, we propose approaches to transform the
dataset for performing statistical and machine learning tasks for a better
understanding of Hindustani Sangeet. The dataset can be found at
https://github.com/cmisra/Sangeet.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 04:50:09 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Misra",
"Chandan",
""
],
[
"Chattopadhyay",
"Swarup",
""
]
] |
new_dataset
| 0.999786 |
2306.04152
|
Haiqin Yang
|
Junxian Zhou, Haiqin Yang, Yuxuan He, Hao Mou, Junbo Yang
|
A Unified One-Step Solution for Aspect Sentiment Quad Prediction
|
15 pages, 12 tables, 3 figures, ACL Findings
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Aspect sentiment quad prediction (ASQP) is a challenging yet significant
subtask in aspect-based sentiment analysis as it provides a complete
aspect-level sentiment structure. However, existing ASQP datasets are usually
small and low-density, hindering technical advancement. To expand the capacity,
in this paper, we release two new datasets for ASQP, which contain the
following characteristics: larger size, more words per sample, and higher
density. With such datasets, we unveil the shortcomings of existing strong ASQP
baselines and therefore propose a unified one-step solution for ASQP, namely
One-ASQP, to detect the aspect categories and to identify the
aspect-opinion-sentiment (AOS) triplets simultaneously. Our One-ASQP holds
several unique advantages: (1) by separating ASQP into two subtasks and solving
them independently and simultaneously, we can avoid error propagation in
pipeline-based methods and overcome slow training and inference in
generation-based methods; (2) by introducing sentiment-specific horns tagging
schema in a token-pair-based two-dimensional matrix, we can exploit deeper
interactions between sentiment elements and efficiently decode the AOS
triplets; (3) we design a ``[NULL]'' token that helps us effectively identify
implicit aspects or opinions. Experiments on two benchmark datasets and our
released two datasets demonstrate the advantages of our One-ASQP. The two new
datasets are publicly released at
\url{https://www.github.com/Datastory-CN/ASQP-Datasets}.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 05:00:01 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Zhou",
"Junxian",
""
],
[
"Yang",
"Haiqin",
""
],
[
"He",
"Yuxuan",
""
],
[
"Mou",
"Hao",
""
],
[
"Yang",
"Junbo",
""
]
] |
new_dataset
| 0.981427 |
2306.04181
|
Yushi Bai
|
Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang,
Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, Lei
Hou
|
Benchmarking Foundation Models with Language-Model-as-an-Examiner
|
23 pages, 8 figures
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous benchmarks have been established to assess the performance of
foundation models on open-ended question answering, which serves as a
comprehensive test of a model's ability to understand and generate language in
a manner similar to humans. Most of these works focus on proposing new
datasets, however, we see two main issues within previous benchmarking
pipelines, namely testing leakage and evaluation automation. In this paper, we
propose a novel benchmarking framework, Language-Model-as-an-Examiner, where
the LM serves as a knowledgeable examiner that formulates questions based on
its knowledge and evaluates responses in a reference-free manner. Our framework
allows for effortless extensibility as various LMs can be adopted as the
examiner, and the questions can be constantly updated given more diverse
trigger topics. For a more comprehensive and equitable evaluation, we devise
three strategies: (1) We instruct the LM examiner to generate questions across
a multitude of domains to probe for a broad acquisition, and raise follow-up
questions to engage in a more in-depth assessment. (2) Upon evaluation, the
examiner combines both scoring and ranking measurements, providing a reliable
result as it aligns closely with human annotations. (3) We additionally propose
a decentralized Peer-examination method to address the biases in a single
examiner. Our data and benchmarking results are available at:
https://lmexam.com.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 06:29:58 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Bai",
"Yushi",
""
],
[
"Ying",
"Jiahao",
""
],
[
"Cao",
"Yixin",
""
],
[
"Lv",
"Xin",
""
],
[
"He",
"Yuze",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Yu",
"Jifan",
""
],
[
"Zeng",
"Kaisheng",
""
],
[
"Xiao",
"Yijia",
""
],
[
"Lyu",
"Haozhe",
""
],
[
"Zhang",
"Jiayin",
""
],
[
"Li",
"Juanzi",
""
],
[
"Hou",
"Lei",
""
]
] |
new_dataset
| 0.957838 |
2306.04216
|
Jielin Qiu
|
Jielin Qiu, Jiacheng Zhu, William Han, Aditesh Kumar, Karthik Mittal,
Claire Jin, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Bo Li, Ding Zhao,
Lijuan Wang
|
MultiSum: A Dataset for Multimodal Summarization and Thumbnail
Generation of Videos
|
Project website: https://multisum-dataset.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal summarization with multimodal output (MSMO) has emerged as a
promising research direction. Nonetheless, numerous limitations exist within
existing public MSMO datasets, including insufficient upkeep, data
inaccessibility, limited size, and the absence of proper categorization, which
pose significant challenges to effective research. To address these challenges
and provide a comprehensive dataset for this new direction, we have
meticulously curated the MultiSum dataset. Our new dataset features (1)
Human-validated summaries for both video and textual content, providing
superior human instruction and labels for multimodal learning. (2)
Comprehensively and meticulously arranged categorization, spanning 17 principal
categories and 170 subcategories to encapsulate a diverse array of real-world
scenarios. (3) Benchmark tests performed on the proposed dataset to assess
varied tasks and methods, including video temporal segmentation, video
summarization, text summarization, and multimodal summarization. To champion
accessibility and collaboration, we release the MultiSum dataset and the data
collection tool as fully open-source resources, fostering transparency and
accelerating future developments. Our project website can be found at
https://multisum-dataset.github.io/.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 07:43:11 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Qiu",
"Jielin",
""
],
[
"Zhu",
"Jiacheng",
""
],
[
"Han",
"William",
""
],
[
"Kumar",
"Aditesh",
""
],
[
"Mittal",
"Karthik",
""
],
[
"Jin",
"Claire",
""
],
[
"Yang",
"Zhengyuan",
""
],
[
"Li",
"Linjie",
""
],
[
"Wang",
"Jianfeng",
""
],
[
"Li",
"Bo",
""
],
[
"Zhao",
"Ding",
""
],
[
"Wang",
"Lijuan",
""
]
] |
new_dataset
| 0.999662 |
2306.04221
|
Jo\~ao Paulo Bezerra De Ara\'ujo
|
Veronika Anikina, Jo\~ao Paulo Bezerra, Petr Kuznetsov, Liron Schiff,
Stefan Schmid
|
Dynamic Probabilistic Reliable Broadcast
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Byzantine reliable broadcast is a primitive that allows a set of processes to
agree on a message broadcast by a dedicated source process, even when some of
them are malicious (Byzantine). It guarantees that no two correct processes
deliver different messages, and if a message is delivered by a correct process,
every correct process eventually delivers one. The primitive is known not to
scale, as it requires $\Omega(n^2)$ message exchanges, where $n$ is the number
of system members. The quadratic cost can be explained by the inherent need for
every process to relay a message to every other process.
In this paper, we explore ways to overcome this limitation, by casting the
problem to the probabilistic setting. We propose a solution in which every
broadcast message is validated by a small set of witnesses, which allows us to
maintain low latency and small communication complexity. In order to tolerate a
slow adaptive adversary, we dynamically select witnesses through a novel use of
locality-preserving hash functions. Our simulations demonstrate significant
scalability gains of our solution with respect to existing protocols.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 07:52:51 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Anikina",
"Veronika",
""
],
[
"Bezerra",
"João Paulo",
""
],
[
"Kuznetsov",
"Petr",
""
],
[
"Schiff",
"Liron",
""
],
[
"Schmid",
"Stefan",
""
]
] |
new_dataset
| 0.991021 |
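The reliable-broadcast record above hinges on selecting a small, message-dependent witness set so that not every process has to relay to every other process. As a rough stand-in for the paper's locality-preserving hashing, the sketch below uses plain rendezvous (highest-random-weight) hashing; the witness-set size and process identifiers are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: every correct process deterministically derives the same small
# witness set for a given message id, without coordination. Rendezvous hashing is
# used here as a simple stand-in for the locality-preserving hash functions of the
# paper; parameters are illustrative assumptions.
import hashlib

def _score(message_id, process_id):
    digest = hashlib.sha256(f"{message_id}:{process_id}".encode()).hexdigest()
    return int(digest, 16)

def select_witnesses(message_id, processes, k=4):
    """Pick the k processes with the highest hash score for this message, so only
    O(k) relays are needed instead of all-to-all communication."""
    ranked = sorted(processes, key=lambda p: _score(message_id, p), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    procs = [f"p{i}" for i in range(16)]
    print(select_witnesses("msg-42:source=p3", procs, k=4))
```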
2306.04269
|
Erez Posner
|
Netanel Frank and Erez Posner and Emmanuelle Muhlethaler and Adi
Zholkover and Moshe Bouhnik
|
ColNav: Real-Time Colon Navigation for Colonoscopy
| null | null | null | null |
cs.CV cs.HC cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Colorectal cancer screening through colonoscopy continues to be the dominant
global standard, as it allows identifying pre-cancerous or adenomatous lesions
and provides the ability to remove them during the procedure itself.
Nevertheless, failure by the endoscopist to identify such lesions increases the
likelihood of lesion progression to subsequent colorectal cancer. Ultimately,
colonoscopy remains operator-dependent, and the wide range of quality in
colonoscopy examinations among endoscopists is influenced by variations in
their technique, training, and diligence. This paper presents a novel real-time
navigation guidance system for Optical Colonoscopy (OC). Our proposed system
employs a real-time approach that displays both an unfolded representation of
the colon and a local indicator directing to un-inspected areas. These
visualizations are presented to the physician during the procedure, providing
actionable and comprehensible guidance to un-surveyed areas in real-time, while
seamlessly integrating into the physician's workflow. Through an experimental
evaluation of coverage, we demonstrated that our system resulted in a higher
polyp recall (PR) and high inter-rater reliability with physicians for coverage
prediction. These results suggest that our real-time navigation guidance system
has the potential to improve the quality and effectiveness of Optical
Colonoscopy and ultimately benefit patient outcomes.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 09:09:35 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Frank",
"Netanel",
""
],
[
"Posner",
"Erez",
""
],
[
"Muhlethaler",
"Emmanuelle",
""
],
[
"Zholkover",
"Adi",
""
],
[
"Bouhnik",
"Moshe",
""
]
] |
new_dataset
| 0.997469 |
2306.04319
|
Hymalai Bello
|
Hymalai Bello, Sungho Suh, Daniel Gei{\ss}ler, Lala Ray, Bo Zhou and
Paul Lukowicz
|
CaptAinGlove: Capacitive and Inertial Fusion-Based Glove for Real-Time
on Edge Hand Gesture Recognition for Drone Control
| null | null | null | null |
cs.LG cs.HC cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present CaptAinGlove, a textile-based, low-power (1.15 Watts),
privacy-conscious, real-time on-the-edge (RTE) glove-based solution with a tiny
memory footprint (2MB), designed to recognize hand gestures used for drone
control. We employ lightweight convolutional neural networks as the backbone
models and a hierarchical multimodal fusion to reduce power consumption and
improve accuracy. The system yields an F1-score of 80% for the offline
evaluation of nine classes; eight hand gesture commands and null activity. For
the RTE, we obtained an F1-score of 67% (one user).
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 10:32:53 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Bello",
"Hymalai",
""
],
[
"Suh",
"Sungho",
""
],
[
"Geißler",
"Daniel",
""
],
[
"Ray",
"Lala",
""
],
[
"Zhou",
"Bo",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
new_dataset
| 0.997204 |
2306.04334
|
Alessandro Scir\`e
|
Alessandro Scir\`e, Simone Conia, Simone Ciciliano, Roberto Navigli
|
Echoes from Alexandria: A Large Resource for Multilingual Book
Summarization
|
9 pages, long paper at ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, research in text summarization has mainly focused on the
news domain, where texts are typically short and have strong layout features.
The task of full-book summarization presents additional challenges which are
hard to tackle with current resources, due to their limited size and
availability in English only. To overcome these limitations, we present "Echoes
from Alexandria", or in shortened form, "Echoes", a large resource for
multilingual book summarization. Echoes features three novel datasets: i)
Echo-Wiki, for multilingual book summarization, ii) Echo-XSum, for
extremely-compressive multilingual book summarization, and iii) Echo-FairySum,
for extractive book summarization. To the best of our knowledge, Echoes, with
its thousands of books and summaries, is the largest resource, and the first to
be multilingual, featuring 5 languages and 25 language pairs. In addition to
Echoes, we also introduce a new extractive-then-abstractive baseline, and,
supported by our experimental results and manual analysis of the summaries
generated, we argue that this baseline is more suitable for book summarization
than purely-abstractive approaches. We release our resource and software at
https://github.com/Babelscape/echoes-from-alexandria in the hope of fostering
innovative research in multilingual book summarization.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 11:01:39 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Scirè",
"Alessandro",
""
],
[
"Conia",
"Simone",
""
],
[
"Ciciliano",
"Simone",
""
],
[
"Navigli",
"Roberto",
""
]
] |
new_dataset
| 0.998448 |
2306.04342
|
Fran\c{c}ois Sellier
|
Chien-Chung Huang and Fran\c{c}ois Sellier
|
Matroid-Constrained Vertex Cover
| null | null |
10.1016/j.tcs.2023.113977
| null |
cs.DS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we introduce the problem of Matroid-Constrained Vertex Cover:
given a graph with weights on the edges and a matroid imposed on the vertices,
our problem is to choose a subset of vertices that is independent in the
matroid, with the objective of maximizing the total weight of covered edges.
This problem is a generalization of the much studied max $k$-vertex cover
problem, in which the matroid is the simple uniform matroid, and it is also a
special case of the problem of maximizing a monotone submodular function under
a matroid constraint.
First, we give a Fixed-Parameter Tractable Approximation Scheme (FPT-AS) when
the given matroid is a partition matroid, a laminar matroid, or a transversal
matroid. Precisely, if $k$ is the rank of the matroid, we obtain $(1 -
\varepsilon)$ approximation using $(1/\varepsilon)^{O(k)}n^{O(1)}$ time for
partition and laminar matroids and using $(1/\varepsilon+k)^{O(k)}n^{O(1)}$
time for transversal matroids. This extends a result of Manurangsi for uniform
matroids [Manurangsi, 2018]. We also show that these ideas can be applied in
the context of (single-pass) streaming algorithms. Besides, our FPT-AS
introduces a new technique based on matroid union, which may be of independent
interest in extremal combinatorics.
In the second part, we consider general matroids. We propose a simple local
search algorithm that guarantees $2/3 \approx 0.66$ approximation. For the more
general problem where two matroids are imposed on the vertices and a feasible
solution must be a common independent set, we show that a local search
algorithm gives a $2/3 \cdot (1 - 1/(p+1))$ approximation in $n^{O(p)}$ time,
for any integer $p$. We also provide some evidence to show that with the
constraint of one or two matroids, the approximation ratio of $2/3$ is likely
the best possible, using the currently known techniques of local search.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 11:16:04 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Huang",
"Chien-Chung",
""
],
[
"Sellier",
"François",
""
]
] |
new_dataset
| 0.983935 |
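Illustrative sketch (not the paper's FPT-AS or local-search algorithm): a toy greedy heuristic for the special case of max k-vertex cover mentioned above (uniform matroid), repeatedly picking the vertex with the largest marginal covered edge weight. The example graph and k are made-up assumptions.

def greedy_max_k_vertex_cover(edges, k):
    # edges: list of (u, v, weight); greedily pick up to k vertices by marginal covered weight
    chosen, covered = set(), set()
    vertices = {u for u, v, w in edges} | {v for u, v, w in edges}
    for _ in range(k):
        best_v, best_gain = None, 0.0
        for cand in vertices - chosen:
            gain = sum(w for i, (u, v, w) in enumerate(edges)
                       if i not in covered and cand in (u, v))
            if gain > best_gain:
                best_v, best_gain = cand, gain
        if best_v is None:
            break
        chosen.add(best_v)
        covered |= {i for i, (u, v, w) in enumerate(edges) if best_v in (u, v)}
    return chosen, sum(w for i, (u, v, w) in enumerate(edges) if i in covered)

edges = [("a", "b", 3.0), ("b", "c", 2.0), ("c", "d", 1.0), ("a", "d", 2.5)]
print(greedy_max_k_vertex_cover(edges, k=2))   # picks {'a', 'c'}, covering weight 8.5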
2306.04356
|
Lingfeng Yang
|
Lingfeng Yang, Yueze Wang, Xiang Li, Xinlong Wang, Jian Yang
|
Fine-Grained Visual Prompting
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive
zero-shot transfer capabilities in image-level visual perception. However,
these models have shown limited performance in instance-level tasks that demand
precise localization and recognition. Previous works have suggested that
incorporating visual prompts, such as colorful boxes or circles, can improve
the ability of models to recognize objects of interest. Nonetheless, compared
to language prompting, visual prompting designs are rarely explored. Existing
approaches, which employ coarse visual cues such as colorful boxes or circles,
often result in sub-optimal performance due to the inclusion of irrelevant and
noisy pixels. In this paper, we carefully study the visual prompting designs by
exploring more fine-grained markings, such as segmentation masks and their
variations. In addition, we introduce a new zero-shot framework that leverages
pixel-level annotations acquired from a generalist segmentation model for
fine-grained visual prompting. Consequently, our investigation reveals that a
straightforward application of blur outside the target mask, referred to as the
Blur Reverse Mask, exhibits exceptional effectiveness. This proposed prompting
strategy leverages the precise mask annotations to reduce focus on weakly
related regions while retaining spatial coherence between the target and the
surrounding background. Our Fine-Grained Visual Prompting (FGVP) demonstrates
superior performance in zero-shot comprehension of referring expressions on the
RefCOCO, RefCOCO+, and RefCOCOg benchmarks. It outperforms prior methods by an
average margin of 3.0% to 4.6%, with a maximum improvement of 12.5% on the
RefCOCO+ testA subset. The part detection experiments conducted on the PACO
dataset further validate the preponderance of FGVP over existing visual
prompting techniques. Code and models will be made available.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 11:39:56 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Yang",
"Lingfeng",
""
],
[
"Wang",
"Yueze",
""
],
[
"Li",
"Xiang",
""
],
[
"Wang",
"Xinlong",
""
],
[
"Yang",
"Jian",
""
]
] |
new_dataset
| 0.999136 |
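Illustrative sketch (not the authors' implementation): the Blur Reverse Mask idea above can be prototyped with Pillow and NumPy by compositing a Gaussian-blurred copy of the image with the original, keeping sharp pixels only inside the target mask. The file name, mask region, and blur radius are assumptions.

from PIL import Image, ImageFilter
import numpy as np

def blur_reverse_mask(image, mask, radius=20):
    # image: PIL RGB image; mask: HxW boolean array, True inside the target region
    blurred = image.filter(ImageFilter.GaussianBlur(radius))
    mask_img = Image.fromarray(mask.astype(np.uint8) * 255, mode="L")
    # where the mask is 255 keep the original pixels, elsewhere use the blurred copy
    return Image.composite(image, blurred, mask_img)

img = Image.open("example.jpg").convert("RGB")      # assumed input file
m = np.zeros((img.height, img.width), dtype=bool)
m[50:200, 80:240] = True                            # assumed target region
blur_reverse_mask(img, m).save("prompted.jpg")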
2306.04362
|
Qinghao Ye
|
Haiyang Xu, Qinghao Ye, Xuan Wu, Ming Yan, Yuan Miao, Jiabo Ye, Guohai
Xu, Anwen Hu, Yaya Shi, Guangwei Xu, Chenliang Li, Qi Qian, Maofei Que, Ji
Zhang, Xiao Zeng, Fei Huang
|
Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for
Pre-training and Benchmarks
|
Working in progress
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To promote the development of Vision-Language Pre-training (VLP) and
multimodal Large Language Model (LLM) in the Chinese community, we firstly
release the largest public Chinese high-quality video-language dataset named
Youku-mPLUG, which is collected from Youku, a well-known Chinese video-sharing
website, with strict criteria of safety, diversity, and quality. Youku-mPLUG
contains 10 million Chinese video-text pairs filtered from 400 million raw
videos across a wide range of 45 diverse categories for large-scale
pre-training. In addition, to facilitate a comprehensive evaluation of
video-language models, we carefully build the largest human-annotated Chinese
benchmarks covering three popular video-language tasks of cross-modal
retrieval, video captioning, and video category classification. Youku-mPLUG can
enable researchers to conduct more in-depth multimodal research and develop
better applications in the future. Furthermore, we release popular
video-language pre-training models, ALPRO and mPLUG-2, and our proposed
modularized decoder-only model mPLUG-video pre-trained on Youku-mPLUG.
Experiments show that models pre-trained on Youku-mPLUG gain up to 23.1%
improvement in video category classification. Besides, mPLUG-video achieves a
new state-of-the-art result on these benchmarks with 80.5% top-1 accuracy in
video category classification and 68.9 CIDEr score in video captioning,
respectively. Finally, we scale up mPLUG-video based on the frozen Bloomz with
only 1.7% trainable parameters as a Chinese multimodal LLM, and demonstrate
impressive instruction and video understanding ability. The zero-shot
instruction understanding experiment indicates that pretraining with
Youku-mPLUG can enhance the ability to comprehend overall and detailed visual
semantics, recognize scene text, and leverage open-domain knowledge.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 11:52:36 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Xu",
"Haiyang",
""
],
[
"Ye",
"Qinghao",
""
],
[
"Wu",
"Xuan",
""
],
[
"Yan",
"Ming",
""
],
[
"Miao",
"Yuan",
""
],
[
"Ye",
"Jiabo",
""
],
[
"Xu",
"Guohai",
""
],
[
"Hu",
"Anwen",
""
],
[
"Shi",
"Yaya",
""
],
[
"Xu",
"Guangwei",
""
],
[
"Li",
"Chenliang",
""
],
[
"Qian",
"Qi",
""
],
[
"Que",
"Maofei",
""
],
[
"Zhang",
"Ji",
""
],
[
"Zeng",
"Xiao",
""
],
[
"Huang",
"Fei",
""
]
] |
new_dataset
| 0.999863 |
2306.04385
|
Han Sun
|
Han Sun, Rui Gong, Konrad Schindler, Luc Van Gool
|
SF-FSDA: Source-Free Few-Shot Domain Adaptive Object Detection with
Efficient Labeled Data Factory
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain adaptive object detection aims to leverage the knowledge learned from
a labeled source domain to improve the performance on an unlabeled target
domain. Prior works typically require the access to the source domain data for
adaptation, and the availability of sufficient data on the target domain.
However, these assumptions may not hold due to data privacy and rare data
collection. In this paper, we propose and investigate a more practical and
challenging domain adaptive object detection problem under both source-free and
few-shot conditions, named SF-FSDA. To address this problem, we develop an
efficient labeled data factory based approach. Without accessing the source
domain, the data factory renders i) an infinite amount of synthesized
target-domain-like images, under the guidance of the few-shot image samples and
text description from the target domain; ii) corresponding bounding box and
category annotations, only demanding minimum human effort, i.e., a few manually
labeled examples. On the one hand, the synthesized images mitigate the
knowledge insufficiency brought by the few-shot condition. On the other hand,
compared to the popular pseudo-label technique, the generated annotations from
data factory not only get rid of the reliance on the source pretrained object
detection model, but also alleviate the unavoidable pseudo-label noise due to
domain shift and source-free condition. The generated dataset is further
utilized to adapt the source pretrained object detection model, realizing the
robust object detection under SF-FSDA. The experiments on different settings
showcase that our proposed approach outperforms other state-of-the-art methods
on SF-FSDA problem. Our codes and models will be made publicly available.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 12:34:55 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Sun",
"Han",
""
],
[
"Gong",
"Rui",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.985488 |
2306.04434
|
Marianne Gunderson
|
Marianne Gunderson
|
Visions of augmented reality in popular culture: Power and (un)readable
identities when the world becomes a screen
| null |
Tidsskrift for Kjoennsforskning volume 45 2021 pages 89-104
|
10.18261/issn.1891-1781-2021-02-03-03
| null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Augmented reality, where digital objects are overlaid and combined with the
ordinary visual surface, is a technology under rapid development, which has
long been a part of visions of the digital future. In this article, I examine
how gaze and power are coded into three pop-cultural visions of augmented
reality. By analyzing representations of augmented reality in science fiction
through the lens of feminist theory on performativity and intelligibility,
visibility and race, gendered gaze, and algorithmic normativity, this paper
provides a critical understanding of augmented reality as a visual technology,
and how it might change or reinforce possible norms and power relations. In
these futures where the screen no longer has any boundaries, both cooperative
and reluctant bodies are inscribed with gendered and racialized digital
markers. Reading visions of augmented reality through feminist theory, I argue
that augmented reality technologies enter into assemblages of people,
discourses, and technologies, where none of the actors necessarily has an
overview. In these assemblages, augmented reality takes on a performative and
norm-bearing role, by forming a grid of intelligibility that codifies
identities, structures hierarchical relationships, and scripts social
interactions.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 13:49:49 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Gunderson",
"Marianne",
""
]
] |
new_dataset
| 0.995757 |
2306.04441
|
Weizhi Wang
|
Weizhi Wang, Hong Wang, Xifeng Yan
|
STEPS: A Benchmark for Order Reasoning in Sequential Tasks
|
Work in Progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Various human activities can be abstracted into a sequence of actions in
natural text, e.g., cooking, repairing, and manufacturing. Such action
sequences heavily depend on the executing order, while disorder in action
sequences leads to failure of further task execution by robots or AI agents.
Therefore, to verify the order reasoning capability of current neural models in
sequential tasks, we propose a challenging benchmark named STEPS. STEPS
involves two subtask settings, focusing on determining the rationality of a given
next step in a recipe and selecting the reasonable step from a multi-choice
question, respectively. We describe the data construction and task
formulations, and benchmark most of the significant Large Language Models (LLMs).
The experimental results demonstrate that 1) commonsense reasoning about action
order in sequential tasks is challenging to resolve via zero-shot prompting
or few-shot in-context learning for LLMs; and 2) prompting methods still
significantly lag behind tuning-based methods on STEPS.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 13:58:55 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Wang",
"Weizhi",
""
],
[
"Wang",
"Hong",
""
],
[
"Yan",
"Xifeng",
""
]
] |
new_dataset
| 0.999404 |
2306.04485
|
Spencer Folk
|
Spencer Folk, James Paulos, Vijay Kumar
|
RotorPy: A Python-based Multirotor Simulator with Aerodynamics for
Education and Research
|
Appearing as a contributed paper in "The Role of Robotics Simulators
for Unmanned Aerial Vehicles" workshop at the 2023 International Conference
on Robotics and Automation (ICRA). See more at
https://imrclab.github.io/workshop-uav-sims-icra2023/
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simulators play a critical role in aerial robotics both in and out of the
classroom. We present RotorPy, a simulation environment written entirely in
Python intentionally designed to be a lightweight and accessible tool for
robotics students and researchers alike to probe concepts in estimation,
planning, and control for aerial robots. RotorPy simulates the 6-DoF dynamics
of a multirotor robot including aerodynamic wrenches, obstacles, actuator
dynamics and saturation, realistic sensors, and wind models. This work
describes the modeling choices for RotorPy, benchmark testing against real
data, and a case study using the simulator to design and evaluate a model-based
wind estimator.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 14:55:00 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Folk",
"Spencer",
""
],
[
"Paulos",
"James",
""
],
[
"Kumar",
"Vijay",
""
]
] |
new_dataset
| 0.999594 |
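Illustrative sketch (not RotorPy's API): a minimal point-mass vehicle with gravity, a commanded world-frame thrust, and a linear aerodynamic drag term acting on airspeed, stepped with forward Euler; it only conveys the flavor of the dynamics such a simulator integrates. Mass, drag coefficient, wind, and time step are made-up.

import numpy as np

def step(pos, vel, thrust_world, wind, dt, m=0.5, c_drag=0.3, g=9.81):
    # linear drag acts on airspeed, i.e. velocity relative to the wind field
    airspeed = vel - wind
    acc = thrust_world / m + np.array([0.0, 0.0, -g]) - (c_drag / m) * airspeed
    return pos + dt * vel, vel + dt * acc

pos, vel = np.zeros(3), np.zeros(3)
wind = np.array([1.0, 0.0, 0.0])              # steady 1 m/s wind along x (assumed)
thrust = np.array([0.0, 0.0, 0.5 * 9.81])     # hover thrust for m = 0.5 kg
for _ in range(200):                          # simulate 2 s at dt = 0.01
    pos, vel = step(pos, vel, thrust, wind, dt=0.01)
print(pos, vel)                               # the vehicle drifts downwind toward 1 m/s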
2306.04523
|
Ines Reinig
|
Ines Reinig and Katja Markert
|
Can current NLI systems handle German word order? Investigating language
model performance on a new German challenge set of minimal pairs
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Compared to English, German word order is freer and therefore poses
additional challenges for natural language inference (NLI). We create WOGLI
(Word Order in German Language Inference), the first adversarial NLI dataset
for German word order that has the following properties: (i) each premise has
an entailed and a non-entailed hypothesis; (ii) premise and hypotheses differ
only in word order and necessary morphological changes to mark case and number.
In particular, each premise and its two hypotheses contain exactly the same
lemmata. Our adversarial examples require the model to use morphological
markers in order to recognise or reject entailment. We show that current German
autoencoding models fine-tuned on translated NLI data can struggle on this
challenge set, reflecting the fact that translated NLI datasets will not mirror
all necessary language phenomena in the target language. We also examine
performance after data augmentation as well as on related word order phenomena
derived from WOGLI. Our datasets are publicly available at
https://github.com/ireinig/wogli.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 15:33:07 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Reinig",
"Ines",
""
],
[
"Markert",
"Katja",
""
]
] |
new_dataset
| 0.997703 |
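Illustrative, hypothetical example (not an item from WOGLI) of the kind of minimal pair described above: premise and hypotheses share the same lemmata and differ only in word order and case marking, so the entailment decision hinges on morphology.

item = {
    "premise": "Der Arzt sieht den Jungen.",        # "The doctor sees the boy."
    "entailed": "Den Jungen sieht der Arzt.",       # OVS order, same meaning
    "non_entailed": "Der Junge sieht den Arzt.",    # roles swapped via case marking
}
labels = {"entailed": "entailment", "non_entailed": "non-entailment"}
for key in ("entailed", "non_entailed"):
    print(item["premise"], "=>", item[key], ":", labels[key])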
2306.04532
|
Hamza Chaudhry
|
Hamza Tahir Chaudhry, Jacob A. Zavatone-Veth, Dmitry Krotov, Cengiz
Pehlevan
|
Long Sequence Hopfield Memory
|
14+21 pages, 10+1 figures
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequence memory is an essential attribute of natural and artificial
intelligence that enables agents to encode, store, and retrieve complex
sequences of stimuli and actions. Computational models of sequence memory have
been proposed where recurrent Hopfield-like neural networks are trained with
temporally asymmetric Hebbian rules. However, these networks suffer from
limited sequence capacity (maximal length of the stored sequence) due to
interference between the memories. Inspired by recent work on Dense Associative
Memories, we expand the sequence capacity of these models by introducing a
nonlinear interaction term, enhancing separation between the patterns. We
derive novel scaling laws for sequence capacity with respect to network size,
significantly outperforming existing scaling laws for models based on
traditional Hopfield networks, and verify these theoretical results with
numerical simulation. Moreover, we introduce a generalized pseudoinverse rule
to recall sequences of highly correlated patterns. Finally, we extend this
model to store sequences with variable timing between states' transitions and
describe a biologically-plausible implementation, with connections to motor
neuroscience.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 15:41:03 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Chaudhry",
"Hamza Tahir",
""
],
[
"Zavatone-Veth",
"Jacob A.",
""
],
[
"Krotov",
"Dmitry",
""
],
[
"Pehlevan",
"Cengiz",
""
]
] |
new_dataset
| 0.995258 |
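Illustrative sketch (a classical linear baseline, not the paper's Dense-Associative-Memory extension): sequence recall with a temporally asymmetric Hebbian rule, where the weight matrix maps each stored pattern to its successor and a synchronous sign update steps through the sequence. The pattern count and network size are arbitrary and kept small so crosstalk stays negligible.

import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                                   # neurons and sequence length (assumed)
xi = rng.choice([-1.0, 1.0], size=(P, N))       # random +/-1 patterns forming a sequence
W = sum(np.outer(xi[mu + 1], xi[mu]) for mu in range(P - 1)) / N   # asymmetric Hebbian rule

s = xi[0].copy()
for t in range(P - 1):
    s = np.sign(W @ s)                          # one update moves the state to the next pattern
    s[s == 0] = 1.0
    overlap = (s @ xi[t + 1]) / N               # close to 1 when P << N
    print(f"step {t + 1}: overlap with pattern {t + 1} = {overlap:.2f}")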
2306.04556
|
Arjun Guha
|
Hannah McLean Babe, Sydney Nguyen, Yangtian Zi, Arjun Guha, Molly Q
Feldman, Carolyn Jane Anderson
|
StudentEval: A Benchmark of Student-Written Prompts for Large Language
Models of Code
| null | null | null | null |
cs.LG cs.HC cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code LLMs are being rapidly deployed and there is evidence that they can make
professional programmers more productive. Current benchmarks for code
generation measure whether models generate correct programs given an expert
prompt. In this paper, we present a new benchmark containing multiple prompts
per problem, written by a specific population of non-expert prompters:
beginning programmers. StudentEval contains 1,749 prompts for 48 problems,
written by 80 students who have only completed one semester of Python
programming. Our students wrote these prompts while working interactively with
a Code LLM, and we observed very mixed success rates. We use StudentEval to
evaluate 5 Code LLMs and find that StudentEval is a better discriminator of
model performance than existing benchmarks. We analyze the prompts and find
significant variation in students' prompting techniques. We also find that
nondeterministic LLM sampling could mislead students into thinking that their
prompts are more (or less) effective than they actually are, which has
implications for how to teach with Code LLMs.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 16:03:55 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Babe",
"Hannah McLean",
""
],
[
"Nguyen",
"Sydney",
""
],
[
"Zi",
"Yangtian",
""
],
[
"Guha",
"Arjun",
""
],
[
"Feldman",
"Molly Q",
""
],
[
"Anderson",
"Carolyn Jane",
""
]
] |
new_dataset
| 0.999552 |
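Illustrative sketch (not the StudentEval harness): the basic evaluation loop such a benchmark implies — send a student-written prompt to a code model, execute the returned function against hidden test cases, and record a pass/fail. `generate_code` stands in for whatever LLM API is used; the toy model and tests are made-up.

def evaluate_prompt(prompt, entry_point, tests, generate_code):
    # generate_code: callable str -> str returning candidate Python source (placeholder)
    candidate = generate_code(prompt)
    env = {}
    try:
        exec(candidate, env)                        # define the candidate function
        for args, expected in tests:
            assert env[entry_point](*args) == expected
        return True
    except Exception:
        return False

fake_model = lambda p: "def add_two(x):\n    return x + 2\n"   # stand-in for the LLM
tests = [((3,), 5), ((0,), 2)]
print(evaluate_prompt("Write add_two(x) that adds 2 to x.", "add_two", tests, fake_model))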
2306.04557
|
Jens Behley
|
Jan Weyler and Federico Magistri and Elias Marks and Yue Linn Chong
and Matteo Sodano and Gianmarco Roggiolani and Nived Chebrolu and Cyrill
Stachniss and Jens Behley
|
PhenoBench -- A Large Dataset and Benchmarks for Semantic Image
Interpretation in the Agricultural Domain
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The production of food, feed, fiber, and fuel is a key task of agriculture.
Especially crop production has to cope with a multitude of challenges in the
upcoming decades caused by a growing world population, climate change, the need
for sustainable production, lack of skilled workers, and generally the limited
availability of arable land. Vision systems could help cope with these
challenges by offering tools to make better and more sustainable field
management decisions and support the breeding of new varieties of crops by
allowing temporally dense and reproducible measurements. Recently, tackling
perception tasks in the agricultural domain has gained increasing interest in the
computer vision and robotics communities, since agricultural robots are one
promising solution for coping with the lack of workers and, at the same time,
enabling more sustainable agricultural production. While large datasets and
benchmarks in other domains are readily available and have enabled significant
progress toward more reliable vision systems, agricultural datasets and
benchmarks are comparably rare. In this paper, we present a large dataset and
benchmarks for the semantic interpretation of images of real agricultural
fields. Our dataset recorded with a UAV provides high-quality, dense
annotations of crops and weeds, but also fine-grained labels of crop leaves at
the same time, which enable the development of novel algorithms for visual
perception in the agricultural domain. Together with the labeled data, we
provide novel benchmarks for evaluating different visual perception tasks on a
hidden test set comprised of different fields: known fields covered by the
training data and a completely unseen field. The tasks cover semantic
segmentation, panoptic segmentation of plants, leaf instance segmentation,
detection of plants and leaves, and hierarchical panoptic segmentation for
jointly identifying plants and leaves.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 16:04:08 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Weyler",
"Jan",
""
],
[
"Magistri",
"Federico",
""
],
[
"Marks",
"Elias",
""
],
[
"Chong",
"Yue Linn",
""
],
[
"Sodano",
"Matteo",
""
],
[
"Roggiolani",
"Gianmarco",
""
],
[
"Chebrolu",
"Nived",
""
],
[
"Stachniss",
"Cyrill",
""
],
[
"Behley",
"Jens",
""
]
] |
new_dataset
| 0.999896 |
2306.04563
|
Sophie Jentzsch
|
Sophie Jentzsch, Kristian Kersting
|
ChatGPT is fun, but it is not funny! Humor is still challenging Large
Language Models
| null | null | null | null |
cs.AI cs.CL cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Humor is a central aspect of human communication that has not been solved for
artificial agents so far. Large language models (LLMs) are increasingly able to
capture implicit and contextual information. Especially, OpenAI's ChatGPT
recently gained immense public attention. The GPT3-based model almost seems to
communicate on a human level and can even tell jokes. Humor is an essential
component of human communication. But is ChatGPT really funny? We put ChatGPT's
sense of humor to the test. In a series of exploratory experiments around
jokes, i.e., generation, explanation, and detection, we seek to understand
ChatGPT's capability to grasp and reproduce human humor. Since the model itself
is not accessible, we applied prompt-based experiments. Our empirical evidence
indicates that the jokes are not hard-coded, yet they are mostly not newly generated by
the model either. Over 90% of 1008 generated jokes were the same 25 jokes. The system
accurately explains valid jokes but also comes up with fictional explanations
for invalid jokes. Joke-typical characteristics can mislead ChatGPT in the
classification of jokes. ChatGPT has not solved computational humor yet but it
can be a big leap toward "funny" machines.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 16:10:21 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Jentzsch",
"Sophie",
""
],
[
"Kersting",
"Kristian",
""
]
] |
new_dataset
| 0.999054 |
2306.04585
|
Sayan Mitra
|
Kristina Miller and Christopher K. Zeitler and William Shen and Mahesh
Viswanathan and Sayan Mitra
|
RTAEval: A framework for evaluating runtime assurance logic
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Runtime assurance (RTA) addresses the problem of keeping an autonomous system
safe while using an untrusted (or experimental) controller. This can be done
via logic that explicitly switches between the untrusted controller and a
safety controller, or logic that filters the input provided by the untrusted
controller. While several tools implement specific instances of RTAs, there is
currently no framework for evaluating different approaches. Given the
importance of the RTA problem in building safe autonomous systems, an
evaluation tool is needed. In this paper, we present the RTAEval framework as a
low code framework that can be used to quickly evaluate different RTA logics
for different types of agents in a variety of scenarios. RTAEval is designed to
quickly create scenarios, run different RTA logics, and collect data that can
be used to evaluate and visualize performance. In this paper, we describe
different components of RTAEval and show how it can be used to create and
evaluate scenarios involving multiple aircraft models.
|
[
{
"version": "v1",
"created": "Mon, 5 Jun 2023 12:39:44 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Miller",
"Kristina",
""
],
[
"Zeitler",
"Christopher K.",
""
],
[
"Shen",
"William",
""
],
[
"Viswanathan",
"Mahesh",
""
],
[
"Mitra",
"Sayan",
""
]
] |
new_dataset
| 0.998516 |
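Illustrative sketch (not RTAEval's interface): the switching-style RTA logic described above — keep the untrusted controller while a monitor predicts the next state stays safe, otherwise hand control to the safety controller. The 1-D double-integrator plant, the controllers, and the safety margin are made-up.

def rta_step(state, untrusted, safety, is_safe, predict):
    u = untrusted(state)
    if is_safe(predict(state, u)):        # monitor checks the predicted next state
        return u, "untrusted"
    return safety(state), "safety"

dt = 0.1
predict = lambda s, u: (s[0] + dt * s[1], s[1] + dt * u)        # (position, velocity)
is_safe = lambda s: s[0] + 0.5 * s[1] ** 2 <= 10.0              # crude braking-distance margin
untrusted = lambda s: 2.0                                       # experimental controller: accelerate
safety = lambda s: -3.0                                         # safety controller: brake

state = (0.0, 0.0)
for _ in range(100):
    u, mode = rta_step(state, untrusted, safety, is_safe, predict)
    state = predict(state, u)
print(state, mode)                        # the switch keeps the position from running past ~10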
2306.04610
|
Nicholas Riccardi
|
Nicholas Riccardi and Rutvik H. Desai
|
The Two Word Test: A Semantic Benchmark for Large Language Models
|
12 pages, 5 figures, 3 tables, submitted to NeurIPS 2023 Datasets and
Benchmarks Track
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs) have shown remarkable abilities recently,
including passing advanced professional exams and demanding benchmark tests.
This performance has led many to suggest that they are close to achieving
humanlike or 'true' understanding of language, and even Artificial General
Intelligence (AGI). Here, we provide a new open-source benchmark that can
assess semantic abilities of LLMs using two-word phrases, with a task that can
be performed relatively easily by humans without advanced training. Combining
multiple words into a single concept is a fundamental aspect of human language
and intelligence. The test requires meaningfulness judgments of 1768 noun-noun
combinations that have been rated as meaningful (e.g., baby boy) or not
meaningful (e.g., goat sky) by 150 human raters. We provide versions of the
task that probe meaningfulness ratings on a 0-4 scale as well as binary
judgments. We conducted a series of experiments using the TWT on GPT-4,
GPT-3.5, and Bard, with both versions. Results demonstrated that, compared to
humans, all models perform poorly at rating meaningfulness of these phrases.
GPT-3.5 and Bard are also unable to make binary discriminations between
sensible and nonsense phrases, tending to judge nonsense phrases as making sense. GPT-4 makes a substantial
improvement in binary discrimination of combinatorial phrases but is still
significantly worse than human performance. The TWT can be used to understand
the limitations and weaknesses of current LLMs, and potentially improve them.
The test also reminds us that caution is warranted in attributing 'true
understanding' or AGI to LLMs. TWT is available at:
https://github.com/NickRiccardi/two-word-test
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 17:22:03 GMT"
}
] | 2023-06-08T00:00:00 |
[
[
"Riccardi",
"Nicholas",
""
],
[
"Desai",
"Rutvik H.",
""
]
] |
new_dataset
| 0.995949 |
2101.08184
|
Paolo Baldan
|
Paolo Baldan, Richard Eggert, Barbara K\"onig, Tommaso Padoan
|
Fixpoint Theory -- Upside Down
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Knaster-Tarski's theorem, characterising the greatest fixpoint of a monotone
function over a complete lattice as the largest post-fixpoint, naturally leads
to the so-called coinduction proof principle for showing that some element is
below the greatest fixpoint (e.g., for providing bisimilarity witnesses). The
dual principle, used for showing that an element is above the least fixpoint,
is related to inductive invariants. In this paper we provide proof rules which
are similar in spirit but for showing that an element is above the greatest
fixpoint or, dually, below the least fixpoint. The theory is developed for
non-expansive monotone functions on suitable lattices of the form
$\mathbb{M}^Y$, where $Y$ is a finite set and $\mathbb{M}$ an MV-algebra, and
it is based on the construction of (finitary) approximations of the original
functions. We show that our theory applies to a wide range of examples,
including termination probabilities, metric transition systems, behavioural
distances for probabilistic automata and bisimilarity. Moreover it allows us to
determine original algorithms for solving simple stochastic games.
|
[
{
"version": "v1",
"created": "Wed, 20 Jan 2021 15:31:01 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Apr 2021 11:09:43 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Aug 2021 07:57:11 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Jul 2022 13:59:43 GMT"
},
{
"version": "v5",
"created": "Wed, 19 Apr 2023 10:41:15 GMT"
},
{
"version": "v6",
"created": "Thu, 20 Apr 2023 07:27:42 GMT"
},
{
"version": "v7",
"created": "Thu, 27 Apr 2023 12:58:27 GMT"
},
{
"version": "v8",
"created": "Tue, 6 Jun 2023 14:06:36 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Baldan",
"Paolo",
""
],
[
"Eggert",
"Richard",
""
],
[
"König",
"Barbara",
""
],
[
"Padoan",
"Tommaso",
""
]
] |
new_dataset
| 0.993327 |
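Illustrative sketch (not the paper's approximation construction): computing a least fixpoint on a finite lattice of the form [0,1]^Y by Kleene iteration from the bottom element, with a toy termination-probability operator as the monotone function; the paper's proof rules instead bound fixpoints from the other side, which plain iteration cannot do.

def kleene_lfp(f, bottom, tol=1e-12, max_iter=10_000):
    # iterate f from the bottom element until (numerically) stable
    x = bottom
    for _ in range(max_iter):
        fx = f(x)
        if all(abs(fx[k] - x[k]) <= tol for k in x):
            return fx
        x = fx
    return x

# toy operator: state a terminates with prob. 0.5, otherwise moves to b;
# state b returns to a with prob. 0.9 and diverges with prob. 0.1
f = lambda p: {"a": 0.5 + 0.5 * p["b"], "b": 0.9 * p["a"]}
print(kleene_lfp(f, {"a": 0.0, "b": 0.0}))    # -> roughly {'a': 0.909, 'b': 0.818}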
2102.07448
|
Senthil Yogamani
|
Varun Ravi Kumar, Senthil Yogamani, Hazem Rashed, Ganesh Sistu,
Christian Witt, Isabelle Leang, Stefan Milz and Patrick M\"ader
|
OmniDet: Surround View Cameras based Multi-task Visual Perception
Network for Autonomous Driving
|
Best Robot Vision paper award finalist (top 4). Camera ready version
accepted for RA-L and ICRA 2021 publication
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surround View fisheye cameras are commonly deployed in automated driving for
360\deg{} near-field sensing around the vehicle. This work presents a
multi-task visual perception network on unrectified fisheye images to enable
the vehicle to sense its surrounding environment. It consists of six primary
tasks necessary for an autonomous driving system: depth estimation, visual
odometry, semantic segmentation, motion segmentation, object detection, and
lens soiling detection. We demonstrate that the jointly trained model performs
better than the respective single task versions. Our multi-task model has a
shared encoder providing a significant computational advantage and has
synergized decoders where tasks support each other. We propose a novel camera
geometry based adaptation mechanism to encode the fisheye distortion model both
at training and inference. This was crucial to enable training on the WoodScape
dataset, comprised of data from different parts of the world collected by 12
different cameras mounted on three different cars with different intrinsics and
viewpoints. Given that bounding boxes are not a good representation for
distorted fisheye images, we also extend object detection to use a polygon with
non-uniformly sampled vertices. We additionally evaluate our model on standard
automotive datasets, namely KITTI and Cityscapes. We obtain the
state-of-the-art results on KITTI for depth estimation and pose estimation
tasks and competitive performance on the other tasks. We perform extensive
ablation studies on various architecture choices and task weighting
methodologies. A short video at https://youtu.be/xbSjZ5OfPes provides
qualitative results.
|
[
{
"version": "v1",
"created": "Mon, 15 Feb 2021 10:46:24 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Aug 2021 14:45:16 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Jun 2023 14:31:21 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Kumar",
"Varun Ravi",
""
],
[
"Yogamani",
"Senthil",
""
],
[
"Rashed",
"Hazem",
""
],
[
"Sistu",
"Ganesh",
""
],
[
"Witt",
"Christian",
""
],
[
"Leang",
"Isabelle",
""
],
[
"Milz",
"Stefan",
""
],
[
"Mäder",
"Patrick",
""
]
] |
new_dataset
| 0.998135 |
2103.17001
|
Senthil Yogamani
|
Ciaran Eising, Jonathan Horgan and Senthil Yogamani
|
Near-field Perception for Low-Speed Vehicle Automation using
Surround-view Fisheye Cameras
|
Accepted for publication at IEEE Transactions on Intelligent
Transportation Systems
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cameras are the primary sensor in automated driving systems. They provide
high information density and are optimal for detecting road infrastructure cues
laid out for human vision. Surround-view camera systems typically comprise
four fisheye cameras with 190{\deg}+ field of view covering the entire
360{\deg} around the vehicle focused on near-field sensing. They are the
principal sensors for low-speed, high accuracy, and close-range sensing
applications, such as automated parking, traffic jam assistance, and low-speed
emergency braking. In this work, we provide a detailed survey of such vision
systems, setting up the survey in the context of an architecture that can be
decomposed into four modular components namely Recognition, Reconstruction,
Relocalization, and Reorganization. We jointly call this the 4R Architecture.
We discuss how each component accomplishes a specific aspect and provide a
positional argument that they can be synergized to form a complete perception
system for low-speed automation. We support this argument by presenting results
from previous works and by presenting architecture proposals for such a system.
Qualitative results are presented in the video at https://youtu.be/ae8bCOF77uY.
|
[
{
"version": "v1",
"created": "Wed, 31 Mar 2021 11:33:36 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Nov 2021 10:49:43 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Nov 2021 12:41:53 GMT"
},
{
"version": "v4",
"created": "Tue, 6 Jun 2023 15:25:08 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Eising",
"Ciaran",
""
],
[
"Horgan",
"Jonathan",
""
],
[
"Yogamani",
"Senthil",
""
]
] |
new_dataset
| 0.995978 |
2202.03632
|
Zhenkun Shi
|
Zhenkun Shi, Qianqian Yuan, Ruoyu Wang, Hoaran Li, Xiaoping Liao,
Hongwu Ma
|
ECRECer: Enzyme Commission Number Recommendation and Benchmarking based
on Multiagent Dual-core Learning
|
16 pages, 14 figures
|
Research. 2023:6;0153
|
10.34133/research.0153
|
research.0153
|
cs.LG cs.AI q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Enzyme Commission (EC) numbers, which associate a protein sequence with the
biochemical reactions it catalyzes, are essential for the accurate
understanding of enzyme functions and cellular metabolism. Many ab-initio
computational approaches were proposed to predict EC numbers for given input
sequences directly. However, the prediction performance (accuracy, recall,
precision), usability, and efficiency of existing methods still have much room
to be improved. Here, we report ECRECer, a cloud platform for accurately
predicting EC numbers based on novel deep learning techniques. To build
ECRECer, we evaluate different protein representation methods and adopt a
protein language model for protein sequence embedding. After embedding, we
propose a multi-agent hierarchy deep learning-based framework to learn the
proposed tasks in a multi-task manner. Specifically, we used an extreme
multi-label classifier to perform the EC prediction and employed a greedy
strategy to integrate and fine-tune the final model. Comparative analyses
against four representative methods demonstrate that ECRECer delivers the
highest performance, which improves accuracy and F1 score by 70% and 20% over
the state-of-the-art, respectively. With ECRECer, we can annotate numerous
enzymes in the Swiss-Prot database with incomplete EC numbers to their full
fourth level. Take the UniProt protein "A0A0U5GJ41" as an example (1.14.-.-):
ECRECer annotated it with "1.14.11.38", which is supported by further protein
structure analysis based on AlphaFold2. Finally, we established a webserver
(https://ecrecer.biodesign.ac.cn) and provided an offline bundle to improve
usability.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 04:00:49 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Shi",
"Zhenkun",
""
],
[
"Yuan",
"Qianqian",
""
],
[
"Wang",
"Ruoyu",
""
],
[
"Li",
"Hoaran",
""
],
[
"Liao",
"Xiaoping",
""
],
[
"Ma",
"Hongwu",
""
]
] |
new_dataset
| 0.993616 |
2203.00064
|
Niharika Thakuria
|
Niharika Thakuria, Reena Elangovan, Anand Raghunathan and Sumeet K.
Gupta
|
Piezoelectric Strain FET (PeFET) based Non-Volatile Memories
|
8 pages, 13 figures. In the peer review process of the journal IEEE
Transactions on Electron Devices
| null |
10.1109/TED.2023.3270845
| null |
cs.ET cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose non-volatile memory (NVM) designs based on Piezoelectric Strain
FET (PeFET) utilizing a piezoelectric/ferroelectric (PE/FE such as PZT) coupled
with 2D Transition Metal Dichalcogenide (2D-TMD such as MoS2) transistor. The
proposed NVMs store bit information in the form of polarization (P) of the
FE/PE, use electric-field driven P-switching for write and employ
piezoelectricity induced dynamic bandgap modulation of 2D-TMD channel for bit
sensing. We analyze PeFET with COMSOL based 3D modeling showing that the
circuit-driven optimization of PeFET geometry is essential to achieve effective
hammer-and-nail effect and adequate bandgap modulation for NVM read. Our
results show that distinguishability of binary states to up to 11X is achieved
in PeFETs. We propose various flavors of PeFET NVMs, namely (a) high density
(HD) NVM featuring a compact access-transistor-less bit-cell, (b) 1T-1PeFET NVM
with segmented architecture, targeted for optimized write energy and latency
and (c) cross-coupled (CC) NVM offering a trade-off between area and
latency. PeFET NVMs offer up to 7X smaller cell area, 66% lower write energy,
87% lower read energy and 44% faster read compared to 2D-FET SRAM. This comes
at the cost of high write latency in PeFET NVMs, which can be minimized by
virtue of optimized PE geometry.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 20:10:27 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 18:37:43 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Thakuria",
"Niharika",
""
],
[
"Elangovan",
"Reena",
""
],
[
"Raghunathan",
"Anand",
""
],
[
"Gupta",
"Sumeet K.",
""
]
] |
new_dataset
| 0.999695 |
2204.03939
|
Rong Ye
|
Rong Ye, Chengqi Zhao, Tom Ko, Chutong Meng, Tao Wang, Mingxuan Wang,
Jun Cao
|
GigaST: A 10,000-hour Pseudo Speech Translation Corpus
|
Accepted at Interspeech 2023. GigaST dataset is available at
https://st-benchmark.github.io/resources/GigaST
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces GigaST, a large-scale pseudo speech translation (ST)
corpus. We create the corpus by translating the text in GigaSpeech, an English
ASR corpus, into German and Chinese. The training set is translated by a strong
machine translation system and the test set is translated by human. ST models
trained with an addition of our corpus obtain new state-of-the-art results on
the MuST-C English-German benchmark test set. We provide a detailed description
of the translation process and verify its quality. We make the translated text
data public and hope to facilitate research in speech translation.
Additionally, we also release the training scripts on NeurST to make it easy to
replicate our systems. GigaST dataset is available at
https://st-benchmark.github.io/resources/GigaST.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 08:59:33 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 12:48:48 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Ye",
"Rong",
""
],
[
"Zhao",
"Chengqi",
""
],
[
"Ko",
"Tom",
""
],
[
"Meng",
"Chutong",
""
],
[
"Wang",
"Tao",
""
],
[
"Wang",
"Mingxuan",
""
],
[
"Cao",
"Jun",
""
]
] |
new_dataset
| 0.998955 |
2206.04882
|
Ziqi Chen
|
Ziqi Chen, Oluwatosin R. Ayinde, James R. Fuchs, Huan Sun, Xia Ning
|
$\mathsf{G^2Retro}$ as a Two-Step Graph Generative Models for
Retrosynthesis Prediction
| null |
Commun Chem 6, 102 (2023)
|
10.1038/s42004-023-00897-3
| null |
cs.LG physics.chem-ph q-bio.BM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Retrosynthesis is a procedure where a target molecule is transformed into
potential reactants and thus the synthesis routes can be identified. Recently,
computational approaches have been developed to accelerate the design of
synthesis routes. In this paper, we develop a generative framework
$\mathsf{G^2Retro}$ for one-step retrosynthesis prediction. $\mathsf{G^2Retro}$
imitates the reversed logic of synthetic reactions. It first predicts the
reaction centers in the target molecules (products), identifies the synthons
needed to assemble the products, and transforms these synthons into reactants.
$\mathsf{G^2Retro}$ defines a comprehensive set of reaction center types, and
learns from the molecular graphs of the products to predict potential reaction
centers. To complete synthons into reactants, $\mathsf{G^2Retro}$ considers all
the involved synthon structures and the product structures to identify the
optimal completion paths, and accordingly attaches small substructures
sequentially to the synthons. Here we show that $\mathsf{G^2Retro}$ is able to
better predict the reactants for given products in the benchmark dataset than
the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 05:34:12 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Nov 2022 18:32:43 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jun 2023 20:58:47 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Chen",
"Ziqi",
""
],
[
"Ayinde",
"Oluwatosin R.",
""
],
[
"Fuchs",
"James R.",
""
],
[
"Sun",
"Huan",
""
],
[
"Ning",
"Xia",
""
]
] |
new_dataset
| 0.964481 |
2206.09959
|
Ali Hatamizadeh
|
Ali Hatamizadeh, Hongxu Yin, Greg Heinrich, Jan Kautz, and Pavlo
Molchanov
|
Global Context Vision Transformers
|
Accepted to ICML 2023
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We propose global context vision transformer (GC ViT), a novel architecture
that enhances parameter and compute utilization for computer vision. Our method
leverages global context self-attention modules, jointly with standard local
self-attention, to effectively and efficiently model both long and short-range
spatial interactions, without the need for expensive operations such as
computing attention masks or shifting local windows. In addition, we address
the lack of inductive bias in ViTs, and propose to leverage modified
fused inverted residual blocks in our architecture. Our proposed GC ViT
achieves state-of-the-art results across image classification, object detection
and semantic segmentation tasks. On ImageNet-1K dataset for classification, the
variants of GC ViT with 51M, 90M and 201M parameters achieve 84.3%, 85.0% and
85.7% Top-1 accuracy, respectively, at 224 image resolution and without any
pre-training, hence surpassing comparably-sized prior art such as CNN-based
ConvNeXt and ViT-based MaxViT and Swin Transformer by a large margin.
Pre-trained GC ViT backbones in downstream tasks of object detection, instance
segmentation, and semantic segmentation using MS COCO and ADE20K datasets
outperform prior work consistently. Specifically, GC ViT with a 4-scale DINO
detection head achieves a box AP of 58.3 on MS COCO dataset.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 18:42:44 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Sep 2022 21:02:00 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Oct 2022 03:40:57 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Feb 2023 04:38:57 GMT"
},
{
"version": "v5",
"created": "Tue, 6 Jun 2023 08:17:18 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Hatamizadeh",
"Ali",
""
],
[
"Yin",
"Hongxu",
""
],
[
"Heinrich",
"Greg",
""
],
[
"Kautz",
"Jan",
""
],
[
"Molchanov",
"Pavlo",
""
]
] |
new_dataset
| 0.98881 |
2210.03338
|
Lifan Mei
|
Lifan Mei, Jinrui Gou, Jingrui Yang, Yujin Cai, Yong Liu
|
On Routing Optimization in Networks with Embedded Computational Services
|
16 figures
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Modern communication networks are increasingly equipped with in-network
computational capabilities and services. Routing in such networks is
significantly more complicated than the traditional routing. A legitimate route
for a flow not only needs to have enough communication and computation
resources, but also has to conform to various application-specific routing
constraints. This paper presents a comprehensive study on routing optimization
problems in networks with embedded computational services. We develop a set of
routing optimization models and derive low-complexity heuristic routing
algorithms for diverse computation scenarios. For dynamic demands, we also
develop an online routing algorithm with performance guarantees. Through
evaluations over emerging applications on real topologies, we demonstrate that
our models can be flexibly customized to meet the diverse routing requirements
of different computation applications. Our proposed heuristic algorithms
significantly outperform baseline algorithms and can achieve close-to-optimal
performance in various scenarios.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 05:59:32 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 01:10:02 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Mei",
"Lifan",
""
],
[
"Gou",
"Jinrui",
""
],
[
"Yang",
"Jingrui",
""
],
[
"Cai",
"Yujin",
""
],
[
"Liu",
"Yong",
""
]
] |
new_dataset
| 0.985457 |
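Illustrative sketch (not one of the paper's optimization models): routing a flow that must traverse one node hosting a required computational service can be cast as a shortest path over states (node, service-done), i.e. a two-layer graph where the layer switch is only allowed at service-capable nodes. The topology, link costs, service cost, and placement are made-up.

import heapq

def route_with_service(adj, service_nodes, src, dst, service_cost=1.0):
    # Dijkstra over states (node, done): done=1 once the required service was applied
    dist = {(src, 0): 0.0}
    parent = {}
    pq = [(0.0, src, 0)]
    while pq:
        d, node, done = heapq.heappop(pq)
        if d > dist[(node, done)]:
            continue                                    # stale queue entry
        if (node, done) == (dst, 1):
            path, state = [node], (node, done)
            while state in parent:
                state = parent[state]
                path.append(state[0])
            return d, path[::-1]
        moves = [(nb, done, w) for nb, w in adj.get(node, [])]
        if done == 0 and node in service_nodes:
            moves.append((node, 1, service_cost))       # execute the service at this node
        for nb, nd, w in moves:
            if d + w < dist.get((nb, nd), float("inf")):
                dist[(nb, nd)] = d + w
                parent[(nb, nd)] = (node, done)
                heapq.heappush(pq, (d + w, nb, nd))
    return None

adj = {"s": [("a", 1.0), ("b", 4.0)], "a": [("t", 5.0), ("b", 1.0)],
       "b": [("t", 1.0)], "t": []}
print(route_with_service(adj, {"b"}, "s", "t"))   # the node is repeated where the service runs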
2210.05241
|
Chengting Yu
|
Chengting Yu, Zheming Gu, Da Li, Gaoang Wang, Aili Wang and Erping Li
|
STSC-SNN: Spatio-Temporal Synaptic Connection with Temporal Convolution
and Attention for Spiking Neural Networks
| null |
Frontiers in neuroscience, 2022, 12
|
10.3389/fnins.2022.1079357
| null |
cs.NE q-bio.NC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spiking Neural Networks (SNNs), as one of the algorithmic models in
neuromorphic computing, have gained a great deal of research attention owing to
temporal information processing capability, low power consumption, and high
biological plausibility. The potential to efficiently extract spatio-temporal
features makes it suitable for processing the event streams. However, existing
synaptic structures in SNNs are mostly full connections or spatial 2D
convolutions, neither of which can extract temporal dependencies adequately. In
this work, we take inspiration from biological synapses and propose a
spatio-temporal synaptic connection SNN (STSC-SNN) model, to enhance the
spatio-temporal receptive fields of synaptic connections, thereby establishing
temporal dependencies across layers. Concretely, we incorporate temporal
convolution and attention mechanisms to implement synaptic filtering and gating
functions. We show that endowing synaptic models with temporal dependencies can
improve the performance of SNNs on classification tasks. In addition, we
investigate the impact on performance via varied spatial-temporal receptive
fields and reevaluate the temporal modules in SNNs. Our approach is tested on
neuromorphic datasets, including DVS128 Gesture (gesture recognition), N-MNIST,
CIFAR10-DVS (image classification), and SHD (speech digit recognition). The
results show that the proposed model outperforms the state-of-the-art accuracy
on nearly all datasets.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 08:13:22 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Yu",
"Chengting",
""
],
[
"Gu",
"Zheming",
""
],
[
"Li",
"Da",
""
],
[
"Wang",
"Gaoang",
""
],
[
"Wang",
"Aili",
""
],
[
"Li",
"Erping",
""
]
] |
new_dataset
| 0.974108 |
2210.08232
|
Tesla Zhang
|
Tesla Zhang
|
A tutorial on implementing De Morgan cubical type theory
|
27 pages
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This tutorial explains (one way) how to implement De Morgan cubical type
theory to people who know how to implement a dependent type theory. It contains
an introduction to basic concepts of cubes, type checking algorithms under a
cofibration, the idea of "transportation rules" and cubical operations. This
tutorial is a by-product of an experimental implementation of cubical type
theory, called Guest0x0.
|
[
{
"version": "v1",
"created": "Sat, 15 Oct 2022 08:52:36 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Dec 2022 05:39:12 GMT"
},
{
"version": "v3",
"created": "Tue, 30 May 2023 17:48:26 GMT"
},
{
"version": "v4",
"created": "Tue, 6 Jun 2023 12:01:01 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Zhang",
"Tesla",
""
]
] |
new_dataset
| 0.985305 |
2211.05702
|
Jeffrey Andrews PhD
|
Jeffrey G. Andrews
|
A Primer on Zadoff Chu Sequences
|
Tutorial article, not submitted for publication
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Zadoff Chu (ZC) sequences are a principal manifestation of spread spectrum in
modern cellular systems including LTE and 5G NR, largely displacing PN and
Walsh sequences which were the mainstays of 3G cellular (WCDMA and cdma2000)
and the 2G-era IS-95. ZC sequences are complex sequences with unit amplitude
and particular phase shifts, as opposed to Walsh and PN codes which are real
and binary valued, most commonly $\pm1$ when used in communication systems.
ZC sequences have a number of remarkable and desirable properties that we
define in the next section. Because of these properties, they are used for many
key functions in current cellular systems, and are likely to be prevalent in
future cellular systems as well. In LTE and 5G NR, they are widely used for a
number of important initial access and overhead channel functions that are
often overlooked by engineers who focus on data transmission. For example, ZC
sequences are used for initial access in both the downlink (synchronization)
and uplink (random access), uplink control information, uplink channel
sounding, and for the reference symbols (pilots) used for fine-grained channel
estimation. It is not an exaggeration to say that most types of signals other
than the data transmissions in modern cellular standards utilize ZC sequences.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 17:14:52 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 19:00:40 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Andrews",
"Jeffrey G.",
""
]
] |
new_dataset
| 0.999779 |
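Illustrative sketch (not taken from the primer): generating a Zadoff-Chu sequence of odd length and numerically checking two of its hallmark properties, constant amplitude and an impulse-like periodic autocorrelation. The length and root index are arbitrary coprime choices.

import numpy as np

def zadoff_chu(u, N):
    # root-u ZC sequence of odd length N (gcd(u, N) = 1): x[n] = exp(-j*pi*u*n*(n+1)/N)
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N, u = 139, 25
x = zadoff_chu(u, N)
print(np.allclose(np.abs(x), 1.0))                     # constant amplitude
corr = np.array([np.vdot(x, np.roll(x, s)) for s in range(N)])
print(np.allclose(np.abs(corr[1:]), 0.0, atol=1e-9))   # zero autocorrelation off the peak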
2211.12600
|
Christodoulos Peltekis
|
C. Peltekis, D. Filippas, G. Dimitrakopoulos, C. Nicopoulos, D.
Pnevmatikatos
|
ArrayFlex: A Systolic Array Architecture with Configurable Transparent
Pipelining
|
DATE 2023
| null |
10.23919/DATE56975.2023.10136913
| null |
cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNNs) are the state-of-the-art solution for
many deep learning applications. For maximum scalability, their computation
should combine high performance and energy efficiency. In practice, the
convolutions of each CNN layer are mapped to a matrix multiplication that
includes all input features and kernels of each layer and is computed using a
systolic array. In this work, we focus on the design of a systolic array with
configurable pipeline with the goal to select an optimal pipeline configuration
for each CNN layer. The proposed systolic array, called ArrayFlex, can operate
in normal, or in shallow pipeline mode, thus balancing the execution time in
cycles and the operating clock frequency. By selecting the appropriate pipeline
configuration per CNN layer, ArrayFlex reduces the inference latency of
state-of-the-art CNNs by 11%, on average, as compared to a traditional
fixed-pipeline systolic array. Most importantly, this result is achieved while
using 13%-23% less power, for the same applications, thus offering a combined
energy-delay-product efficiency between 1.4x and 1.8x.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 21:56:38 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 09:33:37 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Peltekis",
"C.",
""
],
[
"Filippas",
"D.",
""
],
[
"Dimitrakopoulos",
"G.",
""
],
[
"Nicopoulos",
"C.",
""
],
[
"Pnevmatikatos",
"D.",
""
]
] |
new_dataset
| 0.999331 |
2212.08333
|
Haoshu Fang
|
Hao-Shu Fang, Chenxi Wang, Hongjie Fang, Minghao Gou, Jirong Liu,
Hengxu Yan, Wenhai Liu, Yichen Xie, Cewu Lu
|
AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal
Domains
|
Paper accepted to T-RO. Project page is at
https://graspnet.net/anygrasp.html
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the basis for prehensile manipulation, it is vital to enable robots to
grasp as robustly as humans. Our innate grasping system is prompt, accurate,
flexible, and continuous across spatial and temporal domains. Few existing
methods cover all these properties for robot grasping. In this paper, we
propose AnyGrasp for grasp perception to endow robots with these abilities using a
parallel gripper. Specifically, we develop a dense supervision strategy with
real perception and analytic labels in the spatial-temporal domain. Additional
awareness of objects' center-of-mass is incorporated into the learning process
to help improve grasping stability. Utilization of grasp correspondence across
observations enables dynamic grasp tracking. Our model can efficiently generate
accurate, 7-DoF, dense, and temporally-smooth grasp poses and works robustly
against large depth-sensing noise. Using AnyGrasp, we achieve a 93.3% success
rate when clearing bins with over 300 unseen objects, which is on par with
human subjects under controlled conditions. Over 900 mean-picks-per-hour is
reported on a single-arm system. For dynamic grasping, we demonstrate catching
swimming robot fish in the water. Our project page is at
https://graspnet.net/anygrasp.html
|
[
{
"version": "v1",
"created": "Fri, 16 Dec 2022 08:19:40 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 08:56:37 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Fang",
"Hao-Shu",
""
],
[
"Wang",
"Chenxi",
""
],
[
"Fang",
"Hongjie",
""
],
[
"Gou",
"Minghao",
""
],
[
"Liu",
"Jirong",
""
],
[
"Yan",
"Hengxu",
""
],
[
"Liu",
"Wenhai",
""
],
[
"Xie",
"Yichen",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.993403 |
2212.10534
|
Kyle Richardson
|
Zeming Chen and Qiyue Gao and Antoine Bosselut and Ashish Sabharwal
and Kyle Richardson
|
DISCO: Distilling Counterfactuals with Large Language Models
|
ACL 2023 camera ready, final title change
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Models trained with counterfactually augmented data learn representations of
the causal structure of tasks, enabling robust generalization. However,
high-quality counterfactual data is scarce for most tasks and not easily
generated at scale. When crowdsourced, such data is typically limited in scale
and diversity; when generated using supervised methods, it is computationally
expensive to extend to new counterfactual dimensions. In this work, we
introduce DISCO (DIStilled COunterfactual Data), a new method for automatically
generating high quality counterfactual data at scale. DISCO engineers prompts
to generate phrasal perturbations with a large general language model. Then, a
task-specific teacher model filters these generations to distill high-quality
counterfactual data. While task-agnostic, we apply our pipeline to the task of
natural language inference (NLI) and find that on challenging evaluations such
as the NLI stress test, comparatively smaller student models trained with DISCO
generated counterfactuals are more robust (6% absolute) and generalize better
across distributions (2%) compared to models trained without data augmentation.
Furthermore, DISCO augmented models are 10% more consistent between
counterfactual pairs on three evaluation sets, demonstrating that DISCO
augmentation enables models to more reliably learn causal representations. Our
repository is available at: https://github.com/eric11eca/disco
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 18:46:08 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jun 2023 17:28:06 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jun 2023 19:16:25 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Chen",
"Zeming",
""
],
[
"Gao",
"Qiyue",
""
],
[
"Bosselut",
"Antoine",
""
],
[
"Sabharwal",
"Ashish",
""
],
[
"Richardson",
"Kyle",
""
]
] |
new_dataset
| 0.981913 |
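Illustrative sketch (not the DISCO pipeline itself): the overgenerate-then-filter idea described above — ask a large model for phrasal perturbations of an NLI hypothesis, then keep only candidates whose label, as judged by a task-specific teacher, differs from the original. `generate_perturbations` and `teacher_label` are placeholders for the LLM and teacher calls; the toy stand-ins below are made-up.

def distill_counterfactuals(examples, generate_perturbations, teacher_label):
    distilled = []
    for premise, hypothesis, label in examples:
        for new_hyp in generate_perturbations(premise, hypothesis):
            new_label = teacher_label(premise, new_hyp)
            if new_label != label:                 # keep only label-flipping perturbations
                distilled.append((premise, new_hyp, new_label))
    return distilled

gen = lambda prem, hyp: [hyp.replace("a dog", "a cat")]                  # toy "LLM"
teacher = lambda prem, hyp: "entailment" if "dog" in hyp else "neutral"  # toy teacher
data = [("A man walks a dog.", "A man is outside with a dog.", "entailment")]
print(distill_counterfactuals(data, gen, teacher))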
2302.00093
|
Xinyun Chen
|
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed
Chi, Nathanael Sch\"arli, Denny Zhou
|
Large Language Models Can Be Easily Distracted by Irrelevant Context
|
Published in ICML 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models have achieved impressive performance on various natural
language processing tasks. However, so far they have been evaluated primarily
on benchmarks where all information in the input context is relevant for
solving the task. In this work, we investigate the distractibility of large
language models, i.e., how the model problem-solving accuracy can be influenced
by irrelevant context. In particular, we introduce Grade-School Math with
Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant
information in the problem description. We use this benchmark to measure the
distractibility of cutting-edge prompting techniques for large language models,
and find that the model performance is dramatically decreased when irrelevant
information is included. We also identify several approaches for mitigating
this deficiency, such as decoding with self-consistency and adding to the
prompt an instruction that tells the language model to ignore the irrelevant
information.
|
[
{
"version": "v1",
"created": "Tue, 31 Jan 2023 20:48:57 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 20:08:59 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Jun 2023 08:36:20 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Shi",
"Freda",
""
],
[
"Chen",
"Xinyun",
""
],
[
"Misra",
"Kanishka",
""
],
[
"Scales",
"Nathan",
""
],
[
"Dohan",
"David",
""
],
[
"Chi",
"Ed",
""
],
[
"Schärli",
"Nathanael",
""
],
[
"Zhou",
"Denny",
""
]
] |
new_dataset
| 0.999109 |
2302.11848
|
Bo Chen
|
Bo Chen, Jing Zhang, Fanjin Zhang, Tianyi Han, Yuqing Cheng, Xiaoyan
Li, Yuxiao Dong, and Jie Tang
|
Web-Scale Academic Name Disambiguation: the WhoIsWho Benchmark,
Leaderboard, and Toolkit
|
Accepted by KDD 2023 ADS track
| null | null | null |
cs.IR cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Name disambiguation -- a fundamental problem in online academic systems -- is
now facing greater challenges with the increasing growth of research papers.
For example, on AMiner, an online academic search platform, about 10% of names
are shared by more than 100 authors. Such challenging real-world cases have not
been effectively addressed by existing research due to the small-scale or
low-quality datasets that they have used. The development of effective
algorithms is further hampered by a variety of tasks and evaluation protocols
designed on top of diverse datasets. To this end, we present WhoIsWho, a
large-scale benchmark with over 1,000,000 papers built using an interactive
annotation process, a regular leaderboard with comprehensive tasks, and an
easy-to-use toolkit encapsulating the entire pipeline as well as the most
powerful features and baseline models for tackling the tasks. Our developed
strong baseline has already been deployed online in the AMiner system to enable
daily arXiv paper assignments. The public leaderboard is available at
http://whoiswho.biendata.xyz/. The toolkit is at
https://github.com/THUDM/WhoIsWho. The online demo of daily arXiv paper
assignments is at https://na-demo.aminer.cn/arxivpaper.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 08:26:35 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 08:41:31 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Chen",
"Bo",
""
],
[
"Zhang",
"Jing",
""
],
[
"Zhang",
"Fanjin",
""
],
[
"Han",
"Tianyi",
""
],
[
"Cheng",
"Yuqing",
""
],
[
"Li",
"Xiaoyan",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Tang",
"Jie",
""
]
] |
new_dataset
| 0.998364 |
2303.13190
|
Weixiao Liu
|
Weixiao Liu, Yuwei Wu, Sipu Ruan, Gregory S. Chirikjian
|
Marching-Primitives: Shape Abstraction from Signed Distance Function
|
Accepted to CVPR2023 Highlight
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Representing complex objects with basic geometric primitives has long been a
topic in computer vision. Primitive-based representations have the merits of
compactness and computational efficiency in higher-level tasks such as physics
simulation, collision checking, and robotic manipulation. Unlike previous works
which extract polygonal meshes from a signed distance function (SDF), in this
paper, we present a novel method, named Marching-Primitives, to obtain a
primitive-based abstraction directly from an SDF. Our method grows geometric
primitives (such as superquadrics) iteratively by analyzing the connectivity of
voxels while marching at different levels of signed distance. For each valid
connected volume of interest, we march on the scope of voxels from which a
primitive can be extracted in a probabilistic sense and simultaneously
solve for the parameters of the primitive to capture the underlying local
geometry. We evaluate the performance of our method on both synthetic and
real-world datasets. The results show that the proposed method outperforms the
state-of-the-art in terms of accuracy, and is directly generalizable among
different categories and scales. The code is open-sourced at
https://github.com/ChirikjianLab/Marching-Primitives.git.
|
[
{
"version": "v1",
"created": "Thu, 23 Mar 2023 11:42:35 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 03:21:47 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Liu",
"Weixiao",
""
],
[
"Wu",
"Yuwei",
""
],
[
"Ruan",
"Sipu",
""
],
[
"Chirikjian",
"Gregory S.",
""
]
] |
new_dataset
| 0.975046 |
2304.06447
|
Yihao Ding
|
Yihao Ding, Siwen Luo, Hyunsuk Chung, Soyeon Caren Han
|
PDFVQA: A New Dataset for Real-World VQA on PDF Documents
|
Accepted by ECML-PKDD 2023
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Document-based Visual Question Answering examines the document understanding
of document images in conditions of natural language questions. We proposed a
new document-based VQA dataset, PDF-VQA, to comprehensively examine the
document understanding from various aspects, including document element
recognition, document layout structural understanding as well as contextual
understanding and key information extraction. Our PDF-VQA dataset extends the
current scale of document understanding that limits on the single document page
to the new scale that asks questions over the full document of multiple pages.
We also propose a new graph-based VQA model that explicitly integrates the
spatial and hierarchical structural relationships between different document
elements to boost document structural understanding. Performance is compared
with several baselines over different question types and tasks. The full
dataset will be released upon paper acceptance.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 12:28:14 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 02:58:00 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Apr 2023 14:10:08 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Apr 2023 01:46:17 GMT"
},
{
"version": "v5",
"created": "Tue, 6 Jun 2023 02:26:42 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Ding",
"Yihao",
""
],
[
"Luo",
"Siwen",
""
],
[
"Chung",
"Hyunsuk",
""
],
[
"Han",
"Soyeon Caren",
""
]
] |
new_dataset
| 0.999721 |
2304.09172
|
Karan Desai
|
Karan Desai, Maximilian Nickel, Tanmay Rajpurohit, Justin Johnson,
Ramakrishna Vedantam
|
Hyperbolic Image-Text Representations
|
ICML 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Visual and linguistic concepts naturally organize themselves in a hierarchy,
where a textual concept "dog" entails all images that contain dogs. Despite
being intuitive, current large-scale vision and language models such as CLIP do
not explicitly capture such hierarchy. We propose MERU, a contrastive model
that yields hyperbolic representations of images and text. Hyperbolic spaces
have suitable geometric properties to embed tree-like data, so MERU can better
capture the underlying hierarchy in image-text datasets. Our results show that
MERU learns a highly interpretable and structured representation space while
being competitive with CLIP's performance on standard multi-modal tasks like
image classification and image-text retrieval.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 17:59:45 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 00:33:42 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Desai",
"Karan",
""
],
[
"Nickel",
"Maximilian",
""
],
[
"Rajpurohit",
"Tanmay",
""
],
[
"Johnson",
"Justin",
""
],
[
"Vedantam",
"Ramakrishna",
""
]
] |
new_dataset
| 0.998974 |
2304.11766
|
Jinming Zhao Ms
|
Jinming Zhao, Yuka Ko, Kosuke Doi, Ryo Fukuda, Katsuhito Sudoh,
Satoshi Nakamura
|
NAIST-SIC-Aligned: Automatically-Aligned English-Japanese Simultaneous
Interpretation Corpus
|
Fixed typos
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It remains an open question how simultaneous interpretation (SI) data affects
simultaneous machine translation (SiMT). Research has been limited due to the
lack of a large-scale training corpus. In this work, we aim to fill in the gap
by introducing NAIST-SIC-Aligned, which is an automatically-aligned parallel
English-Japanese SI dataset. Starting with a non-aligned corpus NAIST-SIC, we
propose a two-stage alignment approach to make the corpus parallel and thus
suitable for model training. The first stage is coarse alignment where we
perform a many-to-many mapping between source and target sentences, and the
second stage is fine-grained alignment where we perform intra- and
inter-sentence filtering to improve the quality of aligned pairs. To ensure the
quality of the corpus, each step has been validated either quantitatively or
qualitatively. This is the first open-sourced large-scale parallel SI dataset
in the literature. We also manually curated a small test set for evaluation
purposes. We hope our work advances research on SI corpora construction and
SiMT. Please find our data at \url{https://github.com/mingzi151/AHC-SI}.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 23:03:58 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2023 01:02:24 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Jun 2023 06:02:42 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Zhao",
"Jinming",
""
],
[
"Ko",
"Yuka",
""
],
[
"Doi",
"Kosuke",
""
],
[
"Fukuda",
"Ryo",
""
],
[
"Sudoh",
"Katsuhito",
""
],
[
"Nakamura",
"Satoshi",
""
]
] |
new_dataset
| 0.99959 |
2304.11966
|
Wenwen Yu
|
Wenwen Yu, Mingyu Liu, Mingrui Chen, Ning Lu, Yinlong Wen, Yuliang
Liu, Dimosthenis Karatzas, Xiang Bai
|
ICDAR 2023 Competition on Reading the Seal Title
|
ICDAR2023 Competition on ReST report (To be appear in ICDAR 2023)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reading seal title text is a challenging task due to the variable shapes of
seals, curved text, background noise, and overlapped text. However, this
important element is commonly found in official and financial scenarios, and
has not received the attention it deserves in the field of OCR technology. To
promote research in this area, we organized ICDAR 2023 competition on reading
the seal title (ReST), which included two tasks: seal title text detection
(Task 1) and end-to-end seal title recognition (Task 2). We constructed a
dataset of 10,000 real seal images, covering the most common classes of seals,
and labeled all seal title texts with text polygons and text contents. The
competition opened on 30th December, 2022 and closed on 20th March, 2023. The
competition attracted 53 participants from academia and industry including 28
submissions for Task 1 and 25 submissions for Task 2, which demonstrated
significant interest in this challenging task. In this report, we present an
overview of the competition, including the organization, challenges, and
results. We describe the dataset and tasks, and summarize the submissions and
evaluation results. The results show that significant progress has been made in
the field of seal title text reading, and we hope that this competition will
inspire further research and development in this important area of OCR
technology.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 10:01:41 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 21:56:29 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Yu",
"Wenwen",
""
],
[
"Liu",
"Mingyu",
""
],
[
"Chen",
"Mingrui",
""
],
[
"Lu",
"Ning",
""
],
[
"Wen",
"Yinlong",
""
],
[
"Liu",
"Yuliang",
""
],
[
"Karatzas",
"Dimosthenis",
""
],
[
"Bai",
"Xiang",
""
]
] |
new_dataset
| 0.999801 |
2304.14590
|
Sean Deyo
|
Sean Deyo, Veit Elser
|
A logical word embedding for learning grammar
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the logical grammar embedding (LGE), a model inspired by
pregroup grammars and categorial grammars to enable unsupervised inference of
lexical categories and syntactic rules from a corpus of text. LGE produces
comprehensible output summarizing its inferences, has a completely transparent
process for producing novel sentences, and can learn from as few as a hundred
sentences.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 01:53:54 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 00:46:49 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Deyo",
"Sean",
""
],
[
"Elser",
"Veit",
""
]
] |
new_dataset
| 0.989079 |
2305.16914
|
Fusang Wang
|
Fusang Wang, Arnaud Louys, Nathan Piasco, Moussab Bennehar, Luis
Rold\~ao, Dzmitry Tsishkou
|
PlaNeRF: SVD Unsupervised 3D Plane Regularization for NeRF Large-Scale
Scene Reconstruction
|
14 pages, 7 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Neural Radiance Fields (NeRF) enable 3D scene reconstruction from 2D images
and camera poses for Novel View Synthesis (NVS). Although NeRF can produce
photorealistic results, it often suffers from overfitting to training views,
leading to poor geometry reconstruction, especially in low-texture areas. This
limitation restricts many important applications which require accurate
geometry, such as extrapolated NVS, HD mapping and scene editing. To address
this limitation, we propose a new method to improve NeRF's 3D structure using
only RGB images and semantic maps. Our approach introduces a novel plane
regularization based on Singular Value Decomposition (SVD), that does not rely
on any geometric prior. In addition, we leverage the Structural Similarity
Index Measure (SSIM) in our loss design to properly initialize the volumetric
representation of NeRF. Quantitative and qualitative results show that our
method outperforms popular regularization approaches in accurate geometry
reconstruction for large-scale outdoor scenes and achieves SoTA rendering
quality on the KITTI-360 NVS benchmark.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 13:26:46 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 14:21:06 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Jun 2023 10:01:48 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Wang",
"Fusang",
""
],
[
"Louys",
"Arnaud",
""
],
[
"Piasco",
"Nathan",
""
],
[
"Bennehar",
"Moussab",
""
],
[
"Roldão",
"Luis",
""
],
[
"Tsishkou",
"Dzmitry",
""
]
] |
new_dataset
| 0.953006 |
2305.17449
|
Munkhjargal Gochoo
|
Munkhjargal Gochoo, Munkh-Erdene Otgonbold, Erkhembayar Ganbold,
Jun-Wei Hsieh, Ming-Ching Chang, Ping-Yang Chen, Byambaa Dorj, Hamad Al
Jassmi, Ganzorig Batnasan, Fady Alnajjar, Mohammed Abduljabbar, Fang-Pang Lin
|
FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection
|
CVPR Workshops 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the advance of AI, road object detection has been a prominent topic in
computer vision, mostly using perspective cameras. A fisheye lens provides
omnidirectional wide coverage, allowing fewer cameras to monitor road
intersections, albeit with view distortions. To our knowledge, there is no
existing open dataset prepared for traffic surveillance on fisheye cameras.
This paper introduces an open FishEye8K benchmark dataset for road object
detection tasks, which comprises 157K bounding boxes across five classes
(Pedestrian, Bike, Car, Bus, and Truck). In addition, we present benchmark
results of State-of-The-Art (SoTA) models, including variations of YOLOv5,
YOLOR, YOLOv7, and YOLOv8. The dataset comprises 8,000 images recorded in 22
videos using 18 fisheye cameras for traffic monitoring in Hsinchu, Taiwan, at
resolutions of 1080$\times$1080 and 1280$\times$1280. The data annotation and
validation process was arduous and time-consuming, due to the ultra-wide
panoramic and hemispherical fisheye camera images with large distortion and
numerous road participants, particularly people riding scooters. To avoid bias,
frames from a particular camera were assigned to either the training or test
sets, maintaining a ratio of about 70:30 for both the number of images and
bounding boxes in each class. Experimental results show that YOLOv8 and YOLOR
perform best at input sizes 640$\times$640 and 1280$\times$1280, respectively.
The dataset will be available on GitHub with PASCAL VOC, MS COCO, and YOLO
annotation formats. The FishEye8K benchmark will provide significant
contributions to the fisheye video analytics and smart city applications.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 11:26:25 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 07:02:32 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Gochoo",
"Munkhjargal",
""
],
[
"Otgonbold",
"Munkh-Erdene",
""
],
[
"Ganbold",
"Erkhembayar",
""
],
[
"Hsieh",
"Jun-Wei",
""
],
[
"Chang",
"Ming-Ching",
""
],
[
"Chen",
"Ping-Yang",
""
],
[
"Dorj",
"Byambaa",
""
],
[
"Jassmi",
"Hamad Al",
""
],
[
"Batnasan",
"Ganzorig",
""
],
[
"Alnajjar",
"Fady",
""
],
[
"Abduljabbar",
"Mohammed",
""
],
[
"Lin",
"Fang-Pang",
""
]
] |
new_dataset
| 0.999819 |
2305.17716
|
Haobo Yang
|
Haobo Yang, Wenyu Wang, Ze Cao, Zhekai Duan, Xuchen Liu
|
InDL: A New Dataset and Benchmark for In-Diagram Logic Interpretation
based on Visual Illusion
|
arXiv admin note: text overlap with arXiv:2305.02299,
arXiv:2302.11939, arXiv:2301.13287, arXiv:2305.12686
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces a novel approach to evaluating deep learning models'
capacity for in-diagram logic interpretation. Leveraging the intriguing realm
of visual illusions, we establish a unique dataset, InDL, designed to
rigorously test and benchmark these models. Deep learning has witnessed
remarkable progress in domains such as computer vision and natural language
processing. However, models often stumble in tasks requiring logical reasoning
due to their inherent 'black box' characteristics, which obscure the
decision-making process. Our work presents a new lens to understand these
models better by focusing on their handling of visual illusions -- a complex
interplay of perception and logic. We utilize six classic geometric optical
illusions to create a comparative framework between human and machine visual
perception. This methodology offers a quantifiable measure to rank models,
elucidating potential weaknesses and providing actionable insights for model
improvements. Our experimental results affirm the efficacy of our benchmarking
strategy, demonstrating its ability to effectively rank models based on their
logic interpretation ability. As part of our commitment to reproducible
research, the source code and datasets will be made publicly available at
https://github.com/rabbit-magic-wh/InDL
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 13:01:32 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 12:12:15 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Jun 2023 04:55:10 GMT"
},
{
"version": "v4",
"created": "Mon, 5 Jun 2023 22:52:57 GMT"
}
] | 2023-06-07T00:00:00 |
[
[
"Yang",
"Haobo",
""
],
[
"Wang",
"Wenyu",
""
],
[
"Cao",
"Ze",
""
],
[
"Duan",
"Zhekai",
""
],
[
"Liu",
"Xuchen",
""
]
] |
new_dataset
| 0.999897 |