id
stringlengths 9
10
| submitter
stringlengths 2
52
⌀ | authors
stringlengths 4
6.51k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
345
⌀ | doi
stringlengths 11
120
⌀ | report-no
stringlengths 2
243
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | abstract
stringlengths 33
3.33k
| versions
list | update_date
timestamp[s] | authors_parsed
list | prediction
stringclasses 1
value | probability
float64 0.95
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2305.19245
|
Thu Nguyen-Phuoc
|
Thu Nguyen-Phuoc, Gabriel Schwartz, Yuting Ye, Stephen Lombardi, Lei
Xiao
|
AlteredAvatar: Stylizing Dynamic 3D Avatars with Fast Style Adaptation
|
10 main pages, 14 figures. Project page:
https://alteredavatar.github.io
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a method that can quickly adapt dynamic 3D avatars to
arbitrary text descriptions of novel styles. Among existing approaches for
avatar stylization, direct optimization methods can produce excellent results
for arbitrary styles but they are unpleasantly slow. Furthermore, they require
redoing the optimization process from scratch for every new input. Fast
approximation methods using feed-forward networks trained on a large dataset of
style images can generate results for new inputs quickly, but tend not to
generalize well to novel styles and fall short in quality. We therefore
investigate a new approach, AlteredAvatar, that combines those two approaches
using the meta-learning framework. In the inner loop, the model learns to
optimize to match a single target style well; while in the outer loop, the
model learns to stylize efficiently across many styles. After training,
AlteredAvatar learns an initialization that can quickly adapt within a small
number of update steps to a novel style, which can be given using texts, a
reference image, or a combination of both. We show that AlteredAvatar can
achieve a good balance between speed, flexibility and quality, while
maintaining consistency across a wide range of novel views and facial
expressions.
|
[
{
"version": "v1",
"created": "Tue, 30 May 2023 17:32:12 GMT"
}
] | 2023-05-31T00:00:00 |
[
[
"Nguyen-Phuoc",
"Thu",
""
],
[
"Schwartz",
"Gabriel",
""
],
[
"Ye",
"Yuting",
""
],
[
"Lombardi",
"Stephen",
""
],
[
"Xiao",
"Lei",
""
]
] |
new_dataset
| 0.993338 |
1910.09727
|
Hasan Al Maruf
|
Youngmoon Lee, Hasan Al Maruf, Mosharaf Chowdhury, Asaf Cidon, Kang G.
Shin
|
Hydra: Resilient and Highly Available Remote Memory
| null |
20th USENIX Conference on File and Storage Technologies (FAST),
2022, 181-198
| null | null |
cs.DC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Hydra, a low-latency, low-overhead, and highly available
resilience mechanism for remote memory. Hydra can access erasure-coded remote
memory within a single-digit microsecond read/write latency, significantly
improving the performance-efficiency trade-off over the state-of-the-art -- it
performs similar to in-memory replication with 1.6X lower memory overhead. We
also propose CodingSets, a novel coding group placement algorithm for
erasure-coded data, that provides load balancing while reducing the probability
of data loss under correlated failures by an order of magnitude. With Hydra,
even when only 50% of memory is local, unmodified memory-intensive applications
achieve performance close to that of the fully in-memory case in the presence
of remote failures and outperform the state-of-the-art solutions by up to
4.35X.
|
[
{
"version": "v1",
"created": "Tue, 22 Oct 2019 02:12:55 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Oct 2020 16:35:44 GMT"
},
{
"version": "v3",
"created": "Sun, 28 May 2023 05:16:40 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Lee",
"Youngmoon",
""
],
[
"Maruf",
"Hasan Al",
""
],
[
"Chowdhury",
"Mosharaf",
""
],
[
"Cidon",
"Asaf",
""
],
[
"Shin",
"Kang G.",
""
]
] |
new_dataset
| 0.999343 |
2005.00858
|
Wolfgang Mulzer
|
Sergio Cabello, Wolfgang Mulzer
|
Minimum Cuts in Geometric Intersection Graphs
|
11 pages, 4 figures; this version corrects a small bug in the proof
of Lemma 5. We thank Matej Marinko for pointing this out
|
Computational Geometry: Theory and Applications (CGTA), 94, 2021,
Article 101720
|
10.1016/j.comgeo.2020.101720
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $\mathcal{D}$ be a set of $n$ disks in the plane. The disk graph
$G_\mathcal{D}$ for $\mathcal{D}$ is the undirected graph with vertex set
$\mathcal{D}$ in which two disks are joined by an edge if and only if they
intersect. The directed transmission graph $G^{\rightarrow}_\mathcal{D}$ for
$\mathcal{D}$ is the directed graph with vertex set $\mathcal{D}$ in which
there is an edge from a disk $D_1 \in \mathcal{D}$ to a disk $D_2 \in
\mathcal{D}$ if and only if $D_1$ contains the center of $D_2$.
Given $\mathcal{D}$ and two non-intersecting disks $s, t \in \mathcal{D}$, we
show that a minimum $s$-$t$ vertex cut in $G_\mathcal{D}$ or in
$G^{\rightarrow}_\mathcal{D}$ can be found in $O(n^{3/2}\text{polylog} n)$
expected time. To obtain our result, we combine an algorithm for the maximum
flow problem in general graphs with dynamic geometric data structures to
manipulate the disks.
As an application, we consider the barrier resilience problem in a
rectangular domain. In this problem, we have a vertical strip $S$ bounded by
two vertical lines, $L_\ell$ and $L_r$, and a collection $\mathcal{D}$ of
disks. Let $a$ be a point in $S$ above all disks of $\mathcal{D}$, and let $b$
a point in $S$ below all disks of $\mathcal{D}$. The task is to find a curve
from $a$ to $b$ that lies in $S$ and that intersects as few disks of
$\mathcal{D}$ as possible. Using our improved algorithm for minimum cuts in
disk graphs, we can solve the barrier resilience problem in
$O(n^{3/2}\text{polylog} n)$ expected time.
|
[
{
"version": "v1",
"created": "Sat, 2 May 2020 15:23:30 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Oct 2020 14:12:17 GMT"
},
{
"version": "v3",
"created": "Fri, 26 May 2023 19:05:34 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Cabello",
"Sergio",
""
],
[
"Mulzer",
"Wolfgang",
""
]
] |
new_dataset
| 0.978552 |
2005.03192
|
Erik Demaine
|
Joshua Ani, Erik D. Demaine, Dylan H. Hendrickson, Jayson Lynch
|
Trains, Games, and Complexity: 0/1/2-Player Motion Planning through
Input/Output Gadgets
|
37 pages, 42 figures. Presented at WALCOM 2022. Expanded version
accepted to Theoretical Computer Science
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyze the computational complexity of motion planning through local
"input/output" gadgets with separate entrances and exits, and a subset of
allowed traversals from entrances to exits, each of which changes the state of
the gadget and thereby the allowed traversals. We study such gadgets in the
zero-, one-, and two-player settings, in particular extending past
motion-planning-through-gadgets work [DGLR18, DHL20] to zero-player games for
the first time, by considering "branchless" connections between gadgets that
route every gadget's exit to a unique gadget's entrance. Our complexity results
include containment in L, NL, P, NP, and PSPACE; as well as hardness for NL, P,
NP, and PSPACE. We apply these results to show PSPACE-completeness for certain
mechanics in the video games Factorio, [the Sequence], and a restricted version
of Trainyard, improving the result of [ALP18a]. This work strengthens prior
results on switching graphs, ARRIVAL [DGK+17], and reachability switching games
[FGMS21].
|
[
{
"version": "v1",
"created": "Thu, 7 May 2020 01:12:29 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 21:20:42 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Ani",
"Joshua",
""
],
[
"Demaine",
"Erik D.",
""
],
[
"Hendrickson",
"Dylan H.",
""
],
[
"Lynch",
"Jayson",
""
]
] |
new_dataset
| 0.989855 |
2109.03097
|
Naresh Goud Boddu
|
Naresh Goud Boddu, Rahul Jain, Upendra Kapshikar
|
Quantum secure non-malleable-extractors
| null | null | null | null |
cs.CR quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct several explicit quantum secure non-malleable-extractors. All
the quantum secure non-malleable-extractors we construct are based on the
constructions by Chattopadhyay, Goyal and Li [2015] and Cohen [2015].
1) We construct the first explicit quantum secure non-malleable-extractor for
(source) min-entropy $k \geq \textsf{poly}\left(\log \left( \frac{n}{\epsilon}
\right)\right)$ ($n$ is the length of the source and $\epsilon$ is the error
parameter). Previously Aggarwal, Chung, Lin, and Vidick [2019] have shown that
the inner-product based non-malleable-extractor proposed by Li [2012] is
quantum secure, however it required linear (in $n$) min-entropy and seed
length.
Using the connection between non-malleable-extractors and privacy
amplification (established first in the quantum setting by Cohen and Vidick
[2017]), we get a $2$-round privacy amplification protocol that is secure
against active quantum adversaries with communication $\textsf{poly}\left(\log
\left( \frac{n}{\epsilon} \right)\right)$, exponentially improving upon the
linear communication required by the protocol due to [2019].
2) We construct an explicit quantum secure $2$-source non-malleable-extractor
for min-entropy $k \geq n- n^{\Omega(1)}$, with an output of size
$n^{\Omega(1)}$ and error $2^{- n^{\Omega(1)}}$.
3) We also study their natural extensions when the tampering of the inputs is
performed $t$-times. We construct explicit quantum secure
$t$-non-malleable-extractors for both seeded ($t=d^{\Omega(1)}$) as well as
$2$-source case ($t=n^{\Omega(1)}$).
|
[
{
"version": "v1",
"created": "Tue, 7 Sep 2021 13:56:24 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Nov 2021 15:03:33 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Mar 2022 18:29:03 GMT"
},
{
"version": "v4",
"created": "Sun, 28 May 2023 21:38:46 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Boddu",
"Naresh Goud",
""
],
[
"Jain",
"Rahul",
""
],
[
"Kapshikar",
"Upendra",
""
]
] |
new_dataset
| 0.958995 |
2109.06445
|
Bochen Tan
|
Bochen Tan and Jason Cong
|
Optimal Qubit Mapping with Simultaneous Gate Absorption
|
8 pages, 8 figures, to appear in ICCAD'21
| null |
10.1109/ICCAD51958.2021.9643554
| null |
cs.ET quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Before quantum error correction (QEC) is achieved, quantum computers focus on
noisy intermediate-scale quantum (NISQ) applications. Compared to the
well-known quantum algorithms requiring QEC, like Shor's or Grover's algorithm,
NISQ applications have different structures and properties to exploit in
compilation. A key step in compilation is mapping the qubits in the program to
physical qubits on a given quantum computer, which has been shown to be an
NP-hard problem. In this paper, we present OLSQ-GA, an optimal qubit mapper
with a key feature of simultaneous SWAP gate absorption during qubit mapping,
which we show to be a very effective optimization technique for NISQ
applications. For the class of quantum approximate optimization algorithm
(QAOA), an important NISQ application, OLSQ-GA reduces depth by up to 50.0% and
SWAP count by 100% compared to other state-of-the-art methods, which translates
to 55.9% fidelity improvement. The solution optimality of OLSQ-GA is achieved
by the exact SMT formulation. For better scalability, we augment our approach
with additional constraints in the form of initial mapping or alternating
matching, which speeds up OLSQ-GA by up to 272X with no or little loss of
optimality.
|
[
{
"version": "v1",
"created": "Tue, 14 Sep 2021 05:15:36 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Tan",
"Bochen",
""
],
[
"Cong",
"Jason",
""
]
] |
new_dataset
| 0.962412 |
2205.12665
|
Samuel Amouyal
|
Samuel Joseph Amouyal, Tomer Wolfson, Ohad Rubin, Ori Yoran, Jonathan
Herzig, Jonathan Berant
|
QAMPARI: An Open-domain Question Answering Benchmark for Questions with
Many Answers from Multiple Paragraphs
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing benchmarks for open-domain question answering (ODQA) typically focus
on questions whose answers can be extracted from a single paragraph. By
contrast, many natural questions, such as "What players were drafted by the
Brooklyn Nets?" have a list of answers. Answering such questions requires
retrieving and reading from many passages, in a large corpus. We introduce
QAMPARI, an ODQA benchmark, where question answers are lists of entities,
spread across many paragraphs. We created QAMPARI by (a) generating questions
with multiple answers from Wikipedia's knowledge graph and tables, (b)
automatically pairing answers with supporting evidence in Wikipedia paragraphs,
and (c) manually paraphrasing questions and validating each answer. We train
ODQA models from the retrieve-and-read family and find that QAMPARI is
challenging in terms of both passage retrieval and answer generation, reaching
an F1 score of 32.8 at best. Our results highlight the need for developing ODQA
models that handle a broad range of question types, including single and
multi-answer questions.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 11:21:30 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2022 15:07:40 GMT"
},
{
"version": "v3",
"created": "Wed, 10 May 2023 08:23:54 GMT"
},
{
"version": "v4",
"created": "Mon, 29 May 2023 06:16:41 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Amouyal",
"Samuel Joseph",
""
],
[
"Wolfson",
"Tomer",
""
],
[
"Rubin",
"Ohad",
""
],
[
"Yoran",
"Ori",
""
],
[
"Herzig",
"Jonathan",
""
],
[
"Berant",
"Jonathan",
""
]
] |
new_dataset
| 0.996556 |
2206.02878
|
Hasan Al Maruf
|
Hasan Al Maruf, Hao Wang, Abhishek Dhanotia, Johannes Weiner, Niket
Agarwal, Pallab Bhattacharya, Chris Petersen, Mosharaf Chowdhury, Shobhit
Kanaujia, Prakash Chauhan
|
TPP: Transparent Page Placement for CXL-Enabled Tiered-Memory
| null | null |
10.1145/3582016.3582063
| null |
cs.DC cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing demand for memory in hyperscale applications has led to memory
becoming a large portion of the overall datacenter spend. The emergence of
coherent interfaces like CXL enables main memory expansion and offers an
efficient solution to this problem. In such systems, the main memory can
constitute different memory technologies with varied characteristics. In this
paper, we characterize memory usage patterns of a wide range of datacenter
applications across the server fleet of Meta. We, therefore, demonstrate the
opportunities to offload colder pages to slower memory tiers for these
applications. Without efficient memory management, however, such systems can
significantly degrade performance.
We propose a novel OS-level application-transparent page placement mechanism
(TPP) for CXL-enabled memory. TPP employs a lightweight mechanism to identify
and place hot/cold pages to appropriate memory tiers. It enables a proactive
page demotion from local memory to CXL-Memory. This technique ensures a memory
headroom for new page allocations that are often related to request processing
and tend to be short-lived and hot. At the same time, TPP can promptly promote
performance-critical hot pages trapped in the slow CXL-Memory to the fast local
memory, while minimizing both sampling overhead and unnecessary migrations. TPP
works transparently without any application-specific knowledge and can be
deployed globally as a kernel release.
We evaluate TPP in the production server fleet with early samples of new x86
CPUs with CXL 1.1 support. TPP makes a tiered memory system performant as an
ideal baseline (<1% gap) that has all the memory in the local tier. It is 18%
better than today's Linux, and 5-17% better than existing solutions including
NUMA Balancing and AutoTiering. Most of the TPP patches have been merged in the
Linux v5.18 release.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 20:09:20 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 06:05:47 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Maruf",
"Hasan Al",
""
],
[
"Wang",
"Hao",
""
],
[
"Dhanotia",
"Abhishek",
""
],
[
"Weiner",
"Johannes",
""
],
[
"Agarwal",
"Niket",
""
],
[
"Bhattacharya",
"Pallab",
""
],
[
"Petersen",
"Chris",
""
],
[
"Chowdhury",
"Mosharaf",
""
],
[
"Kanaujia",
"Shobhit",
""
],
[
"Chauhan",
"Prakash",
""
]
] |
new_dataset
| 0.99884 |
2207.09529
|
Idil Aytekin
|
Idil Aytekin, Onat Dalmaz, Kaan Gonc, Haydar Ankishan, Emine U
Saritas, Ulas Bagci, Haydar Celik and Tolga Cukur
|
COVID-19 Detection from Respiratory Sounds with Hierarchical Spectrogram
Transformers
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monitoring of prevalent airborne diseases such as COVID-19 characteristically
involves respiratory assessments. While auscultation is a mainstream method for
preliminary screening of disease symptoms, its utility is hampered by the need
for dedicated hospital visits. Remote monitoring based on recordings of
respiratory sounds on portable devices is a promising alternative, which can
assist in early assessment of COVID-19 that primarily affects the lower
respiratory tract. In this study, we introduce a novel deep learning approach
to distinguish patients with COVID-19 from healthy controls given audio
recordings of cough or breathing sounds. The proposed approach leverages a
novel hierarchical spectrogram transformer (HST) on spectrogram representations
of respiratory sounds. HST embodies self-attention mechanisms over local
windows in spectrograms, and window size is progressively grown over model
stages to capture local to global context. HST is compared against
state-of-the-art conventional and deep-learning baselines. Demonstrations on
crowd-sourced multi-national datasets indicate that HST outperforms competing
methods, achieving over 83% area under the receiver operating characteristic
curve (AUC) in detecting COVID-19 cases.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 19:55:16 GMT"
},
{
"version": "v2",
"created": "Sat, 27 May 2023 00:25:56 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Aytekin",
"Idil",
""
],
[
"Dalmaz",
"Onat",
""
],
[
"Gonc",
"Kaan",
""
],
[
"Ankishan",
"Haydar",
""
],
[
"Saritas",
"Emine U",
""
],
[
"Bagci",
"Ulas",
""
],
[
"Celik",
"Haydar",
""
],
[
"Cukur",
"Tolga",
""
]
] |
new_dataset
| 0.998746 |
2208.06080
|
Clayton Miller
|
Clayton Miller, Renee Christensen, Jin Kai Leong, Mahmoud Abdelrahman,
Federico Tartarini, Matias Quintana, Andre Matthias M\"uller, Mario Frei
|
Smartwatch-based ecological momentary assessments for occupant wellness
and privacy in buildings
| null |
17th International Conference on Indoor Air Quality and Climate,
INDOOR AIR 2022
| null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper describes the adaptation of an open-source ecological momentary
assessment smart-watch platform with three sets of micro-survey
wellness-related questions focused on i) infectious disease (COVID-19) risk
perception, ii) privacy and distraction in an office context, and iii) triggers
of various movement-related behaviors in buildings. This platform was
previously used to collect data for thermal comfort, and this work extends its
use to other domains. Several research participants took part in a
proof-of-concept experiment by wearing a smartwatch to collect their
micro-survey question preferences and perception responses for two of the
question sets. Participants were also asked to install an indoor localization
app on their phone to detect where precisely in the building they completed the
survey. The experiment identified occupant information such as the tendencies
for the research participants to prefer privacy in certain spaces and the
difference between infectious disease risk perception in naturally versus
mechanically ventilated spaces.
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 01:37:15 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Miller",
"Clayton",
""
],
[
"Christensen",
"Renee",
""
],
[
"Leong",
"Jin Kai",
""
],
[
"Abdelrahman",
"Mahmoud",
""
],
[
"Tartarini",
"Federico",
""
],
[
"Quintana",
"Matias",
""
],
[
"Müller",
"Andre Matthias",
""
],
[
"Frei",
"Mario",
""
]
] |
new_dataset
| 0.992608 |
2209.00946
|
Xinyi He
|
Xinyi He, Mengyu Zhou, Mingjie Zhou, Jialiang Xu, Xiao Lv, Tianle Li,
Yijia Shao, Shi Han, Zejian Yuan, Dongmei Zhang
|
AnaMeta: A Table Understanding Dataset of Field Metadata Knowledge
Shared by Multi-dimensional Data Analysis Tasks
|
Published in Findings of ACL 2023
| null | null | null |
cs.DB cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tabular data analysis is performed every day across various domains. It
requires an accurate understanding of field semantics to correctly operate on
table fields and find common patterns in daily analysis. In this paper, we
introduce the AnaMeta dataset, a collection of 467k tables with derived
supervision labels for four types of commonly used field metadata:
measure/dimension dichotomy, common field roles, semantic field type, and
default aggregation function. We evaluate a wide range of models for inferring
metadata as the benchmark. We also propose a multi-encoder framework, called
KDF, which improves the metadata understanding capability of tabular models by
incorporating distribution and knowledge information. Furthermore, we propose
four interfaces for incorporating field metadata into downstream analysis
tasks.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 11:01:45 GMT"
},
{
"version": "v2",
"created": "Sat, 27 May 2023 11:27:42 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"He",
"Xinyi",
""
],
[
"Zhou",
"Mengyu",
""
],
[
"Zhou",
"Mingjie",
""
],
[
"Xu",
"Jialiang",
""
],
[
"Lv",
"Xiao",
""
],
[
"Li",
"Tianle",
""
],
[
"Shao",
"Yijia",
""
],
[
"Han",
"Shi",
""
],
[
"Yuan",
"Zejian",
""
],
[
"Zhang",
"Dongmei",
""
]
] |
new_dataset
| 0.999703 |
2210.03094
|
Yunfan Jiang
|
Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou,
Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan
|
VIMA: General Robot Manipulation with Multimodal Prompts
|
ICML 2023 Camera-ready version. Project website:
https://vimalabs.github.io/
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Prompt-based learning has emerged as a successful paradigm in natural
language processing, where a single general-purpose language model can be
instructed to perform any task specified by input prompts. Yet task
specification in robotics comes in various forms, such as imitating one-shot
demonstrations, following language instructions, and reaching visual goals.
They are often considered different tasks and tackled by specialized models. We
show that a wide spectrum of robot manipulation tasks can be expressed with
multimodal prompts, interleaving textual and visual tokens. Accordingly, we
develop a new simulation benchmark that consists of thousands of
procedurally-generated tabletop tasks with multimodal prompts, 600K+ expert
trajectories for imitation learning, and a four-level evaluation protocol for
systematic generalization. We design a transformer-based robot agent, VIMA,
that processes these prompts and outputs motor actions autoregressively. VIMA
features a recipe that achieves strong model scalability and data efficiency.
It outperforms alternative designs in the hardest zero-shot generalization
setting by up to $2.9\times$ task success rate given the same training data.
With $10\times$ less training data, VIMA still performs $2.7\times$ better than
the best competing variant. Code and video demos are available at
https://vimalabs.github.io/
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 17:50:11 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 07:32:38 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Jiang",
"Yunfan",
""
],
[
"Gupta",
"Agrim",
""
],
[
"Zhang",
"Zichen",
""
],
[
"Wang",
"Guanzhi",
""
],
[
"Dou",
"Yongqiang",
""
],
[
"Chen",
"Yanjun",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Anandkumar",
"Anima",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Fan",
"Linxi",
""
]
] |
new_dataset
| 0.999482 |
2210.03521
|
Feng Zhu
|
Feng Zhu, Jingjing Zhang and Xin Wang
|
STSyn: Speeding Up Local SGD with Straggler-Tolerant Synchronization
|
12 pages, 10 figures, submitted for transaction publication
| null | null | null |
cs.LG cs.DC cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synchronous local stochastic gradient descent (local SGD) suffers from some
workers being idle and random delays due to slow and straggling workers, as it
waits for the workers to complete the same amount of local updates. In this
paper, to mitigate stragglers and improve communication efficiency, a novel
local SGD strategy, named STSyn, is developed. The key point is to wait for the
$K$ fastest workers, while keeping all the workers computing continually at
each synchronization round, and making full use of any effective (completed)
local update of each worker regardless of stragglers. An analysis of the
average wall-clock time, average number of local updates and average number of
uploading workers per round is provided to gauge the performance of STSyn. The
convergence of STSyn is also rigorously established even when the objective
function is nonconvex. Experimental results show the superiority of the
proposed STSyn against state-of-the-art schemes through utilization of the
straggler-tolerant technique and additional effective local updates at each
worker, and the influence of system parameters is studied. By waiting for
faster workers and allowing heterogeneous synchronization with different
numbers of local updates across workers, STSyn provides substantial
improvements both in time and communication efficiency.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 08:04:20 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Oct 2022 04:58:54 GMT"
},
{
"version": "v3",
"created": "Mon, 29 May 2023 11:58:57 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Zhu",
"Feng",
""
],
[
"Zhang",
"Jingjing",
""
],
[
"Wang",
"Xin",
""
]
] |
new_dataset
| 0.991097 |
2210.06828
|
Haneul Yoo
|
Haneul Yoo, Rifki Afina Putri, Changyoon Lee, Youngin Lee, So-Yeon
Ahn, Dongyeop Kang, Alice Oh
|
Rethinking Annotation: Can Language Learners Contribute?
|
ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Researchers have traditionally recruited native speakers to provide
annotations for widely used benchmark datasets. However, there are languages
for which recruiting native speakers can be difficult, and it would help to
find learners of those languages to annotate the data. In this paper, we
investigate whether language learners can contribute annotations to benchmark
datasets. In a carefully controlled annotation experiment, we recruit 36
language learners, provide two types of additional resources (dictionaries and
machine-translated sentences), and perform mini-tests to measure their language
proficiency. We target three languages, English, Korean, and Indonesian, and
the four NLP tasks of sentiment analysis, natural language inference, named
entity recognition, and machine reading comprehension. We find that language
learners, especially those with intermediate or advanced levels of language
proficiency, are able to provide fairly accurate labels with the help of
additional resources. Moreover, we show that data annotation improves learners'
language proficiency in terms of vocabulary and grammar. One implication of our
findings is that broadening the annotation task to include language learners
can open up the opportunity to build benchmark datasets for languages for which
it is difficult to recruit native speakers.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 08:22:25 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 11:39:17 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Yoo",
"Haneul",
""
],
[
"Putri",
"Rifki Afina",
""
],
[
"Lee",
"Changyoon",
""
],
[
"Lee",
"Youngin",
""
],
[
"Ahn",
"So-Yeon",
""
],
[
"Kang",
"Dongyeop",
""
],
[
"Oh",
"Alice",
""
]
] |
new_dataset
| 0.950961 |
2210.07621
|
Xiangqing Shen
|
Xiangqing Shen, Siwei Wu, and Rui Xia
|
Dense-ATOMIC: Towards Densely-connected ATOMIC with High Knowledge
Coverage and Massive Multi-hop Paths
|
Accepted by ACL 2023 Main Conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ATOMIC is a large-scale commonsense knowledge graph (CSKG) containing
everyday if-then knowledge triplets, i.e., {head event, relation, tail event}.
The one-hop annotation manner made ATOMIC a set of independent bipartite
graphs, which ignored the numerous links between events in different bipartite
graphs and consequently caused shortages in knowledge coverage and multi-hop
paths. In this work, we aim to construct Dense-ATOMIC with high knowledge
coverage and massive multi-hop paths. The events in ATOMIC are normalized to a
consistent pattern at first. We then propose a CSKG completion method called
Rel-CSKGC to predict the relation given the head event and the tail event of a
triplet, and train a CSKG completion model based on existing triplets in
ATOMIC. We finally utilize the model to complete the missing links in ATOMIC
and accordingly construct Dense-ATOMIC. Both automatic and human evaluation on
an annotated subgraph of ATOMIC demonstrate the advantage of Rel-CSKGC over
strong baselines. We further conduct extensive evaluations on Dense-ATOMIC in
terms of statistics, human evaluation, and simple downstream tasks, all proving
Dense-ATOMIC's advantages in Knowledge Coverage and Multi-hop Paths. Both the
source code of Rel-CSKGC and Dense-ATOMIC are publicly available on
https://github.com/NUSTM/Dense-ATOMIC.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 08:17:11 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 11:56:59 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Shen",
"Xiangqing",
""
],
[
"Wu",
"Siwei",
""
],
[
"Xia",
"Rui",
""
]
] |
new_dataset
| 0.980331 |
2211.01994
|
Anne Wu
|
Anne Wu, Kiant\'e Brantley, Noriyuki Kojima and Yoav Artzi
|
lilGym: Natural Language Visual Reasoning with Reinforcement Learning
|
ACL 2023 Long Paper
| null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present lilGym, a new benchmark for language-conditioned reinforcement
learning in visual environments. lilGym is based on 2,661 highly-compositional
human-written natural language statements grounded in an interactive visual
environment. We introduce a new approach for exact reward computation in every
possible world state by annotating all statements with executable Python
programs. Each statement is paired with multiple start states and reward
functions to form thousands of distinct Markov Decision Processes of varying
difficulty. We experiment with lilGym with different models and learning
regimes. Our results and analysis show that while existing methods are able to
achieve non-trivial performance, lilGym forms a challenging open problem.
lilGym is available at https://lil.nlp.cornell.edu/lilgym/.
|
[
{
"version": "v1",
"created": "Thu, 3 Nov 2022 17:08:26 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 23:41:21 GMT"
},
{
"version": "v3",
"created": "Mon, 29 May 2023 15:44:36 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Wu",
"Anne",
""
],
[
"Brantley",
"Kianté",
""
],
[
"Kojima",
"Noriyuki",
""
],
[
"Artzi",
"Yoav",
""
]
] |
new_dataset
| 0.999674 |
2211.06959
|
Shubham Mittal
|
Shubham Mittal, Keshav Kolluru, Soumen Chakrabarti, Mausam
|
mOKB6: A Multilingual Open Knowledge Base Completion Benchmark
|
camera-ready version for ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automated completion of open knowledge bases (Open KBs), which are
constructed from triples of the form (subject phrase, relation phrase, object
phrase), obtained via open information extraction (Open IE) system, are useful
for discovering novel facts that may not be directly present in the text.
However, research in Open KB completion (Open KBC) has so far been limited to
resource-rich languages like English. Using the latest advances in multilingual
Open IE, we construct the first multilingual Open KBC dataset, called mOKB6,
containing facts from Wikipedia in six languages (including English). Improving
the previous Open KB construction pipeline by doing multilingual coreference
resolution and keeping only entity-linked triples, we create a dense Open KB.
We experiment with several models for the task and observe a consistent benefit
of combining languages with the help of shared embedding space as well as
translations of facts. We also observe that current multilingual models
struggle to remember facts seen in languages of different scripts.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 17:10:49 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 10:18:59 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Mittal",
"Shubham",
""
],
[
"Kolluru",
"Keshav",
""
],
[
"Chakrabarti",
"Soumen",
""
],
[
"Mausam",
"",
""
]
] |
new_dataset
| 0.999872 |
2211.07044
|
Yi Wang
|
Yi Wang, Nassim Ait Ali Braham, Zhitong Xiong, Chenying Liu, Conrad M
Albrecht, Xiao Xiang Zhu
|
SSL4EO-S12: A Large-Scale Multi-Modal, Multi-Temporal Dataset for
Self-Supervised Learning in Earth Observation
|
Accepted by IEEE Geoscience and Remote Sensing Magazine. 18 pages
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised pre-training bears potential to generate expressive
representations without human annotation. Most pre-training in Earth
observation (EO) are based on ImageNet or medium-size, labeled remote sensing
(RS) datasets. We share an unlabeled RS dataset SSL4EO-S12 (Self-Supervised
Learning for Earth Observation - Sentinel-1/2) to assemble a large-scale,
global, multimodal, and multi-seasonal corpus of satellite imagery from the ESA
Sentinel-1 \& -2 satellite missions. For EO applications we demonstrate
SSL4EO-S12 to succeed in self-supervised pre-training for a set of methods:
MoCo-v2, DINO, MAE, and data2vec. Resulting models yield downstream performance
close to, or surpassing accuracy measures of supervised learning. In addition,
pre-training on SSL4EO-S12 excels compared to existing datasets. We make openly
available the dataset, related source code, and pre-trained models at
https://github.com/zhu-xlab/SSL4EO-S12.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 23:38:27 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 13:57:01 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Wang",
"Yi",
""
],
[
"Braham",
"Nassim Ait Ali",
""
],
[
"Xiong",
"Zhitong",
""
],
[
"Liu",
"Chenying",
""
],
[
"Albrecht",
"Conrad M",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.998986 |
2211.08257
|
Mark Colley
|
Mark Colley, Sebastian Hartwig, Albin Zeqiri, Timo Ropinski, Enrico
Rukzio
|
AutoTherm: A Dataset and Ablation Study for Thermal Comfort Prediction
in Vehicles
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
State recognition in well-known and customizable environments such as
vehicles enables novel insights into users and potentially their intentions.
Besides safety-relevant insights into, for example, fatigue, user
experience-related assessments become increasingly relevant. As thermal comfort
is vital for overall comfort, we introduce a dataset for its prediction in
vehicles incorporating 31 input signals and self-labeled user ratings based on
a 7-point Likert scale (-3 to +3) by 21 subjects. An importance ranking of such
signals indicates higher impact on prediction for signals like ambient
temperature, ambient humidity, radiation temperature, and skin temperature.
Leveraging modern machine learning architectures enables us to not only
automatically recognize human thermal comfort state but also predict future
states. We provide details on how we train a recurrent network-based classifier
and, thus, perform an initial performance benchmark of our proposed thermal
comfort dataset. Ultimately, we compare our collected dataset to publicly
available datasets.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 16:04:38 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 09:06:20 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Colley",
"Mark",
""
],
[
"Hartwig",
"Sebastian",
""
],
[
"Zeqiri",
"Albin",
""
],
[
"Ropinski",
"Timo",
""
],
[
"Rukzio",
"Enrico",
""
]
] |
new_dataset
| 0.997605 |
2212.05241
|
Tanmay Samak
|
Tanmay Vilas Samak, Chinmay Vilas Samak, Sivanathan Kandhasamy, Venkat
Krovi, Ming Xie
|
AutoDRIVE: A Comprehensive, Flexible and Integrated Digital Twin
Ecosystem for Enhancing Autonomous Driving Research and Education
| null |
MDPI Robotics vol. 12, no. 3: 77, 2023
|
10.3390/robotics12030077
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prototyping and validating hardware-software components, sub-systems and
systems within the intelligent transportation system-of-systems framework
requires a modular yet flexible and open-access ecosystem. This work presents
our attempt towards developing such a comprehensive research and education
ecosystem, called AutoDRIVE, for synergistically prototyping, simulating and
deploying cyber-physical solutions pertaining to autonomous driving as well as
smart city management. AutoDRIVE features both software as well as
hardware-in-the-loop testing interfaces with openly accessible scaled vehicle
and infrastructure components. The ecosystem is compatible with a variety of
development frameworks, and supports both single and multi-agent paradigms
through local as well as distributed computing. Most critically, AutoDRIVE is
intended to be modularly expandable to explore emergent technologies, and this
work highlights various complementary features and capabilities of the proposed
ecosystem by demonstrating four such deployment use-cases: (i) autonomous
parking using probabilistic robotics approach for mapping, localization, path
planning and control; (ii) behavioral cloning using computer vision and deep
imitation learning; (iii) intersection traversal using vehicle-to-vehicle
communication and deep reinforcement learning; and (iv) smart city management
using vehicle-to-infrastructure communication and internet-of-things.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 08:16:05 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Feb 2023 00:45:55 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Mar 2023 04:30:36 GMT"
},
{
"version": "v4",
"created": "Sat, 20 May 2023 05:02:03 GMT"
},
{
"version": "v5",
"created": "Fri, 26 May 2023 17:08:31 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Samak",
"Tanmay Vilas",
""
],
[
"Samak",
"Chinmay Vilas",
""
],
[
"Kandhasamy",
"Sivanathan",
""
],
[
"Krovi",
"Venkat",
""
],
[
"Xie",
"Ming",
""
]
] |
new_dataset
| 0.996707 |
2212.09535
|
Zheng-Xin Yong
|
Zheng-Xin Yong, Hailey Schoelkopf, Niklas Muennighoff, Alham Fikri
Aji, David Ifeoluwa Adelani, Khalid Almubarak, M Saiful Bari, Lintang
Sutawika, Jungo Kasai, Ahmed Baruwa, Genta Indra Winata, Stella Biderman,
Edward Raff, Dragomir Radev and Vassilina Nikoulina
|
BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting
|
ACL 2023
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The BLOOM model is a large publicly available multilingual language model,
but its pretraining was limited to 46 languages. To extend the benefits of
BLOOM to other languages without incurring prohibitively large costs, it is
desirable to adapt BLOOM to new languages not seen during pretraining. In this
work, we apply existing language adaptation strategies to BLOOM and benchmark
its zero-shot prompting performance on eight new languages in a
resource-constrained setting. We find language adaptation to be effective at
improving zero-shot performance in new languages. Surprisingly, we find that
adapter-based finetuning is more effective than continued pretraining for large
models. In addition, we discover that prompting performance is not
significantly affected by language specifics, such as the writing system. It is
primarily determined by the size of the language adaptation data. We also add
new languages to BLOOMZ, which is a multitask finetuned version of BLOOM
capable of following task instructions zero-shot. We find including a new
language in the multitask fine-tuning mixture to be the most effective method
to teach BLOOMZ a new language. We conclude that with sufficient training data
language adaptation can generalize well to diverse languages. Our code is
available at https://github.com/bigscience-workshop/multilingual-modeling.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 15:24:45 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 10:50:40 GMT"
},
{
"version": "v3",
"created": "Sat, 27 May 2023 05:48:38 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Yong",
"Zheng-Xin",
""
],
[
"Schoelkopf",
"Hailey",
""
],
[
"Muennighoff",
"Niklas",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Almubarak",
"Khalid",
""
],
[
"Bari",
"M Saiful",
""
],
[
"Sutawika",
"Lintang",
""
],
[
"Kasai",
"Jungo",
""
],
[
"Baruwa",
"Ahmed",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Biderman",
"Stella",
""
],
[
"Raff",
"Edward",
""
],
[
"Radev",
"Dragomir",
""
],
[
"Nikoulina",
"Vassilina",
""
]
] |
new_dataset
| 0.964158 |
2212.10168
|
Sumanth Doddapaneni
|
Arnav Mhaske, Harshit Kedia, Sumanth Doddapaneni, Mitesh M. Khapra,
Pratyush Kumar, Rudra Murthy V, Anoop Kunchukuttan
|
Naamapadam: A Large-Scale Named Entity Annotated Data for Indic
Languages
|
ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present, Naamapadam, the largest publicly available Named Entity
Recognition (NER) dataset for the 11 major Indian languages from two language
families. The dataset contains more than 400k sentences annotated with a total
of at least 100k entities from three standard entity categories (Person,
Location, and, Organization) for 9 out of the 11 languages. The training
dataset has been automatically created from the Samanantar parallel corpus by
projecting automatically tagged entities from an English sentence to the
corresponding Indian language translation. We also create manually annotated
testsets for 9 languages. We demonstrate the utility of the obtained dataset on
the Naamapadam-test dataset. We also release IndicNER, a multilingual IndicBERT
model fine-tuned on Naamapadam training set. IndicNER achieves an F1 score of
more than $80$ for $7$ out of $9$ test languages. The dataset and models are
available under open-source licences at
https://ai4bharat.iitm.ac.in/naamapadam.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 11:15:24 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 06:26:45 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Mhaske",
"Arnav",
""
],
[
"Kedia",
"Harshit",
""
],
[
"Doddapaneni",
"Sumanth",
""
],
[
"Khapra",
"Mitesh M.",
""
],
[
"Kumar",
"Pratyush",
""
],
[
"Murthy",
"Rudra",
"V"
],
[
"Kunchukuttan",
"Anoop",
""
]
] |
new_dataset
| 0.999872 |
2301.04077
|
Nevin George
|
Nevin George
|
ALMA: Automata Learner using Modulo 2 Multiplicity Automata
| null | null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We present ALMA (Automata Learner using modulo 2 Multiplicity Automata), a
Java-based tool that can learn any automaton accepting regular languages of
finite or infinite words with an implementable membership query function. Users
can either pass as input their own membership query function, or use the
predefined membership query functions for modulo 2 multiplicity automata and
non-deterministic B\"uchi automata. While learning, ALMA can output the state
of the observation table after every equivalence query, and upon termination,
it can output the dimension, transition matrices, and final vector of the
learned modulo 2 multiplicity automaton. Users can test whether a word is
accepted by performing a membership query on the learned automaton.
ALMA follows the polynomial-time learning algorithm of Beimel et. al.
(Learning functions represented as multiplicity automata. J. ACM 47(3), 2000),
which uses membership and equivalence queries and represents hypotheses using
modulo 2 multiplicity automata. ALMA also implements a polynomial-time learning
algorithm for strongly unambiguous B\"uchi automata by Angluin et. al.
(Strongly unambiguous B\"uchi automata are polynomially predictable with
membership queries. CSL 2020), and a minimization algorithm for modulo 2
multiplicity automata by Sakarovitch (Elements of Automata Theory. 2009).
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 17:01:29 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 19:37:32 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"George",
"Nevin",
""
]
] |
new_dataset
| 0.997714 |
2301.06388
|
Keyu Li Miss
|
Keyu Li, Yangxin Xu, Ziqi Zhao, Ang Li, Max Q.-H. Meng
|
Closed-Loop Magnetic Manipulation for Robotic Transesophageal
Echocardiography
|
Accepted by IEEE Transactions on Robotics. Copyright may be
transferred without notice, after which this version may no longer be
accessible
| null |
10.1109/TRO.2023.3281477
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a closed-loop magnetic manipulation framework for robotic
transesophageal echocardiography (TEE) acquisitions. Different from previous
work on intracorporeal robotic ultrasound acquisitions that focus on continuum
robot control, we first investigate the use of magnetic control methods for
more direct, intuitive, and accurate manipulation of the distal tip of the
probe. We modify a standard TEE probe by attaching a permanent magnet and an
inertial measurement unit sensor to the probe tip and replacing the flexible
gastroscope with a soft tether containing only wires for transmitting
ultrasound signals, and show that 6-DOF localization and 5-DOF closed-loop
control of the probe can be achieved with an external permanent magnet based on
the fusion of internal inertial measurement and external magnetic field sensing
data. The proposed method does not require complex structures or motions of the
actuator and the probe compared with existing magnetic manipulation methods. We
have conducted extensive experiments to validate the effectiveness of the
framework in terms of localization accuracy, update rate, workspace size, and
tracking accuracy. In addition, our results obtained on a realistic cardiac
tissue-mimicking phantom show that the proposed framework is applicable in real
conditions and can generally meet the requirements for tele-operated TEE
acquisitions.
|
[
{
"version": "v1",
"created": "Mon, 16 Jan 2023 12:15:04 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 14:55:28 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Li",
"Keyu",
""
],
[
"Xu",
"Yangxin",
""
],
[
"Zhao",
"Ziqi",
""
],
[
"Li",
"Ang",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] |
new_dataset
| 0.981525 |
2301.09413
|
Mahyar Emami
|
Mahyar Emami, Sahand Kashani, Keisuke Kamahori, Mohammad Sepehr
Pourghannad, Ritik Raj, James R. Larus
|
Manticore: Hardware-Accelerated RTL Simulation with Static
Bulk-Synchronous Parallelism
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
The demise of Moore's Law and Dennard Scaling has revived interest in
specialized computer architectures and accelerators. Verification and testing
of this hardware depend heavily upon cycle-accurate simulation of
register-transfer-level (RTL) designs. The fastest software RTL simulators can
simulate designs at 1--1000 kHz, i.e., more than three orders of magnitude
slower than hardware. Improved simulators can increase designers' productivity
by speeding design iterations and permitting more exhaustive exploration. One
possibility is to exploit low-level parallelism, as RTL expresses considerable
fine-grain concurrency. Unfortunately, state-of-the-art RTL simulators often
perform best on a single core since modern processors cannot effectively
exploit fine-grain parallelism. This work presents Manticore: a parallel
computer designed to accelerate RTL simulation. Manticore uses a static
bulk-synchronous parallel (BSP) execution model to eliminate fine-grain
synchronization overhead. It relies entirely on a compiler to schedule
resources and communication, which is feasible since RTL code contains few
divergent execution paths. With static scheduling, communication and
synchronization no longer incur runtime overhead, making fine-grain parallelism
practical. Moreover, static scheduling dramatically simplifies processor
implementation, significantly increasing the number of cores that fit on a
chip. Our 225-core FPGA implementation running at 475 MHz outperforms a
state-of-the-art RTL simulator running on desktop and server computers in 8 out
of 9 benchmarks.
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2023 13:12:11 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 11:17:37 GMT"
},
{
"version": "v3",
"created": "Mon, 29 May 2023 13:01:46 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Emami",
"Mahyar",
""
],
[
"Kashani",
"Sahand",
""
],
[
"Kamahori",
"Keisuke",
""
],
[
"Pourghannad",
"Mohammad Sepehr",
""
],
[
"Raj",
"Ritik",
""
],
[
"Larus",
"James R.",
""
]
] |
new_dataset
| 0.999416 |
2301.10018
|
Haipeng Li
|
Haipeng Li and Kunming Luo and Bing Zeng and Shuaicheng Liu
|
GyroFlow+: Gyroscope-Guided Unsupervised Deep Homography and Optical
Flow Learning
|
12 pages. arXiv admin note: substantial text overlap with
arXiv:2103.13725
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing homography and optical flow methods are erroneous in challenging
scenes, such as fog, rain, night, and snow because the basic assumptions such
as brightness and gradient constancy are broken. To address this issue, we
present an unsupervised learning approach that fuses gyroscope into homography
and optical flow learning. Specifically, we first convert gyroscope readings
into motion fields named gyro field. Second, we design a self-guided fusion
module (SGF) to fuse the background motion extracted from the gyro field with
the optical flow and guide the network to focus on motion details. Meanwhile,
we propose a homography decoder module (HD) to combine gyro field and
intermediate results of SGF to produce the homography. To the best of our
knowledge, this is the first deep learning framework that fuses gyroscope data
and image content for both deep homography and optical flow learning. To
validate our method, we propose a new dataset that covers regular and
challenging scenes. Experiments show that our method outperforms the
state-of-the-art methods in both regular and challenging scenes.
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2023 13:44:15 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 11:46:42 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Li",
"Haipeng",
""
],
[
"Luo",
"Kunming",
""
],
[
"Zeng",
"Bing",
""
],
[
"Liu",
"Shuaicheng",
""
]
] |
new_dataset
| 0.999227 |
2301.13090
|
Ali Farajzadeh Bavil Soflaei
|
Ali Farajzadeh Bavil, Hamed Damirchi, Hamid D. Taghirad
|
Action Capsules: Human Skeleton Action Recognition
|
11 pages, 11 figures
|
Computer Vision and Image Understanding Volume 233, August 2023,
103722
|
10.1016/j.cviu.2023.103722
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the compact and rich high-level representations offered,
skeleton-based human action recognition has recently become a highly active
research topic. Previous studies have demonstrated that investigating joint
relationships in spatial and temporal dimensions provides effective information
critical to action recognition. However, effectively encoding global
dependencies of joints during spatio-temporal feature extraction is still
challenging. In this paper, we introduce Action Capsule which identifies
action-related key joints by considering the latent correlation of joints in a
skeleton sequence. We show that, during inference, our end-to-end network pays
attention to a set of joints specific to each action, whose encoded
spatio-temporal features are aggregated to recognize the action. Additionally,
the use of multiple stages of action capsules enhances the ability of the
network to classify similar actions. Consequently, our network outperforms the
state-of-the-art approaches on the N-UCLA dataset and obtains competitive
results on the NTURGBD dataset. This is while our approach has significantly
lower computational requirements based on GFLOPs measurements.
|
[
{
"version": "v1",
"created": "Mon, 30 Jan 2023 17:28:34 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Bavil",
"Ali Farajzadeh",
""
],
[
"Damirchi",
"Hamed",
""
],
[
"Taghirad",
"Hamid D.",
""
]
] |
new_dataset
| 0.994727 |
2302.03649
|
Neil Ernst
|
Neil A. Ernst and Maria Teresa Baldassarre
|
Registered Reports in Software Engineering
|
in press as EMSE J. comment
| null |
10.1007/s10664-022-10277-5
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Registered reports are scientific publications which begin the publication
process by first having the detailed research protocol, including key research
questions, reviewed and approved by peers. Subsequent analysis and results are
published with minimal additional review, even if there was no clear support
for the underlying hypothesis, as long as the approved protocol is followed.
Registered reports can prevent several questionable research practices and give
early feedback on research designs. In software engineering research,
registered reports were first introduced in the International Conference on
Mining Software Repositories (MSR) in 2020. They are now established in three
conferences and two pre-eminent journals, including Empirical Software
Engineering. We explain the motivation for registered reports, outline the way
they have been implemented in software engineering, and outline some ongoing
challenges for addressing high quality software engineering research.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 18:02:19 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Ernst",
"Neil A.",
""
],
[
"Baldassarre",
"Maria Teresa",
""
]
] |
new_dataset
| 0.974189 |
2302.06594
|
David Ruhe
|
David Ruhe, Jayesh K. Gupta, Steven de Keninck, Max Welling, Johannes
Brandstetter
|
Geometric Clifford Algebra Networks
| null | null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Geometric Clifford Algebra Networks (GCANs) for modeling dynamical
systems. GCANs are based on symmetry group transformations using geometric
(Clifford) algebras. We first review the quintessence of modern (plane-based)
geometric algebra, which builds on isometries encoded as elements of the
$\mathrm{Pin}(p,q,r)$ group. We then propose the concept of group action
layers, which linearly combine object transformations using pre-specified group
actions. Together with a new activation and normalization scheme, these layers
serve as adjustable $\textit{geometric templates}$ that can be refined via
gradient descent. Theoretical advantages are strongly reflected in the modeling
of three-dimensional rigid body transformations as well as large-scale fluid
dynamics simulations, showing significantly improved performance over
traditional methods.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 18:48:33 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 16:51:59 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Ruhe",
"David",
""
],
[
"Gupta",
"Jayesh K.",
""
],
[
"de Keninck",
"Steven",
""
],
[
"Welling",
"Max",
""
],
[
"Brandstetter",
"Johannes",
""
]
] |
new_dataset
| 0.968069 |
2302.09325
|
Jie Li
|
Jie Li, Yi Liu, Xiaohu Tang, Yunghsiang S. Han, Bo Bai, and Gong Zhang
|
MDS Array Codes With (Near) Optimal Repair Bandwidth for All Admissible
Repair Degrees
|
Submitted to the IEEE Transactions on Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Abundant high-rate (n, k) minimum storage regenerating (MSR) codes have been
reported in the literature. However, most of them require contacting all the
surviving nodes during a node repair process, resulting in a repair degree of
d=n-1. In practical systems, it may not always be feasible to connect and
download data from all surviving nodes, as some nodes may be unavailable.
Therefore, there is a need for MSR code constructions with a repair degree of
d<n-1. Up to now, only a few (n, k) MSR code constructions with repair degree
d<n-1 have been reported, some have a large sub-packetization level, a large
finite field, or restrictions on the repair degree d. In this paper, we propose
a new (n, k) MSR code construction that works for any repair degree d>k, and
has a smaller sub-packetization level or finite field than some existing
constructions. Additionally, in conjunction with a previous generic
transformation to reduce the sub-packetization level, we obtain an MDS array
code with a small sub-packetization level and $(1+\epsilon)$-optimal repair
bandwidth (i.e., $(1+\epsilon)$ times the optimal repair bandwidth) for repair
degree d=n-1. This code outperforms some existing ones in terms of either the
sub-packetization level or the field size.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 13:11:57 GMT"
},
{
"version": "v2",
"created": "Sat, 27 May 2023 03:26:51 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Li",
"Jie",
""
],
[
"Liu",
"Yi",
""
],
[
"Tang",
"Xiaohu",
""
],
[
"Han",
"Yunghsiang S.",
""
],
[
"Bai",
"Bo",
""
],
[
"Zhang",
"Gong",
""
]
] |
new_dataset
| 0.99874 |
2302.09527
|
Jivnesh Sandhan
|
Jivnesh Sandhan, Anshul Agarwal, Laxmidhar Behera, Tushar Sandhan and
Pawan Goyal
|
SanskritShala: A Neural Sanskrit NLP Toolkit with Web-Based Interface
for Pedagogical and Annotation Purposes
|
7 pages, Accepted at ACL23 (Demo track) to be held at Toronto, Canada
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a neural Sanskrit Natural Language Processing (NLP) toolkit named
SanskritShala (a school of Sanskrit) to facilitate computational linguistic
analyses for several tasks such as word segmentation, morphological tagging,
dependency parsing, and compound type identification. Our systems currently
report state-of-the-art performance on available benchmark datasets for all
tasks. SanskritShala is deployed as a web-based application, which allows a
user to get real-time analysis for the given input. It is built with
easy-to-use interactive data annotation features that allow annotators to
correct the system predictions when it makes mistakes. We publicly release the
source codes of the 4 modules included in the toolkit, 7 word embedding models
that have been trained on publicly available Sanskrit corpora and multiple
annotated datasets such as word similarity, relatedness, categorization,
analogy prediction to assess intrinsic properties of word embeddings. So far as
we know, this is the first neural-based Sanskrit NLP toolkit that has a
web-based interface and a number of NLP modules. We are sure that the people
who are willing to work with Sanskrit will find it useful for pedagogical and
annotative purposes. SanskritShala is available at:
https://cnerg.iitkgp.ac.in/sanskritshala. The demo video of our platform can be
accessed at: https://youtu.be/x0X31Y9k0mw4.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 09:58:55 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 07:36:21 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Sandhan",
"Jivnesh",
""
],
[
"Agarwal",
"Anshul",
""
],
[
"Behera",
"Laxmidhar",
""
],
[
"Sandhan",
"Tushar",
""
],
[
"Goyal",
"Pawan",
""
]
] |
new_dataset
| 0.999729 |
2303.05197
|
Peng Sun
|
Changnan Xiao, Yongxin Zhang, Xuefeng Huang, Qinhan Huang, Jie Chen,
Peng Sun
|
Mastering Strategy Card Game (Hearthstone) with Improved Techniques
|
cog2023 full
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Strategy card game is a well-known genre that is demanding on the intelligent
game-play and can be an ideal test-bench for AI. Previous work combines an
end-to-end policy function and an optimistic smooth fictitious play, which
shows promising performances on the strategy card game Legend of Code and
Magic. In this work, we apply such algorithms to Hearthstone, a famous
commercial game that is more complicated in game rules and mechanisms. We
further propose several improved techniques and consequently achieve
significant progress. For a machine-vs-human test we invite a Hearthstone
streamer whose best rank was top 10 of the official league in China region that
is estimated to be of millions of players. Our models defeat the human player
in all Best-of-5 tournaments of full games (including both deck building and
battle), showing a strong capability of decision making.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 11:52:52 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 14:19:07 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Xiao",
"Changnan",
""
],
[
"Zhang",
"Yongxin",
""
],
[
"Huang",
"Xuefeng",
""
],
[
"Huang",
"Qinhan",
""
],
[
"Chen",
"Jie",
""
],
[
"Sun",
"Peng",
""
]
] |
new_dataset
| 0.997209 |
2303.05329
|
Tao Chen
|
Tao Chen, Ruirui Li, Jiafeng Fu, and Daguang Jiang
|
Tucker Bilinear Attention Network for Multi-scale Remote Sensing Object
Detection
|
arXiv admin note: text overlap with arXiv:1705.06676,
arXiv:2209.13351 by other authors
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Object detection on VHR remote sensing images plays a vital role in
applications such as urban planning, land resource management, and rescue
missions. The large-scale variation of the remote-sensing targets is one of the
main challenges in VHR remote-sensing object detection. Existing methods
improve the detection accuracy of high-resolution remote sensing objects by
improving the structure of feature pyramids and adopting different attention
modules. However, for small targets, there still be seriously missed detections
due to the loss of key detail features. There is still room for improvement in
the way of multiscale feature fusion and balance. To address this issue, this
paper proposes two novel modules: Guided Attention and Tucker Bilinear
Attention, which are applied to the stages of early fusion and late fusion
respectively. The former can effectively retain clean key detail features, and
the latter can better balance features through semantic-level correlation
mining. Based on two modules, we build a new multi-scale remote sensing object
detection framework. No bells and whistles. The proposed method largely
improves the average precisions of small objects and achieves the highest mean
average precisions compared with 9 state-of-the-art methods on DOTA, DIOR, and
NWPU VHR-10.Code and models are available at
https://github.com/Shinichict/GTNet.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 15:20:03 GMT"
},
{
"version": "v2",
"created": "Sun, 28 May 2023 06:39:19 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Chen",
"Tao",
""
],
[
"Li",
"Ruirui",
""
],
[
"Fu",
"Jiafeng",
""
],
[
"Jiang",
"Daguang",
""
]
] |
new_dataset
| 0.998687 |
2303.15662
|
Ruck Thawonmas
|
Pittawat Taveekitworachai, Febri Abdullah, Mury F. Dewantoro, Ruck
Thawonmas, Julian Togelius, Jochen Renz
|
ChatGPT4PCG Competition: Character-like Level Generation for Science
Birds
|
This paper accepted for presentation at IEEE CoG 2023 is made
available for participants of ChatGPT4PCG Competition
(https://chatgpt4pcg.github.io/) and readers interested in relevant areas
| null | null | null |
cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the first ChatGPT4PCG Competition at the 2023 IEEE
Conference on Games. The objective of this competition is for participants to
create effective prompts for ChatGPT--enabling it to generate Science Birds
levels with high stability and character-like qualities--fully using their
creativity as well as prompt engineering skills. ChatGPT is a conversational
agent developed by OpenAI. Science Birds is selected as the competition
platform because designing an Angry Birds-like level is not a trivial task due
to the in-game gravity; the quality of the levels is determined by their
stability. To lower the entry barrier to the competition, we limit the task to
the generation of capitalized English alphabetical characters. We also allow
only a single prompt to be used for generating all the characters. Here, the
quality of the generated levels is determined by their stability and similarity
to the given characters. A sample prompt is provided to participants for their
reference. An experiment is conducted to determine the effectiveness of several
modified versions of this sample prompt on level stability and similarity by
testing them on several characters. To the best of our knowledge, we believe
that ChatGPT4PCG is the first competition of its kind and hope to inspire
enthusiasm for prompt engineering in procedural content generation.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 01:07:38 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 05:32:04 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Taveekitworachai",
"Pittawat",
""
],
[
"Abdullah",
"Febri",
""
],
[
"Dewantoro",
"Mury F.",
""
],
[
"Thawonmas",
"Ruck",
""
],
[
"Togelius",
"Julian",
""
],
[
"Renz",
"Jochen",
""
]
] |
new_dataset
| 0.996033 |
2304.01412
|
Xing Han Lu
|
Xing Han Lu, Siva Reddy, Harm de Vries
|
The StatCan Dialogue Dataset: Retrieving Data Tables through
Conversations with Genuine Intents
|
Accepted at EACL 2023
|
Proceedings of the 17th Conference of the European Chapter of the
Association for Computational Linguistics. (2023) 2799-2829
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the StatCan Dialogue Dataset consisting of 19,379 conversation
turns between agents working at Statistics Canada and online users looking for
published data tables. The conversations stem from genuine intents, are held in
English or French, and lead to agents retrieving one of over 5000 complex data
tables. Based on this dataset, we propose two tasks: (1) automatic retrieval of
relevant tables based on a on-going conversation, and (2) automatic generation
of appropriate agent responses at each turn. We investigate the difficulty of
each task by establishing strong baselines. Our experiments on a temporal data
split reveal that all models struggle to generalize to future conversations, as
we observe a significant drop in performance across both tasks when we move
from the validation to the test set. In addition, we find that response
generation models struggle to decide when to return a table. Considering that
the tasks pose significant challenges to existing models, we encourage the
community to develop models for our task, which can be directly used to help
knowledge workers find relevant tables for live chat users.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 23:18:30 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 01:20:51 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Lu",
"Xing Han",
""
],
[
"Reddy",
"Siva",
""
],
[
"de Vries",
"Harm",
""
]
] |
new_dataset
| 0.996913 |
2305.05921
|
Anni Zou
|
Anni Zou, Zhuosheng Zhang and Hai Zhao
|
Decker: Double Check with Heterogeneous Knowledge for Commonsense Fact
Verification
|
Accepted to ACL 2023 Findings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Commonsense fact verification, as a challenging branch of commonsense
question-answering (QA), aims to verify through facts whether a given
commonsense claim is correct or not. Answering commonsense questions
necessitates a combination of knowledge from various levels. However, existing
studies primarily rest on grasping either unstructured evidence or potential
reasoning paths from structured knowledge bases, yet failing to exploit the
benefits of heterogeneous knowledge simultaneously. In light of this, we
propose Decker, a commonsense fact verification model that is capable of
bridging heterogeneous knowledge by uncovering latent relationships between
structured and unstructured knowledge. Experimental results on two commonsense
fact verification benchmark datasets, CSQA2.0 and CREAK demonstrate the
effectiveness of our Decker and further analysis verifies its capability to
seize more precious information through reasoning.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 06:28:16 GMT"
},
{
"version": "v2",
"created": "Sat, 27 May 2023 08:49:05 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Zou",
"Anni",
""
],
[
"Zhang",
"Zhuosheng",
""
],
[
"Zhao",
"Hai",
""
]
] |
new_dataset
| 0.987167 |
2305.13868
|
Charles Vanwynsberghe
|
Charles Vanwynsberghe, Jiguang He, Chongwen Huang, and Merouane Debbah
|
Walsh Meets OAM in Holographic MIMO
|
Submission to ICEAA 2023
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Holographic multiple-input multiple-output (MIMO) is deemed as a promising
technique beyond massive MIMO, unleashing near-field communications,
localization, and sensing in the next-generation wireless networks.
Semi-continuous surface with densely packed elements brings new opportunities
for increased spatial degrees of freedom (DoFs) and spectrum efficiency (SE)
even in the line-of-sight (LoS) condition. In this paper, we analyze
holographic MIMO performance with disk-shaped large intelligent surfaces (LISs)
according to different precoding designs. Beyond the well-known technique of
orbital angular momentum (OAM) of radio waves, we propose a new design based on
polar Walsh functions. Furthermore, we characterize the performance gap between
the proposed scheme and the optimal case with singular value decomposition
(SVD) alongside perfect channel state information (CSI) as well as other
benchmark schemes in terms of channel capacity. It is verified that the
proposed scheme marginally underperforms the OAM-based approach, while offering
potential perspectives for reducing implementation complexity and expenditure.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 09:39:26 GMT"
},
{
"version": "v2",
"created": "Sat, 27 May 2023 10:15:38 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Vanwynsberghe",
"Charles",
""
],
[
"He",
"Jiguang",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Debbah",
"Merouane",
""
]
] |
new_dataset
| 0.980725 |
2305.14345
|
Taeksoo Kim
|
Taeksoo Kim, Shunsuke Saito, Hanbyul Joo
|
NCHO: Unsupervised Learning for Neural 3D Composition of Humans and
Objects
|
The project page is available at https://taeksuu.github.io/ncho/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Deep generative models have been recently extended to synthesizing 3D digital
humans. However, previous approaches treat clothed humans as a single chunk of
geometry without considering the compositionality of clothing and accessories.
As a result, individual items cannot be naturally composed into novel
identities, leading to limited expressiveness and controllability of generative
3D avatars. While several methods attempt to address this by leveraging
synthetic data, the interaction between humans and objects is not authentic due
to the domain gap, and manual asset creation is difficult to scale for a wide
variety of objects. In this work, we present a novel framework for learning a
compositional generative model of humans and objects (backpacks, coats,
scarves, and more) from real-world 3D scans. Our compositional model is
interaction-aware, meaning the spatial relationship between humans and objects,
and the mutual shape change by physical contact is fully incorporated. The key
challenge is that, since humans and objects are in contact, their 3D scans are
merged into a single piece. To decompose them without manual annotations, we
propose to leverage two sets of 3D scans of a single person with and without
objects. Our approach learns to decompose objects and naturally compose them
back into a generative human model in an unsupervised manner. Despite our
simple setup requiring only the capture of a single subject with objects, our
experiments demonstrate the strong generalization of our model by enabling the
natural composition of objects to diverse identities in various poses and the
composition of multiple objects, which is unseen in training data.
https://taeksuu.github.io/ncho/
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:59:52 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 13:51:25 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Kim",
"Taeksoo",
""
],
[
"Saito",
"Shunsuke",
""
],
[
"Joo",
"Hanbyul",
""
]
] |
new_dataset
| 0.987793 |
2305.14749
|
Chaitanya K. Joshi
|
Chaitanya K. Joshi, Arian R. Jamasb, Ramon Vi\~nas, Charles Harris,
Simon Mathis, Pietro Li\`o
|
Multi-State RNA Design with Geometric Multi-Graph Neural Networks
| null | null | null | null |
cs.LG q-bio.BM q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computational RNA design has broad applications across synthetic biology and
therapeutic development. Fundamental to the diverse biological functions of RNA
is its conformational flexibility, enabling single sequences to adopt a variety
of distinct 3D states. Currently, computational biomolecule design tasks are
often posed as inverse problems, where sequences are designed based on adopting
a single desired structural conformation. In this work, we propose gRNAde, a
geometric RNA design pipeline that operates on sets of 3D RNA backbone
structures to explicitly account for and reflect RNA conformational diversity
in its designs. We demonstrate the utility of gRNAde for improving native
sequence recovery over single-state approaches on a new large-scale 3D RNA
design dataset, especially for multi-state and structurally diverse RNAs. Our
code is available at https://github.com/chaitjo/geometric-rna-design
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 05:46:56 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 14:53:11 GMT"
},
{
"version": "v3",
"created": "Sun, 28 May 2023 22:44:27 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Joshi",
"Chaitanya K.",
""
],
[
"Jamasb",
"Arian R.",
""
],
[
"Viñas",
"Ramon",
""
],
[
"Harris",
"Charles",
""
],
[
"Mathis",
"Simon",
""
],
[
"Liò",
"Pietro",
""
]
] |
new_dataset
| 0.999149 |
2305.17174
|
Julia Mendelsohn
|
Julia Mendelsohn, Ronan Le Bras, Yejin Choi, Maarten Sap
|
From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language
Models
|
ACL 2023, see https://dogwhistles.allen.ai/ for the glossary and
other materials
| null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Dogwhistles are coded expressions that simultaneously convey one meaning to a
broad audience and a second one, often hateful or provocative, to a narrow
in-group; they are deployed to evade both political repercussions and
algorithmic content moderation. For example, in the sentence 'we need to end
the cosmopolitan experiment,' the word 'cosmopolitan' likely means 'worldly' to
many, but secretly means 'Jewish' to a select few. We present the first
large-scale computational investigation of dogwhistles. We develop a typology
of dogwhistles, curate the largest-to-date glossary of over 300 dogwhistles
with rich contextual information and examples, and analyze their usage in
historical U.S. politicians' speeches. We then assess whether a large language
model (GPT-3) can identify dogwhistles and their meanings, and find that
GPT-3's performance varies widely across types of dogwhistles and targeted
groups. Finally, we show that harmful content containing dogwhistles avoids
toxicity detection, highlighting online risks of such coded language. This work
sheds light on the theoretical and applied importance of dogwhistles in both
NLP and computational social science, and provides resources for future
research in modeling dogwhistles and mitigating their online harms.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 18:00:57 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Mendelsohn",
"Julia",
""
],
[
"Bras",
"Ronan Le",
""
],
[
"Choi",
"Yejin",
""
],
[
"Sap",
"Maarten",
""
]
] |
new_dataset
| 0.999296 |
2305.17202
|
Antonios Anastasopoulos
|
Claytone Sikasote, Eunice Mukonde, Md Mahfuz Ibn Alam, Antonios
Anastasopoulos
|
BIG-C: a Multimodal Multi-Purpose Dataset for Bemba
|
accepted to ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present BIG-C (Bemba Image Grounded Conversations), a large multimodal
dataset for Bemba. While Bemba is the most populous language of Zambia, it
exhibits a dearth of resources which render the development of language
technologies or language processing research almost impossible. The dataset is
comprised of multi-turn dialogues between Bemba speakers based on images,
transcribed and translated into English. There are more than 92,000
utterances/sentences, amounting to more than 180 hours of audio data with
corresponding transcriptions and English translations. We also provide
baselines on speech recognition (ASR), machine translation (MT) and speech
translation (ST) tasks, and sketch out other potential future multimodal uses
of our dataset. We hope that by making the dataset available to the research
community, this work will foster research and encourage collaboration across
the language, speech, and vision communities especially for languages outside
the "traditionally" used high-resourced ones. All data and code are publicly
available: https://github.com/csikasote/bigc.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 18:49:55 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Sikasote",
"Claytone",
""
],
[
"Mukonde",
"Eunice",
""
],
[
"Alam",
"Md Mahfuz Ibn",
""
],
[
"Anastasopoulos",
"Antonios",
""
]
] |
new_dataset
| 0.999839 |
2305.17267
|
Sina Ahmadi
|
Md Mahfuz Ibn Alam, Sina Ahmadi, Antonios Anastasopoulos
|
CODET: A Benchmark for Contrastive Dialectal Evaluation of Machine
Translation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Neural machine translation (NMT) systems exhibit limited robustness in
handling source-side linguistic variations. Their performance tends to degrade
when faced with even slight deviations in language usage, such as different
domains or variations introduced by second-language speakers. It is intuitive
to extend this observation to encompass dialectal variations as well, but the
work allowing the community to evaluate MT systems on this dimension is
limited. To alleviate this issue, we compile and release \dataset, a
contrastive dialectal benchmark encompassing 882 different variations from nine
different languages. We also quantitatively demonstrate the challenges large MT
models face in effectively translating dialectal variants. We are releasing all
code and data.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 21:24:00 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Alam",
"Md Mahfuz Ibn",
""
],
[
"Ahmadi",
"Sina",
""
],
[
"Anastasopoulos",
"Antonios",
""
]
] |
new_dataset
| 0.999717 |
2305.17273
|
Sadhana Kumaravel
|
Sadhana Kumaravel, Tahira Naseem, Ramon Fernandez Astudillo, Radu
Florian, Salim Roukos
|
Slide, Constrain, Parse, Repeat: Synchronous SlidingWindows for Document
AMR Parsing
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The sliding window approach provides an elegant way to handle contexts of
sizes larger than the Transformer's input window, for tasks like language
modeling. Here we extend this approach to the sequence-to-sequence task of
document parsing. For this, we exploit recent progress in transition-based
parsing to implement a parser with synchronous sliding windows over source and
target. We develop an oracle and a parser for document-level AMR by expanding
on Structured-BART such that it leverages source-target alignments and
constrains decoding to guarantee synchronicity and consistency across
overlapping windows. We evaluate our oracle and parser using the Abstract
Meaning Representation (AMR) parsing 3.0 corpus. On the Multi-Sentence
development set of AMR 3.0, we show that our transition oracle loses only 8\%
of the gold cross-sentential links despite using a sliding window. In practice,
this approach also results in a high-quality document-level parser with
manageable memory requirements. Our proposed system performs on par with the
state-of-the-art pipeline approach for document-level AMR parsing task on
Multi-Sentence AMR 3.0 corpus while maintaining sentence-level parsing
performance.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 21:38:08 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Kumaravel",
"Sadhana",
""
],
[
"Naseem",
"Tahira",
""
],
[
"Astudillo",
"Ramon Fernandez",
""
],
[
"Florian",
"Radu",
""
],
[
"Roukos",
"Salim",
""
]
] |
new_dataset
| 0.997005 |
2305.17313
|
Rayson Laroca
|
Valfride Nascimento, Rayson Laroca, Jorge de A. Lambert, William
Robson Schwartz, David Menotti
|
Super-Resolution of License Plate Images Using Attention Modules and
Sub-Pixel Convolution Layers
| null |
Computers & Graphics, vol. 113, pp. 69-76, 2023
|
10.1016/j.cag.2023.05.005
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have seen significant developments in the field of License Plate
Recognition (LPR) through the integration of deep learning techniques and the
increasing availability of training data. Nevertheless, reconstructing license
plates (LPs) from low-resolution (LR) surveillance footage remains challenging.
To address this issue, we introduce a Single-Image Super-Resolution (SISR)
approach that integrates attention and transformer modules to enhance the
detection of structural and textural features in LR images. Our approach
incorporates sub-pixel convolution layers (also known as PixelShuffle) and a
loss function that uses an Optical Character Recognition (OCR) model for
feature extraction. We trained the proposed architecture on synthetic images
created by applying heavy Gaussian noise to high-resolution LP images from two
public datasets, followed by bicubic downsampling. As a result, the generated
images have a Structural Similarity Index Measure (SSIM) of less than 0.10. Our
results show that our approach for reconstructing these low-resolution
synthesized images outperforms existing ones in both quantitative and
qualitative measures. Our code is publicly available at
https://github.com/valfride/lpr-rsr-ext/
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 00:17:19 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Nascimento",
"Valfride",
""
],
[
"Laroca",
"Rayson",
""
],
[
"Lambert",
"Jorge de A.",
""
],
[
"Schwartz",
"William Robson",
""
],
[
"Menotti",
"David",
""
]
] |
new_dataset
| 0.971913 |
2305.17337
|
Sijia Wang
|
Sijia Wang, Alexander Hanbo Li, Henry Zhu, Sheng Zhang, Chung-Wei
Hang, Pramuditha Perera, Jie Ma, William Wang, Zhiguo Wang, Vittorio
Castelli, Bing Xiang, Patrick Ng
|
Benchmarking Diverse-Modal Entity Linking with Generative Models
|
15 pages. ACL 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Entities can be expressed in diverse formats, such as texts, images, or
column names and cell values in tables. While existing entity linking (EL)
models work well on per modality configuration, such as text-only EL, visual
grounding, or schema linking, it is more challenging to design a unified model
for diverse modality configurations. To bring various modality configurations
together, we constructed a benchmark for diverse-modal EL (DMEL) from existing
EL datasets, covering all three modalities including text, image, and table. To
approach the DMEL task, we proposed a generative diverse-modal model (GDMM)
following a multimodal-encoder-decoder paradigm. Pre-training \Model with rich
corpora builds a solid foundation for DMEL without storing the entire KB for
inference. Fine-tuning GDMM builds a stronger DMEL baseline, outperforming
state-of-the-art task-specific EL models by 8.51 F1 score on average.
Additionally, extensive error analyses are conducted to highlight the
challenges of DMEL, facilitating future research on this task.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 02:38:46 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Wang",
"Sijia",
""
],
[
"Li",
"Alexander Hanbo",
""
],
[
"Zhu",
"Henry",
""
],
[
"Zhang",
"Sheng",
""
],
[
"Hang",
"Chung-Wei",
""
],
[
"Perera",
"Pramuditha",
""
],
[
"Ma",
"Jie",
""
],
[
"Wang",
"William",
""
],
[
"Wang",
"Zhiguo",
""
],
[
"Castelli",
"Vittorio",
""
],
[
"Xiang",
"Bing",
""
],
[
"Ng",
"Patrick",
""
]
] |
new_dataset
| 0.995238 |
2305.17374
|
Yongbiao Xiao
|
Yongbiao Xiao, Hui Li, Chunyang Cheng, and Xiaoning Song
|
LE2Fusion: A novel local edge enhancement module for infrared and
visible image fusion
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infrared and visible image fusion task aims to generate a fused image which
contains salient features and rich texture details from multi-source images.
However, under complex illumination conditions, few algorithms pay attention to
the edge information of local regions which is crucial for downstream tasks. To
this end, we propose a fusion network based on the local edge enhancement,
named LE2Fusion. Specifically, a local edge enhancement (LE2) module is
proposed to improve the edge information under complex illumination conditions
and preserve the essential features of image. For feature extraction, a
multi-scale residual attention (MRA) module is applied to extract rich
features. Then, with LE2, a set of enhancement weights are generated which are
utilized in feature fusion strategy and used to guide the image reconstruction.
To better preserve the local detail information and structure information, the
pixel intensity loss function based on the local region is also presented. The
experiments demonstrate that the proposed method exhibits better fusion
performance than the state-of-the-art fusion methods on public datasets.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 05:37:02 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Xiao",
"Yongbiao",
""
],
[
"Li",
"Hui",
""
],
[
"Cheng",
"Chunyang",
""
],
[
"Song",
"Xiaoning",
""
]
] |
new_dataset
| 0.993371 |
2305.17390
|
Bill Yuchen Lin
|
Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu,
Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, Xiang Ren
|
SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex
Interactive Tasks
|
Project website: https://yuchenlin.xyz/swiftsage/
| null | null | null |
cs.CL cs.AI cs.LG cs.MA cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce SwiftSage, a novel agent framework inspired by the dual-process
theory of human cognition, designed to excel in action planning for complex
interactive reasoning tasks. SwiftSage integrates the strengths of behavior
cloning and prompting large language models (LLMs) to enhance task completion
performance. The framework comprises two primary modules: the Swift module,
representing fast and intuitive thinking, and the Sage module, emulating
deliberate thought processes. The Swift module is a small encoder-decoder LM
fine-tuned on the oracle agent's action trajectories, while the Sage module
employs LLMs such as GPT-4 for subgoal planning and grounding. We develop a
heuristic method to harmoniously integrate the two modules, resulting in a more
efficient and robust problem-solving process. In 30 tasks from the ScienceWorld
benchmark, SwiftSage significantly outperforms other methods such as SayCan,
ReAct, and Reflexion, demonstrating its effectiveness in solving complex
real-world tasks.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 07:04:15 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Lin",
"Bill Yuchen",
""
],
[
"Fu",
"Yicheng",
""
],
[
"Yang",
"Karina",
""
],
[
"Ammanabrolu",
"Prithviraj",
""
],
[
"Brahman",
"Faeze",
""
],
[
"Huang",
"Shiyu",
""
],
[
"Bhagavatula",
"Chandra",
""
],
[
"Choi",
"Yejin",
""
],
[
"Ren",
"Xiang",
""
]
] |
new_dataset
| 0.996045 |
2305.17404
|
Atnafu Lambebo Tonja
|
Atnafu Lambebo Tonja, Christian Maldonado-Sifuentes, David Alejandro
Mendoza Castillo, Olga Kolesnikova, No\'e Castro-S\'anchez, Grigori Sidorov,
Alexander Gelbukh
|
Parallel Corpus for Indigenous Language Translation: Spanish-Mazatec and
Spanish-Mixtec
|
Accepted to Third Workshop on NLP for Indigenous Languages of the
Americas
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a parallel Spanish-Mazatec and Spanish-Mixtec
corpus for machine translation (MT) tasks, where Mazatec and Mixtec are two
indigenous Mexican languages. We evaluated the usability of the collected
corpus using three different approaches: transformer, transfer learning, and
fine-tuning pre-trained multilingual MT models. Fine-tuning the Facebook
M2M100-48 model outperformed the other approaches, with BLEU scores of 12.09
and 22.25 for Mazatec-Spanish and Spanish-Mazatec translations, respectively,
and 16.75 and 22.15 for Mixtec-Spanish and Spanish-Mixtec translations,
respectively. The findings show that the dataset size (9,799 sentences in
Mazatec and 13,235 sentences in Mixtec) affects translation performance and
that indigenous languages work better when used as target languages. The
findings emphasize the importance of creating parallel corpora for indigenous
languages and fine-tuning models for low-resource translation tasks. Future
research will investigate zero-shot and few-shot learning approaches to further
improve translation performance in low-resource settings. The dataset and
scripts are available at
\url{https://github.com/atnafuatx/Machine-Translation-Resources}
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 08:03:44 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Tonja",
"Atnafu Lambebo",
""
],
[
"Maldonado-Sifuentes",
"Christian",
""
],
[
"Castillo",
"David Alejandro Mendoza",
""
],
[
"Kolesnikova",
"Olga",
""
],
[
"Castro-Sánchez",
"Noé",
""
],
[
"Sidorov",
"Grigori",
""
],
[
"Gelbukh",
"Alexander",
""
]
] |
new_dataset
| 0.994214 |
2305.17432
|
Yushan Zhang
|
Yushan Zhang, Johan Edstedt, Bastian Wandt, Per-Erik Forss\'en, Maria
Magnusson, Michael Felsberg
|
GMSF: Global Matching Scene Flow
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We tackle the task of scene flow estimation from point clouds. Given a source
and a target point cloud, the objective is to estimate a translation from each
point in the source point cloud to the target, resulting in a 3D motion vector
field. Previous dominant scene flow estimation methods require complicated
coarse-to-fine or recurrent architectures as a multi-stage refinement. In
contrast, we propose a significantly simpler single-scale one-shot global
matching to address the problem. Our key finding is that reliable feature
similarity between point pairs is essential and sufficient to estimate accurate
scene flow. To this end, we propose to decompose the feature extraction step
via a hybrid local-global-cross transformer architecture which is crucial to
accurate and robust feature representations. Extensive experiments show that
GMSF sets a new state-of-the-art on multiple scene flow estimation benchmarks.
On FlyingThings3D, with the presence of occlusion points, GMSF reduces the
outlier percentage from the previous best performance of 27.4% to 11.7%. On
KITTI Scene Flow, without any fine-tuning, our proposed method shows
state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 10:04:21 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Zhang",
"Yushan",
""
],
[
"Edstedt",
"Johan",
""
],
[
"Wandt",
"Bastian",
""
],
[
"Forssén",
"Per-Erik",
""
],
[
"Magnusson",
"Maria",
""
],
[
"Felsberg",
"Michael",
""
]
] |
new_dataset
| 0.996462 |
2305.17448
|
Ting Xu
|
Ting Xu, Huiyun Yang, Zhen Wu, Jiaze Chen, Fei Zhao, Xinyu Dai
|
Measuring Your ASTE Models in The Wild: A Diversified Multi-domain
Dataset For Aspect Sentiment Triplet Extraction
|
15pages, 5 figures, ACL2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aspect Sentiment Triplet Extraction (ASTE) is widely used in various
applications. However, existing ASTE datasets are limited in their ability to
represent real-world scenarios, hindering the advancement of research in this
area. In this paper, we introduce a new dataset, named DMASTE, which is
manually annotated to better fit real-world scenarios by providing more diverse
and realistic reviews for the task. The dataset includes various lengths,
diverse expressions, more aspect types, and more domains than existing
datasets. We conduct extensive experiments on DMASTE in multiple settings to
evaluate previous ASTE approaches. Empirical results demonstrate that DMASTE is
a more challenging ASTE dataset. Further analyses of in-domain and cross-domain
settings provide promising directions for future research. Our code and dataset
are available at https://github.com/NJUNLP/DMASTE.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 11:21:32 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Xu",
"Ting",
""
],
[
"Yang",
"Huiyun",
""
],
[
"Wu",
"Zhen",
""
],
[
"Chen",
"Jiaze",
""
],
[
"Zhao",
"Fei",
""
],
[
"Dai",
"Xinyu",
""
]
] |
new_dataset
| 0.999751 |
2305.17463
|
YuehCheng Huang
|
Yueh-Cheng Huang, Chen-Tao Hsu, and Jen-Hui Chuang
|
Pentagon-Match (PMatch): Identification of View-Invariant Planar Feature
for Local Feature Matching-Based Homography Estimation
|
arXiv admin note: text overlap with arXiv:2211.03007
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In computer vision, finding correct point correspondence among images plays
an important role in many applications, such as image stitching, image
retrieval, visual localization, etc. Most of the research works focus on the
matching of local feature before a sampling method is employed, such as RANSAC,
to verify initial matching results via repeated fitting of certain global
transformation among the images. However, incorrect matches may still exist.
Thus, a novel sampling scheme, Pentagon-Match (PMatch), is proposed in this
work to verify the correctness of initially matched keypoints using pentagons
randomly sampled from them. By ensuring shape and location of these pentagons
are view-invariant with various evaluations of cross-ratio (CR), incorrect
matches of keypoint can be identified easily with homography estimated from
correctly matched pentagons. Experimental results show that highly accurate
estimation of homography can be obtained efficiently for planar scenes of the
HPatches dataset, based on keypoint matching results provided by LoFTR.
Besides, accurate outlier identification for the above matching results and
possible extension of the approach for multi-plane situation are also
demonstrated.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 12:41:23 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Huang",
"Yueh-Cheng",
""
],
[
"Hsu",
"Chen-Tao",
""
],
[
"Chuang",
"Jen-Hui",
""
]
] |
new_dataset
| 0.992708 |
2305.17477
|
Nikita Alutis
|
Nikita Alutis, Egor Chistov, Mikhail Dremin, Dmitriy Vatolin
|
BASED: Benchmarking, Analysis, and Structural Estimation of Deblurring
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
This paper discusses the challenges of evaluating deblurring-methods quality
and proposes a reduced-reference metric based on machine learning. Traditional
quality-assessment metrics such as PSNR and SSIM are common for this task, but
not only do they correlate poorly with subjective assessments, they also
require ground-truth (GT) frames, which can be difficult to obtain in the case
of deblurring. To develop and evaluate our metric, we created a new motion-blur
dataset using a beam splitter. The setup captured various motion types using a
static camera, as most scenes in existing datasets include blur due to camera
motion. We also conducted two large subjective comparisons to aid in metric
development. Our resulting metric requires no GT frames, and it correlates well
with subjective human perception of blur.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 13:47:25 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Alutis",
"Nikita",
""
],
[
"Chistov",
"Egor",
""
],
[
"Dremin",
"Mikhail",
""
],
[
"Vatolin",
"Dmitriy",
""
]
] |
new_dataset
| 0.995675 |
2305.17491
|
Jasivan Alex Sivakumar
|
Jasivan Alex Sivakumar and Nafise Sadat Moosavi
|
FERMAT: An Alternative to Accuracy for Numerical Reasoning
|
ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While pre-trained language models achieve impressive performance on various
NLP benchmarks, they still struggle with tasks that require numerical
reasoning. Recent advances in improving numerical reasoning are mostly achieved
using very large language models that contain billions of parameters and are
not accessible to everyone. In addition, numerical reasoning is measured using
a single score on existing datasets. As a result, we do not have a clear
understanding of the strengths and shortcomings of existing models on different
numerical reasoning aspects and therefore, potential ways to improve them apart
from scaling them up. Inspired by CheckList (Ribeiro et al., 2020), we
introduce a multi-view evaluation set for numerical reasoning in English,
called FERMAT. Instead of reporting a single score on a whole dataset, FERMAT
evaluates models on various key numerical reasoning aspects such as number
understanding, mathematical operations, and training dependency. Apart from
providing a comprehensive evaluation of models on different numerical reasoning
aspects, FERMAT enables a systematic and automated generation of an arbitrarily
large training or evaluation set for each aspect.The datasets and codes are
publicly available to generate further multi-view data for ulterior tasks and
languages.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 15:00:45 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Sivakumar",
"Jasivan Alex",
""
],
[
"Moosavi",
"Nafise Sadat",
""
]
] |
new_dataset
| 0.99513 |
2305.17519
|
Vishnu Murali
|
Vishnu Murali, Ashutosh Trivedi, Majid Zamani
|
Closure Certificates
|
23 pages, 4 figures
| null | null | null |
cs.LO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A barrier certificate, defined over the states of a dynamical system, is a
real-valued function whose zero level set characterizes an inductively
verifiable state invariant separating reachable states from unsafe ones. When
combined with powerful decision procedures such as sum-of-squares programming
(SOS) or satisfiability-modulo-theory solvers (SMT) barrier certificates enable
an automated deductive verification approach to safety. The barrier certificate
approach has been extended to refute omega-regular specifications by separating
consecutive transitions of omega-automata in the hope of denying all accepting
runs. Unsurprisingly, such tactics are bound to be conservative as refutation
of recurrence properties requires reasoning about the well-foundedness of the
transitive closure of the transition relation. This paper introduces the notion
of closure certificates as a natural extension of barrier certificates from
state invariants to transition invariants. We provide SOS and SMT based
characterization for automating the search of closure certificates and
demonstrate their effectiveness via a paradigmatic case study.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 16:29:02 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Murali",
"Vishnu",
""
],
[
"Trivedi",
"Ashutosh",
""
],
[
"Zamani",
"Majid",
""
]
] |
new_dataset
| 0.996353 |
2305.17529
|
Fei Liu
|
Yebowen Hu and Tim Ganter and Hanieh Deilamsalehy and Franck
Dernoncourt and Hassan Foroosh and Fei Liu
|
MeetingBank: A Benchmark Dataset for Meeting Summarization
|
ACL 2023 Long Paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the number of recorded meetings increases, it becomes increasingly
important to utilize summarization technology to create useful summaries of
these recordings. However, there is a crucial lack of annotated meeting corpora
for developing this technology, as it can be hard to collect meetings,
especially when the topics discussed are confidential. Furthermore, meeting
summaries written by experienced writers are scarce, making it hard for
abstractive summarizers to produce sensible output without a reliable
reference. This lack of annotated corpora has hindered the development of
meeting summarization technology. In this paper, we present MeetingBank, a new
benchmark dataset of city council meetings over the past decade. MeetingBank is
unique among other meeting corpora due to its divide-and-conquer approach,
which involves dividing professionally written meeting minutes into shorter
passages and aligning them with specific segments of the meeting. This breaks
down the process of summarizing a lengthy meeting into smaller, more manageable
tasks. The dataset provides a new testbed of various meeting summarization
systems and also allows the public to gain insight into how council decisions
are made. We make the collection, including meeting video links, transcripts,
reference summaries, agenda, and other metadata, publicly available to
facilitate the development of better meeting summarization techniques. Our
dataset can be accessed at: https://meetingbank.github.io
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 17:09:25 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Hu",
"Yebowen",
""
],
[
"Ganter",
"Tim",
""
],
[
"Deilamsalehy",
"Hanieh",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Foroosh",
"Hassan",
""
],
[
"Liu",
"Fei",
""
]
] |
new_dataset
| 0.999499 |
2305.17580
|
Maha Jarallah Althobaiti
|
Maha Jarallah Althobaiti
|
ArPanEmo: An Open-Source Dataset for Fine-Grained Emotion Recognition in
Arabic Online Content during COVID-19 Pandemic
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Emotion recognition is a crucial task in Natural Language Processing (NLP)
that enables machines to comprehend the feelings conveyed in the text. The
applications of emotion recognition are diverse, including mental health
diagnosis, student support, and the detection of online suspicious behavior.
Despite the substantial amount of literature available on emotion recognition
in various languages, Arabic emotion recognition has received relatively little
attention, leading to a scarcity of emotion-annotated corpora. This paper
presents the ArPanEmo dataset, a novel dataset for fine-grained emotion
recognition of online posts in Arabic. The dataset comprises 11,128 online
posts manually labeled for ten emotion categories or neutral, with Fleiss'
kappa of 0.71. It targets a specific Arabic dialect and addresses topics
related to the COVID-19 pandemic, making it the first and largest of its kind.
Python's packages were utilized to collect online posts related to the COVID-19
pandemic from three sources: Twitter, YouTube, and online newspaper comments
between March 2020 and March 2022. Upon collection of the online posts, each
one underwent a semi-automatic classification process using a lexicon of
emotion-related terms to determine whether it belonged to the neutral or
emotional category. Subsequently, manual labeling was conducted to further
categorize the emotional data into fine-grained emotion categories.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 21:04:26 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Althobaiti",
"Maha Jarallah",
""
]
] |
new_dataset
| 0.999836 |
2305.17644
|
Jin Sun
|
Jin Sun, Xiaoshuang Shi, Zhiyuan Weng, Kaidi Xu, Heng Tao Shen and
Xiaofeng Zhu
|
Using Caterpillar to Nibble Small-Scale Images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, MLP-based models have become popular and attained significant
performance on medium-scale datasets (e.g., ImageNet-1k). However, their direct
applications to small-scale images remain limited. To address this issue, we
design a new MLP-based network, namely Caterpillar, by proposing a key module
of Shifted-Pillars-Concatenation (SPC) for exploiting the inductive bias of
locality. SPC consists of two processes: (1) Pillars-Shift, which is to shift
all pillars within an image along different directions to generate copies, and
(2) Pillars-Concatenation, which is to capture the local information from
discrete shift neighborhoods of the shifted copies. Extensive experiments
demonstrate its strong scalability and superior performance on popular
small-scale datasets, and the competitive performance on ImageNet-1K to recent
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 06:19:36 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Sun",
"Jin",
""
],
[
"Shi",
"Xiaoshuang",
""
],
[
"Weng",
"Zhiyuan",
""
],
[
"Xu",
"Kaidi",
""
],
[
"Shen",
"Heng Tao",
""
],
[
"Zhu",
"Xiaofeng",
""
]
] |
new_dataset
| 0.994138 |
2305.17679
|
Nicolay Rusnachenko
|
Anton Golubev, Nicolay Rusnachenko, Natalia Loukachevitch
|
RuSentNE-2023: Evaluating Entity-Oriented Sentiment Analysis on Russian
News Texts
|
12 pages, 5 tables, 3 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The paper describes the RuSentNE-2023 evaluation devoted to targeted
sentiment analysis in Russian news texts. The task is to predict sentiment
towards a named entity in a single sentence. The dataset for RuSentNE-2023
evaluation is based on the Russian news corpus RuSentNE having rich
sentiment-related annotation. The corpus is annotated with named entities and
sentiments towards these entities, along with related effects and emotional
states. The evaluation was organized using the CodaLab competition framework.
The main evaluation measure was macro-averaged measure of positive and negative
classes. The best results achieved were of 66% Macro F-measure
(Positive+Negative classes). We also tested ChatGPT on the test set from our
evaluation and found that the zero-shot answers provided by ChatGPT reached 60%
of the F-measure, which corresponds to 4th place in the evaluation. ChatGPT
also provided detailed explanations of its conclusion. This can be considered
as quite high for zero-shot application.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 10:04:15 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Golubev",
"Anton",
""
],
[
"Rusnachenko",
"Nicolay",
""
],
[
"Loukachevitch",
"Natalia",
""
]
] |
new_dataset
| 0.984219 |
2305.17690
|
Shantipriya Parida
|
Shantipriya Parida, Idris Abdulmumin, Shamsuddeen Hassan Muhammad,
Aneesh Bose, Guneet Singh Kohli, Ibrahim Said Ahmad, Ketan Kotwal, Sayan Deb
Sarkar, Ond\v{r}ej Bojar, Habeebah Adamu Kakudi
|
HaVQA: A Dataset for Visual Question Answering and Multimodal Research
in Hausa Language
|
Accepted at ACL 2023 as a long paper (Findings)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents HaVQA, the first multimodal dataset for visual
question-answering (VQA) tasks in the Hausa language. The dataset was created
by manually translating 6,022 English question-answer pairs, which are
associated with 1,555 unique images from the Visual Genome dataset. As a
result, the dataset provides 12,044 gold standard English-Hausa parallel
sentences that were translated in a fashion that guarantees their semantic
match with the corresponding visual information. We conducted several baseline
experiments on the dataset, including visual question answering, visual
question elicitation, text-only and multimodal machine translation.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 10:55:31 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Parida",
"Shantipriya",
""
],
[
"Abdulmumin",
"Idris",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Bose",
"Aneesh",
""
],
[
"Kohli",
"Guneet Singh",
""
],
[
"Ahmad",
"Ibrahim Said",
""
],
[
"Kotwal",
"Ketan",
""
],
[
"Sarkar",
"Sayan Deb",
""
],
[
"Bojar",
"Ondřej",
""
],
[
"Kakudi",
"Habeebah Adamu",
""
]
] |
new_dataset
| 0.999822 |
2305.17696
|
Hwaran Lee
|
Hwaran Lee, Seokhee Hong, Joonsuk Park, Takyoung Kim, Meeyoung Cha,
Yejin Choi, Byoung Pil Kim, Gunhee Kim, Eun-Ju Lee, Yong Lim, Alice Oh,
Sangchul Park and Jung-Woo Ha
|
SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable
Responses Created Through Human-Machine Collaboration
|
19 pages, 10 figures, ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The potential social harms that large language models pose, such as
generating offensive content and reinforcing biases, are steeply rising.
Existing works focus on coping with this concern while interacting with
ill-intentioned users, such as those who explicitly make hate speech or elicit
harmful responses. However, discussions on sensitive issues can become toxic
even if the users are well-intentioned. For safer models in such scenarios, we
present the Sensitive Questions and Acceptable Response (SQuARe) dataset, a
large-scale Korean dataset of 49k sensitive questions with 42k acceptable and
46k non-acceptable responses. The dataset was constructed leveraging HyperCLOVA
in a human-in-the-loop manner based on real news headlines. Experiments show
that acceptable response generation significantly improves for HyperCLOVA and
GPT-3, demonstrating the efficacy of this dataset.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 11:51:20 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Lee",
"Hwaran",
""
],
[
"Hong",
"Seokhee",
""
],
[
"Park",
"Joonsuk",
""
],
[
"Kim",
"Takyoung",
""
],
[
"Cha",
"Meeyoung",
""
],
[
"Choi",
"Yejin",
""
],
[
"Kim",
"Byoung Pil",
""
],
[
"Kim",
"Gunhee",
""
],
[
"Lee",
"Eun-Ju",
""
],
[
"Lim",
"Yong",
""
],
[
"Oh",
"Alice",
""
],
[
"Park",
"Sangchul",
""
],
[
"Ha",
"Jung-Woo",
""
]
] |
new_dataset
| 0.999753 |
2305.17709
|
Gongbo Tang
|
Gongbo Tang, Christian Hardmeier
|
Parallel Data Helps Neural Entity Coreference Resolution
|
camera-ready version; to appear in the Findings of ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Coreference resolution is the task of finding expressions that refer to the
same entity in a text. Coreference models are generally trained on monolingual
annotated data but annotating coreference is expensive and challenging.
Hardmeier et al.(2013) have shown that parallel data contains latent anaphoric
knowledge, but it has not been explored in end-to-end neural models yet. In
this paper, we propose a simple yet effective model to exploit coreference
knowledge from parallel data. In addition to the conventional modules learning
coreference from annotations, we introduce an unsupervised module to capture
cross-lingual coreference knowledge. Our proposed cross-lingual model achieves
consistent improvements, up to 1.74 percentage points, on the OntoNotes 5.0
English dataset using 9 different synthetic parallel datasets. These
experimental results confirm that parallel data can provide additional
coreference knowledge which is beneficial to coreference resolution tasks.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 12:30:23 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Tang",
"Gongbo",
""
],
[
"Hardmeier",
"Christian",
""
]
] |
new_dataset
| 0.955566 |
2305.17714
|
Amit Moryossef
|
Amit Moryossef, Mathias M\"uller, Anne G\"ohring, Zifan Jiang, Yoav
Goldberg, and Sarah Ebling
|
An Open-Source Gloss-Based Baseline for Spoken to Signed Language
Translation
| null | null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Sign language translation systems are complex and require many components. As
a result, it is very hard to compare methods across publications. We present an
open-source implementation of a text-to-gloss-to-pose-to-video pipeline
approach, demonstrating conversion from German to Swiss German Sign Language,
French to French Sign Language of Switzerland, and Italian to Italian Sign
Language of Switzerland. We propose three different components for the
text-to-gloss translation: a lemmatizer, a rule-based word reordering and
dropping component, and a neural machine translation system. Gloss-to-pose
conversion occurs using data from a lexicon for three different signed
languages, with skeletal poses extracted from videos. To generate a sentence,
the text-to-gloss system is first run, and the pose representations of the
resulting signs are stitched together.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 12:57:20 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Moryossef",
"Amit",
""
],
[
"Müller",
"Mathias",
""
],
[
"Göhring",
"Anne",
""
],
[
"Jiang",
"Zifan",
""
],
[
"Goldberg",
"Yoav",
""
],
[
"Ebling",
"Sarah",
""
]
] |
new_dataset
| 0.991669 |
2305.17758
|
Yuki Okamoto
|
Yuki Okamoto, Kanta Shimonishi, Keisuke Imoto, Kota Dohi, Shota
Horiguchi, Yohei Kawaguchi
|
CAPTDURE: Captioned Sound Dataset of Single Sources
|
Accepted to INTERSPEECH2023
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In conventional studies on environmental sound separation and synthesis using
captions, datasets consisting of multiple-source sounds with their captions
were used for model training. However, when we collect the captions for
multiple-source sound, it is not easy to collect detailed captions for each
sound source, such as the number of sound occurrences and timbre. Therefore, it
is difficult to extract only the single-source target sound by the
model-training method using a conventional captioned sound dataset. In this
work, we constructed a dataset with captions for a single-source sound named
CAPTDURE, which can be used in various tasks such as environmental sound
separation and synthesis. Our dataset consists of 1,044 sounds and 4,902
captions. We evaluated the performance of environmental sound extraction using
our dataset. The experimental results show that the captions for single-source
sounds are effective in extracting only the single-source target sound from the
mixture sound.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 15:56:20 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Okamoto",
"Yuki",
""
],
[
"Shimonishi",
"Kanta",
""
],
[
"Imoto",
"Keisuke",
""
],
[
"Dohi",
"Kota",
""
],
[
"Horiguchi",
"Shota",
""
],
[
"Kawaguchi",
"Yohei",
""
]
] |
new_dataset
| 0.999526 |
2305.17798
|
Ismel Mart\'inez-D\'iaz
|
Ismel Mart\'inez-D\'iaz
|
Ceibaco: REST API and Single Page Application for the generation and
evaluation of bijective S-boxes
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper we present the first REST API for the generation and evaluation
of bijective S-boxes. We also present the first Single Page Application tool
for researchers and students that allows the use of a graphical interface. We
give a small dataset of classical S-boxes to test the properties evaluations.
We show how to define experiments and we include two local search experiments
into the proposed tool.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 19:00:40 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Martínez-Díaz",
"Ismel",
""
]
] |
new_dataset
| 0.99681 |
2305.17824
|
Govind R
|
S Akshay and Paul Gastin and R Govind and Aniruddha R Joshi and B
Srivathsan
|
A Unified Model for Real-Time Systems: Symbolic Techniques and
Implementation
| null | null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we consider a model of generalized timed automata (GTA) with
two kinds of clocks, history and future, that can express many timed features
succinctly, including timed automata, event-clock automata with and without
diagonal constraints, and automata with timers.
Our main contribution is a new simulation-based zone algorithm for checking
reachability in this unified model. While such algorithms are known to exist
for timed automata, and have recently been shown for event-clock automata
without diagonal constraints, this is the first result that can handle
event-clock automata with diagonal constraints and automata with timers. We
also provide a prototype implementation for our model and show experimental
results on several benchmarks. To the best of our knowledge, this is the first
effective implementation not just for our unified model, but even just for
automata with timers or for event-clock automata (with predicting clocks)
without going through a costly translation via timed automata. Last but not
least, beyond being interesting in their own right, generalized timed automata
can be used for model-checking event-clock specifications over timed automata
models.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 23:32:31 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Akshay",
"S",
""
],
[
"Gastin",
"Paul",
""
],
[
"Govind",
"R",
""
],
[
"Joshi",
"Aniruddha R",
""
],
[
"Srivathsan",
"B",
""
]
] |
new_dataset
| 0.972387 |
2305.17834
|
Heinrich Dinkel
|
Heinrich Dinkel, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Yujun Wang
|
Streaming Audio Transformers for Online Audio Tagging
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have emerged as a prominent model framework for audio tagging
(AT), boasting state-of-the-art (SOTA) performance on the widely-used Audioset
dataset. However, their impressive performance often comes at the cost of high
memory usage, slow inference speed, and considerable model delay, rendering
them impractical for real-world AT applications. In this study, we introduce
streaming audio transformers (SAT) that combine the vision transformer (ViT)
architecture with Transformer-Xl-like chunk processing, enabling efficient
processing of long-range audio signals. Our proposed SAT is benchmarked against
other transformer-based SOTA methods, achieving significant improvements in
terms of mean average precision (mAP) at a delay of 2s and 1s, while also
exhibiting significantly lower memory usage and computational overhead.
Checkpoints are publicly available https://github.com/RicherMans/SAT.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 00:32:11 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Dinkel",
"Heinrich",
""
],
[
"Yan",
"Zhiyong",
""
],
[
"Wang",
"Yongqing",
""
],
[
"Zhang",
"Junbo",
""
],
[
"Wang",
"Yujun",
""
]
] |
new_dataset
| 0.983798 |
2305.17868
|
Kang Yang
|
Kang Yang, Kunhao Lai
|
NaturalFinger: Generating Natural Fingerprint with Generative
Adversarial Networks
| null | null | null | null |
cs.CV cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural network (DNN) models have become a critical asset of the model
owner as training them requires a large amount of resource (i.e. labeled data).
Therefore, many fingerprinting schemes have been proposed to safeguard the
intellectual property (IP) of the model owner against model extraction and
illegal redistribution. However, previous schemes adopt unnatural images as the
fingerprint, such as adversarial examples and noisy images, which can be easily
perceived and rejected by the adversary. In this paper, we propose
NaturalFinger which generates natural fingerprint with generative adversarial
networks (GANs). Besides, our proposed NaturalFinger fingerprints the decision
difference areas rather than the decision boundary, which is more robust. The
application of GAN not only allows us to generate more imperceptible samples,
but also enables us to generate unrestricted samples to explore the decision
boundary.To demonstrate the effectiveness of our fingerprint approach, we
evaluate our approach against four model modification attacks including
adversarial training and two model extraction attacks. Experiments show that
our approach achieves 0.91 ARUC value on the FingerBench dataset (154 models),
exceeding the optimal baseline (MetaV) over 17\%.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 03:17:03 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Yang",
"Kang",
""
],
[
"Lai",
"Kunhao",
""
]
] |
new_dataset
| 0.986275 |
2305.17895
|
Xiang Zhang
|
Xiang Zhang, Yan Lu, Huan Yan, Jingyang Huang, Yusheng Ji and Yu Gu
|
ReSup: Reliable Label Noise Suppression for Facial Expression
Recognition
| null | null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Because of the ambiguous and subjective property of the facial expression
recognition (FER) task, the label noise is widely existing in the FER dataset.
For this problem, in the training phase, current FER methods often directly
predict whether the label of the input image is noised or not, aiming to reduce
the contribution of the noised data in training. However, we argue that this
kind of method suffers from the low reliability of such noise data decision
operation. It makes that some mistakenly abounded clean data are not utilized
sufficiently and some mistakenly kept noised data disturbing the model learning
process. In this paper, we propose a more reliable noise-label suppression
method called ReSup (Reliable label noise Suppression for FER). First, instead
of directly predicting noised or not, ReSup makes the noise data decision by
modeling the distribution of noise and clean labels simultaneously according to
the disagreement between the prediction and the target. Specifically, to
achieve optimal distribution modeling, ReSup models the similarity distribution
of all samples. To further enhance the reliability of our noise decision
results, ReSup uses two networks to jointly achieve noise suppression.
Specifically, ReSup utilize the property that two networks are less likely to
make the same mistakes, making two networks swap decisions and tending to trust
decisions with high agreement. Extensive experiments on three popular
benchmarks show that the proposed method significantly outperforms
state-of-the-art noisy label FER methods by 3.01% on FERPlus becnmarks. Code:
https://github.com/purpleleaves007/FERDenoise
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 06:02:06 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Zhang",
"Xiang",
""
],
[
"Lu",
"Yan",
""
],
[
"Yan",
"Huan",
""
],
[
"Huang",
"Jingyang",
""
],
[
"Ji",
"Yusheng",
""
],
[
"Gu",
"Yu",
""
]
] |
new_dataset
| 0.97515 |
2305.17911
|
Ming Shan Hee
|
Nirmalendu Prakash, Ming Shan Hee and Roy Ka-Wei Lee
|
TotalDefMeme: A Multi-Attribute Meme dataset on Total Defence in
Singapore
|
6 pages. Accepted at ACM MMSys 2023
| null |
10.1145/3587819.3592545
| null |
cs.SI cs.AI cs.CL cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Total Defence is a defence policy combining and extending the concept of
military defence and civil defence. While several countries have adopted total
defence as their defence policy, very few studies have investigated its
effectiveness. With the rapid proliferation of social media and digitalisation,
many social studies have been focused on investigating policy effectiveness
through specially curated surveys and questionnaires either through digital
media or traditional forms. However, such references may not truly reflect the
underlying sentiments about the target policies or initiatives of interest.
People are more likely to express their sentiment using communication mediums
such as starting topic thread on forums or sharing memes on social media. Using
Singapore as a case reference, this study aims to address this research gap by
proposing TotalDefMeme, a large-scale multi-modal and multi-attribute meme
dataset that captures public sentiments toward Singapore's Total Defence
policy. Besides supporting social informatics and public policy analysis of the
Total Defence policy, TotalDefMeme can also support many downstream multi-modal
machine learning tasks, such as aspect-based stance classification and
multi-modal meme clustering. We perform baseline machine learning experiments
on TotalDefMeme and evaluate its technical validity, and present possible
future interdisciplinary research directions and application scenarios using
the dataset as a baseline.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 06:43:37 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Prakash",
"Nirmalendu",
""
],
[
"Hee",
"Ming Shan",
""
],
[
"Lee",
"Roy Ka-Wei",
""
]
] |
new_dataset
| 0.999853 |
2305.17925
|
Zhiren Huang
|
Zhiren Huang and Charalampos Sipetas and Alonso Espinosa Mireles de
Villafranca and Tri Quach
|
Identifying shifts in multi-modal travel patterns during special events
using mobile data: Celebrating Vappu in Helsinki
|
6 pages, 12 figures, Submitted to ITSC2023
| null | null | null |
cs.CY cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large urban special events significantly contribute to a city's vibrancy and
economic growth but concurrently impose challenges on transportation systems
due to alterations in mobility patterns. This study aims to shed light on
mobility patterns by utilizing a unique, comprehensive dataset collected from
the Helsinki public transport mobile application and Bluetooth beacons. Earlier
methods, relying on mobile phone records or focusing on single traffic modes,
do not fully grasp the intricacies of travel behavior during such events. We
focus on the Vappu festivities (May 1st) in the Helsinki Metropolitan Area, a
national holiday characterized by mass gatherings and outdoor activities. We
examine and compare multi-modal mobility patterns during the event with those
during typical non-working days in May 2022. Through this case study, we find
that people tend to favor public transport over private cars and are prepared
to walk longer distances to participate in the event. The study underscores the
value of using comprehensive multi-modal data to better understand and manage
transportation during large-scale events.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 07:32:49 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Huang",
"Zhiren",
""
],
[
"Sipetas",
"Charalampos",
""
],
[
"de Villafranca",
"Alonso Espinosa Mireles",
""
],
[
"Quach",
"Tri",
""
]
] |
new_dataset
| 0.999088 |
2305.17927
|
Yuexiong Ding
|
Yuexiong Ding, Xiaowei Luo
|
VCVW-3D: A Virtual Construction Vehicles and Workers Dataset with 3D
Annotations
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, object detection applications in construction are almost based on
pure 2D data (both image and annotation are 2D-based), resulting in the
developed artificial intelligence (AI) applications only applicable to some
scenarios that only require 2D information. However, most advanced applications
usually require AI agents to perceive 3D spatial information, which limits the
further development of the current computer vision (CV) in construction. The
lack of 3D annotated datasets for construction object detection worsens the
situation. Therefore, this study creates and releases a virtual dataset with 3D
annotations named VCVW-3D, which covers 15 construction scenes and involves ten
categories of construction vehicles and workers. The VCVW-3D dataset is
characterized by multi-scene, multi-category, multi-randomness,
multi-viewpoint, multi-annotation, and binocular vision. Several typical 2D and
monocular 3D object detection models are then trained and evaluated on the
VCVW-3D dataset to provide a benchmark for subsequent research. The VCVW-3D is
expected to bring considerable economic benefits and practical significance by
reducing the costs of data construction, prototype development, and exploration
of space-awareness applications, thus promoting the development of CV in
construction, especially those of 3D applications.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 07:42:10 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Ding",
"Yuexiong",
""
],
[
"Luo",
"Xiaowei",
""
]
] |
new_dataset
| 0.999745 |
2305.17975
|
Jiaxin Lu
|
Jiaxin Lu, Yifan Sun, Qixing Huang
|
Jigsaw: Learning to Assemble Multiple Fractured Objects
|
17 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated assembly of 3D fractures is essential in orthopedics, archaeology,
and our daily life. This paper presents Jigsaw, a novel framework for
assembling physically broken 3D objects from multiple pieces. Our approach
leverages hierarchical features of global and local geometry to match and align
the fracture surfaces. Our framework consists of three components: (1) surface
segmentation to separate fracture and original parts, (2) multi-parts matching
to find correspondences among fracture surface points, and (3) robust global
alignment to recover the global poses of the pieces. We show how to jointly
learn segmentation and matching and seamlessly integrate feature matching and
rigidity constraints. We evaluate Jigsaw on the Breaking Bad dataset and
achieve superior performance compared to state-of-the-art methods. Our method
also generalizes well to diverse fracture modes, objects, and unseen instances.
To the best of our knowledge, this is the first learning-based method designed
specifically for 3D fracture assembly over multiple pieces.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 09:33:43 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Lu",
"Jiaxin",
""
],
[
"Sun",
"Yifan",
""
],
[
"Huang",
"Qixing",
""
]
] |
new_dataset
| 0.990582 |
2305.17984
|
Animesh Chaturvedi Dr.
|
Animesh Chaturvedi and Rajesh Sharma
|
minOffense: Inter-Agreement Hate Terms for Stable Rules, Concepts,
Transitivities, and Lattices
|
IEEE 9th International Conference on Data Science and Advanced
Analytics (DSAA), October 13-16, 2022, Shenzhen, China. IEEE, 2022. (Core A)
| null |
10.1109/DSAA54385.2022.10032389
| null |
cs.CL cs.AI cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Hate speech classification has become an important problem due to the spread
of hate speech on social media platforms. For a given set of Hate Terms lists
(HTs-lists) and Hate Speech data (HS-data), it is challenging to understand
which hate term contributes the most for hate speech classification. This paper
contributes two approaches to quantitatively measure and qualitatively
visualise the relationship between co-occurring Hate Terms (HTs). Firstly, we
propose an approach for the classification of hate-speech by producing a Severe
Hate Terms list (Severe HTs-list) from existing HTs-lists. To achieve our goal,
we proposed three metrics (Hatefulness, Relativeness, and Offensiveness) to
measure the severity of HTs. These metrics assist to create an Inter-agreement
HTs-list, which explains the contribution of an individual hate term toward
hate speech classification. Then, we used the Offensiveness metric values of
HTs above a proposed threshold minimum Offense (minOffense) to generate a new
Severe HTs-list. To evaluate our approach, we used three hate speech datasets
and six hate terms lists. Our approach shown an improvement from 0.845 to 0.923
(best) as compared to the baseline. Secondly, we also proposed Stable Hate Rule
(SHR) mining to provide ordered co-occurrence of various HTs with minimum
Stability (minStab). The SHR mining detects frequently co-occurring HTs to form
Stable Hate Rules and Concepts. These rules and concepts are used to visualise
the graphs of Transitivities and Lattices formed by HTs.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 09:47:36 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Chaturvedi",
"Animesh",
""
],
[
"Sharma",
"Rajesh",
""
]
] |
new_dataset
| 0.989334 |
2305.17993
|
Guangyao Li
|
Guangyao Li, Yixin Xu, Di Hu
|
Multi-Scale Attention for Audio Question Answering
|
Accepted by InterSpeech 2023
| null | null | null |
cs.SD cs.AI cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Audio question answering (AQA), acting as a widely used proxy task to explore
scene understanding, has got more attention. The AQA is challenging for it
requires comprehensive temporal reasoning from different scales' events of an
audio scene. However, existing methods mostly extend the structures of visual
question answering task to audio ones in a simple pattern but may not perform
well when perceiving a fine-grained audio scene. To this end, we present a
Multi-scale Window Attention Fusion Model (MWAFM) consisting of an asynchronous
hybrid attention module and a multi-scale window attention module. The former
is designed to aggregate unimodal and cross-modal temporal contexts, while the
latter captures sound events of varying lengths and their temporal dependencies
for a more comprehensive understanding. Extensive experiments are conducted to
demonstrate that the proposed MWAFM can effectively explore temporal
information to facilitate AQA in the fine-grained scene.Code:
https://github.com/GeWu-Lab/MWAFM
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 10:06:58 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Li",
"Guangyao",
""
],
[
"Xu",
"Yixin",
""
],
[
"Hu",
"Di",
""
]
] |
new_dataset
| 0.98911 |
2305.18008
|
Tomasz Kryjak
|
Piotr Wzorek, Tomasz Kryjak
|
Pedestrian detection with high-resolution event camera
|
Accepted for the PP-RAI'2023 - 4th Polish Conference on Artificial
Intelligence
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the dynamic development of computer vision algorithms, the
implementation of perception and control systems for autonomous vehicles such
as drones and self-driving cars still poses many challenges. A video stream
captured by traditional cameras is often prone to problems such as motion blur
or degraded image quality due to challenging lighting conditions. In addition,
the frame rate - typically 30 or 60 frames per second - can be a limiting
factor in certain scenarios. Event cameras (DVS -- Dynamic Vision Sensor) are a
potentially interesting technology to address the above mentioned problems. In
this paper, we compare two methods of processing event data by means of deep
learning for the task of pedestrian detection. We used a representation in the
form of video frames, convolutional neural networks and asynchronous sparse
convolutional neural networks. The results obtained illustrate the potential of
event cameras and allow the evaluation of the accuracy and efficiency of the
methods used for high-resolution (1280 x 720 pixels) footage.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 10:57:59 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Wzorek",
"Piotr",
""
],
[
"Kryjak",
"Tomasz",
""
]
] |
new_dataset
| 0.997751 |
2305.18034
|
Andrea Galassi
|
Francesco Antici, Andrea Galassi, Federico Ruggeri, Katerina Korre,
Arianna Muti, Alessandra Bardi, Alice Fedotova, Alberto Barr\'on-Cede\~no
|
A Corpus for Sentence-level Subjectivity Detection on English News
Articles
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel corpus for subjectivity detection at the sentence level.
We develop new annotation guidelines for the task, which are not limited to
language-specific cues, and apply them to produce a new corpus in English. The
corpus consists of 411 subjective and 638 objective sentences extracted from
ongoing coverage of political affairs from online news outlets. This new
resource paves the way for the development of models for subjectivity detection
in English and across other languages, without relying on language-specific
tools like lexicons or machine translation. We evaluate state-of-the-art
multilingual transformer-based models on the task, both in mono- and
cross-lingual settings, the latter with a similar existing corpus in Italian
language. We observe that enriching our corpus with resources in other
languages improves the results on the task.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 11:54:50 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Antici",
"Francesco",
""
],
[
"Galassi",
"Andrea",
""
],
[
"Ruggeri",
"Federico",
""
],
[
"Korre",
"Katerina",
""
],
[
"Muti",
"Arianna",
""
],
[
"Bardi",
"Alessandra",
""
],
[
"Fedotova",
"Alice",
""
],
[
"Barrón-Cedeño",
"Alberto",
""
]
] |
new_dataset
| 0.999288 |
2305.18070
|
Zeno Geradts
|
Mart Keizer, Zeno Geradts, Meike Kombrink
|
Forensic Video Steganalysis in Spatial Domain by Noise Residual
Convolutional Neural Network
| null | null | null | null |
cs.CV cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
This research evaluates a convolutional neural network (CNN) based approach
to forensic video steganalysis. A video steganography dataset is created to
train a CNN to conduct forensic steganalysis in the spatial domain. We use a
noise residual convolutional neural network to detect embedded secrets since a
steganographic embedding process will always result in the modification of
pixel values in video frames. Experimental results show that the CNN-based
approach can be an effective method for forensic video steganalysis and can
reach a detection rate of 99.96%. Keywords: Forensic, Steganalysis, Deep
Steganography, MSU StegoVideo, Convolutional Neural Networks
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 13:17:20 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Keizer",
"Mart",
""
],
[
"Geradts",
"Zeno",
""
],
[
"Kombrink",
"Meike",
""
]
] |
new_dataset
| 0.988156 |
2305.18212
|
Yuxing Long
|
Yuxing Long, Binyuan Hui, Caixia Yuan1, Fei Huang, Yongbin Li, Xiaojie
Wang
|
Multimodal Recommendation Dialog with Subjective Preference: A New
Challenge and Benchmark
|
ACL 2023
| null | null | null |
cs.IR cs.AI cs.CL cs.CV cs.LG cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Existing multimodal task-oriented dialog data fails to demonstrate the
diverse expressions of user subjective preferences and recommendation acts in
the real-life shopping scenario. This paper introduces a new dataset SURE
(Multimodal Recommendation Dialog with SUbjective PREference), which contains
12K shopping dialogs in complex store scenes. The data is built in two phases
with human annotations to ensure quality and diversity. SURE is well-annotated
with subjective preferences and recommendation acts proposed by sales experts.
A comprehensive analysis is given to reveal the distinguishing features of
SURE. Three benchmark tasks are then proposed on the data to evaluate the
capability of multimodal recommendation agents. Based on the SURE, we propose a
baseline model, powered by a state-of-the-art multimodal model, for these
tasks.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 08:43:46 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Long",
"Yuxing",
""
],
[
"Hui",
"Binyuan",
""
],
[
"Yuan1",
"Caixia",
""
],
[
"Huang",
"Fei",
""
],
[
"Li",
"Yongbin",
""
],
[
"Wang",
"Xiaojie",
""
]
] |
new_dataset
| 0.999228 |
2305.18225
|
Sarat Chandra Varanasi
|
Sarat Chandra Varanasi, Neeraj Mittal, Gopal Gupta
|
Locksynth: Deriving Synchronization Code for Concurrent Data Structures
with ASP
| null | null | null | null |
cs.DC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Locksynth, a tool that automatically derives synchronization
needed for destructive updates to concurrent data structures that involve a
constant number of shared heap memory write operations. Locksynth serves as the
implementation of our prior work on deriving abstract synchronization code.
Designing concurrent data structures involves inferring correct synchronization
code starting with a prior understanding of the sequential data structure's
operations. Further, an understanding of shared memory model and the
synchronization primitives is also required. The reasoning involved
transforming a sequential data structure into its concurrent version can be
performed using Answer Set Programming and we mechanized our approach in
previous work. The reasoning involves deduction and abduction that can be
succinctly modeled in ASP. We assume that the abstract sequential code of the
data structure's operations is provided, alongside axioms that describe
concurrent behavior. This information is used to automatically derive
concurrent code for that data structure, such as dictionary operations for
linked lists and binary search trees that involve a constant number of
destructive update operations. We also are able to infer the correct set of
locks (but not code synthesis) for external height-balanced binary search trees
that involve left/right tree rotations. Locksynth performs the analyses
required to infer correct sets of locks and as a final step, also derives the
C++ synchronization code for the synthesized data structures. We also provide a
performance analysis of the C++ code synthesized by Locksynth with the
hand-crafted versions available from the Synchrobench microbenchmark suite. To
the best of our knowledge, our tool is the first to employ ASP as a backend
reasoner to perform concurrent data structure synthesis.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 20:28:20 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Varanasi",
"Sarat Chandra",
""
],
[
"Mittal",
"Neeraj",
""
],
[
"Gupta",
"Gopal",
""
]
] |
new_dataset
| 0.967515 |
2305.18265
|
Gengyu Wang
|
Gengyu Wang, Kate Harwood, Lawrence Chillrud, Amith Ananthram, Melanie
Subbiah, Kathleen McKeown
|
Check-COVID: Fact-Checking COVID-19 News Claims with Scientific Evidence
|
Accepted as ACL 2023 Findings
| null | null | null |
cs.CL cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new fact-checking benchmark, Check-COVID, that requires systems
to verify claims about COVID-19 from news using evidence from scientific
articles. This approach to fact-checking is particularly challenging as it
requires checking internet text written in everyday language against evidence
from journal articles written in formal academic language. Check-COVID contains
1, 504 expert-annotated news claims about the coronavirus paired with
sentence-level evidence from scientific journal articles and veracity labels.
It includes both extracted (journalist-written) and composed
(annotator-written) claims. Experiments using both a fact-checking specific
system and GPT-3.5, which respectively achieve F1 scores of 76.99 and 69.90 on
this task, reveal the difficulty of automatically fact-checking both claim
types and the importance of in-domain data for good performance. Our data and
models are released publicly at https://github.com/posuer/Check-COVID.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 17:39:22 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Wang",
"Gengyu",
""
],
[
"Harwood",
"Kate",
""
],
[
"Chillrud",
"Lawrence",
""
],
[
"Ananthram",
"Amith",
""
],
[
"Subbiah",
"Melanie",
""
],
[
"McKeown",
"Kathleen",
""
]
] |
new_dataset
| 0.999364 |
2305.18273
|
Nikolas Lamb
|
Nikolas Lamb, Sean Banerjee, Natasha Kholgade Banerjee
|
Pix2Repair: Implicit Shape Restoration from Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Pix2Repair, an automated shape repair approach that generates
restoration shapes from images to repair fractured objects. Prior repair
approaches require a high-resolution watertight 3D mesh of the fractured object
as input. Input 3D meshes must be obtained using expensive 3D scanners, and
scanned meshes require manual cleanup, limiting accessibility and scalability.
Pix2Repair takes an image of the fractured object as input and automatically
generates a 3D printable restoration shape. We contribute a novel shape
function that deconstructs a latent code representing the fractured object into
a complete shape and a break surface. We show restorations for synthetic
fractures from the Geometric Breaks and Breaking Bad datasets, and cultural
heritage objects from the QP dataset, and for real fractures from the Fantastic
Breaks dataset. We overcome challenges in restoring axially symmetric objects
by predicting view-centered restorations. Our approach outperforms shape
completion approaches adapted for shape repair in terms of chamfer distance,
earth mover's distance, normal consistency, and percent restorations generated.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 17:48:09 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Lamb",
"Nikolas",
""
],
[
"Banerjee",
"Sean",
""
],
[
"Banerjee",
"Natasha Kholgade",
""
]
] |
new_dataset
| 0.999106 |
2305.18277
|
Achraf Ben-Hamadou
|
Achraf Ben-Hamadou, Oussama Smaoui, Ahmed Rekik, Sergi Pujades, Edmond
Boyer, Hoyeon Lim, Minchang Kim, Minkyung Lee, Minyoung Chung, Yeong-Gil
Shin, Mathieu Leclercq, Lucia Cevidanes, Juan Carlos Prieto, Shaojie Zhuang,
Guangshun Wei, Zhiming Cui, Yuanfeng Zhou, Tudor Dascalu, Bulat Ibragimov,
Tae-Hoon Yong, Hong-Gi Ahn, Wan Kim, Jae-Hwan Han, Byungsun Choi, Niels van
Nistelrooij, Steven Kempers, Shankeeth Vinayahalingam, Julien Strippoli,
Aur\'elien Thollot, Hugo Setbon, Cyril Trosset, Edouard Ladroit
|
3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge
|
29 pages, MICCAI 2022 Singapore, Satellite Event, Challenge
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Teeth localization, segmentation, and labeling from intra-oral 3D scans are
essential tasks in modern dentistry to enhance dental diagnostics, treatment
planning, and population-based studies on oral health. However, developing
automated algorithms for teeth analysis presents significant challenges due to
variations in dental anatomy, imaging protocols, and limited availability of
publicly accessible data. To address these challenges, the 3DTeethSeg'22
challenge was organized in conjunction with the International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2022,
with a call for algorithms tackling teeth localization, segmentation, and
labeling from intraoral 3D scans. A dataset comprising a total of 1800 scans
from 900 patients was prepared, and each tooth was individually annotated by a
human-machine hybrid algorithm. A total of 6 algorithms were evaluated on this
dataset. In this study, we present the evaluation results of the 3DTeethSeg'22
challenge. The 3DTeethSeg'22 challenge code can be accessed at:
https://github.com/abenhamadou/3DTeethSeg22_challenge
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 17:49:58 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Ben-Hamadou",
"Achraf",
""
],
[
"Smaoui",
"Oussama",
""
],
[
"Rekik",
"Ahmed",
""
],
[
"Pujades",
"Sergi",
""
],
[
"Boyer",
"Edmond",
""
],
[
"Lim",
"Hoyeon",
""
],
[
"Kim",
"Minchang",
""
],
[
"Lee",
"Minkyung",
""
],
[
"Chung",
"Minyoung",
""
],
[
"Shin",
"Yeong-Gil",
""
],
[
"Leclercq",
"Mathieu",
""
],
[
"Cevidanes",
"Lucia",
""
],
[
"Prieto",
"Juan Carlos",
""
],
[
"Zhuang",
"Shaojie",
""
],
[
"Wei",
"Guangshun",
""
],
[
"Cui",
"Zhiming",
""
],
[
"Zhou",
"Yuanfeng",
""
],
[
"Dascalu",
"Tudor",
""
],
[
"Ibragimov",
"Bulat",
""
],
[
"Yong",
"Tae-Hoon",
""
],
[
"Ahn",
"Hong-Gi",
""
],
[
"Kim",
"Wan",
""
],
[
"Han",
"Jae-Hwan",
""
],
[
"Choi",
"Byungsun",
""
],
[
"van Nistelrooij",
"Niels",
""
],
[
"Kempers",
"Steven",
""
],
[
"Vinayahalingam",
"Shankeeth",
""
],
[
"Strippoli",
"Julien",
""
],
[
"Thollot",
"Aurélien",
""
],
[
"Setbon",
"Hugo",
""
],
[
"Trosset",
"Cyril",
""
],
[
"Ladroit",
"Edouard",
""
]
] |
new_dataset
| 0.999115 |
2305.18279
|
Yuhang Zang
|
Yuhang Zang, Wei Li, Jun Han, Kaiyang Zhou, Chen Change Loy
|
Contextual Object Detection with Multimodal Large Language Models
|
Github: https://github.com/yuhangzang/ContextDET, Project Page:
https://www.mmlab-ntu.com/project/contextdet/index.html
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent Multimodal Large Language Models (MLLMs) are remarkable in
vision-language tasks, such as image captioning and question answering, but
lack the essential perception ability, i.e., object detection. In this work, we
address this limitation by introducing a novel research problem of contextual
object detection -- understanding visible objects within different human-AI
interactive contexts. Three representative scenarios are investigated,
including the language cloze test, visual captioning, and question answering.
Moreover, we present ContextDET, a unified multimodal model that is capable of
end-to-end differentiable modeling of visual-language contexts, so as to
locate, identify, and associate visual objects with language inputs for
human-AI interaction. Our ContextDET involves three key submodels: (i) a visual
encoder for extracting visual representations, (ii) a pre-trained LLM for
multimodal context decoding, and (iii) a visual decoder for predicting bounding
boxes given contextual object words. The new generate-then-detect framework
enables us to detect object words within human vocabulary. Extensive
experiments show the advantages of ContextDET on our proposed CODE benchmark,
open-vocabulary detection, and referring image segmentation. Github:
https://github.com/yuhangzang/ContextDET.
|
[
{
"version": "v1",
"created": "Mon, 29 May 2023 17:50:33 GMT"
}
] | 2023-05-30T00:00:00 |
[
[
"Zang",
"Yuhang",
""
],
[
"Li",
"Wei",
""
],
[
"Han",
"Jun",
""
],
[
"Zhou",
"Kaiyang",
""
],
[
"Loy",
"Chen Change",
""
]
] |
new_dataset
| 0.994715 |
1906.02628
|
Wanxin Li
|
Wanxin Li, Mark Nejad, Rui Zhang
|
A Blockchain-Based Architecture for Traffic Signal Control Systems
|
This paper has been accepted at IEEE International Congress on
Internet of Things (IEEE ICIOT 2019), Milan, Italy
| null |
10.1109/ICIOT.2019.00018
| null |
cs.NI cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ever-growing incorporation of connected vehicle (CV) technologies into
intelligent traffic signal control systems bring about significant data
security issues in the connected vehicular networks. This paper presents a
novel decentralized and secure by design architecture for connected vehicle
data security, which is based on the emerging blockchain paradigm. In a
simulation study, we applied this architecture to defend the Intelligent
Traffic Signal System (I-SIG), a USDOT approved CV pilot program, against
congestion attacks. The results show the performance of the proposed
architecture for the traffic signal control system.
|
[
{
"version": "v1",
"created": "Thu, 6 Jun 2019 15:02:52 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jun 2019 03:31:39 GMT"
},
{
"version": "v3",
"created": "Fri, 13 Sep 2019 17:34:02 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Li",
"Wanxin",
""
],
[
"Nejad",
"Mark",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.998144 |
2010.08183
|
Despoina Antonakaki
|
Alexander Shevtsov, Maria Oikonomidou, Despoina Antonakaki, Polyvios
Pratikakis, Sotiris Ioannidis
|
Analysis of Twitter and YouTube during USelections 2020
| null | null |
10.1371/journal.pone.0270542
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The presidential elections in the United States on 3 November 2020 have
caused extensive discussions on social media. A part of the content on US
elections is organic, coming from users discussing their opinions of the
candidates, political positions, or relevant content presented on television.
Another significant part of the content generated originates from organized
campaigns, both official and by astroturfing.
In this study, we obtain approximately 17.5M tweets containing 3M users,
based on prevalent hashtags related to US election 2020, as well as the related
YouTube links, contained in the Twitter dataset, likes, dislikes and comments
of the videos and conduct volume, sentiment and graph analysis on the
communities formed.
Particularly, we study the daily traffic per prevalent hashtags, plot the
retweet graph from July to September 2020, show how its main connected
component becomes denser in the period closer to the elections and highlight
the two main entities ('Biden' and 'Trump'). Additionally, we gather the
related YouTube links contained in the previous dataset and perform sentiment
analysis. The results on sentiment analysis on the Twitter corpus and the
YouTube metadata gathered, show the positive and negative sentiment for the two
entities throughout this period. The results of sentiment analysis indicate
that 45.7% express positive sentiment towards Trump in Twitter and 33.8%
positive sentiment towards Biden, while 14.55% of users express positive
sentiment in YouTube metadata gathered towards Trump and 8.7% positive
sentiment towards Biden. Our analysis fill the gap between the connection of
offline events and their consequences in social media by monitoring important
events in real world and measuring public volume and sentiment before and after
the event in social media.
|
[
{
"version": "v1",
"created": "Fri, 16 Oct 2020 06:10:35 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Oct 2020 08:37:09 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Nov 2020 13:23:27 GMT"
},
{
"version": "v4",
"created": "Tue, 10 Nov 2020 13:35:37 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Shevtsov",
"Alexander",
""
],
[
"Oikonomidou",
"Maria",
""
],
[
"Antonakaki",
"Despoina",
""
],
[
"Pratikakis",
"Polyvios",
""
],
[
"Ioannidis",
"Sotiris",
""
]
] |
new_dataset
| 0.973572 |
2010.14037
|
Wanxin Li
|
Wanxin Li, Collin Meese, Hao Guo and Mark Nejad
|
Blockchain-enabled Identity Verification for Safe Ridesharing Leveraging
Zero-Knowledge Proof
|
This paper has been accepted at IEEE International Conference on Hot
Information-Centric Networking (IEEE HotICN), Hefei, China, December 12-14,
2020
| null | null | null |
cs.CR cs.DC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The on-demand mobility market, including ridesharing, is becoming
increasingly important with e-hailing fares growing at a rate of approximately
130% per annum since 2013. By increasing utilization of existing vehicles and
empty seats, ridesharing can provide many benefits including reduced traffic
congestion and environmental impact from vehicle usage and production. However,
the safety of riders and drivers has become of paramount concern and a method
for privacy-preserving identity verification between untrusted parties is
essential for protecting users. To this end, we propose a novel
privacy-preserving identity verification system, extending zero-knowledge proof
(ZKP) and blockchain for use in ridesharing applications. We design a
permissioned blockchain network to perform the ZKP verification of a driver's
identity, which also acts as an immutable ledger to store ride logs and ZKP
records. For the ZKP module, we design a protocol to facilitate user
verification without requiring the exchange of any private information. We
prototype the proposed system on the Hyperledger Fabric platform, with the
Hyperledger Ursa cryptography library, and conduct extensive experimentation.
To measure the prototype's performance, we utilize the Hyperledger Caliper
benchmark tool to perform extensive analysis and the results show that our
system is suitable for use in real-world ridesharing applications.
|
[
{
"version": "v1",
"created": "Tue, 27 Oct 2020 03:43:39 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Oct 2020 00:44:56 GMT"
},
{
"version": "v3",
"created": "Sun, 1 Nov 2020 14:06:48 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Li",
"Wanxin",
""
],
[
"Meese",
"Collin",
""
],
[
"Guo",
"Hao",
""
],
[
"Nejad",
"Mark",
""
]
] |
new_dataset
| 0.995171 |
2110.06215
|
Xuan Tang
|
Xuan Tang, Zachary Ferguson, Teseo Schneider, Denis Zorin, Shoaib
Kamil, Daniele Panozzo
|
A Cross-Platform Benchmark for Interval Computation Libraries
|
11 pages, 33 figures, 2 tables
|
In Parallel Processing and Applied Mathematics. PPAM 2022. Lecture
Notes in Computer Science, vol 13827. Springer, Cham
|
10.1007/978-3-031-30445-3_35
| null |
cs.MS cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Interval computation is widely used to certify computations that use floating
point operations to avoid pitfalls related to rounding error introduced by
inaccurate operations. Despite its popularity and practical benefits, support
for interval arithmetic is not standardized nor available in mainstream
programming languages. We propose the first benchmark for interval
computations, coupled with reference solutions computed with exact arithmetic,
and compare popular C and C++ libraries over different architectures, operating
systems, and compilers. The benchmark allows identifying limitations in
existing implementations, and provides a reliable guide on which library to use
on each system. We believe that our benchmark will be useful for developers of
future interval libraries, as a way to test the correctness and performance of
their algorithms.
|
[
{
"version": "v1",
"created": "Tue, 12 Oct 2021 16:24:39 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Tang",
"Xuan",
""
],
[
"Ferguson",
"Zachary",
""
],
[
"Schneider",
"Teseo",
""
],
[
"Zorin",
"Denis",
""
],
[
"Kamil",
"Shoaib",
""
],
[
"Panozzo",
"Daniele",
""
]
] |
new_dataset
| 0.999101 |
2112.08804
|
Rifat Shahriyar
|
Abhik Bhattacharjee, Tahmid Hasan, Wasi Uddin Ahmad, Yuan-Fang Li,
Yong-Bin Kang, Rifat Shahriyar
|
CrossSum: Beyond English-Centric Cross-Lingual Summarization for 1,500+
Language Pairs
|
ACL 2023 (camera-ready)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CrossSum, a large-scale cross-lingual summarization dataset
comprising 1.68 million article-summary samples in 1,500+ language pairs. We
create CrossSum by aligning parallel articles written in different languages
via cross-lingual retrieval from a multilingual abstractive summarization
dataset and perform a controlled human evaluation to validate its quality. We
propose a multistage data sampling algorithm to effectively train a
cross-lingual summarization model capable of summarizing an article in any
target language. We also introduce LaSE, an embedding-based metric for
automatically evaluating model-generated summaries. LaSE is strongly correlated
with ROUGE and, unlike ROUGE, can be reliably measured even in the absence of
references in the target language. Performance on ROUGE and LaSE indicate that
our proposed model consistently outperforms baseline models. To the best of our
knowledge, CrossSum is the largest cross-lingual summarization dataset and the
first ever that is not centered around English. We are releasing the dataset,
training and evaluation scripts, and models to spur future research on
cross-lingual summarization. The resources can be found at
https://github.com/csebuetnlp/CrossSum
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 11:40:36 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2022 18:44:10 GMT"
},
{
"version": "v3",
"created": "Thu, 25 May 2023 19:18:59 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Bhattacharjee",
"Abhik",
""
],
[
"Hasan",
"Tahmid",
""
],
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Li",
"Yuan-Fang",
""
],
[
"Kang",
"Yong-Bin",
""
],
[
"Shahriyar",
"Rifat",
""
]
] |
new_dataset
| 0.996718 |
2205.12585
|
I-Hung Hsu
|
I-Hung Hsu, Kuan-Hao Huang, Shuning Zhang, Wenxin Cheng, Premkumar
Natarajan, Kai-Wei Chang, Nanyun Peng
|
TAGPRIME: A Unified Framework for Relational Structure Extraction
|
Paper accepted by ACL2023 as a main conference paper. The first two
authors contribute equally
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many tasks in natural language processing require the extraction of
relationship information for a given condition, such as event argument
extraction, relation extraction, and task-oriented semantic parsing. Recent
works usually propose sophisticated models for each task independently and pay
less attention to the commonality of these tasks and to have a unified
framework for all the tasks. In this work, we propose to take a unified view of
all these tasks and introduce TAGPRIME to address relational structure
extraction problems. TAGPRIME is a sequence tagging model that appends priming
words about the information of the given condition (such as an event trigger)
to the input text. With the self-attention mechanism in pre-trained language
models, the priming words make the output contextualized representations
contain more information about the given condition, and hence become more
suitable for extracting specific relationships for the condition. Extensive
experiments and analyses on three different tasks that cover ten datasets
across five different languages demonstrate the generality and effectiveness of
TAGPRIME.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 08:57:46 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 08:31:50 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Hsu",
"I-Hung",
""
],
[
"Huang",
"Kuan-Hao",
""
],
[
"Zhang",
"Shuning",
""
],
[
"Cheng",
"Wenxin",
""
],
[
"Natarajan",
"Premkumar",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Peng",
"Nanyun",
""
]
] |
new_dataset
| 0.959896 |
2208.01218
|
Yogesh Sharma Ph.D.
|
Yogesh Sharma, Deval Bhamare, Nishanth Sastry, Bahman Javadi, RajKumar
Buyya
|
SLA Management in Intent-Driven Service Management Systems: A Taxonomy
and Future Directions
|
Accepted for ACM Computing Surveys (CSUR) in March 2023
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Traditionally, network and system administrators are responsible for
designing, configuring, and resolving the Internet service requests.
Human-driven system configuration and management are proving unsatisfactory due
to the recent interest in time-sensitive applications with stringent quality of
service (QoS). Aiming to transition from the traditional human-driven to
zero-touch service management in the field of networks and computing,
intent-driven service management (IDSM) has been proposed as a response to
stringent quality of service requirements. In IDSM, users express their service
requirements in a declarative manner as intents. IDSM, with the help of closed
control-loop operations, perform configurations and deployments, autonomously
to meet service request requirements. The result is a faster deployment of
Internet services and reduction in configuration errors caused by manual
operations, which in turn reduces the service-level agreement (SLA) violations.
In the early stages of development, IDSM systems require attention from
industry as well as academia. In an attempt to fill the gaps in current
research, we conducted a systematic literature review of SLA management in IDSM
systems. As an outcome, we have identified four IDSM intent management
activities and proposed a taxonomy for each activity. Analysis of all studies
and future research directions, are presented in the conclusions.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 03:06:06 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 16:31:47 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Sharma",
"Yogesh",
""
],
[
"Bhamare",
"Deval",
""
],
[
"Sastry",
"Nishanth",
""
],
[
"Javadi",
"Bahman",
""
],
[
"Buyya",
"RajKumar",
""
]
] |
new_dataset
| 0.990752 |
2209.00507
|
Dominik Stammbach
|
Dominik Stammbach, Nicolas Webersinke, Julia Anna Bingler, Mathias
Kraus, Markus Leippold
|
Environmental Claim Detection
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To transition to a green economy, environmental claims made by companies must
be reliable, comparable, and verifiable. To analyze such claims at scale,
automated methods are needed to detect them in the first place. However, there
exist no datasets or models for this. Thus, this paper introduces the task of
environmental claim detection. To accompany the task, we release an
expert-annotated dataset and models trained on this dataset. We preview one
potential application of such models: We detect environmental claims made in
quarterly earning calls and find that the number of environmental claims has
steadily increased since the Paris Agreement in 2015.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 14:51:07 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 09:46:40 GMT"
},
{
"version": "v3",
"created": "Fri, 19 May 2023 08:30:17 GMT"
},
{
"version": "v4",
"created": "Fri, 26 May 2023 07:25:47 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Stammbach",
"Dominik",
""
],
[
"Webersinke",
"Nicolas",
""
],
[
"Bingler",
"Julia Anna",
""
],
[
"Kraus",
"Mathias",
""
],
[
"Leippold",
"Markus",
""
]
] |
new_dataset
| 0.994197 |
2209.01106
|
Pascal Welke
|
Vanessa Toborek and Moritz Busch and Malte Bo{\ss}ert and Christian
Bauckhage and Pascal Welke
|
A New Aligned Simple German Corpus
|
Accepted at ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
"Leichte Sprache", the German counterpart to Simple English, is a regulated
language aiming to facilitate complex written language that would otherwise
stay inaccessible to different groups of people. We present a new
sentence-aligned monolingual corpus for Simple German -- German. It contains
multiple document-aligned sources which we have aligned using automatic
sentence-alignment methods. We evaluate our alignments based on a manually
labelled subset of aligned documents. The quality of our sentence alignments,
as measured by F1-score, surpasses previous work. We publish the dataset under
CC BY-SA and the accompanying code under MIT license.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 15:14:04 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Sep 2022 07:24:59 GMT"
},
{
"version": "v3",
"created": "Tue, 16 May 2023 17:24:47 GMT"
},
{
"version": "v4",
"created": "Fri, 26 May 2023 16:11:23 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Toborek",
"Vanessa",
""
],
[
"Busch",
"Moritz",
""
],
[
"Boßert",
"Malte",
""
],
[
"Bauckhage",
"Christian",
""
],
[
"Welke",
"Pascal",
""
]
] |
new_dataset
| 0.999167 |
2209.02203
|
Xianjun Yang
|
Xianjun Yang, Yujie Lu, Linda Petzold
|
Few-Shot Document-Level Event Argument Extraction
|
Accepted to ACL 2023 Main Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Event argument extraction (EAE) has been well studied at the sentence level
but under-explored at the document level. In this paper, we study to capture
event arguments that actually spread across sentences in documents. Prior works
usually assume full access to rich document supervision, ignoring the fact that
the available argument annotation is usually limited. To fill this gap, we
present FewDocAE, a Few-Shot Document-Level Event Argument Extraction
benchmark, based on the existing document-level event extraction dataset. We
first define the new problem and reconstruct the corpus by a novel N -Way-D-Doc
sampling instead of the traditional N -Way-K-Shot strategy. Then we adjust the
current document-level neural models into the few-shot setting to provide
baseline results under in- and cross-domain settings. Since the argument
extraction depends on the context from multiple sentences and the learning
process is limited to very few examples, we find this novel task to be very
challenging with substantively low performance. Considering FewDocAE is closely
related to practical use under low-resource regimes, we hope this benchmark
encourages more research in this direction. Our data and codes will be
available online.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 03:57:23 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 21:18:42 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Yang",
"Xianjun",
""
],
[
"Lu",
"Yujie",
""
],
[
"Petzold",
"Linda",
""
]
] |
new_dataset
| 0.994407 |
2210.07141
|
Courtney McBeth
|
Courtney McBeth, James Motes, Diane Uwacu, Marco Morales, Nancy M.
Amato
|
Scalable Multi-robot Motion Planning for Congested Environments With
Topological Guidance
|
This work has been submitted for review
| null | null | null |
cs.RO cs.AI cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-robot motion planning (MRMP) is the problem of finding collision-free
paths for a set of robots in a continuous state space. The difficulty of MRMP
increases with the number of robots and is exacerbated in environments with
narrow passages that robots must pass through, like warehouse aisles where
coordination between robots is required. In single-robot settings,
topology-guided motion planning methods have shown improved performance in
these constricted environments. In this work, we extend an existing
topology-guided single-robot motion planning method to the multi-robot domain
to leverage the improved efficiency provided by topological guidance. We
demonstrate our method's ability to efficiently plan paths in complex
environments with many narrow passages, scaling to robot teams of size up to 25
times larger than existing methods in this class of problems. By leveraging
knowledge of the topology of the environment, we also find higher-quality
solutions than other methods.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 16:26:01 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 18:19:21 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"McBeth",
"Courtney",
""
],
[
"Motes",
"James",
""
],
[
"Uwacu",
"Diane",
""
],
[
"Morales",
"Marco",
""
],
[
"Amato",
"Nancy M.",
""
]
] |
new_dataset
| 0.963195 |
2210.15456
|
Yi Gu
|
Mo Yu, Yi Gu, Xiaoxiao Guo, Yufei Feng, Xiaodan Zhu, Michael
Greenspan, Murray Campbell, Chuang Gan
|
JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions
|
arXiv admin note: text overlap with arXiv:2010.09788
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Commonsense reasoning simulates the human ability to make presumptions about
our physical world, and it is an essential cornerstone in building general AI
systems. We propose a new commonsense reasoning dataset based on human's
Interactive Fiction (IF) gameplay walkthroughs as human players demonstrate
plentiful and diverse commonsense reasoning. The new dataset provides a natural
mixture of various reasoning types and requires multi-hop reasoning. Moreover,
the IF game-based construction procedure requires much less human interventions
than previous ones. Different from existing benchmarks, our dataset focuses on
the assessment of functional commonsense knowledge rules rather than factual
knowledge. Hence, in order to achieve higher performance on our tasks, models
need to effectively utilize such functional knowledge to infer the outcomes of
actions, rather than relying solely on memorizing facts. Experiments show that
the introduced dataset is challenging to previous machine reading models as
well as the new large language models with a significant 20% performance gap
compared to human experts.
|
[
{
"version": "v1",
"created": "Tue, 18 Oct 2022 19:20:53 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 05:40:19 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Yu",
"Mo",
""
],
[
"Gu",
"Yi",
""
],
[
"Guo",
"Xiaoxiao",
""
],
[
"Feng",
"Yufei",
""
],
[
"Zhu",
"Xiaodan",
""
],
[
"Greenspan",
"Michael",
""
],
[
"Campbell",
"Murray",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.99987 |
2211.15037
|
Yusen Sun
|
Yusen Sun, Liangyou Li, Qun Liu and Dit-Yan Yeung
|
SongRewriter: A Chinese Song Rewriting System with Controllable Content
and Rhyme Scheme
|
ACL Findings 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Although lyrics generation has achieved significant progress in recent years,
it has limited practical applications because the generated lyrics cannot be
performed without composing compatible melodies. In this work, we bridge this
practical gap by proposing a song rewriting system which rewrites the lyrics of
an existing song such that the generated lyrics are compatible with the rhythm
of the existing melody and thus singable. In particular, we propose
SongRewriter,a controllable Chinese lyrics generation and editing system which
assists users without prior knowledge of melody composition. The system is
trained by a randomized multi-level masking strategy which produces a unified
model for generating entirely new lyrics or editing a few fragments. To improve
the controllabiliy of the generation process, we further incorporate a keyword
prompt to control the lexical choices of the content and propose novel decoding
constraints and a vowel modeling task to enable flexible end and internal rhyme
schemes. While prior rhyming metrics are mainly for rap lyrics, we propose
three novel rhyming evaluation metrics for song lyrics. Both automatic and
human evaluations show that the proposed model performs better than the
state-of-the-art models in both contents and rhyming quality.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 03:52:05 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 07:53:26 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Sun",
"Yusen",
""
],
[
"Li",
"Liangyou",
""
],
[
"Liu",
"Qun",
""
],
[
"Yeung",
"Dit-Yan",
""
]
] |
new_dataset
| 0.999857 |
2212.09233
|
Kaiqiang Song
|
Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan,
Linda Petzold, Dong Yu
|
OASum: Large-Scale Open Domain Aspect-based Summarization
|
ACL 2023 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aspect or query-based summarization has recently caught more attention, as it
can generate differentiated summaries based on users' interests. However, the
current dataset for aspect or query-based summarization either focuses on
specific domains, contains relatively small-scale instances, or includes only a
few aspect types. Such limitations hinder further explorations in this
direction. In this work, we take advantage of crowd-sourcing knowledge on
Wikipedia.org and automatically create a high-quality, large-scale open-domain
aspect-based summarization dataset named OASum, which contains more than 3.7
million instances with around 1 million different aspects on 2 million
Wikipedia pages. We provide benchmark results on OASum and demonstrate its
ability for diverse aspect-based summarization generation. To overcome the data
scarcity problem on specific domains, we also perform zero-shot, few-shot, and
fine-tuning on seven downstream datasets. Specifically, zero/few-shot and
fine-tuning results show that the model pre-trained on our corpus demonstrates
a strong aspect or query-focused generation ability compared with the backbone
model. Our dataset and pre-trained checkpoints are publicly available.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 04:04:17 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 22:29:45 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Yang",
"Xianjun",
""
],
[
"Song",
"Kaiqiang",
""
],
[
"Cho",
"Sangwoo",
""
],
[
"Wang",
"Xiaoyang",
""
],
[
"Pan",
"Xiaoman",
""
],
[
"Petzold",
"Linda",
""
],
[
"Yu",
"Dong",
""
]
] |
new_dataset
| 0.99979 |
2302.03362
|
Joachim Schaeffer
|
Joachim Schaeffer, Paul Gasper, Esteban Garcia-Tamayo, Raymond Gasper,
Masaki Adachi, Juan Pablo Gaviria-Cardona, Simon Montoya-Bedoya, Anoushka
Bhutani, Andrew Schiek, Rhys Goodall, Rolf Findeisen, Richard D. Braatz and
Simon Engelke
|
Machine Learning Benchmarks for the Classification of Equivalent Circuit
Models from Electrochemical Impedance Spectra
|
Manuscript: 17 pages, 9 figures; Supplementary Information: 9 pages,
6 figures
| null |
10.1149/1945-7111/acd8fb
| null |
cs.LG cond-mat.mtrl-sci
|
http://creativecommons.org/licenses/by/4.0/
|
Analysis of Electrochemical Impedance Spectroscopy (EIS) data for
electrochemical systems often consists of defining an Equivalent Circuit Model
(ECM) using expert knowledge and then optimizing the model parameters to
deconvolute various resistance, capacitive, inductive, or diffusion responses.
For small data sets, this procedure can be conducted manually; however, it is
not feasible to manually define a proper ECM for extensive data sets with a
wide range of EIS responses. Automatic identification of an ECM would
substantially accelerate the analysis of large sets of EIS data. We showcase
machine learning methods to classify the ECMs of 9,300 impedance spectra
provided by QuantumScape for the BatteryDEV hackathon. The best-performing
approach is a gradient-boosted tree model utilizing a library to automatically
generate features, followed by a random forest model using the raw spectral
data. A convolutional neural network using boolean images of Nyquist
representations is presented as an alternative, although it achieves a lower
accuracy. We publish the data and open source the associated code. The
approaches described in this article can serve as benchmarks for further
studies. A key remaining challenge is the identifiability of the labels,
underlined by the model performances and the comparison of misclassified
spectra.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 10:08:35 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 16:55:22 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Schaeffer",
"Joachim",
""
],
[
"Gasper",
"Paul",
""
],
[
"Garcia-Tamayo",
"Esteban",
""
],
[
"Gasper",
"Raymond",
""
],
[
"Adachi",
"Masaki",
""
],
[
"Gaviria-Cardona",
"Juan Pablo",
""
],
[
"Montoya-Bedoya",
"Simon",
""
],
[
"Bhutani",
"Anoushka",
""
],
[
"Schiek",
"Andrew",
""
],
[
"Goodall",
"Rhys",
""
],
[
"Findeisen",
"Rolf",
""
],
[
"Braatz",
"Richard D.",
""
],
[
"Engelke",
"Simon",
""
]
] |
new_dataset
| 0.964224 |
2303.01076
|
Jonas Rothfuss
|
Jonas Rothfuss, Bhavya Sukhija, Tobias Birchler, Parnian Kassraie,
Andreas Krause
|
Hallucinated Adversarial Control for Conservative Offline Policy
Evaluation
|
Conference on Uncertainty in Artificial Intelligence (UAI) 2023,
first three authors contributed equally
| null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We study the problem of conservative off-policy evaluation (COPE) where given
an offline dataset of environment interactions, collected by other agents, we
seek to obtain a (tight) lower bound on a policy's performance. This is crucial
when deciding whether a given policy satisfies certain minimal
performance/safety criteria before it can be deployed in the real world. To
this end, we introduce HAMBO, which builds on an uncertainty-aware learned
model of the transition dynamics. To form a conservative estimate of the
policy's performance, HAMBO hallucinates worst-case trajectories that the
policy may take, within the margin of the models' epistemic confidence regions.
We prove that the resulting COPE estimates are valid lower bounds, and, under
regularity conditions, show their convergence to the true expected return.
Finally, we discuss scalable variants of our approach based on Bayesian Neural
Networks and empirically demonstrate that they yield reliable and tight lower
bounds in various continuous control environments.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 08:57:35 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 07:52:30 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Rothfuss",
"Jonas",
""
],
[
"Sukhija",
"Bhavya",
""
],
[
"Birchler",
"Tobias",
""
],
[
"Kassraie",
"Parnian",
""
],
[
"Krause",
"Andreas",
""
]
] |
new_dataset
| 0.986304 |
2304.01339
|
John Li
|
John M. Li, Amal Ahmed, Steven Holtzen
|
Lilac: A Modal Separation Logic for Conditional Probability
|
Accepted to PLDI 2023
| null | null | null |
cs.PL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present Lilac, a separation logic for reasoning about probabilistic
programs where separating conjunction captures probabilistic independence.
Inspired by an analogy with mutable state where sampling corresponds to dynamic
allocation, we show how probability spaces over a fixed, ambient sample space
appear to be the natural analogue of heap fragments, and present a new
combining operation on them such that probability spaces behave like heaps and
measurability of random variables behaves like ownership. This combining
operation forms the basis for our model of separation, and produces a logic
with many pleasant properties. In particular, Lilac has a frame rule identical
to the ordinary one, and naturally accommodates advanced features like
continuous random variables and reasoning about quantitative properties of
programs. Then we propose a new modality based on disintegration theory for
reasoning about conditional probability. We show how the resulting modal logic
validates examples from prior work, and give a formal verification of an
intricate weighted sampling algorithm whose correctness depends crucially on
conditional independence structure.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 20:10:53 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 00:00:03 GMT"
}
] | 2023-05-29T00:00:00 |
[
[
"Li",
"John M.",
""
],
[
"Ahmed",
"Amal",
""
],
[
"Holtzen",
"Steven",
""
]
] |
new_dataset
| 0.954041 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.