id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2305.08517
|
Xiaojing Chen
|
Xiaojing Chen, Xingbo Lu, Shixin Zhu, Wan Jiang, Xindi Wang
|
New entanglement-assisted quantum codes from negacyclic codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The theory of entanglement-assisted quantum error-correcting codes (EAQECCs)
is a generalization of the standard stabilizer quantum error-correcting codes,
which can be constructed from arbitrary classical codes by relaxing the
duality condition and utilizing preshared entanglement between the sender and
receiver. In this paper, a new family of EAQECCs is constructed from negacyclic
codes of length $n=\frac{q^2+1}{a}$, where $q$ is an odd prime power,
$a=\frac{m^2+1}{2}$ and $m$ is an odd integer. Some new entanglement-assisted
quantum maximum distance separable (EAQMDS) codes are obtained in the sense
that their parameters are not covered by the previously known ones.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 10:25:13 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Chen",
"Xiaojing",
""
],
[
"Lu",
"Xingbo",
""
],
[
"Zhu",
"Shixin",
""
],
[
"Jiang",
"Wan",
""
],
[
"Wang",
"Xindi",
""
]
] |
new_dataset
| 0.999687 |
2305.08532
|
Paola Cecilia Torrico Mor\'on
|
Paola Torrico Mor\'on, Sahar Salimpour, Lei Fu, Xianjia Yu, Jorge
Pe\~na Queralta, Tomi Westerlund
|
Benchmarking UWB-Based Infrastructure-Free Positioning and Multi-Robot
Relative Localization: Dataset and Characterization
|
6 pages, 8 figures, 3 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ultra-wideband (UWB) positioning has emerged as a low-cost and dependable
localization solution for multiple use cases, from mobile robots to asset
tracking within the Industrial IoT. The technology is mature and the scientific
literature contains multiple datasets and methods for localization based on
fixed UWB nodes. At the same time, research in UWB-based relative localization
and infrastructure-free localization is gaining traction, but tools and
datasets in this domain are scarce. Therefore, we introduce in this
paper a novel dataset for benchmarking infrastructure-free relative
localization targeting the domain of multi-robot systems. Compared to previous
datasets, we analyze the performance of different relative localization
approaches for a much wider variety of scenarios with varying numbers of fixed
and mobile nodes. A motion capture system provides ground truth; the data are
multi-modal and include inertial and odometry measurements for benchmarking
sensor fusion methods. Additionally, the dataset contains measurements of
ranging accuracy based on the relative orientation of antennas and a
comprehensive set of measurements for ranging between a single pair of nodes.
Our experimental analysis shows that high localization accuracy can be
achieved, but the variability of the ranging error is significant across different settings and
setups.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 10:43:46 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Morón",
"Paola Torrico",
""
],
[
"Salimpour",
"Sahar",
""
],
[
"Fu",
"Lei",
""
],
[
"Yu",
"Xianjia",
""
],
[
"Queralta",
"Jorge Peña",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.99972 |
2305.08561
|
Shitao Li
|
Shitao Li, Minjia Shi
|
Characterization of Plotkin-optimal two-weight codes over finite chain
rings and related applications
| null | null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Few-weight codes over finite chain rings are associated with combinatorial
objects such as strongly regular graphs (SRGs), strongly walk-regular graphs
(SWRGs) and finite geometries, and are also widely used in data storage systems
and secret sharing schemes. The first objective of this paper is to
characterize all possible parameters of Plotkin-optimal two-homogeneous weight
regular projective codes over finite chain rings, as well as their weight
distributions. We show the existence of codes with these parameters by
constructing an infinite family of two-homogeneous weight codes. The parameters
of their Gray images have the same weight distribution as that of the
two-weight codes of type SU1 in the sense of Calderbank and Kantor (Bull Lond
Math Soc 18: 97-122, 1986). Further, we also construct three-homogeneous weight
regular projective codes over finite chain rings combined with some known
results. Finally, we study applications of our constructed codes in secret
sharing schemes and graph theory. In particular, infinite families of SRGs and
SWRGs with non-trivial parameters are obtained.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 11:42:29 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Li",
"Shitao",
""
],
[
"Shi",
"Minjia",
""
]
] |
new_dataset
| 0.992128 |
2305.08601
|
Anas Dakkak
|
Anas Dakkak, Jan Bosch and Helena Holmstr\"om Olsson
|
DevServOps: DevOps For Product-Oriented Product-Service Systems
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
For companies developing web-based applications, the Dev and the Ops refer to
different groups with either operational or development focus. Therefore,
DevOps helps these companies streamline software development and operations
activities by emphasizing the collaboration between the two groups. However,
for companies producing software-intensive products, the Ops would refer to
customers who use and operate the product. In addition, companies producing
software-intensive products do not only offer products to customers but rather
Product Service Systems (PSS), where product-related services play a key role
in ensuring customer satisfaction besides their significant revenue
contribution. Thus, the context of product-oriented PSS is very different from
web-based applications, making it difficult to apply DevOps without considering
the role of the services. Therefore, based on a two-year participant-observation
case study conducted at a multinational telecommunications systems
provider, we propose a novel approach called
Development-Services-Operations (DevServOps) which incorporates services as a
key player facilitating an end-to-end software flow toward customers in one
direction and feedback toward developers in the other direction. Services
become the glue that connects the Dev and the Ops, achieved by providing
internal services to increase the precision of the development organization and
external services to increase the speed of deployment and new content adoption
on the customers' side.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 12:34:18 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Dakkak",
"Anas",
""
],
[
"Bosch",
"Jan",
""
],
[
"Olsson",
"Helena Holmström",
""
]
] |
new_dataset
| 0.999826 |
2305.08621
|
Laura Jahn
|
Laura Jahn and Rasmus K. Rendsvig
|
Danish National Election 2022 Twitter Data on Likes, Retweets, and
Botscores for the Purpose of Exploring Coordinated Inauthentic Behavior
| null | null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This note describes code and experiments related to a Twitter dataset on the
Danish National Election 2022, available at Harvard Dataverse
(doi.org/10.7910/DVN/RWPZUN). We cluster Twitter users into bins of users that
showed exactly the same liking/retweeting behavior over a month-long period
during which the Danish National Election took place. To investigate whether
any of these bins exhibited coordinated inauthentic behavior, we were
interested in whether bin size correlated with user account
deletions/suspensions and/or high bot scores from Botometer / Botometer Lite.
We did not find significant correlations (nor any between Botometer and
Botometer Lite scores). This note primarily contains the README.md from the
GitHub repository
LJ-9/Danish-Election-2022-Twitter-Likes-Retweets-Botscores-Inauthentic-Coordinated-Behavior
of the same name, with a few additional comments and references. We upload the
note for visibility, hoping that other researchers may find the data of use.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 13:17:05 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Jahn",
"Laura",
""
],
[
"Rendsvig",
"Rasmus K.",
""
]
] |
new_dataset
| 0.999748 |
2305.08671
|
Xiufeng Xu
|
Xiufeng Xu, Chenguang Zhu, Yi Li
|
CompSuite: A Dataset of Java Library Upgrade Incompatibility Issues
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern software systems heavily rely on external libraries developed by
third parties to ensure efficient development. However, frequent library
upgrades can lead to compatibility issues between the libraries and their
client systems. In this paper, we introduce CompSuite, a dataset that includes
123 real-world Java client-library pairs where upgrading the library causes an
incompatibility issue in the corresponding client. Each incompatibility issue
in CompSuite is associated with a test case authored by the developers, which
can be used to reproduce the issue. The dataset also provides a command-line
interface that simplifies the execution and validation of each issue. With this
infrastructure, users can perform an inspection of any incompatibility issue
with the push of a button, or reproduce an issue step-by-step for a more
detailed investigation. We make CompSuite publicly available to promote open
science. We believe that various software analysis techniques, such as
compatibility checking, debugging, and regression test selection, can benefit
from CompSuite.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 14:26:14 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Xu",
"Xiufeng",
""
],
[
"Zhu",
"Chenguang",
""
],
[
"Li",
"Yi",
""
]
] |
new_dataset
| 0.999208 |
2305.08767
|
Firas Bayram
|
Firas Bayram, Phil Aupke, Bestoun S. Ahmed, Andreas Kassler, Andreas
Theocharis, Jonas Forsman
|
DA-LSTM: A Dynamic Drift-Adaptive Learning Framework for Interval Load
Forecasting with LSTM Networks
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Load forecasting is a crucial topic in energy management systems (EMS) due to
its vital role in optimizing energy scheduling and enabling more flexible and
intelligent power grid systems. As a result, these systems allow power utility
companies to respond promptly to demands in the electricity market. Deep
learning (DL) models have been commonly employed in load forecasting problems
supported by adaptation mechanisms to cope with the changing pattern of
consumption by customers, known as concept drift. Designing change-detection
methods to identify drifts requires defining a drift magnitude threshold. While
the drift magnitude in load forecasting problems can vary significantly over
time, existing literature often assumes a fixed drift magnitude threshold,
which should be dynamically adjusted rather than fixed during system evolution.
To address this gap, in this paper, we propose a dynamic drift-adaptive Long
Short-Term Memory (DA-LSTM) framework that can improve the performance of load
forecasting models without requiring a drift threshold setting. We integrate
several strategies into the framework based on active and passive adaptation
approaches. To evaluate DA-LSTM in real-life settings, we thoroughly analyze
the proposed framework and deploy it in a real-world problem through a
cloud-based environment. Efficiency is evaluated in terms of the prediction
performance of each approach and computational cost. The experiments show
performance improvements on multiple evaluation metrics achieved by our
framework compared to baseline methods from the literature. Finally, we present
a trade-off analysis between prediction performance and computational costs.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 16:26:03 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Bayram",
"Firas",
""
],
[
"Aupke",
"Phil",
""
],
[
"Ahmed",
"Bestoun S.",
""
],
[
"Kassler",
"Andreas",
""
],
[
"Theocharis",
"Andreas",
""
],
[
"Forsman",
"Jonas",
""
]
] |
new_dataset
| 0.996964 |
2305.08782
|
Ardalan Amiri Sani
|
Hsin-Wei Hung and Ardalan Amiri Sani
|
BRF: eBPF Runtime Fuzzer
| null | null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The eBPF technology in the Linux kernel has been widely adopted for different
applications, such as networking, tracing, and security, thanks to the
programmability it provides. By allowing user-supplied eBPF programs to be
executed directly in the kernel, it greatly increases the flexibility and
efficiency of deploying customized logic. However, eBPF also introduces a new
and wide attack surface: malicious eBPF programs may try to exploit the
vulnerabilities in the eBPF subsystem in the kernel.
Fuzzing is a promising technique to find such vulnerabilities. Unfortunately,
our experiments with the state-of-the-art kernel fuzzer, Syzkaller, show that
it cannot effectively fuzz the eBPF runtime, i.e., the components in
charge of executing an eBPF program, for two reasons. First, the eBPF verifier
(which is tasked with verifying the safety of eBPF programs) rejects many
fuzzing inputs because (1) they do not comply with its required semantics or
(2) they miss some dependencies, i.e., other syscalls that need to be issued
before the program is loaded. Second, Syzkaller fails to attach and trigger the
execution of eBPF programs most of the time.
This paper introduces the BPF Runtime Fuzzer (BRF), a fuzzer that can satisfy
the semantics and dependencies required by the verifier and the eBPF subsystem.
Our experiments show, in 48-hour fuzzing sessions, BRF can successfully execute
8x more eBPF programs compared to Syzkaller. Moreover, eBPF programs generated
by BRF are much more expressive than Syzkaller's. As a result, BRF achieves
101% higher code coverage. Finally, BRF has so far managed to find 4
vulnerabilities (some of them have been assigned CVE numbers) in the eBPF
runtime, proving its effectiveness.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 16:42:51 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Hung",
"Hsin-Wei",
""
],
[
"Sani",
"Ardalan Amiri",
""
]
] |
new_dataset
| 0.996881 |
2305.08802
|
Yue Chen
|
Yue Chen and Peng Yi
|
Multi-Cluster Aggregative Games: A Linearly Convergent Nash Equilibrium
Seeking Algorithm and its Applications in Energy Management
| null | null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To model large-scale and hierarchical multi-agent systems, we propose a type of
non-cooperative game, termed the multi-cluster aggregative game, whose players
are clusters, where each cluster consists of collaborative agents with cost
functions depending on their own decisions and the aggregate quantity of each
participating cluster. This novel game model is motivated by
decision-making problems in competitive-cooperative network systems with
large-scale nodes, such as the Energy Internet. To address challenges arising
in seeking Nash equilibrium for such network systems, we develop an algorithm
with a hierarchical communication topology which is a hybrid with distributed
and semi-decentralized protocols. The upper level consists of cluster
coordinators estimating the aggregate quantities with local communications,
while the lower level consists of cluster subnets, each composed of its coordinator and agents
aiming to track the gradient of the corresponding cluster. In particular, the
clusters exchange the aggregate quantities instead of their decisions to
relieve the burden of communication. Under strongly monotone and mildly
Lipschitz continuous assumptions, we rigorously prove that the algorithm
linearly converges to a Nash equilibrium with a fixed step size. We present the
applications in the context of the Energy Internet. Furthermore, the numerical
results verify the effectiveness of the algorithm.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 17:06:24 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Chen",
"Yue",
""
],
[
"Yi",
"Peng",
""
]
] |
new_dataset
| 0.950053 |
2305.08810
|
Yuang Wang
|
Yuang Wang, Xingyi He, Sida Peng, Haotong Lin, Hujun Bao, Xiaowei Zhou
|
AutoRecon: Automated 3D Object Discovery and Reconstruction
|
Accepted to CVPR 2023 (Highlight). Project page:
https://zju3dv.github.io/autorecon
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A fully automated object reconstruction pipeline is crucial for digital
content creation. While the area of 3D reconstruction has witnessed profound
developments, the removal of background to obtain a clean object model still
relies on different forms of manual labor, such as bounding box labeling, mask
annotations, and mesh manipulations. In this paper, we propose a novel
framework named AutoRecon for the automated discovery and reconstruction of an
object from multi-view images. We demonstrate that foreground objects can be
robustly located and segmented from SfM point clouds by leveraging
self-supervised 2D vision transformer features. Then, we reconstruct decomposed
neural scene representations with dense supervision provided by the decomposed
point clouds, resulting in accurate object reconstruction and segmentation.
Experiments on the DTU, BlendedMVS and CO3D-V2 datasets demonstrate the
effectiveness and robustness of AutoRecon.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 17:16:46 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Wang",
"Yuang",
""
],
[
"He",
"Xingyi",
""
],
[
"Peng",
"Sida",
""
],
[
"Lin",
"Haotong",
""
],
[
"Bao",
"Hujun",
""
],
[
"Zhou",
"Xiaowei",
""
]
] |
new_dataset
| 0.996147 |
2305.08819
|
Zhiyi Zhang
|
Zhiyi Zhang, Pengfei Zhang, Qi Wang
|
Dragon-Alpha&cu32: A Java-based Tensor Computing Framework With its
High-Performance CUDA Library
|
7 pages. About: deep learning, deep neural networks (DNNs), system
architecture, software engineering. The code of Alpha&cu32, and the
experimental-data can be download at
https://github.com/GilgameshXYZ123/Dragon-Alpha
| null | null | null |
cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Java is very powerful, but in the deep learning field its capabilities have
probably not been sufficiently exploited. Compared to Java-based
deep-learning frameworks, the Python-based ones (PyTorch, TensorFlow, etc.) are
undoubtedly the mainstream, due to their ease of use, flexibility, and better
ecosystem. Dragon-Alpha is a Java-based tensor computing framework that is
easy to use, highly scalable, and high-performance, aiming to break Java's
dilemma in the deep learning field and make it more effective. Dragon-Alpha
supports different levels of APIs, and can be used as a deep-learning-framework
through its user-friendly high-level APIs. Dragon-Alpha has potential to
aggregate computing-power across heterogeneous platforms and devices, based on
its multi-layer architecture and Java's big-data ecosystem. Dragon-Alpha has
its asynchronized APIs to improve parallelism, and highly-optimized CUDA
library cu32, which adopts unique convolution/deconvolution operators for small
feature maps. The experiments show that, compared to PyTorch&cuDNN,
Dragon-Alpha&cu32 costs less time and memory (75.38% to 97.32%, 29.2% to
66.4%), to train some typical neural networks (AlexNet, VGG, GoogleNet, ResNet)
on Cifar-10.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 17:30:05 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Zhang",
"Zhiyi",
""
],
[
"Zhang",
"Pengfei",
""
],
[
"Wang",
"Qi",
""
]
] |
new_dataset
| 0.968966 |
2305.08828
|
Pinzhen Chen
|
Ashok Urlana, Pinzhen Chen, Zheng Zhao, Shay B. Cohen, Manish
Shrivastava, Barry Haddow
|
PMIndiaSum: Multilingual and Cross-lingual Headline Summarization for
Languages in India
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces PMIndiaSum, a new multilingual and massively parallel
headline summarization corpus focused on languages in India. Our corpus covers
four language families and 14 languages, and, with 196 language pairs, is the
largest to date. It provides a testing ground for all cross-lingual pairs. We detail our
workflow to construct the corpus, including data acquisition, processing, and
quality assurance. Furthermore, we publish benchmarks for monolingual,
cross-lingual, and multilingual summarization by fine-tuning, prompting, as
well as translate-and-summarize. Experimental results confirm the crucial role
of our data in aiding the summarization of Indian texts. Our dataset is
publicly available and can be freely modified and re-distributed.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 17:41:15 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Urlana",
"Ashok",
""
],
[
"Chen",
"Pinzhen",
""
],
[
"Zhao",
"Zheng",
""
],
[
"Cohen",
"Shay B.",
""
],
[
"Shrivastava",
"Manish",
""
],
[
"Haddow",
"Barry",
""
]
] |
new_dataset
| 0.998537 |
2305.08853
|
Satya Almasian
|
Satya Almasian, Vivian Kazakova, Philip G\"oldner and Michael Gertz
|
CQE: A Comprehensive Quantity Extractor
|
8 pages of content, 3 page of appendix
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Quantities are essential in documents to describe factual information. They
are ubiquitous in application domains such as finance, business, medicine, and
science in general. Compared to other information extraction tasks,
interestingly, only a few works describe methods for properly extracting and
representing quantities in text. In this paper, we present a comprehensive
framework for extracting quantities from text data. It
efficiently detects combinations of values and units, the behavior of a
quantity (e.g., rising or falling), and the concept a quantity is associated
with. Our framework makes use of dependency parsing and a dictionary of units,
and it provides for a proper normalization and standardization of detected
quantities. Using a novel dataset for evaluation, we show that our open source
framework outperforms other systems and -- to the best of our knowledge -- is
the first to detect concepts associated with identified quantities. The code
and data underlying our framework are available at
https://github.com/vivkaz/CQE.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 17:59:41 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Almasian",
"Satya",
""
],
[
"Kazakova",
"Vivian",
""
],
[
"Göldner",
"Philip",
""
],
[
"Gertz",
"Michael",
""
]
] |
new_dataset
| 0.998847 |
2202.09221
|
Stefan Scherzinger
|
Stefan Scherzinger, Pascal Becker, Arne Roennau and R\"udiger Dillmann
|
Motion Macro Programming on Assistive Robotic Manipulators: Three Skill
Types for Everyday Tasks
|
8 pages, 10 figures, accepted to the IEEE 20th International
Conference on Ubiquitous Robots (UR 2023), Honolulu, USA
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Assistive robotic manipulators are becoming increasingly important for people
with disabilities. Teleoperating the manipulator in mundane tasks is part of
their daily lives. Instead of steering the robot through all actions, applying
self-recorded motion macros could greatly facilitate repetitive tasks. Dynamic
Movement Primitives (DMP) are a powerful method for skill learning via
teleoperation. For this use case, however, they need simple heuristics to
specify where to start, stop, and parameterize a skill without a background in
computer science and academic sensor setups for autonomous perception. To
achieve this goal, this paper provides the concept of local, global, and hybrid
skills that form a modular basis for composing single-handed tasks of daily
living. These skills are specified implicitly and can easily be programmed by
users themselves, requiring only their basic robotic manipulator. The paper
contributes all details for robot-agnostic implementations. Experiments
validate the developed methods for exemplary tasks, such as scratching an itchy
spot, sorting objects on a desk, and feeding a piggy bank with coins. The paper
is accompanied by an open-source implementation at
https://github.com/fzi-forschungszentrum-informatik/ArNe
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 14:41:20 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Apr 2023 11:47:28 GMT"
},
{
"version": "v3",
"created": "Fri, 12 May 2023 14:14:09 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Scherzinger",
"Stefan",
""
],
[
"Becker",
"Pascal",
""
],
[
"Roennau",
"Arne",
""
],
[
"Dillmann",
"Rüdiger",
""
]
] |
new_dataset
| 0.998113 |
2204.02064
|
Lingqi Zhang
|
Lingqi Zhang, Mohamed Wahib, Peng Chen, Jintao Meng, Xiao Wang, Toshio
Endo, Satoshi Matsuoka
|
PERKS: a Locality-Optimized Execution Model for Iterative Memory-bound
GPU Applications
|
This paper will be published in 2023 International Conference on
Supercomputing (ICS23)
| null |
10.1145/3577193.3593705
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Iterative memory-bound solvers commonly occur in HPC codes. Typical GPU
implementations have a loop on the host side that invokes the GPU kernel as
many times as there are time/algorithm steps. The termination of each kernel
implicitly acts as the barrier required after advancing the solution at every
time step. We propose an execution model for running memory-bound iterative GPU
kernels: PERsistent KernelS (PERKS). In this model, the time loop is moved
inside a persistent kernel, and device-wide barriers are used for
synchronization. We then reduce the traffic to device memory by caching a
subset of the output of each time step in the unused registers and shared
memory. PERKS can be generalized to any iterative solver: it is largely
independent of the solver's implementation. We explain the design principles of
PERKS and demonstrate the effectiveness of PERKS for a wide range of iterative 2D/3D stencil
benchmarks (geomean speedup of $2.12$x for 2D stencils and $1.24$x for 3D
stencils over state-of-the-art libraries), and a Krylov subspace conjugate gradient
solver (geomean speedup of $4.86$x in smaller SpMV datasets from SuiteSparse
and $1.43$x in larger SpMV datasets over a state-of-the-art library). All
PERKS-based implementations are available at: https://github.com/neozhang307/PERKS.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 08:59:18 GMT"
},
{
"version": "v2",
"created": "Sat, 21 May 2022 05:10:32 GMT"
},
{
"version": "v3",
"created": "Mon, 1 May 2023 06:08:30 GMT"
},
{
"version": "v4",
"created": "Fri, 12 May 2023 11:16:55 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Zhang",
"Lingqi",
""
],
[
"Wahib",
"Mohamed",
""
],
[
"Chen",
"Peng",
""
],
[
"Meng",
"Jintao",
""
],
[
"Wang",
"Xiao",
""
],
[
"Endo",
"Toshio",
""
],
[
"Matsuoka",
"Satoshi",
""
]
] |
new_dataset
| 0.956607 |
2208.04798
|
Albert Fannjiang
|
Albert Fannjiang
|
3D Tomographic Phase Retrieval and Unwrapping
|
Revision of the previously titled "3D UNWRAPPED PHASE RETRIEVAL WITH
CODED APERTURE IS REDUCIBLE TO PROJECTION TOMOGRAPHY"
| null | null | null |
cs.IT math.IT physics.app-ph physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper develops uniqueness theory for 3D phase retrieval with finite,
discrete measurement data for strong phase objects and weak phase objects,
including:
(i) Unique determination of (phase) projections from diffraction patterns --
General measurement schemes with coded and uncoded apertures are proposed and
shown to ensure unique conversion of diffraction patterns into the phase
projection for a strong phase object (respectively, the projection for a weak
phase object) in each direction separately without the knowledge of relative
orientations and locations.
(ii) Uniqueness for 3D phase unwrapping -- General conditions for unique
determination of a 3D strong phase object from its phase projection data are
established, including, but not limited to, random tilt schemes densely sampled
from a spherical triangle of vertexes in three orthogonal directions and other
deterministic tilt schemes.
(iii) Uniqueness for projection tomography -- Unique determination of an
object of $n^3$ voxels from generic $n$ projections or $n+1$ coded diffraction
patterns is proved.
This approach has the practical implication of enabling classification and
alignment, when relative orientations are unknown, to be carried out in terms
of (phase) projections, instead of diffraction patterns.
The applications with the measurement schemes such as single-axis tilt,
conical tilt, dual-axis tilt, random conical tilt and general random tilt are
discussed.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 14:19:15 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2023 17:08:50 GMT"
},
{
"version": "v3",
"created": "Thu, 11 May 2023 18:42:49 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Fannjiang",
"Albert",
""
]
] |
new_dataset
| 0.968595 |
2208.07556
|
Judith S\'ainz-Pardo D\'iaz
|
Judith S\'ainz-Pardo D\'iaz, \'Alvaro L\'opez Garc\'ia
|
pyCANON: A Python library to check the level of anonymity of a dataset
| null | null |
10.1038/s41597-022-01894-2
| null |
cs.CR cs.DB
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Openly sharing data with sensitive attributes and privacy restrictions is a
challenging task. In this document we present the implementation of pyCANON, a
Python library and command line interface (CLI) to check and assess the level
of anonymity of a dataset through some of the most common anonymization
techniques: k-anonymity, ($\alpha$,k)-anonymity, $\ell$-diversity, entropy
$\ell$-diversity, recursive (c,$\ell$)-diversity, basic $\beta$-likeness,
enhanced $\beta$-likeness, t-closeness and $\delta$-disclosure privacy. For the
case of more than one sensitive attribute, two approaches are proposed for
evaluating these techniques. The main strength of this library is to obtain a
full report of the parameters that are fulfilled for each of the techniques
mentioned above, with the unique requirement of the set of quasi-identifiers
and that of sensitive attributes. We present the methods implemented together
with the attacks they prevent, the description of the library, use examples of
the different functions, as well as the impact and the possible applications
that can be developed. Finally, some possible aspects to be incorporated in
future updates are proposed.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 06:06:04 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Díaz",
"Judith Sáinz-Pardo",
""
],
[
"García",
"Álvaro López",
""
]
] |
new_dataset
| 0.99588 |
2210.08252
|
Siamak Layeghy
|
Siamak Layeghy, Mahsa Baktashmotlagh, Marius Portmann
|
DI-NIDS: Domain Invariant Network Intrusion Detection System
| null | null |
10.1016/j.knosys.2023.110626
| null |
cs.CR cs.LG cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
The performance of machine learning based network intrusion detection systems
(NIDSs) severely degrades when deployed on a network with significantly
different feature distributions from the ones of the training dataset. In
various applications, such as computer vision, domain adaptation techniques
have been successful in mitigating the gap between the distributions of the
training and test data. In the case of network intrusion detection however, the
state-of-the-art domain adaptation approaches have had limited success.
According to recent studies, as well as our own results, the performance of an
NIDS considerably deteriorates when the `unseen' test dataset does not follow
the training dataset distribution. In some cases, swapping the train and test
datasets makes this even more severe. In order to enhance the generalisability
of machine learning based network intrusion detection systems, we propose to
extract domain invariant features using adversarial domain adaptation from
multiple network domains, and then apply an unsupervised technique for
recognising abnormalities, i.e., intrusions. More specifically, we train a
domain adversarial neural network on labelled source domains, extract the
domain invariant features, and train a One-Class SVM (OSVM) model to detect
anomalies. At test time, we feedforward the unlabeled test data to the feature
extractor network to project it into a domain invariant space, and then apply
OSVM on the extracted features to achieve our final goal of detecting
intrusions. Our extensive experiments on the NIDS benchmark datasets of
NFv2-CIC-2018 and NFv2-UNSW-NB15 show that our proposed setup demonstrates
superior cross-domain performance in comparison to the previous approaches.
|
[
{
"version": "v1",
"created": "Sat, 15 Oct 2022 10:26:22 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Layeghy",
"Siamak",
""
],
[
"Baktashmotlagh",
"Mahsa",
""
],
[
"Portmann",
"Marius",
""
]
] |
new_dataset
| 0.993811 |
2212.10011
|
Jianfeng Chi
|
Jianfeng Chi, Wasi Uddin Ahmad, Yuan Tian, Kai-Wei Chang
|
PLUE: Language Understanding Evaluation Benchmark for Privacy Policies
in English
|
ACL 2023. Code is released at https://github.com/JFChi/PLUE
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Privacy policies provide individuals with information about their rights and
how their personal information is handled. Natural language understanding (NLU)
technologies can help individuals and practitioners better understand the
privacy practices described in lengthy and complex documents. However, existing
efforts that use NLU technologies are limited by processing the language in a
way exclusive to a single task focusing on certain privacy practices. To this
end, we introduce the Privacy Policy Language Understanding Evaluation (PLUE)
benchmark, a multi-task benchmark for evaluating the privacy policy language
understanding across various tasks. We also collect a large corpus of privacy
policies to enable privacy policy domain-specific language model pre-training.
We evaluate several generic pre-trained language models and continue
pre-training them on the collected corpus. We demonstrate that domain-specific
continual pre-training offers performance improvements across all tasks.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 05:58:32 GMT"
},
{
"version": "v2",
"created": "Fri, 12 May 2023 07:38:29 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Chi",
"Jianfeng",
""
],
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Tian",
"Yuan",
""
],
[
"Chang",
"Kai-Wei",
""
]
] |
new_dataset
| 0.999505 |
2301.08305
|
Benjamin Steel
|
Benjamin Steel, Sara Parker and Derek Ruths
|
The Invasion of Ukraine Viewed through TikTok: A Dataset
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
We present a dataset of video descriptions, comments, and user statistics,
from the social media platform TikTok, centred around the invasion of Ukraine
in 2022, an event that launched TikTok into the geopolitical arena. User
activity on the platform around the invasion exposed myriad political
behaviours and dynamics previously unexplored on this platform. To this end, we
provide a mass-scale language and interaction dataset for further research into
these processes. In this paper we conduct an initial investigation of language
and social interaction dynamics, alongside an evaluation of bot detection on
the platform. We have open-sourced the dataset and the library used to collect
it to the public.
|
[
{
"version": "v1",
"created": "Thu, 19 Jan 2023 20:32:21 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 20:04:37 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Steel",
"Benjamin",
""
],
[
"Parker",
"Sara",
""
],
[
"Ruths",
"Derek",
""
]
] |
new_dataset
| 0.999822 |
2304.05379
|
B.Sundar Rajan
|
Sai Pavan Deekshitula and B. Sundar Rajan
|
Design and Analysis of Index codes for 3-Group NOMA in Vehicular Adhoc
Networks
|
16 pages and 3 figures. One figure added and presentation improved
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Index coding (IC) is a source coding technique employed to improve spectral
utilisation, where the source node aims to satisfy users' demands by making
minimum transmissions. Non-orthogonal multiple access (NOMA) is integral to the
radio access technique used in 5G networks. Index-coded NOMA (IC-NOMA)
transmission scheme in Vehicular Adhoc Networks (VANETs) involves applying NOMA
principles on index-coded data to avoid network congestion and to improve
spectral efficiency compared to conventional IC systems. In this work, a
spectrally efficient transmission scheme called 3-Group IC-NOMA is proposed, and
an innovative index code design that fits with NOMA decoding principles to
obtain improved spectral efficiency is developed. Through exhaustive analytical
studies, we demonstrate that the proposed transmission scheme always supports
higher rates than the conventional IC systems and requires less power to
achieve an information rate at least as good as conventional IC systems.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 17:45:42 GMT"
},
{
"version": "v2",
"created": "Fri, 12 May 2023 12:44:28 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Deekshitula",
"Sai Pavan",
""
],
[
"Rajan",
"B. Sundar",
""
]
] |
new_dataset
| 0.971029 |
2304.09015
|
Jianhao Chen
|
Jianhao Chen, Junyang Ren, Wentao Ding, Yuzhong Qu
|
PaTeCon: A Pattern-Based Temporal Constraint Mining Method for Conflict
Detection on Knowledge Graphs
|
Accepted by AAAI23
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal facts, the facts for characterizing events that hold in specific
time periods, are attracting rising attention in the knowledge graph (KG)
research communities. In terms of quality management, the introduction of time
restrictions brings new challenges to maintaining the temporal consistency of
KGs and detecting potential temporal conflicts. Previous studies rely on
manually enumerated temporal constraints to detect conflicts, which are
labor-intensive and may have granularity issues. We start from the common
pattern of temporal facts and constraints and propose a pattern-based temporal
constraint mining method, PaTeCon. PaTeCon uses automatically determined graph
patterns and their relevant statistical information over the given KG instead
of human experts to generate time constraints. Specifically, PaTeCon
dynamically attaches class restrictions to candidate constraints according to
their measuring scores. We evaluate PaTeCon on two large-scale datasets based on
Wikidata and Freebase respectively. The experimental results show that
pattern-based automatic constraint mining is powerful in generating valuable
temporal constraints.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 14:28:35 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Apr 2023 13:00:26 GMT"
},
{
"version": "v3",
"created": "Fri, 12 May 2023 14:48:00 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Chen",
"Jianhao",
""
],
[
"Ren",
"Junyang",
""
],
[
"Ding",
"Wentao",
""
],
[
"Qu",
"Yuzhong",
""
]
] |
new_dataset
| 0.997555 |
2305.03232
|
Kobe Knowles
|
Kobe Knowles, Joshua Bensemann, Diana Benavides-Prado, Vithya
Yogarajan, Michael Witbrock, Gillian Dobbie and Yang Chen
|
Neuromodulation Gated Transformer
|
8 pages, 1 figure, 4 tables, ICLR 2023 Tiny Papers
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a novel architecture, the Neuromodulation Gated Transformer
(NGT), which is a simple implementation of neuromodulation in transformers via
a multiplicative effect. We compare it to baselines and show that it results in
the best average performance on the SuperGLUE benchmark validation sets.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 01:23:22 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 21:28:30 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Knowles",
"Kobe",
""
],
[
"Bensemann",
"Joshua",
""
],
[
"Benavides-Prado",
"Diana",
""
],
[
"Yogarajan",
"Vithya",
""
],
[
"Witbrock",
"Michael",
""
],
[
"Dobbie",
"Gillian",
""
],
[
"Chen",
"Yang",
""
]
] |
new_dataset
| 0.987134 |
2305.06472
|
Huan Wang
|
Yan-Fu Li, Huan Wang, Muxia Sun
|
ChatGPT-Like Large-Scale Foundation Models for Prognostics and Health
Management: A Survey and Roadmaps
|
55 pages, 10 figures
| null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prognostics and health management (PHM) technology plays a critical role in
industrial production and equipment maintenance by identifying and predicting
possible equipment failures and damages, thereby allowing necessary maintenance
measures to be taken to enhance equipment service life and reliability while
reducing production costs and downtime. In recent years, PHM technology based
on artificial intelligence (AI) has made remarkable achievements in the context
of the industrial IoT and big data, and it is widely used in various
industries, such as railway, energy, and aviation, for condition monitoring,
fault prediction, and health management. The emergence of large-scale
foundation models (LSF-Models) such as ChatGPT and DALL-E marks the entry of
AI into a new era of AI-2.0 from AI-1.0, where deep models have rapidly evolved
from a research paradigm of single-modal, single-task, and limited-data to a
multi-modal, multi-task, massive data, and super-large model paradigm. ChatGPT
represents a landmark achievement in this research paradigm, offering hope for
general artificial intelligence due to its highly intelligent natural language
understanding ability. However, the PHM field lacks a consensus on how to
respond to this significant change in the AI field, and a systematic review and
roadmap is required to elucidate future development directions. To fill this
gap, this paper systematically expounds on the key components and latest
developments of LSF-Models. Then, we systematically answer how to build the
LSF-Model applicable to PHM tasks and outlined the challenges and future
development roadmaps for this research paradigm.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 21:37:44 GMT"
},
{
"version": "v2",
"created": "Fri, 12 May 2023 10:41:35 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Li",
"Yan-Fu",
""
],
[
"Wang",
"Huan",
""
],
[
"Sun",
"Muxia",
""
]
] |
new_dataset
| 0.997216 |
2305.07019
|
Yantao Shen
|
Zhaoyang Zhang, Yantao Shen, Kunyu Shi, Zhaowei Cai, Jun Fang, Siqi
Deng, Hao Yang, Davide Modolo, Zhuowen Tu, Stefano Soatto
|
Musketeer (All for One, and One for All): A Generalist Vision-Language
Model with Task Explanation Prompts
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a sequence-to-sequence vision-language model whose parameters are
jointly trained on all tasks (all for one) and fully shared among multiple
tasks (one for all), resulting in a single model which we named Musketeer. The
integration of knowledge across heterogeneous tasks is enabled by a novel
feature called Task Explanation Prompt (TEP). TEP reduces interference among
tasks, allowing the model to focus on their shared structure. With a single
model, Musketeer achieves results comparable to or better than strong baselines
trained on single tasks, almost uniformly across multiple tasks.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 17:57:49 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Zhang",
"Zhaoyang",
""
],
[
"Shen",
"Yantao",
""
],
[
"Shi",
"Kunyu",
""
],
[
"Cai",
"Zhaowei",
""
],
[
"Fang",
"Jun",
""
],
[
"Deng",
"Siqi",
""
],
[
"Yang",
"Hao",
""
],
[
"Modolo",
"Davide",
""
],
[
"Tu",
"Zhuowen",
""
],
[
"Soatto",
"Stefano",
""
]
] |
new_dataset
| 0.998042 |
2305.07035
|
Pavel Naumov
|
Pavel Naumov, Oliver Orejola
|
Shhh! The Logic of Clandestine Operations
|
32nd International Joint Conference on Artificial Intelligence
(IJCAI-23)
| null | null | null |
cs.LO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
An operation is called covert if it conceals the identity of the actor; it is
called clandestine if the very fact that the operation is conducted is
concealed. The paper proposes a formal semantics of clandestine operations and
introduces a sound and complete logical system that describes the interplay
between the distributed knowledge modality and a modality capturing coalition
power to conduct clandestine operations.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 22:15:58 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Naumov",
"Pavel",
""
],
[
"Orejola",
"Oliver",
""
]
] |
new_dataset
| 0.998682 |
2305.07079
|
Farhad Shirani Chaharsooghi
|
Mahshad Shariatnasab, Farhad Shirani, S. Sitharma Iyengar
|
The Privacy-Utility Tradeoff in Rank-Preserving Dataset Obfuscation
| null | null | null | null |
cs.IT cs.CR cs.DB math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dataset obfuscation refers to techniques in which random noise is added to
the entries of a given dataset, prior to its public release, to protect against
leakage of private information. In this work, dataset obfuscation under two
objectives is considered: i) rank-preservation: to preserve the row ordering in
the obfuscated dataset induced by a given rank function, and ii) anonymity: to
protect user anonymity under fingerprinting attacks. The first objective,
rank-preservation, is of interest in applications such as the design of search
engines and recommendation systems, feature matching, and social network
analysis. Fingerprinting attacks, considered in evaluating the anonymity
objective, are privacy attacks where an attacker constructs a fingerprint of a
victim based on its observed activities, such as online web activities, and
compares this fingerprint with information extracted from a publicly released
obfuscated dataset to identify the victim. By evaluating the performance limits
of a class of obfuscation mechanisms over asymptotically large datasets, a
fundamental trade-off is quantified between rank-preservation and user
anonymity. Single-letter obfuscation mechanisms are considered, where each
entry in the dataset is perturbed by independent noise, and their fundamental
performance limits are characterized by leveraging large deviation techniques.
The optimal obfuscating test-channel, optimizing the privacy-utility tradeoff,
is characterized in the form of a convex optimization problem which can be
solved efficiently. Numerical simulations of various scenarios are provided to
verify the theoretical derivations.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 18:26:38 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Shariatnasab",
"Mahshad",
""
],
[
"Shirani",
"Farhad",
""
],
[
"Iyengar",
"S. Sitharma",
""
]
] |
new_dataset
| 0.993811 |
2305.07233
|
Andrzej Szalas
|
Patrick Doherty and Andrzej Szalas
|
Dual Forgetting Operators in the Context of Weakest Sufficient and
Strongest Necessary Conditions
| null | null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Forgetting is an important concept in knowledge representation and automated
reasoning with widespread applications across a number of disciplines. A
standard forgetting operator, characterized in [Lin and Reiter'94] in terms of
model-theoretic semantics and primarily focusing on the propositional case,
opened up a new research subarea. In this paper, a new operator called weak
forgetting, dual to standard forgetting, is introduced and both together are
shown to offer a new more uniform perspective on forgetting operators in
general. Both the weak and standard forgetting operators are characterized in
terms of entailment and inference, rather than a model theoretic semantics.
This naturally leads to a useful algorithmic perspective based on quantifier
elimination and the use of Ackermann's Lemma and its fixpoint generalization.
The strong formal relationship between standard forgetting and strongest
necessary conditions and weak forgetting and weakest sufficient conditions is
also characterized quite naturally through the entailment-based, inferential
perspective used. The framework used to characterize the dual forgetting
operators is also generalized to the first-order case and includes useful
algorithms for computing first-order forgetting operators in special cases.
Practical examples are also included to show the importance of both weak and
standard forgetting in modeling and representation.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 04:01:21 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Doherty",
"Patrick",
""
],
[
"Szalas",
"Andrzej",
""
]
] |
new_dataset
| 0.992412 |
2305.07257
|
Arman Bolatov
|
Aknur Karabay, Arman Bolatov, Huseyin Atakan Varol, and Mei-Yen Chan
|
A Central Asian Food Dataset for Personalized Dietary Interventions,
Extended Abstract
|
3 pages, 2 figures, 5 tables
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, it is common for people to take photographs of every beverage,
snack, or meal they eat and then post these photographs on social media
platforms. Leveraging these social trends, real-time food recognition and
reliable classification of these captured food images can potentially help
replace some of the tedious recording and coding of food diaries to enable
personalized dietary interventions. Although Central Asian cuisine is
culturally and historically distinct, there has been little published data on
the food and dietary habits of people in this region. To fill this gap, we aim
to create a reliable dataset of regional foods that is easily accessible to
both public consumers and researchers. To the best of our knowledge, this is
the first work on creating a Central Asian Food Dataset (CAFD). The final
dataset contains 42 food categories and over 16,000 images of national dishes
unique to this region. We achieved a classification accuracy of 88.70\% (42
classes) on the CAFD using the ResNet152 neural network model. The food
recognition models trained on the CAFD demonstrate computer vision's
effectiveness and high accuracy for dietary assessment.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 05:26:55 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Karabay",
"Aknur",
""
],
[
"Bolatov",
"Arman",
""
],
[
"Varol",
"Huseyin Atakan",
""
],
[
"Chan",
"Mei-Yen",
""
]
] |
new_dataset
| 0.999848 |
2305.07288
|
Sunjun Kweon
|
Sunjun Kweon, Yeonsu Kwon, Seonhee Cho, Yohan Jo, Edward Choi
|
Open-WikiTable: Dataset for Open Domain Question Answering with Complex
Reasoning over Table
|
ACL 2023 (Findings)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Despite recent interest in open domain question answering (ODQA) over tables,
many studies still rely on datasets that are not truly optimal for the task
with respect to utilizing structural nature of table. These datasets assume
answers reside as a single cell value and do not necessitate exploring over
multiple cells such as aggregation, comparison, and sorting. Thus, we release
Open-WikiTable, the first ODQA dataset that requires complex reasoning over
tables. Open-WikiTable is built upon WikiSQL and WikiTableQuestions to be
applicable in the open-domain setting. As each question is coupled with both
textual answers and SQL queries, Open-WikiTable opens up a wide range of
possibilities for future research, as both reader and parser methods can be
applied. The dataset and code are publicly available.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 07:24:16 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Kweon",
"Sunjun",
""
],
[
"Kwon",
"Yeonsu",
""
],
[
"Cho",
"Seonhee",
""
],
[
"Jo",
"Yohan",
""
],
[
"Choi",
"Edward",
""
]
] |
new_dataset
| 0.995793 |
2305.07325
|
Mattia Sinigaglia
|
Mattia Sinigaglia, Luca Bertaccini, Luca Valente, Angelo Garofalo,
Simone Benatti, Luca Benini, Francesco Conti, and Davide Rossi
|
Echoes: a 200 GOPS/W Frequency Domain SoC with FFT Processor and I2S DSP
for Flexible Data Acquisition from Microphone Arrays
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emerging applications in the IoT domain require ultra-low-power and
high-performance end-nodes to deal with complex near-sensor-data analytics.
Domains such as audio, radar, and Structural Health Monitoring require many
computations to be performed in the frequency domain rather than in the time
domain. We present ECHOES, a System-On-a-Chip (SoC) composed of a RISC-V core
enhanced with fixed and floating-point digital signal processing (DSP)
extensions and a Fast-Fourier Transform (FFT) hardware accelerator targeting
emerging frequency-domain application. The proposed SoC features an autonomous
I/O engine supporting a wide set of peripherals, including Ultra-Low-Power
radars, MEMS, and digital microphones over I2S protocol with full-duplex Time
Division Multiplexing DSP mode, making ECHOES the first open-source SoC which
offers this functionality enabling simultaneous communication with up to 16
I/O devices. ECHOES, fabricated with 65nm CMOS technology, reaches a peak
performance of 0.16 GFLOPS and a peak energy efficiency of 9.68 GFLOPS/W on a
wide range of floating and fixed-point general-purpose DSP kernels. The FFT
accelerator achieves performance up to 10.16 GOPS with an efficiency of 199.8
GOPS/W, improving performance and efficiency by up to 41.1x and 11.2x,
respectively, over its software implementation of this critical task for
frequency domain processing.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 08:59:43 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Sinigaglia",
"Mattia",
""
],
[
"Bertaccini",
"Luca",
""
],
[
"Valente",
"Luca",
""
],
[
"Garofalo",
"Angelo",
""
],
[
"Benatti",
"Simone",
""
],
[
"Benini",
"Luca",
""
],
[
"Conti",
"Francesco",
""
],
[
"Rossi",
"Davide",
""
]
] |
new_dataset
| 0.995029 |
2305.07340
|
Xiaoming Shi
|
Jie Xu, Lu Lu, Sen Yang, Bilin Liang, Xinwei Peng, Jiali Pang, Jinru
Ding, Xiaoming Shi, Lingrui Yang, Huan Song, Kang Li, Xin Sun, Shaoting Zhang
|
MedGPTEval: A Dataset and Benchmark to Evaluate Responses of Large
Language Models in Medicine
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
METHODS: First, a set of evaluation criteria is designed based on a
comprehensive literature review. Second, existing candidate criteria are
optimized using a Delphi method by five experts in medicine and
engineering. Third, three clinical experts design a set of medical datasets to
interact with LLMs. Finally, benchmarking experiments are conducted on the
datasets. The responses generated by chatbots based on LLMs are recorded for
blind evaluations by five licensed medical experts. RESULTS: The obtained
evaluation criteria cover medical professional capabilities, social
comprehensive capabilities, contextual capabilities, and computational
robustness, with sixteen detailed indicators. The medical datasets include
twenty-seven medical dialogues and seven case reports in Chinese. Three
chatbots are evaluated, ChatGPT by OpenAI, ERNIE Bot by Baidu Inc., and Doctor
PuJiang (Dr. PJ) by Shanghai Artificial Intelligence Laboratory. Experimental
results show that Dr. PJ outperforms ChatGPT and ERNIE Bot in both
multiple-turn medical dialogue and case report scenarios.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 09:37:13 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Xu",
"Jie",
""
],
[
"Lu",
"Lu",
""
],
[
"Yang",
"Sen",
""
],
[
"Liang",
"Bilin",
""
],
[
"Peng",
"Xinwei",
""
],
[
"Pang",
"Jiali",
""
],
[
"Ding",
"Jinru",
""
],
[
"Shi",
"Xiaoming",
""
],
[
"Yang",
"Lingrui",
""
],
[
"Song",
"Huan",
""
],
[
"Li",
"Kang",
""
],
[
"Sun",
"Xin",
""
],
[
"Zhang",
"Shaoting",
""
]
] |
new_dataset
| 0.999721 |
2305.07489
|
Roman Solovyev A
|
Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva
|
Benchmarks and leaderboards for sound demixing tasks
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Music demixing is the task of separating different tracks from the given
single audio signal into components, such as drums, bass, and vocals from the
rest of the accompaniment. Separation of sources is useful for a range of
areas, including entertainment and hearing aids. In this paper, we introduce
two new benchmarks for the sound source separation tasks and compare popular
models for sound demixing, as well as their ensembles, on these benchmarks. For
the models' assessments, we provide the leaderboard at
https://mvsep.com/quality_checker/, giving a comparison for a range of models.
The new benchmark datasets are available for download. We also develop a novel
approach for audio separation, based on the ensembling of different models that
are suited best for the particular stem. The proposed solution was evaluated in
the context of the Music Demixing Challenge 2023 and achieved top results in
different tracks of the challenge. The code and the approach are open-sourced
on GitHub.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 14:00:26 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Solovyev",
"Roman",
""
],
[
"Stempkovskiy",
"Alexander",
""
],
[
"Habruseva",
"Tatiana",
""
]
] |
new_dataset
| 0.999506 |
2305.07528
|
Aboli Marathe
|
Aboli Marathe, Deva Ramanan, Rahee Walambe, Ketan Kotecha
|
WEDGE: A multi-weather autonomous driving dataset built from generative
vision-language models
|
Accepted in Vision Datasets Understanding at CVPR 2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The open road poses many challenges to autonomous perception, including poor
visibility from extreme weather conditions. Models trained on good-weather
datasets frequently fail at detection in these out-of-distribution settings. To
aid adversarial robustness in perception, we introduce WEDGE (WEather images by
DALL-E GEneration): a synthetic dataset generated with a vision-language
generative model via prompting. WEDGE consists of 3360 images in 16 extreme
weather conditions manually annotated with 16513 bounding boxes, supporting
research in the tasks of weather classification and 2D object detection. We
have analyzed WEDGE from research standpoints, verifying its effectiveness for
extreme-weather autonomous perception. We establish baseline performance for
classification and detection with 53.87% test accuracy and 45.41 mAP. Most
importantly, WEDGE can be used to fine-tune state-of-the-art detectors,
improving SOTA performance on real-world weather benchmarks (such as DAWN) by
4.48 AP for well-generated classes like trucks. WEDGE has been collected under
OpenAI's terms of use and is released for public use under the CC BY-NC-SA 4.0
license. The repository for this work and dataset is available at
https://infernolia.github.io/WEDGE.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 14:42:47 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Marathe",
"Aboli",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Walambe",
"Rahee",
""
],
[
"Kotecha",
"Ketan",
""
]
] |
new_dataset
| 0.999856 |
2305.07545
|
Ripon Patgiri
|
Sabuzima Nayak and Ripon Patgiri
|
KmerCo: A lightweight K-mer counting technique with a tiny memory
footprint
|
Submitted to the conference for possible publication
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
K-mer counting is a requisite process for DNA assembly because it speeds up
its overall process. The frequency of K-mers is used for estimating the
parameters of DNA assembly, error correction, etc. The process also provides a
list of distinct K-mers which assist in searching large databases and reducing
the size of de Bruijn graphs. Nonetheless, K-mer counting is a data and
compute-intensive process. Hence, it is crucial to implement a lightweight data
structure that occupies low memory but processes K-mers quickly. We
proposed a lightweight K-mer counting technique, called KmerCo that implements
a potent counting Bloom Filter variant, called countBF. KmerCo has two phases:
insertion and classification. The insertion phase inserts all K-mers into
countBF and determines distinct K-mers. The classification phase is responsible
for the classification of distinct K-mers into trustworthy and erroneous K-mers
based on a user-provided threshold value. We also proposed a novel benchmark
performance metric. We used the Hadoop MapReduce program to determine the
frequency of K-mers. We have conducted rigorous experiments to prove the
dominion of KmerCo compared to state-of-the-art K-mer counting techniques. The
experiments are conducted using DNA sequences of four organisms. The datasets
are pruned to generate four different size datasets. KmerCo is compared with
Squeakr, BFCounter, and Jellyfish. KmerCo took the lowest memory, highest
number of insertions per second, and a positive trustworthy rate as compared
with the three above-mentioned methods.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 10:14:01 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Nayak",
"Sabuzima",
""
],
[
"Patgiri",
"Ripon",
""
]
] |
new_dataset
| 0.988147 |
2305.07552
|
Ganesh Bagler Prof
|
Mansi Goel, Shashank Dargar, Shounak Ghatak, Nidhi Verma, Pratik
Chauhan, Anushka Gupta, Nikhila Vishnumolakala, Hareesh Amuru, Ekta Gambhir,
Ronak Chhajed, Meenal Jain, Astha Jain, Samiksha Garg, Nitesh Narwade,
Nikhilesh Verhwani, Abhuday Tiwari, Kirti Vashishtha and Ganesh Bagler
|
Dish detection in food platters: A framework for automated diet logging
and nutrition management
|
11 pages, 5 figures, 5 tables. Submitted to the 8th International
Conference on Computer Vision & Image Processing (CVIP-2023)
| null | null | null |
cs.CV cs.AI cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Diet is central to the epidemic of lifestyle disorders. Accurate and
effortless diet logging is one of the significant bottlenecks for effective
diet management and calorie restriction. Dish detection from food platters is a
challenging problem due to a visually complex food layout. We present an
end-to-end computational framework for diet management, from data compilation,
annotation, and state-of-the-art model identification to its mobile app
implementation. As a case study, we implement the framework in the context of
Indian food platters known for their complex presentation that poses a
challenge for the automated detection of dishes. Starting with the 61 most
popular Indian dishes, we identify the state-of-the-art model through a
comparative analysis of deep-learning-based object detection architectures.
Rooted in a meticulous compilation of 68,005 platter images with 134,814 manual
dish annotations, we first compare ten architectures for multi-label
classification to identify ResNet152 (mAP=84.51%) as the best model. YOLOv8x
(mAP=87.70%) emerged as the best model architecture for dish detection among
the eight deep-learning models implemented after a thorough performance
evaluation. By comparing with the state-of-the-art model for the IndianFood10
dataset, we demonstrate the superior object detection performance of YOLOv8x
for this subset and establish Resnet152 as the best architecture for
multi-label classification. The models thus trained on richly annotated data
can be extended to include dishes from across global cuisines. The proposed
framework is demonstrated through a proof-of-concept mobile application with
diverse applications for diet logging, food recommendation systems, nutritional
interventions, and mitigation of lifestyle disorders.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 15:25:58 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Goel",
"Mansi",
""
],
[
"Dargar",
"Shashank",
""
],
[
"Ghatak",
"Shounak",
""
],
[
"Verma",
"Nidhi",
""
],
[
"Chauhan",
"Pratik",
""
],
[
"Gupta",
"Anushka",
""
],
[
"Vishnumolakala",
"Nikhila",
""
],
[
"Amuru",
"Hareesh",
""
],
[
"Gambhir",
"Ekta",
""
],
[
"Chhajed",
"Ronak",
""
],
[
"Jain",
"Meenal",
""
],
[
"Jain",
"Astha",
""
],
[
"Garg",
"Samiksha",
""
],
[
"Narwade",
"Nitesh",
""
],
[
"Verhwani",
"Nikhilesh",
""
],
[
"Tiwari",
"Abhuday",
""
],
[
"Vashishtha",
"Kirti",
""
],
[
"Bagler",
"Ganesh",
""
]
] |
new_dataset
| 0.990856 |
2305.07570
|
Martin Skrodzki
|
Henriette Lipsch\"utz, Ulrich Reitebuch, Konrad Polthier, and Martin
Skrodzki
|
Isotropic Point Cloud Meshing using unit Spheres (IPCMS)
| null | null | null | null |
cs.CG cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Point clouds arise from acquisition processes applied in various scenarios,
such as reverse engineering, rapid prototyping, or cultural preservation. To
run various simulations via, e.g., finite element methods, on the derived data,
a mesh has to be created from it. In this paper, a meshing algorithm for point
clouds is presented, which is based on a sphere covering of the underlying
surface. The algorithm provides a mesh close to uniformity in terms of edge
lengths and angles of its triangles. Additionally, theoretical results
guarantee the output to be manifold, given suitable input and parameter
choices. We present both the underlying theory, which provides suitable
parameter bounds, as well as experiments showing that our algorithm can compete
with widely used competitors in terms of quality of the output and timings.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 15:57:28 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Lipschütz",
"Henriette",
""
],
[
"Reitebuch",
"Ulrich",
""
],
[
"Polthier",
"Konrad",
""
],
[
"Skrodzki",
"Martin",
""
]
] |
new_dataset
| 0.986094 |
2305.07584
|
Biqian Feng
|
Biqian Feng, Chenyuan Feng, Daquan Feng, Yongpeng Wu, Xiang-Gen Xia
|
Proactive Content Caching Scheme in Urban Vehicular Networks
|
Accepted by IEEE Transactions on Communications
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Streaming media content caching is a key enabling technology to promote the
value chain of future urban vehicular networks. Nevertheless, the high mobility
of vehicles, intermittency of information transmissions, high dynamics of user
requests, limited caching capacities and extreme complexity of business
scenarios pose an enormous challenge to content caching and distribution in
vehicular networks. To tackle this problem, this paper aims to design a novel
edge-computing-enabled hierarchical cooperative caching framework. Firstly, we
profoundly analyze the spatio-temporal correlation between the historical
vehicle trajectory of user requests and construct the system model to predict
the vehicle trajectory and content popularity, which lays a foundation for
mobility-aware content caching and dispatching. Meanwhile, we probe into
privacy-protection strategies to realize a privacy-preserving prediction model.
Furthermore, based on trajectory and popular content prediction results,
content caching strategy is studied, and adaptive and dynamic resource
management schemes are proposed for hierarchical cooperative caching networks.
Finally, simulations are provided to verify the superiority of our proposed
scheme and algorithms. It shows that the proposed algorithms effectively
improve the performance of the considered system in terms of hit ratio and
average delay, and narrow the gap to the optimal caching scheme compared with
the traditional schemes.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 16:27:30 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Feng",
"Biqian",
""
],
[
"Feng",
"Chenyuan",
""
],
[
"Feng",
"Daquan",
""
],
[
"Wu",
"Yongpeng",
""
],
[
"Xia",
"Xiang-Gen",
""
]
] |
new_dataset
| 0.998187 |
2305.07602
|
Steven Grosz Mr.
|
Steven A. Grosz, Kanishka P. Wijewardena, and Anil K. Jain
|
ViT Unified: Joint Fingerprint Recognition and Presentation Attack
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A secure fingerprint recognition system must contain both a presentation
attack (i.e., spoof) detection module and a recognition module in order to protect users
against unwanted access by malicious users. Traditionally, these tasks would be
carried out by two independent systems; however, recent studies have
demonstrated the potential to have one unified system architecture in order to
reduce the computational burdens on the system, while maintaining high
accuracy. In this work, we leverage a vision transformer architecture for joint
spoof detection and matching and report competitive results with
state-of-the-art (SOTA) models for both a sequential system (two ViT models
operating independently) and a unified architecture (a single ViT model for
both tasks). ViT models are particularly well suited for this task as the ViT's
global embedding encodes features useful for recognition, whereas the
individual, local embeddings are useful for spoof detection. We demonstrate the
capability of our unified model to achieve an average integrated matching (IM)
accuracy of 98.87% across LivDet 2013 and 2015 CrossMatch sensors. This is
comparable to the 98.95% IM accuracy of our sequential dual-ViT system, but with
~50% of the parameters and ~58% of the latency.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 16:51:14 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Grosz",
"Steven A.",
""
],
[
"Wijewardena",
"Kanishka P.",
""
],
[
"Jain",
"Anil K.",
""
]
] |
new_dataset
| 0.998912 |
2305.07608
|
Anirudha Paul
|
Anirudha Paul
|
Torrent Driven (TD) Coin: A Crypto Coin with Built In Distributed Data
Storage System
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, decentralized currencies developed through blockchains have
become increasingly popular because of their transparent nature and the absence
of a central controlling authority. Though a lot of computation power, disk
space, and energy are being used to run this system, most of these resources
are dedicated to just keeping the bad actors away by using Proof of Work, Proof
of Stake, Proof of Space, etc., consensus. In this paper, we discuss a way to
combine those consensus mechanisms and modify the defense system to create
actual values for the end-users by providing a solution for securely storing
their data in a decentralized manner without compromising the integrity of the
blockchain.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 16:54:24 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Paul",
"Anirudha",
""
]
] |
new_dataset
| 0.970644 |
2305.07614
|
Orion Weller
|
Orion Weller, Dawn Lawrie, Benjamin Van Durme
|
NevIR: Negation in Neural Information Retrieval
| null | null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Negation is a common everyday phenomenon and has been a consistent area of
weakness for language models (LMs). Although the Information Retrieval (IR)
community has adopted LMs as the backbone of modern IR architectures, there has
been little to no research in understanding how negation impacts neural IR. We
therefore construct a straightforward benchmark on this theme: asking IR models
to rank two documents that differ only by negation. We show that the results
vary widely according to the type of IR architecture: cross-encoders perform
best, followed by late-interaction models, and in last place are bi-encoder and
sparse neural architectures. We find that most current information retrieval
models do not consider negation, performing similarly to or worse than random
ranking. We show that although the obvious approach of continued fine-tuning on
a dataset of contrastive documents containing negations increases performance
(as does model size), there is still a large gap between machine and human
performance.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 17:05:54 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Weller",
"Orion",
""
],
[
"Lawrie",
"Dawn",
""
],
[
"Van Durme",
"Benjamin",
""
]
] |
new_dataset
| 0.972107 |
2305.07624
|
Ying Liu
|
Ying Liu, Liucheng Guo, Valeri A. Makarov, Yuxiang Huang, Alexander
Gorban, Evgeny Mirkes, Ivan Y. Tyukin
|
Agile gesture recognition for capacitive sensing devices: adapting
on-the-job
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Automated hand gesture recognition has been a focus of the AI community for
decades. Traditionally, work in this domain revolved largely around scenarios
assuming the availability of a flow of images of the user's hands. This has
partly been due to the prevalence of camera-based devices and the wide
availability of image data. However, there is growing demand for gesture
recognition technology that can be implemented on low-power devices using
limited sensor data instead of high-dimensional inputs like hand images. In
this work, we demonstrate a hand gesture recognition system and method that
uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five
fingers. We use a machine learning technique to analyse the time series signals
and identify three features that can represent 5 fingers within 500 ms. The
analysis is composed of a two-stage training strategy, including dimensionality
reduction through principal component analysis and classification with
K-nearest neighbours. Remarkably, we found that this combination showed a level of
performance which was comparable to more advanced methods such as supervised
variational autoencoder. The base system can also be equipped with the
capability to learn from occasional errors by providing it with an additional
adaptive error correction mechanism. The results showed that the error
corrector improves the classification performance of the base system without
compromising its performance. The system requires no more than 1 ms of
computing time per input sample, and is smaller than deep neural networks,
demonstrating the feasibility of agile gesture recognition systems based on
this technology.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 17:24:02 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Liu",
"Ying",
""
],
[
"Guo",
"Liucheng",
""
],
[
"Makarov",
"Valeri A.",
""
],
[
"Huang",
"Yuxiang",
""
],
[
"Gorban",
"Alexander",
""
],
[
"Mirkes",
"Evgeny",
""
],
[
"Tyukin",
"Ivan Y.",
""
]
] |
new_dataset
| 0.968923 |
2305.07625
|
Ondrej Bohdal
|
Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li,
Henry Gouk, Li Guo, Timothy Hospedales
|
Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn
|
Accepted at CVPR 2023. Project page:
https://edi-meta-learning.github.io/meta-omnium
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Meta-learning and other approaches to few-shot learning are widely studied
for image recognition, and are increasingly applied to other vision tasks such
as pose estimation and dense prediction. This naturally raises the question of
whether there is any few-shot meta-learning algorithm capable of generalizing
across these diverse task types. To support the community in answering this
question, we introduce Meta Omnium, a dataset-of-datasets spanning multiple
vision tasks including recognition, keypoint localization, semantic
segmentation and regression. We experiment with popular few-shot meta-learning
baselines and analyze their ability to generalize across tasks and to transfer
knowledge between them. Meta Omnium enables meta-learning researchers to
evaluate model generalization to a much wider array of tasks than previously
possible, and provides a single framework for evaluating meta-learners across a
wide suite of vision applications in a consistent manner.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 17:25:19 GMT"
}
] | 2023-05-15T00:00:00 |
[
[
"Bohdal",
"Ondrej",
""
],
[
"Tian",
"Yinbing",
""
],
[
"Zong",
"Yongshuo",
""
],
[
"Chavhan",
"Ruchika",
""
],
[
"Li",
"Da",
""
],
[
"Gouk",
"Henry",
""
],
[
"Guo",
"Li",
""
],
[
"Hospedales",
"Timothy",
""
]
] |
new_dataset
| 0.999496 |
1607.03243
|
Siamak Layeghy
|
Siamak Layeghy, Farzaneh Pakzad, Marius Portmann
|
SCOR: Software-defined Constrained Optimal Routing Platform for SDN
|
19 pages, 11 figures, 11 algorithms, 3 tables
|
Horizons in computer science research. Volume 22, 2022, ISBN
9798886971019
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Software-defined Constrained Optimal Routing (SCOR) platform is introduced
as a Northbound interface in SDN architecture. It is based on constraint
programming techniques and is implemented in MiniZinc modelling language. Using
constraint programming techniques in this Northbound interface has created an
efficient tool for implementing complex Quality of Service routing applications
in a few lines of code. The code includes only the problem statement and the
solution is found by a general solver program. A routing framework is
introduced based on SDN's architecture model which uses SCOR as its Northbound
interface and an upper layer of applications implemented in SCOR. Performance
of a few implemented routing applications is evaluated across different network
topologies, network sizes, and varying numbers of concurrent flows.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 07:07:52 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Layeghy",
"Siamak",
""
],
[
"Pakzad",
"Farzaneh",
""
],
[
"Portmann",
"Marius",
""
]
] |
new_dataset
| 0.999731 |
1812.01082
|
Zhiyu Sun
|
Zhiyu Sun, Ethan Rooke, Jerome Charton, Yusen He, Jia Lu and Stephen
Baek
|
ZerNet: Convolutional Neural Networks on Arbitrary Surfaces via Zernike
Local Tangent Space Estimation
| null | null |
10.1111/cgf.14012
| null |
cs.CV cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel formulation to extend CNNs to
two-dimensional (2D) manifolds using orthogonal basis functions, called Zernike
polynomials. In many areas, geometric features play a key role in understanding
scientific phenomena. Thus, an ability to codify geometric features into a
mathematical quantity can be critical. Recently, convolutional neural networks
(CNNs) have demonstrated the promising capability of extracting and codifying
features from visual information. However, the progress has been concentrated
in computer vision applications where there exists an inherent grid-like
structure. In contrast, many geometry processing problems are defined on curved
surfaces, and the generalization of CNNs is not quite trivial. The difficulties
are rooted in the lack of key ingredients such as the canonical grid-like
representation, the notion of consistent orientation, and a compatible local
topology across the domain. In this paper, we prove that the convolution of two
functions can be represented as a simple dot product between Zernike polynomial
coefficients; and the rotation of a convolution kernel is essentially a set of
2-by-2 rotation matrices applied to the coefficients. As such, the key
contribution of this work resides in a concise but rigorous mathematical
generalization of the CNN building blocks.
|
[
{
"version": "v1",
"created": "Mon, 3 Dec 2018 21:11:48 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Apr 2019 03:44:26 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Oct 2019 07:40:15 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Sun",
"Zhiyu",
""
],
[
"Rooke",
"Ethan",
""
],
[
"Charton",
"Jerome",
""
],
[
"He",
"Yusen",
""
],
[
"Lu",
"Jia",
""
],
[
"Baek",
"Stephen",
""
]
] |
new_dataset
| 0.998808 |
1901.05613
|
Sanjay Saha
|
Shahjalal Ahmed, Md. Rafiqul Islam, Jahid Hassan, Minhaz Uddin Ahmed,
Bilkis Jamal Ferdosi, Sanjay Saha and Md. Shopon
|
Hand Sign to Bangla Speech: A Deep Learning in Vision based system for
Recognizing Hand Sign Digits and Generating Bangla Speech
| null |
Proceedings of International Conference on Sustainable Computing
in Science, Technology and Management (SUSCOM), Amity University Rajasthan,
Jaipur - India, February 26-28, 2019
|
10.2139/ssrn.3358187
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advancements in the field of computer vision with the help of deep
neural networks have led us to explore and develop many existing challenges
that were once unattended due to the lack of necessary technologies. Hand
Sign/Gesture Recognition is one of the significant areas where the deep neural
network is making a substantial impact. In the last few years, a large amount
of research has been conducted to recognize hand signs and hand gestures,
which we aim to extend to our mother-tongue, Bangla (also known as Bengali).
The primary goal of our work is to make an automated tool to aid the people who
are unable to speak. We developed a system that automatically detects hand sign
based digits and speaks out the result in the Bangla language. According to the
report of the World Health Organization (WHO), 15% of people in the world live
with some kind of disability. Among them, individuals with communication
impairments such as speech disabilities experience a substantial barrier to social
interaction. The proposed system can be invaluable to mitigate such a barrier.
The core of the system is built with a deep learning model which is based on
convolutional neural networks (CNN). The model classifies hand sign based
digits with 92% accuracy on validation data, which makes it a highly
trustworthy system. Upon classification of the digits, the resulting output is
fed to the text to speech engine and the translator unit eventually which
generates audio output in the Bangla language. A web application to demonstrate our
tool is available at http://bit.ly/signdigits2banglaspeech.
|
[
{
"version": "v1",
"created": "Thu, 17 Jan 2019 04:27:34 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Ahmed",
"Shahjalal",
""
],
[
"Islam",
"Md. Rafiqul",
""
],
[
"Hassan",
"Jahid",
""
],
[
"Ahmed",
"Minhaz Uddin",
""
],
[
"Ferdosi",
"Bilkis Jamal",
""
],
[
"Saha",
"Sanjay",
""
],
[
"Shopon",
"Md.",
""
]
] |
new_dataset
| 0.995443 |
2011.01871
|
Xiang Li
|
Xiang Li
|
FASTCloud: A novel framework of assessment and selection for trustworthy
cloud service
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
By virtue of technology and benefit advantages, cloud computing has
increasingly attracted a large number of potential cloud consumers (PCCs) who
plan to migrate their traditional business to cloud services. However, trust has
become one of the most challenging issues that prevent the PCC from adopting
cloud services, especially in trustworthy cloud service selection. Besides, due
to the diversity and dynamics of quality of service (QoS) in the cloud
environment, the existing trust assessment methods based on the single constant
value of QoS attribute and the subjective weight assignment are not good enough
to provide an effective solution for PCCs to identify and select a trustworthy
cloud service among a wide range of functionally-equivalent cloud service
providers (CSPs). To address the challenge, a novel assessment and selection
framework for trustworthy cloud service, FASTCloud, is proposed in this study.
This framework facilitates PCCs to select a trustworthy cloud service based on
their actual QoS requirements. In order to accurately and efficiently assess
the trust level of cloud services, a QoS-based trust assessment model is
proposed. This model represents a trust level assessment method based on
interval multiple attributes, with an objective weight assignment method based
on deviation maximization to adaptively determine the trust level of
different cloud services provisioned by candidate CSPs. The advantage of the
proposed trust level assessment method in time complexity is demonstrated by
the performance analysis and comparison. The experimental result of a case
study with an open-source dataset shows that the trust model is efficient in
cloud service trust assessment and the FASTCloud can effectively help PCCs
select a trustworthy cloud service.
|
[
{
"version": "v1",
"created": "Mon, 2 Nov 2020 01:18:05 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jan 2021 07:49:30 GMT"
},
{
"version": "v3",
"created": "Thu, 11 May 2023 08:34:41 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Li",
"Xiang",
""
]
] |
new_dataset
| 0.996943 |
2201.11115
|
Jan Drchal
|
Herbert Ullrich, Jan Drchal, Martin R\'ypar, Hana Vincourov\'a,
V\'aclav Moravec
|
CsFEVER and CTKFacts: Acquiring Czech data for fact verification
|
submitted to LREV journal for review, resubmission, changed title
according to reviewer suggestion
| null |
10.1007/s10579-023-09654-3
| null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we examine several methods of acquiring Czech data for
automated fact-checking, which is a task commonly modeled as a classification
of textual claim veracity w.r.t. a corpus of trusted ground truths. We attempt
to collect sets of data in the form of a factual claim, evidence within the ground
truth corpus, and its veracity label (supported, refuted or not enough info).
As a first attempt, we generate a Czech version of the large-scale FEVER
dataset built on top of Wikipedia corpus. We take a hybrid approach of machine
translation and document alignment; the approach and the tools we provide can
be easily applied to other languages. We discuss its weaknesses and
inaccuracies, propose a future approach for their cleaning and publish the 127k
resulting translations, as well as a version of such dataset reliably
applicable for the Natural Language Inference task - the CsFEVER-NLI.
Furthermore, we collect a novel dataset of 3,097 claims, which is annotated
using the corpus of 2.2M articles of Czech News Agency. We present its extended
annotation methodology based on the FEVER approach, and, as the underlying
corpus is kept a trade secret, we also publish a standalone version of the
dataset for the task of Natural Language Inference we call CTKFactsNLI. We
analyze both acquired datasets for spurious cues - annotation patterns leading
to model overfitting. CTKFacts is further examined for inter-annotator
agreement, thoroughly cleaned, and a typology of common annotator errors is
extracted. Finally, we provide baseline models for all stages of the
fact-checking pipeline and publish the NLI datasets, as well as our annotation
platform and other experimental data.
|
[
{
"version": "v1",
"created": "Wed, 26 Jan 2022 18:48:42 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jan 2022 19:49:12 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Oct 2022 10:00:15 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Ullrich",
"Herbert",
""
],
[
"Drchal",
"Jan",
""
],
[
"Rýpar",
"Martin",
""
],
[
"Vincourová",
"Hana",
""
],
[
"Moravec",
"Václav",
""
]
] |
new_dataset
| 0.999187 |
2203.06355
|
Yingjie Chen
|
Yingjie Chen, Jiarui Zhang, Tao Wang, and Yun Liang
|
EventFormer: AU Event Transformer for Facial Action Unit Event Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial action units (AUs) play an indispensable role in human emotion
analysis. We observe that although AU-based high-level emotion analysis is
urgently needed by real-world applications, frame-level AU results provided by
previous works cannot be directly used for such analysis. Moreover, as AUs are
dynamic processes, the utilization of global temporal information is important
but has been gravely ignored in the literature. To this end, we propose
EventFormer for AU event detection, which is the first work directly detecting
AU events from a video sequence by viewing AU event detection as a multiple
class-specific sets prediction problem. Extensive experiments conducted on a
commonly used AU benchmark dataset, BP4D, show the superiority of EventFormer
under suitable metrics.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 06:19:22 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 04:19:51 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Chen",
"Yingjie",
""
],
[
"Zhang",
"Jiarui",
""
],
[
"Wang",
"Tao",
""
],
[
"Liang",
"Yun",
""
]
] |
new_dataset
| 0.98759 |
2205.12029
|
Souhail Bakkali
|
Souhail Bakkali, Zuheng Ming, Mickael Coustaty, Mar\c{c}al Rusi\~nol,
Oriol Ramos Terrades
|
VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal
Document Classification
|
Accepted at PR
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal learning from document data has achieved great success lately as
it allows to pre-train semantically meaningful features as a prior into a
learnable downstream task. In this paper, we approach the document
classification problem by learning cross-modal representations through language
and vision cues, considering intra- and inter-modality relationships. Instead
of merging features from different modalities into a joint representation
space, the proposed method exploits high-level interactions and learns relevant
semantic information from effective attention flows within and across
modalities. The proposed learning objective is devised between intra- and
inter-modality alignment tasks, where the similarity distribution per task is
computed by contracting positive sample pairs while simultaneously contrasting
negative ones in the joint representation space. Extensive experiments on
public document classification datasets demonstrate the effectiveness and the
generality of our model on low-scale and large-scale datasets.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 12:28:12 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jul 2022 14:33:37 GMT"
},
{
"version": "v3",
"created": "Thu, 11 May 2023 15:31:06 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Bakkali",
"Souhail",
""
],
[
"Ming",
"Zuheng",
""
],
[
"Coustaty",
"Mickael",
""
],
[
"Rusiñol",
"Marçal",
""
],
[
"Terrades",
"Oriol Ramos",
""
]
] |
new_dataset
| 0.985755 |
2209.10507
|
Vibhaalakshmi Sivaraman
|
Vibhaalakshmi Sivaraman, Pantea Karimi, Vedantha Venkatapathy, Mehrdad
Khani, Sadjad Fouladi, Mohammad Alizadeh, Fr\'edo Durand, Vivienne Sze
|
Gemino: Practical and Robust Neural Compression for Video Conferencing
|
13 pages, 5 appendix
| null | null | null |
cs.NI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Video conferencing systems suffer from poor user experience when network
conditions deteriorate because current video codecs simply cannot operate at
extremely low bitrates. Recently, several neural alternatives have been
proposed that reconstruct talking head videos at very low bitrates using sparse
representations of each frame such as facial landmark information. However,
these approaches produce poor reconstructions in scenarios with major movement
or occlusions over the course of a call, and do not scale to higher
resolutions. We design Gemino, a new neural compression system for video
conferencing based on a novel high-frequency-conditional super-resolution
pipeline. Gemino upsamples a very low-resolution version of each target frame
while enhancing high-frequency details (e.g., skin texture, hair, etc.) based
on information extracted from a single high-resolution reference image. We use
a multi-scale architecture that runs different components of the model at
different resolutions, allowing it to scale to resolutions comparable to 720p,
and we personalize the model to learn specific details of each person,
achieving much better fidelity at low bitrates. We implement Gemino atop
aiortc, an open-source Python implementation of WebRTC, and show that it
operates on 1024x1024 videos in real-time on a Titan X GPU, and achieves 2.2-5x
lower bitrate than traditional video codecs for the same perceptual quality.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 17:10:46 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Sep 2022 01:31:49 GMT"
},
{
"version": "v3",
"created": "Thu, 11 May 2023 14:24:47 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Sivaraman",
"Vibhaalakshmi",
""
],
[
"Karimi",
"Pantea",
""
],
[
"Venkatapathy",
"Vedantha",
""
],
[
"Khani",
"Mehrdad",
""
],
[
"Fouladi",
"Sadjad",
""
],
[
"Alizadeh",
"Mohammad",
""
],
[
"Durand",
"Frédo",
""
],
[
"Sze",
"Vivienne",
""
]
] |
new_dataset
| 0.980787 |
2209.12003
|
Jaap-Henk Hoepman
|
Jaap-Henk Hoepman
|
Mutual Contact Discovery
|
33 pages (including appendix)
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Messaging services allow new users to find existing contacts that already use
that service through a process called contact discovery. Existing users are
similarly informed of new users that are already on their contact list. This
creates a privacy issue: when you join and enable contact discovery, anyone
already on the service who has your number on their contact list gets notified
that you joined. Even if you don't know that person, or if it is an ex or
former colleague that you long parted with and whose contact details you
deleted long ago. To solve this, we propose a mutual contact discovery
protocol that only allows users to discover each other when both are (still) in
each other's contact list. Mutual contact discovery has the additional
advantage that it can be implemented in a more privacy-friendly fashion (e.g.
protecting the social graph from the server) than traditional, one-sided
contact discovery, without necessarily relying on trusted hardware.
|
[
{
"version": "v1",
"created": "Sat, 24 Sep 2022 13:08:32 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 07:33:01 GMT"
},
{
"version": "v3",
"created": "Thu, 11 May 2023 13:32:52 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Hoepman",
"Jaap-Henk",
""
]
] |
new_dataset
| 0.988922 |
2210.00379
|
Yilin(Kyle) Gao
|
Kyle Gao, Yina Gao, Hongjie He, Dening Lu, Linlin Xu and Jonathan Li
|
NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Radiance Field (NeRF), a new novel view synthesis method with implicit scene
representation, has taken the field of Computer Vision by storm. As a novel view
synthesis and 3D reconstruction method, NeRF models find applications in
robotics, urban mapping, autonomous navigation, virtual reality/augmented
reality, and more. Since the original paper by Mildenhall et al., more than 250
preprints were published, with more than 100 eventually being accepted in tier
one Computer Vision conferences. Given NeRF's popularity and the current interest
in this research area, we believe it necessary to compile a comprehensive
survey of NeRF papers from the past two years, which we organized into both
architecture-based and application-based taxonomies. We also provide an introduction
to the theory of NeRF-based novel view synthesis, and a benchmark comparison of
the performance and speed of key NeRF models. By creating this survey, we hope
to introduce new researchers to NeRF, provide a helpful reference for
influential works in this field, as well as motivate future research directions
with our discussion section.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 21:35:11 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 22:09:48 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Dec 2022 23:41:26 GMT"
},
{
"version": "v4",
"created": "Wed, 10 May 2023 22:13:47 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Gao",
"Kyle",
""
],
[
"Gao",
"Yina",
""
],
[
"He",
"Hongjie",
""
],
[
"Lu",
"Dening",
""
],
[
"Xu",
"Linlin",
""
],
[
"Li",
"Jonathan",
""
]
] |
new_dataset
| 0.998203 |
2210.13325
|
Alireza Dehlaghi Ghadim
|
Alireza Dehlaghi-Ghadim, Ali Balador, Mahshid Helali Moghadam, Hans
Hansson, Mauro Conti
|
ICSSIM-A Framework for Building Industrial Control Systems Security
Simulation Testbeds
|
43 pages, 13 figures
|
Computers in Industry 148 (2023): 103906
|
10.1016/j.compind.2023.103906
| null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
With the advent of smart industry, Industrial Control Systems (ICS) are
increasingly using Cloud, IoT, and other services to meet Industry 4.0 targets.
The connectivity inherent in these services exposes such systems to increased
cybersecurity risks. To protect ICSs against cyberattacks, intrusion detection
systems and intrusion prevention systems empowered by machine learning are used
to detect abnormal behavior of the systems. Operational ICSs are not safe
environments to research intrusion detection systems due to the possibility of
catastrophic risks. Therefore, realistic ICS testbeds enable researchers to
analyze and validate their intrusion detection algorithms in a controlled
environment. Although various ICS testbeds have been developed, researchers'
access to a low-cost, adaptable, and customizable testbed that can accurately
simulate industrial control systems and suits security research is still an
important issue.
In this paper, we present ICSSIM, a framework for building customized virtual
ICS security testbeds, in which various types of cyber threats and attacks can
be effectively and efficiently investigated. This framework contains base
classes to simulate control system components and communications. ICSSIM aims
to produce extendable, versatile, reproducible, low-cost, and comprehensive ICS
testbeds with realistic details and high fidelity. ICSSIM is built on top of
the Docker container technology, which provides realistic network emulation and
runs ICS components on isolated private operating system kernels. ICSSIM
reduces the time for developing ICS components and offers physical process
modelling using software and hardware in the loop simulation. We demonstrated
ICSSIM by creating a testbed and validating its functionality by showing how
different cyberattacks can be applied.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 15:27:16 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Oct 2022 14:29:00 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Nov 2022 13:57:25 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Dehlaghi-Ghadim",
"Alireza",
""
],
[
"Balador",
"Ali",
""
],
[
"Moghadam",
"Mahshid Helali",
""
],
[
"Hansson",
"Hans",
""
],
[
"Conti",
"Mauro",
""
]
] |
new_dataset
| 0.998504 |
2301.07510
|
Naoya Hatta
|
Naoya Hatta (1), Shuntaro Tsunoda (1), Kouhei Uchida (1), Taichi
Ishitani (1), Ryota Shioya (1 and 2), Kei Ishii (1) ((1) PEZY Computing, (2)
The University of Tokyo)
|
PEZY-SC3: A MIMD Many-core Processor for Energy-efficient Computing
| null | null | null | null |
cs.AR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
PEZY-SC3 is a highly energy- and area-efficient processor for supercomputers
developed using TSMC 7nm process technology. It is the third generation of the
PEZY-SCx series developed by PEZY Computing, K.K. Supercomputers equipped with
the PEZY-SCx series have been deployed at several research centers and are used
for large-scale scientific calculations.
PEZY-SC3 outperforms previous PEZY-SCx and other processors in terms of
energy and area efficiency. To achieve high efficiency, PEZY-SC3 employs a MIMD
many-core, fine-grained multithreading, and non-coherent cache, focusing on
applications involving high thread-level parallelism. Our MIMD many-core-based
architecture achieves high efficiency while providing higher programmability
than existing architectures based on specialized tensor units with limited
functionality or wide-SIMD. Another key point of this architecture is to
achieve both high efficiency and high throughput without using complex and
expensive units such as out-of-order schedulers. Moreover, our novel
non-coherent and hierarchical cache system enables high scalability on
many-core without compromising programmability.
The energy efficiency of a system equipped with PEZY-SC3 is approximately
24.6 GFlops/W, and it ranked 12th in the Green500 (November 2021), which
measures the energy efficiency of supercomputers. In terms of processor
architecture, all the systems ranked higher than the PEZY-SC3 system are
equipped with NVIDIA A100 or Preferred Networks MN-Core, and thus PEZY-SC3 is
the third-ranked processor after them. While A100 and MN-Core achieve high
energy efficiency with tensor units specialized for specific functions,
PEZY-SC3 does not have such specialized tensor units and thus has higher
programmability.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 06:23:28 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Jan 2023 04:42:58 GMT"
},
{
"version": "v3",
"created": "Thu, 11 May 2023 08:35:24 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Hatta",
"Naoya",
"",
"1 and 2"
],
[
"Tsunoda",
"Shuntaro",
"",
"1 and 2"
],
[
"Uchida",
"Kouhei",
"",
"1 and 2"
],
[
"Ishitani",
"Taichi",
"",
"1 and 2"
],
[
"Shioya",
"Ryota",
"",
"1 and 2"
],
[
"Ishii",
"Kei",
""
]
] |
new_dataset
| 0.999709 |
2302.00402
|
Haiyang Xu
|
Haiyang Xu, Qinghao Ye, Ming Yan, Yaya Shi, Jiabo Ye, Yuanhong Xu,
Chenliang Li, Bin Bi, Qi Qian, Wei Wang, Guohai Xu, Ji Zhang, Songfang Huang,
Fei Huang, Jingren Zhou
|
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image
and Video
| null |
ICML2023
| null | null |
cs.CV cs.CL cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Recent years have witnessed a big convergence of language, vision, and
multi-modal pretraining. In this work, we present mPLUG-2, a new unified
paradigm with modularized design for multi-modal pretraining, which can benefit
from modality collaboration while addressing the problem of modality
entanglement. In contrast to predominant paradigms of solely relying on
sequence-to-sequence generation or encoder-based instance discrimination,
mPLUG-2 introduces a multi-module composition network by sharing common
universal modules for modality collaboration and disentangling different
modality modules to deal with modality entanglement. It is flexible to select
different modules for different understanding and generation tasks across all
modalities including text, image, and video. Empirical study shows that mPLUG-2
achieves state-of-the-art or competitive results on a broad range of over 30
downstream tasks, spanning multi-modal tasks of image-text and video-text
understanding and generation, and uni-modal tasks of text-only, image-only, and
video-only understanding. Notably, mPLUG-2 shows new state-of-the-art results
of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video QA and
video caption tasks with a far smaller model size and data scale. It also
demonstrates strong zero-shot transferability on vision-language and
video-language tasks. Code and models will be released in
https://github.com/alibaba/AliceMind.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 12:40:03 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Xu",
"Haiyang",
""
],
[
"Ye",
"Qinghao",
""
],
[
"Yan",
"Ming",
""
],
[
"Shi",
"Yaya",
""
],
[
"Ye",
"Jiabo",
""
],
[
"Xu",
"Yuanhong",
""
],
[
"Li",
"Chenliang",
""
],
[
"Bi",
"Bin",
""
],
[
"Qian",
"Qi",
""
],
[
"Wang",
"Wei",
""
],
[
"Xu",
"Guohai",
""
],
[
"Zhang",
"Ji",
""
],
[
"Huang",
"Songfang",
""
],
[
"Huang",
"Fei",
""
],
[
"Zhou",
"Jingren",
""
]
] |
new_dataset
| 0.989488 |
2302.06100
|
Andrew Blair-Stanek
|
Andrew Blair-Stanek, Nils Holzenberger, Benjamin Van Durme
|
Can GPT-3 Perform Statutory Reasoning?
|
10 pages
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Statutory reasoning is the task of reasoning with facts and statutes, which
are rules written in natural language by a legislature. It is a basic legal
skill. In this paper we explore the capabilities of the most capable GPT-3
model, text-davinci-003, on an established statutory-reasoning dataset called
SARA. We consider a variety of approaches, including dynamic few-shot
prompting, chain-of-thought prompting, and zero-shot prompting. While we
achieve results with GPT-3 that are better than the previous best published
results, we also identify several types of clear errors it makes. We
investigate why these errors happen. We discover that GPT-3 has imperfect prior
knowledge of the actual U.S. statutes on which SARA is based. More importantly,
we create simple synthetic statutes, which GPT-3 is guaranteed not to have seen
during training. We find GPT-3 performs poorly at answering straightforward
questions about these simple synthetic statutes.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 04:56:11 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2023 19:17:23 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Blair-Stanek",
"Andrew",
""
],
[
"Holzenberger",
"Nils",
""
],
[
"Van Durme",
"Benjamin",
""
]
] |
new_dataset
| 0.998857 |
2302.13694
|
Piotr Kicki
|
Piotr Kicki, Amadeusz Szymko, Krzysztof Walas
|
DLOFTBs -- Fast Tracking of Deformable Linear Objects with B-splines
|
Accepted at International Conference on Robotics and Automation
(ICRA) 2023
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
While manipulating rigid objects is an extensively explored research topic,
deformable linear object (DLO) manipulation seems significantly underdeveloped.
A potential reason for this is the inherent difficulty in describing and
observing the state of the DLO as its geometry changes during manipulation.
This paper proposes an algorithm for fast-tracking the shape of a DLO based on
the masked image. Having no prior knowledge about the tracked object, the
proposed method finds a reliable representation of the shape of the tracked
object within tens of milliseconds. This algorithm's main idea is to first
skeletonize the DLO mask image, walk through the parts of the DLO skeleton,
arrange the segments into an ordered path, and finally fit a B-spline into it.
Experiments show that our solution outperforms the State-of-the-Art approaches
in DLO's shape reconstruction accuracy and algorithm running time and can
handle challenging scenarios such as severe occlusions, self-intersections, and
multiple DLOs in a single image.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 11:54:04 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 08:36:50 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Kicki",
"Piotr",
""
],
[
"Szymko",
"Amadeusz",
""
],
[
"Walas",
"Krzysztof",
""
]
] |
new_dataset
| 0.991388 |
2305.02993
|
Mael Jullien
|
Ma\"el Jullien, Marco Valentino, Hannah Frost, Paul O'Regan, Donal
Landers, Andr\'e Freitas
|
SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for
Clinical Trial Data
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the results of SemEval 2023 task 7 -- Multi-Evidence
Natural Language Inference for Clinical Trial Data (NLI4CT) -- consisting of 2
tasks, a Natural Language Inference (NLI) task, and an evidence selection task
on clinical trial data. The proposed challenges require multi-hop biomedical
and numerical reasoning, which are of significant importance to the development
of systems capable of large-scale interpretation and retrieval of medical
evidence, to provide personalized evidence-based care.
Task 1, the entailment task, received 643 submissions from 40 participants,
and Task 2, the evidence selection task, received 364 submissions from 23
participants. The tasks are challenging, with the majority of submitted systems
failing to significantly outperform the majority class baseline on the
entailment task, and we observe significantly better performance on the
evidence selection task than on the entailment task. Increasing the number of
model parameters leads to a direct increase in performance, far more
significant than the effect of biomedical pre-training. Future works could
explore the limitations of large models for generalization and numerical
inference, and investigate methods to augment clinical datasets to allow for
more rigorous testing and to facilitate fine-tuning.
We envisage that the dataset, models, and results of this task will be useful
to the biomedical NLI and evidence retrieval communities. The dataset,
competition leaderboard, and website are publicly available.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 16:58:19 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 09:10:06 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Jullien",
"Maël",
""
],
[
"Valentino",
"Marco",
""
],
[
"Frost",
"Hannah",
""
],
[
"O'Regan",
"Paul",
""
],
[
"Landers",
"Donal",
""
],
[
"Freitas",
"André",
""
]
] |
new_dataset
| 0.997963 |
2305.03567
|
Ehud Shapiro
|
Andrew Lewis-Pye, Oded Naor, Ehud Shapiro
|
Flash: An Asynchronous Payment System with Good-Case Linear
Communication Complexity
| null | null | null | null |
cs.DC cs.MA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While the original purpose of blockchains was to realize a payment system, it
has been shown that, in fact, such systems do not require consensus and can be
implemented deterministically in asynchronous networks. State-of-the-art
payment systems employ Reliable Broadcast to disseminate payments and prevent
double spending, which entails $O(n^2)$ communication complexity per payment even
if Byzantine behavior is scarce or non-existent.
Here we present Flash, the first payment system to achieve $O(n)$
communication complexity per payment in the good case and $O(n^2)$ complexity
in the worst-case, matching the lower bound. This is made possible by
sidestepping Reliable Broadcast and instead using the blocklace -- a DAG-like
partially-ordered generalization of the blockchain -- for the tasks of
recording transaction dependencies, block dissemination, and equivocation
exclusion, which in turn prevents double spending.
Flash has two variants: for high congestion when multiple blocks that contain
multiple payments are issued concurrently; and for low congestion when payments
are infrequent.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 14:18:36 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 11:28:43 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Lewis-Pye",
"Andrew",
""
],
[
"Naor",
"Oded",
""
],
[
"Shapiro",
"Ehud",
""
]
] |
new_dataset
| 0.993556 |
2305.03795
|
Jingfan Meng
|
Jingfan Meng, Ziheng Liu, Yiwei Wang, Jun Xu
|
RECIPE: Rateless Erasure Codes Induced by Protocol-Based Encoding
|
Accepted by IEEE ISIT 2023
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LT (Luby transform) codes are a celebrated family of rateless erasure codes
(RECs). Most of existing LT codes were designed for applications in which a
centralized encoder possesses all message blocks and is solely responsible for
encoding them into codewords. Distributed LT codes, in which message blocks are
physically scattered across multiple different locations (encoders) that need
to collaboratively perform the encoding, has never been systemically studied
before despite its growing importance in applications. In this work, we present
the first systemic study of LT codes in the distributed setting, and make the
following three major contributions. First, we show that only a proper subset
of LT codes are feasible in the distributed setting, and give the sufficient
and necessary condition for such feasibility. Second, we propose a distributed
encoding protocol that can efficiently implement any feasible code. The
protocol is parameterized by a so-called action probability array (APA) that is
only a few KBs in size, and any feasible code corresponds to a valid APA
setting and vice versa. Third, we propose two heuristic search algorithms that
have led to the discovery of feasible codes that are much more efficient than
the state of the art.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 18:50:42 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2023 18:29:16 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Meng",
"Jingfan",
""
],
[
"Liu",
"Ziheng",
""
],
[
"Wang",
"Yiwei",
""
],
[
"Xu",
"Jun",
""
]
] |
new_dataset
| 0.999282 |
2305.06356
|
Mustafa I\c{s}{\i}k
|
Mustafa I\c{s}{\i}k, Martin R\"unz, Markos Georgopoulos, Taras
Khakhulin, Jonathan Starck, Lourdes Agapito, Matthias Nie{\ss}ner
|
HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion
|
Project webpage: https://synthesiaresearch.github.io/humanrf Dataset
webpage: https://www.actors-hq.com/ Video:
https://www.youtube.com/watch?v=OTnhiLLE7io Code:
https://github.com/synthesiaresearch/humanrf
| null |
10.1145/3592415
| null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Representing human performance at high-fidelity is an essential building
block in diverse applications, such as film production, computer games or
videoconferencing. To close the gap to production-level quality, we introduce
HumanRF, a 4D dynamic neural scene representation that captures full-body
appearance in motion from multi-view video input, and enables playback from
novel, unseen viewpoints. Our novel representation acts as a dynamic video
encoding that captures fine details at high compression rates by factorizing
space-time into a temporal matrix-vector decomposition. This allows us to
obtain temporally coherent reconstructions of human actors for long sequences,
while representing high-resolution details even in the context of challenging
motion. While most research focuses on synthesizing at resolutions of 4MP or
lower, we address the challenge of operating at 12MP. To this end, we introduce
ActorsHQ, a novel multi-view dataset that provides 12MP footage from 160
cameras for 16 sequences with high-fidelity, per-frame mesh reconstructions. We
demonstrate challenges that emerge from using such high-resolution data and
show that our newly introduced HumanRF effectively leverages this data, making
a significant step towards production-level quality novel view synthesis.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 17:59:55 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 17:59:43 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Işık",
"Mustafa",
""
],
[
"Rünz",
"Martin",
""
],
[
"Georgopoulos",
"Markos",
""
],
[
"Khakhulin",
"Taras",
""
],
[
"Starck",
"Jonathan",
""
],
[
"Agapito",
"Lourdes",
""
],
[
"Nießner",
"Matthias",
""
]
] |
new_dataset
| 0.970297 |
2305.06357
|
Yi Yu
|
Yi Yu, Shengyue Yao, Juanjuan Li, Fei-Yue Wang, Yilun Lin
|
SWDPM: A Social Welfare-Optimized Data Pricing Mechanism
| null | null | null | null |
cs.GT cs.CE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data trading has been hindered by privacy concerns associated with user-owned
data and the infinite reproducibility of data, making it challenging for data
owners to retain exclusive rights over their data once it has been disclosed.
Traditional data pricing models relied on uniform pricing or subscription-based
models. However, with the development of Privacy-Preserving Computing
techniques, the market can now protect the privacy and complete transactions
using progressively disclosed information, which creates a technical foundation
for generating greater social welfare through data usage. In this study, we
propose a novel approach to modeling multi-round data trading with
progressively disclosed information using a matchmaking-based Markov Decision
Process (MDP) and introduce a Social Welfare-optimized Data Pricing Mechanism
(SWDPM) to find optimal pricing strategies. To the best of our knowledge, this
is the first study to model multi-round data trading with progressively
disclosed information. Numerical experiments demonstrate that the SWDPM can
increase social welfare 3 times by up to 54\% in trading feasibility, 43\% in
trading efficiency, and 25\% in trading fairness by encouraging better matching
of demand and price negotiation among traders.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 02:25:35 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Yu",
"Yi",
""
],
[
"Yao",
"Shengyue",
""
],
[
"Li",
"Juanjuan",
""
],
[
"Wang",
"Fei-Yue",
""
],
[
"Lin",
"Yilun",
""
]
] |
new_dataset
| 0.979142 |
2305.06415
|
Ali Septiandri
|
Ali Akbar Septiandri, Marios Constantinides, Mohammad Tahaei, Daniele
Quercia
|
WEIRD FAccTs: How Western, Educated, Industrialized, Rich, and
Democratic is FAccT?
|
To appear at ACM FAccT 2023
| null |
10.1145/3593013.3593985
| null |
cs.HC cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Studies conducted on Western, Educated, Industrialized, Rich, and Democratic
(WEIRD) samples are considered atypical of the world's population and may not
accurately represent human behavior. In this study, we aim to quantify the
extent to which the ACM FAccT conference, the leading venue in exploring
Artificial Intelligence (AI) systems' fairness, accountability, and
transparency, relies on WEIRD samples. We collected and analyzed 128 papers
published between 2018 and 2022, accounting for 30.8% of the overall
proceedings published at FAccT in those years (excluding abstracts, tutorials,
and papers without human-subject studies or clear country attribution for the
participants). We found that 84% of the analyzed papers were exclusively based
on participants from Western countries, particularly exclusively from the U.S.
(63%). Only researchers who undertook the effort to collect data about local
participants through interviews or surveys added diversity to an otherwise
U.S.-centric view of science. Therefore, we suggest that researchers collect
data from under-represented populations to obtain an inclusive worldview. To
achieve this goal, scientific communities should champion data collection from
such populations and enforce transparent reporting of data biases.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 18:52:09 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Septiandri",
"Ali Akbar",
""
],
[
"Constantinides",
"Marios",
""
],
[
"Tahaei",
"Mohammad",
""
],
[
"Quercia",
"Daniele",
""
]
] |
new_dataset
| 0.976768 |
2305.06423
|
Luke Szramowski
|
Emma Andrade, Jessalyn Bolkema, Thomas Dexter, Harrison Eggers,
Victoria Luongo, Felice Manganiello and Luke Szramowski
|
CSS-T Codes from Reed Muller Codes for Quantum Fault Tolerance
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
CSS-T codes are a class of stabilizer codes introduced by Rengaswami et al.
with desired properties for quantum fault-tolerance. In this work, we give a
comprehensive study of CSS-T codes built from Reed-Muller codes. These
classical codes allow for the construction of CSS-T code families with
non-vanishing asymptotic rate up to 1/2 and possibly diverging minimum
distance. This desirable property enables constant overhead magic state
distillation.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 19:07:06 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Andrade",
"Emma",
""
],
[
"Bolkema",
"Jessalyn",
""
],
[
"Dexter",
"Thomas",
""
],
[
"Eggers",
"Harrison",
""
],
[
"Luongo",
"Victoria",
""
],
[
"Manganiello",
"Felice",
""
],
[
"Szramowski",
"Luke",
""
]
] |
new_dataset
| 0.996384 |
2305.06508
|
Hanglong Zhang
|
Hanglong Zhang and Xiwang Cao
|
Dimensions of some LCD BCH codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the first few largest coset leaders modulo
$\frac{q^m+1}{\lambda}$ where $\lambda\mid q+1$ and $q$ is an odd prime power,
and give the dimensions of some LCD BCH codes of length $\frac{q^m+1}{\lambda}$
with large designed distances. We also determine the dimensions of some LCD BCH
codes of length $n=\frac{(q^m+1)}{\lambda}$ with designed distances $2\leq
\delta \leq \frac{ q^{\lfloor(m+1)/2\rfloor}}{\lambda}+1$, where $ \lambda\mid
q+1$ and $1<\lambda<q+1$. The LCD BCH codes presented in this paper have a
sharper lower bound on the minimum distance than the BCH bound.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 01:13:30 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Zhang",
"Hanglong",
""
],
[
"Cao",
"Xiwang",
""
]
] |
new_dataset
| 0.996621 |
2305.06537
|
Shoujie Li
|
Shoujie Li, Mingshan He, Wenbo Ding, Linqi Ye, Xueqian Wang, Junbo
Tan, Jinqiu Yuan, Xiao-Ping Zhang
|
Visuotactile Sensor Enabled Pneumatic Device Towards Compliant
Oropharyngeal Swab Sampling
|
8 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Manual oropharyngeal (OP) swab sampling is an intensive and risky task. In
this article, a novel OP swab sampling device of low cost and high compliance
is designed by combining the visuo-tactile sensor and the pneumatic
actuator-based gripper. Here, a concave visuo-tactile sensor called CoTac is
first proposed to address the problems of high cost and poor reliability of
traditional multi-axis force sensors. Besides, by imitating the doctor's
fingers, a soft pneumatic actuator with a rigid skeleton structure is designed,
which is demonstrated to be reliable and safe via finite element modeling and
experiments. Furthermore, we propose a sampling method that adopts a compliant
control algorithm based on the adaptive virtual force to enhance the safety and
compliance of the swab sampling process. The effectiveness of the device has
been verified through sampling experiments as well as in vivo tests, indicating
great application potential. The cost of the device is around 30 US dollars and
the total weight of the functional part is less than 0.1 kg, allowing the
device to be rapidly deployed on various robotic arms. Videos, hardware, and
source code are available at: https://sites.google.com/view/swab-sampling/.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 02:47:41 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Li",
"Shoujie",
""
],
[
"He",
"Mingshan",
""
],
[
"Ding",
"Wenbo",
""
],
[
"Ye",
"Linqi",
""
],
[
"Wang",
"Xueqian",
""
],
[
"Tan",
"Junbo",
""
],
[
"Yuan",
"Jinqiu",
""
],
[
"Zhang",
"Xiao-Ping",
""
]
] |
new_dataset
| 0.987938 |
2305.06545
|
Dongyang Li
|
Dongyang Li, Ruixue Ding, Qiang Zhang, Zheng Li, Boli Chen, Pengjun
Xie, Yao Xu, Xin Li, Ning Guo, Fei Huang and Xiaofeng He
|
GeoGLUE: A GeoGraphic Language Understanding Evaluation Benchmark
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the fast-developing pace of geographic applications, it is essential to
design automatable and intelligent models to handle the large volume of
information. However, few researchers focus on geographic natural language
processing, and there has never been a benchmark to build a unified standard.
In this work, we propose a GeoGraphic Language Understanding Evaluation
benchmark, named GeoGLUE. We collect data from open-released geographic
resources and introduce six natural language understanding tasks, including
geographic textual similarity on recall, geographic textual similarity on
rerank, geographic elements tagging, geographic composition analysis,
geographic where what cut, and geographic entity alignment. We also provide
evaluation experiments and analysis of general baselines, indicating the
effectiveness and significance of the GeoGLUE benchmark.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 03:21:56 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Li",
"Dongyang",
""
],
[
"Ding",
"Ruixue",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Li",
"Zheng",
""
],
[
"Chen",
"Boli",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Xu",
"Yao",
""
],
[
"Li",
"Xin",
""
],
[
"Guo",
"Ning",
""
],
[
"Huang",
"Fei",
""
],
[
"He",
"Xiaofeng",
""
]
] |
new_dataset
| 0.999213 |
2305.06556
|
Kyle Yoshida
|
Kyle T. Yoshida, Joel X. Kiernan, Rachel A. G. Adenekan, Steven H.
Trinh, Alexis J. Lowber, Allison M. Okamura, Cara M. Nunez
|
Cognitive and Physical Activities Impair Perception of Smartphone
Vibrations
|
To be published in IEEE Transactions on Haptics
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vibration feedback is common in everyday devices, from virtual reality
systems to smartphones. However, cognitive and physical activities may impede
our ability to sense vibrations from devices. In this study, we develop and
characterize a smartphone platform to investigate how a shape-memory task
(cognitive activity) and walking (physical activity) impair human perception of
smartphone vibrations. We measured how Apple's Core Haptics Framework
parameters can be used for haptics research, namely how hapticIntensity
modulates amplitudes of 230 Hz vibrations. A 23-person user study found that
physical (p<0.001) and cognitive (p=0.004) activity increase vibration
perception thresholds. Cognitive activity also increases vibration response
time (p<0.001). This work also introduces a smartphone platform that can be
used for out-of-lab vibration perception testing. Researchers can use our
smartphone platform and results to design better haptic devices for diverse,
unique populations.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 04:22:24 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Yoshida",
"Kyle T.",
""
],
[
"Kiernan",
"Joel X.",
""
],
[
"Adenekan",
"Rachel A. G.",
""
],
[
"Trinh",
"Steven H.",
""
],
[
"Lowber",
"Alexis J.",
""
],
[
"Okamura",
"Allison M.",
""
],
[
"Nunez",
"Cara M.",
""
]
] |
new_dataset
| 0.996467 |
2305.06558
|
Yangming Cheng
|
Yangming Cheng, Liulei Li, Yuanyou Xu, Xiaodi Li, Zongxin Yang,
Wenguan Wang, Yi Yang
|
Segment and Track Anything
|
8 pages, 3 figures; Technical Report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report presents a framework called Segment And Track Anything (SAM-Track)
that allows users to precisely and effectively segment and track any object in
a video. Additionally, SAM-Track employs multimodal interaction methods that
enable users to select multiple objects in videos for tracking, corresponding
to their specific requirements. These interaction methods comprise click,
stroke, and text, each possessing unique benefits and capable of being employed
in combination. As a result, SAM-Track can be used across an array of fields,
ranging from drone technology, autonomous driving, medical imaging, augmented
reality, to biological analysis. SAM-Track amalgamates Segment Anything Model
(SAM), an interactive key-frame segmentation model, with our proposed AOT-based
tracking model (DeAOT), which secured 1st place in four tracks of the VOT 2022
challenge, to facilitate object tracking in video. In addition, SAM-Track
incorporates Grounding-DINO, which enables the framework to support text-based
interaction. We have demonstrated the remarkable capabilities of SAM-Track on
DAVIS-2016 Val (92.0%), DAVIS-2017 Test (79.2%) and its practicability in
diverse applications. The project page is available at:
https://github.com/z-x-yang/Segment-and-Track-Anything.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 04:33:08 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Cheng",
"Yangming",
""
],
[
"Li",
"Liulei",
""
],
[
"Xu",
"Yuanyou",
""
],
[
"Li",
"Xiaodi",
""
],
[
"Yang",
"Zongxin",
""
],
[
"Wang",
"Wenguan",
""
],
[
"Yang",
"Yi",
""
]
] |
new_dataset
| 0.996193 |
2305.06594
|
Judith Yue Li
|
Kun Su, Judith Yue Li, Qingqing Huang, Dima Kuzmin, Joonseok Lee,
Chris Donahue, Fei Sha, Aren Jansen, Yu Wang, Mauro Verzetti, Timo I. Denk
|
V2Meow: Meowing to the Visual Beat via Music Generation
| null | null | null | null |
cs.SD cs.CV cs.LG cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating high quality music that complements the visual content of a video
is a challenging task. Most existing visual conditioned music generation
systems generate symbolic music data, such as MIDI files, instead of raw audio
waveform. Given the limited availability of symbolic music data, such methods
can only generate music for a few instruments or for specific types of visual
input. In this paper, we propose a novel approach called V2Meow that can
generate high-quality music audio that aligns well with the visual semantics of
a diverse range of video input types. Specifically, the proposed music
generation system is a multi-stage autoregressive model which is trained with a
number of O(100K) music audio clips paired with video frames, which are mined
from in-the-wild music videos, and no parallel symbolic music data is involved.
V2Meow is able to synthesize high-fidelity music audio waveform solely
conditioned on pre-trained visual features extracted from an arbitrary silent
video clip, and it also allows high-level control over the music style of
generation examples via supporting text prompts in addition to the video frames
conditioning. Through both qualitative and quantitative evaluations, we
demonstrate that our model outperforms several existing music generation
systems in terms of both visual-audio correspondence and audio quality.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 06:26:41 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Su",
"Kun",
""
],
[
"Li",
"Judith Yue",
""
],
[
"Huang",
"Qingqing",
""
],
[
"Kuzmin",
"Dima",
""
],
[
"Lee",
"Joonseok",
""
],
[
"Donahue",
"Chris",
""
],
[
"Sha",
"Fei",
""
],
[
"Jansen",
"Aren",
""
],
[
"Wang",
"Yu",
""
],
[
"Verzetti",
"Mauro",
""
],
[
"Denk",
"Timo I.",
""
]
] |
new_dataset
| 0.992838 |
2305.06669
|
Sunzhou Huang
|
Sunzhou Huang, Xiaoyin Wang
|
PExReport: Automatic Creation of Pruned Executable Cross-Project Failure
Reports
|
ICSE 2023, Technical Track, full paper
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern software development extensively depends on existing libraries written
by other developer teams from the same or a different organization. When a
developer executes the software, the execution trace may go across the
boundaries of multiple software products and create cross-project failures
(CPFs). Existing studies show that a stand-alone executable failure report may
enable the most effective communication, but creating such a report is often
challenging due to the complicated interactions of files and dependencies in
software ecosystems. In this paper, to solve the CPF report trilemma, we
developed PExReport, which automatically creates stand-alone executable CPF
reports. PExReport leverages build tools to prune source code and dependencies,
and further analyzes the build process to create a pruned build environment for
reproducing the CPF. We performed an evaluation on 74 software project issues
with 198 CPFs, and the evaluation results show that PExReport can create
executable CPF reports for 184 out of 198 test failures in our dataset, with an
average reduction of 72.97% on source classes and the classes in internal JARs.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 09:09:42 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Huang",
"Sunzhou",
""
],
[
"Wang",
"Xiaoyin",
""
]
] |
new_dataset
| 0.999769 |
2305.06673
|
Cyril Gavoille
|
Cyril Gavoille, Claire Hilaire (LaBRI, UB)
|
Minor-Universal Graph for Graphs on Surfaces
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that, for every n and every surface $\Sigma$, there is a graph U
embeddable on $\Sigma$ with at most cn^2 vertices that contains as minor every
graph embeddable on $\Sigma$ with n vertices. The constant c depends
polynomially on the Euler genus of $\Sigma$. This generalizes a well-known
result for planar graphs due to Robertson, Seymour, and Thomas [Quickly
Excluding a Planar Graph. J. Comb. Theory B, 1994] which states that the square
grid on 4n^2 vertices contains as minor every planar graph with n vertices.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 09:13:50 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Gavoille",
"Cyril",
"",
"LaBRI, UB"
],
[
"Hilaire",
"Claire",
"",
"LaBRI, UB"
]
] |
new_dataset
| 0.998472 |
2305.06709
|
Mike Diessner
|
Mike Diessner, Kevin Wilson, Richard D. Whalley
|
NUBO: A Transparent Python Package for Bayesian Optimisation
| null | null | null | null |
cs.LG cs.MS stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
NUBO, short for Newcastle University Bayesian Optimisation, is a Bayesian
optimisation framework for the optimisation of expensive-to-evaluate black-box
functions, such as physical experiments and computer simulators. Bayesian
optimisation is a cost-efficient optimisation strategy that uses surrogate
modelling via Gaussian processes to represent an objective function and
acquisition functions to guide the selection of candidate points to approximate
the global optimum of the objective function. NUBO itself focuses on
transparency and user experience to make Bayesian optimisation easily
accessible to researchers from all disciplines. Clean and understandable code,
precise references, and thorough documentation ensure transparency, while user
experience is ensured by a modular and flexible design, easy-to-write syntax,
and careful selection of Bayesian optimisation algorithms. NUBO allows users to
tailor Bayesian optimisation to their specific problem by writing the
optimisation loop themselves using the provided building blocks. It supports
sequential single-point, parallel multi-point, and asynchronous optimisation of
bounded, constrained, and/or mixed (discrete and continuous) parameter input
spaces. Only algorithms and methods that are extensively tested and validated
to perform well are included in NUBO. This ensures that the package remains
compact and does not overwhelm the user with an unnecessarily large number of
options. The package is written in Python but does not require expert knowledge
of Python to optimise simulators and experiments. NUBO is distributed as
open-source software under the BSD 3-Clause licence.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 10:34:27 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Diessner",
"Mike",
""
],
[
"Wilson",
"Kevin",
""
],
[
"Whalley",
"Richard D.",
""
]
] |
new_dataset
| 0.970053 |
2305.06732
|
Eleonore Bach
|
Eleonore Bach, Friedrich Eisenbrand, Rom Pinchasi
|
Integer points in the degree-sequence polytope
|
14 pages
| null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
An integer vector $b \in \mathbb{Z}^d$ is a degree sequence if there exists a
hypergraph with vertices $\{1,\dots,d\}$ such that each $b_i$ is the number of
hyperedges containing $i$. The degree-sequence polytope $\mathscr{Z}^d$ is the
convex hull of all degree sequences. We show that all but a $2^{-\Omega(d)}$
fraction of integer vectors in the degree sequence polytope are degree
sequences. Furthermore, the corresponding hypergraph of these points can be
computed in time $2^{O(d)}$ via linear programming techniques. This is
substantially faster than the $2^{O(d^2)}$ running time of the current-best
algorithm for the degree-sequence problem. We also show that for $d\geq 98$,
the degree-sequence polytope $\mathscr{Z}^d$ contains integer points that are
not degree sequences. Furthermore, we prove that the linear optimization
problem over $\mathscr{Z}^d$ is $\mathrm{NP}$-hard. This complements a recent
result of Deza et al. (2018) who provide an algorithm that is polynomial in $d$
and the number of hyperedges.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 11:20:40 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Bach",
"Eleonore",
""
],
[
"Eisenbrand",
"Friedrich",
""
],
[
"Pinchasi",
"Rom",
""
]
] |
new_dataset
| 0.990632 |
2305.06747
|
Hossein Hassani
|
Zina Kamal and Hossein Hassani
|
The First Parallel Corpora for Kurdish Sign Language
|
7 pages, 5 figures, 2 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Kurdish Sign Language (KuSL) is the natural language of the Kurdish Deaf
people. We work on automatic translation between spoken Kurdish and KuSL. Sign
languages evolve rapidly and follow grammatical rules that differ from spoken
languages. Consequently, those differences should be considered during any
translation. We proposed an avatar-based automatic translation of Kurdish texts
in the Sorani (Central Kurdish) dialect into Kurdish Sign Language. We
developed the first parallel corpora for that pair that we use to train a
Statistical Machine Translation (SMT) engine. We tested the understandability
of the outcome and evaluated it using the Bilingual Evaluation Understudy
(BLEU). Results showed 53.8% accuracy. Compared to the previous experiments in
the field, the result is considerably high. We suspect the reason to be the
similarity between the structure of the two pairs. We plan to make the
resources publicly available under CC BY-NC-SA 4.0 license on the Kurdish-BLARK
(https://kurdishblark.github.io/).
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 12:10:20 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Kamal",
"Zina",
""
],
[
"Hassani",
"Hossein",
""
]
] |
new_dataset
| 0.999159 |
2305.06902
|
Meet Udeshi
|
Meet Udeshi, Prashanth Krishnamurthy, Hammond Pearce, Ramesh Karri,
Farshad Khorrami
|
REMaQE -- Reverse Engineering Math Equations from Executables
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cybersecurity attacks against industrial control systems and cyber-physical
systems can cause catastrophic real-world damage by infecting device binaries
with malware. Mitigating such attacks can benefit from reverse engineering
tools that recover sufficient semantic knowledge in terms of mathematical
operations in the code. Conventional reverse engineering tools can decompile
binaries to low-level code, but offer little semantic insight. This paper
proposes REMaQE, an automated framework for reverse engineering of math
equations from binary executables. REMaQE uses symbolic execution for dynamic
analysis of the binary to extract the relevant semantic knowledge of the
implemented algorithms. REMaQE provides an automatic parameter analysis pass
which also leverages symbolic execution to identify input, output, and constant
parameters of the implemented math equations. REMaQE automatically handles
parameters accessed via registers, the stack, global memory, or pointers, and
supports reverse engineering of object-oriented implementations such as C++
classes. REMaQE uses an algebraic simplification method which allows it to
scale to complex conditional equations with ease. These features make REMaQE
stand out over existing reverse engineering approaches for math equations. On a
dataset of randomly generated math equations compiled to binaries from C and
Simulink implementations, REMaQE accurately recovers a semantically matching
equation for 97.53% of the models. For complex equations with more operations,
accuracy stays consistently over 94%. REMaQE executes in 0.25 seconds on
average and in 1.3 seconds for more complex equations. This real-time execution
speed enables a smooth integration in an interactive mathematics-oriented
reverse engineering workflow.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 15:45:45 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Udeshi",
"Meet",
""
],
[
"Krishnamurthy",
"Prashanth",
""
],
[
"Pearce",
"Hammond",
""
],
[
"Karri",
"Ramesh",
""
],
[
"Khorrami",
"Farshad",
""
]
] |
new_dataset
| 0.987918 |
2305.06958
|
Christopher Alexander Anred Tatsch
|
Christopher Tatsch, Jonas Amoama Bredu Jnr, Dylan Covell, Ihsan Berk
Tulu, Yu Gu
|
Rhino: An Autonomous Robot for Mapping Underground Mine Environments
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
There are many benefits for exploring and exploiting underground mines, but
there are also significant risks and challenges. One such risk is the potential
for accidents caused by the collapse of the pillars, and roofs which can be
mitigated through inspections. However, these inspections can be costly and may
put the safety of the inspectors at risk. To address this issue, this work
presents Rhino, an autonomous robot that can navigate underground mine
environments and generate 3D maps. These generated maps will allow mine workers
to proactively respond to potential hazards and prevent accidents. The system
being developed is a skid-steer, four-wheeled unmanned ground vehicle (UGV)
that uses a LiDAR and IMU to perform long-duration autonomous navigation and
generation of maps through a LIO-SAM framework. The system has been tested in
different environments and terrains to ensure its robustness and ability to
operate for extended periods of time while also generating 3D maps.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 16:36:55 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Tatsch",
"Christopher",
""
],
[
"Jnr",
"Jonas Amoama Bredu",
""
],
[
"Covell",
"Dylan",
""
],
[
"Tulu",
"Ihsan Berk",
""
],
[
"Gu",
"Yu",
""
]
] |
new_dataset
| 0.980406 |
2305.06973
|
Zhikai Zhang
|
Zhikai Zhang, Jian Ding, Li Jiang, Dengxin Dai, Gui-Song Xia
|
FreePoint: Unsupervised Point Cloud Instance Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instance segmentation of point clouds is a crucial task in the 3D field with
numerous applications that involve localizing and segmenting objects in a
scene. However, achieving satisfactory results requires a large number of
manual annotations, which is a time-consuming and expensive process. To
alleviate dependency on annotations, we propose a method, called FreePoint, for
underexplored unsupervised class-agnostic instance segmentation on point
clouds. In detail, we represent the point features by combining coordinates,
colors, normals, and self-supervised deep features. Based on the point
features, we perform a multicut algorithm to segment point clouds into coarse
instance masks as pseudo labels, which are used to train a point cloud instance
segmentation model. To alleviate the inaccuracy of coarse masks during
training, we propose a weakly-supervised training strategy and corresponding
loss. Our work can also serve as an unsupervised pre-training pretext for
supervised semantic instance segmentation with limited annotations. For
class-agnostic instance segmentation on point clouds, FreePoint largely fills
the gap with its fully-supervised counterpart based on the state-of-the-art
instance segmentation model Mask3D and even surpasses some previous
fully-supervised methods. When serving as a pretext task and fine-tuning on
S3DIS, FreePoint outperforms training from scratch by 5.8% AP with only 10%
mask annotations.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 16:56:26 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Zhang",
"Zhikai",
""
],
[
"Ding",
"Jian",
""
],
[
"Jiang",
"Li",
""
],
[
"Dai",
"Dengxin",
""
],
[
"Xia",
"Gui-Song",
""
]
] |
new_dataset
| 0.99896 |
2305.07027
|
Xinyu Liu
|
Xinyu Liu, Houwen Peng, Ningxin Zheng, Yuqing Yang, Han Hu, Yixuan
Yuan
|
EfficientViT: Memory Efficient Vision Transformer with Cascaded Group
Attention
|
CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Vision transformers have shown great success due to their high model
capabilities. However, their remarkable performance is accompanied by heavy
computation costs, which makes them unsuitable for real-time applications. In
this paper, we propose a family of high-speed vision transformers named
EfficientViT. We find that the speed of existing transformer models is commonly
bounded by memory inefficient operations, especially the tensor reshaping and
element-wise functions in MHSA. Therefore, we design a new building block with
a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN
layers, which improves memory efficiency while enhancing channel communication.
Moreover, we discover that the attention maps share high similarities across
heads, leading to computational redundancy. To address this, we present a
cascaded group attention module feeding attention heads with different splits
of the full feature, which not only saves computation cost but also improves
attention diversity. Comprehensive experiments demonstrate EfficientViT
outperforms existing efficient models, striking a good trade-off between speed
and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by
1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia
V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient
model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while
running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX
format. Code and models are available at
https://github.com/microsoft/Cream/tree/main/EfficientViT.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 17:59:41 GMT"
}
] | 2023-05-12T00:00:00 |
[
[
"Liu",
"Xinyu",
""
],
[
"Peng",
"Houwen",
""
],
[
"Zheng",
"Ningxin",
""
],
[
"Yang",
"Yuqing",
""
],
[
"Hu",
"Han",
""
],
[
"Yuan",
"Yixuan",
""
]
] |
new_dataset
| 0.965254 |
2104.13663
|
Ken Duffy
|
Wei An, Muriel M\'edard and Ken R. Duffy
|
CRC Codes as Error Correction Codes
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
|
IEEE ICC 2021
|
10.1109/ICC42927.2021.9500279
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CRC codes have long since been adopted in a vast range of applications. The
established notion that they are suitable primarily for error detection can be
set aside through use of the recently proposed Guessing Random Additive Noise
Decoding (GRAND). Hard-detection (GRAND-SOS) and soft-detection (ORBGRAND)
variants can decode any short, high-rate block code, making them suitable for
error correction of CRC-coded data. When decoded with GRAND, short CRC codes
have error correction capability that is at least as good as popular codes such
as BCH codes, but with no restriction on either code length or rate.
The state-of-the-art CA-Polar codes are concatenated CRC and Polar codes. For
error correction, we find that the CRC is a better short code than either Polar
or CA-Polar codes. Moreover, the standard CA-SCL decoder only uses the CRC for
error detection and therefore suffers severe performance degradation in short,
high rate settings when compared with the performance GRAND provides, which
uses all of the CA-Polar bits for error correction.
Using GRAND, existing systems can be upgraded from error detection to
low-latency error correction without re-engineering the encoder, and additional
applications of CRCs can be found in IoT, Ultra-Reliable Low Latency
Communication (URLLC), and beyond. The universality of GRAND, its ready
parallelized implementation in hardware, and the good performance of CRC as
codes make their combination a viable solution for low-latency applications.
|
[
{
"version": "v1",
"created": "Wed, 28 Apr 2021 09:33:54 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"An",
"Wei",
""
],
[
"Médard",
"Muriel",
""
],
[
"Duffy",
"Ken R.",
""
]
] |
new_dataset
| 0.999638 |
2201.08810
|
Junchen Zhao
|
Junchen Zhao, Yurun Song, Junlin Wang, Ian G. Harris
|
GAP-Gen: Guided Automatic Python Code Generation
|
Proceedings of the 17th Conference of the European Chapter of the
Association for Computational Linguistics: Student Research Workshop
| null | null | null |
cs.PL cs.CL cs.LG cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Automatic code generation from natural language descriptions can be highly
beneficial during the process of software development. In this work, we propose
GAP-Gen, a Guided Automatic Python Code Generation method based on Python
syntactic constraints and semantic constraints. We first introduce Python
syntactic constraints in the form of Syntax-Flow, which is a simplified version
of the Abstract Syntax Tree (AST) that reduces the size and complexity of the
AST while maintaining crucial syntactic information of Python code. In
addition to Syntax-Flow, we introduce Variable-Flow which abstracts variable
and function names consistently throughout the code. In our work, rather than
pretraining, we focus on modifying the finetuning process which reduces
computational requirements but retains high generation performance on automatic
Python code generation task. GAP-Gen fine-tunes the transformer based language
models T5 and CodeT5 using the Code-to-Docstring datasets CodeSearchNet,
CodeSearchNet AdvTest and Code-Docstring Corpus from EdinburghNLP. Our
experiments show that GAP-Gen achieves better results on automatic Python code
generation task than previous works.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 06:32:47 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2023 01:01:43 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Zhao",
"Junchen",
""
],
[
"Song",
"Yurun",
""
],
[
"Wang",
"Junlin",
""
],
[
"Harris",
"Ian G.",
""
]
] |
new_dataset
| 0.959314 |
2206.02862
|
Sara Khosravi
|
Sara Khosravi, Hossein S. Ghadikolaei, Jens Zander, and Marina Petrova
|
Beam Alignment Using Trajectory Information in Mobile Millimeter-wave
Networks
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Millimeter-wave and terahertz systems rely on beamforming/combining codebooks
to determine the best beam directions during the initial access and data
transmission. Existing approaches suffer from large codebook sizes and high
beam searching overhead in the presence of mobile devices. To address this
issue, we utilize the similarity of the channel in adjacent locations to divide
the user trajectory into a set of separate regions and maintain a set of
candidate beams for each region in a database. Due to the tradeoff between the
number of regions and the signalling overhead, i.e., a greater number of
regions results in a higher signal-to-noise ratio (SNR) but also a larger
signalling overhead for the database, we propose an optimization framework to
find the minimum number of regions based on the trajectory of a mobile device.
Using a ray tracing tool, we demonstrate that the proposed method provides high
SNR while being more robust to the location information accuracy in comparison
to the lookup table baseline and fixed size region baseline.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 19:29:24 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 20:20:19 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Khosravi",
"Sara",
""
],
[
"Ghadikolaei",
"Hossein S.",
""
],
[
"Zander",
"Jens",
""
],
[
"Petrova",
"Marina",
""
]
] |
new_dataset
| 0.999239 |
2209.07048
|
Yue Liu
|
Yue Liu and Chakkrit Tantithamthavorn and Yonghui Liu and Patanamon
Thongtanunam and Li Li
|
AutoUpdate: Automatically Recommend Code Updates for Android Apps
|
Under review at a SE journal
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Android has become the predominant smartphone operating system, with a
rapidly evolving ecosystem that requires app developers to frequently update
their apps to maintain quality, security, and compatibility. While deep
learning has made significant strides in various software engineering tasks,
including automated code updates, existing methods are not specifically
tailored for Android apps, and the potential of pre-trained Language Models of
Code (CodeLMs) for updating Android app code remains unexplored. In this paper,
we present the first comprehensive evaluation of state-of-the-art CodeLMs,
including CodeT5, CodeBERT, CodeGPT, and UniXcoder, for recommending code
updates in Android applications. To facilitate this evaluation, we curate a
unique dataset of paired updated methods from 3,195 Android apps published on
Google Play and hosted on GitHub between 2008 and 2022. Our findings
demonstrate that pre-trained CodeLMs outperform traditional approaches,
achieving a higher accuracy ranging from 190% to 385% under a realistic
time-wise evaluation scenario. Among the CodeLMs, CodeT5 consistently exhibits
superior performance across most code update types. Furthermore, we examine the
impact of update types, evaluation scenarios, method size, and update size on
the performance of CodeLMs, revealing areas for future research to improve
temporal adaptability and generalization capabilities.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 05:07:25 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2023 15:14:42 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Liu",
"Yue",
""
],
[
"Tantithamthavorn",
"Chakkrit",
""
],
[
"Liu",
"Yonghui",
""
],
[
"Thongtanunam",
"Patanamon",
""
],
[
"Li",
"Li",
""
]
] |
new_dataset
| 0.997526 |
2210.03625
|
Andrew Rouditchenko
|
Andrew Rouditchenko, Yung-Sung Chuang, Nina Shvetsova, Samuel Thomas,
Rogerio Feris, Brian Kingsbury, Leonid Karlinsky, David Harwath, Hilde
Kuehne, James Glass
|
C2KD: Cross-Lingual Cross-Modal Knowledge Distillation for Multilingual
Text-Video Retrieval
|
Accepted at ICASSP 2023. The code, models, and dataset are available
at https://github.com/roudimit/c2kd
| null | null | null |
cs.CL cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multilingual text-video retrieval methods have improved significantly in
recent years, but the performance for other languages lags behind English. We
propose a Cross-Lingual Cross-Modal Knowledge Distillation method to improve
multilingual text-video retrieval. Inspired by the fact that English text-video
retrieval outperforms other languages, we train a student model using input
text in different languages to match the cross-modal predictions from teacher
models using input text in English. We propose a cross entropy based objective
which forces the distribution over the student's text-video similarity scores
to be similar to those of the teacher models. We introduce a new multilingual
video dataset, Multi-YouCook2, by translating the English captions in the
YouCook2 video dataset to 8 other languages. Our method improves multilingual
text-video retrieval performance on Multi-YouCook2 and several other datasets
such as Multi-MSRVTT and VATEX. We also conducted an analysis on the
effectiveness of different multilingual text models as teachers. The code,
models, and dataset are available at https://github.com/roudimit/c2kd.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 15:30:24 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 19:58:59 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Rouditchenko",
"Andrew",
""
],
[
"Chuang",
"Yung-Sung",
""
],
[
"Shvetsova",
"Nina",
""
],
[
"Thomas",
"Samuel",
""
],
[
"Feris",
"Rogerio",
""
],
[
"Kingsbury",
"Brian",
""
],
[
"Karlinsky",
"Leonid",
""
],
[
"Harwath",
"David",
""
],
[
"Kuehne",
"Hilde",
""
],
[
"Glass",
"James",
""
]
] |
new_dataset
| 0.98951 |
2210.04847
|
Ruilong Li
|
Ruilong Li, Matthew Tancik and Angjoo Kanazawa
|
NerfAcc: A General NeRF Acceleration Toolbox
|
Webpage: https://www.nerfacc.com/; Updated Write-up: arXiv:2305.04966
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We propose NerfAcc, a toolbox for efficient volumetric rendering of radiance
fields. We build on the techniques proposed in Instant-NGP, and extend these
techniques to not only support bounded static scenes, but also for dynamic
scenes and unbounded scenes. NerfAcc comes with a user-friendly Python API, and
is ready for plug-and-play acceleration of most NeRFs. Various examples are
provided to show how to use this toolbox. Code can be found here:
https://github.com/KAIR-BAIR/nerfacc. Note this write-up matches with NerfAcc
v0.3.5. For the latest features in NerfAcc, please check out our more recent
write-up at arXiv:2305.04966
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 17:03:23 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2022 05:41:45 GMT"
},
{
"version": "v3",
"created": "Wed, 10 May 2023 05:31:59 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Li",
"Ruilong",
""
],
[
"Tancik",
"Matthew",
""
],
[
"Kanazawa",
"Angjoo",
""
]
] |
new_dataset
| 0.999296 |
2211.02477
|
Wolfgang Kircheis
|
Janek Bevendorff, Philipp Sauer, Lukas Gienapp, Wolfgang Kircheis,
Erik K\"orner, Benno Stein, Martin Potthast
|
SMAuC -- The Scientific Multi-Authorship Corpus
| null | null | null | null |
cs.CL cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
The rapidly growing volume of scientific publications offers an interesting
challenge for research on methods for analyzing the authorship of documents
with one or more authors. However, most existing datasets lack scientific
documents or the necessary metadata for constructing new experiments and test
cases. We introduce SMAuC, a comprehensive, metadata-rich corpus tailored to
scientific authorship analysis. Comprising over 3 million publications across
various disciplines from over 5 million authors, SMAuC is the largest openly
accessible corpus for this purpose. It encompasses scientific texts from the
humanities and natural sciences, accompanied by extensive, curated metadata,
including unambiguous author IDs. SMAuC aims to significantly advance the
domain of authorship analysis in scientific texts.
|
[
{
"version": "v1",
"created": "Fri, 4 Nov 2022 14:07:17 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2023 12:21:38 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Bevendorff",
"Janek",
""
],
[
"Sauer",
"Philipp",
""
],
[
"Gienapp",
"Lukas",
""
],
[
"Kircheis",
"Wolfgang",
""
],
[
"Körner",
"Erik",
""
],
[
"Stein",
"Benno",
""
],
[
"Potthast",
"Martin",
""
]
] |
new_dataset
| 0.999762 |
2212.02842
|
Vajira Thambawita
|
Vajira Thambawita, Steven A. Hicks, Andrea M. Stor{\aa}s, Thu Nguyen,
Jorunn M. Andersen, Oliwia Witczak, Trine B. Haugen, Hugo L. Hammer, P{\aa}l
Halvorsen, Michael A. Riegler
|
VISEM-Tracking, a human spermatozoa tracking dataset
| null |
Sci Data 10, 260 (2023)
|
10.1038/s41597-023-02173-4
|
Scientific Data volume 10
|
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
A manual assessment of sperm motility requires microscopy observation, which
is challenging due to the fast-moving spermatozoa in the field of view. To
obtain correct results, manual evaluation requires extensive training.
Therefore, computer-assisted sperm analysis (CASA) has become increasingly used
in clinics. Despite this, more data is needed to train supervised machine
learning approaches in order to improve accuracy and reliability in the
assessment of sperm motility and kinematics. In this regard, we provide a
dataset called VISEM-Tracking with 20 video recordings of 30 seconds
(comprising 29,196 frames) of wet sperm preparations with manually annotated
bounding-box coordinates and a set of sperm characteristics analyzed by experts
in the domain. In addition to the annotated data, we provide unlabeled video
clips for easy-to-use access and analysis of the data via methods such as self-
or unsupervised learning. As part of this paper, we present baseline sperm
detection performances using the YOLOv5 deep learning (DL) model trained on the
VISEM-Tracking dataset. As a result, we show that the dataset can be used to
train complex DL models to analyze spermatozoa.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2022 09:25:52 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Dec 2022 08:56:13 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Apr 2023 06:59:26 GMT"
},
{
"version": "v4",
"created": "Wed, 26 Apr 2023 06:03:46 GMT"
},
{
"version": "v5",
"created": "Wed, 10 May 2023 07:10:31 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Thambawita",
"Vajira",
""
],
[
"Hicks",
"Steven A.",
""
],
[
"Storås",
"Andrea M.",
""
],
[
"Nguyen",
"Thu",
""
],
[
"Andersen",
"Jorunn M.",
""
],
[
"Witczak",
"Oliwia",
""
],
[
"Haugen",
"Trine B.",
""
],
[
"Hammer",
"Hugo L.",
""
],
[
"Halvorsen",
"Pål",
""
],
[
"Riegler",
"Michael A.",
""
]
] |
new_dataset
| 0.999812 |
2303.14613
|
Tenglong Ao
|
Tenglong Ao, Zeyi Zhang, Libin Liu
|
GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
|
SIGGRAPH 2023 (Journal Track); Project Page:
https://pku-mocca.github.io/GestureDiffuCLIP-Page/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The automatic generation of stylized co-speech gestures has recently received
increasing attention. Previous systems typically allow style control via
predefined text labels or example motion clips, which are often not flexible
enough to convey user intent accurately. In this work, we present
GestureDiffuCLIP, a neural network framework for synthesizing realistic,
stylized co-speech gestures with flexible style control. We leverage the power
of the large-scale Contrastive-Language-Image-Pre-training (CLIP) model and
present a novel CLIP-guided mechanism that extracts efficient style
representations from multiple input modalities, such as a piece of text, an
example motion clip, or a video. Our system learns a latent diffusion model to
generate high-quality gestures and infuses the CLIP representations of style
into the generator via an adaptive instance normalization (AdaIN) layer. We
further devise a gesture-transcript alignment mechanism that ensures a
semantically correct gesture generation based on contrastive learning. Our
system can also be extended to allow fine-grained style control of individual
body parts. We demonstrate an extensive set of examples showing the flexibility
and generalizability of our model to a variety of style descriptions. In a user
study, we show that our system outperforms the state-of-the-art approaches
regarding human likeness, appropriateness, and style correctness.
|
[
{
"version": "v1",
"created": "Sun, 26 Mar 2023 03:35:46 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 10:56:36 GMT"
},
{
"version": "v3",
"created": "Wed, 10 May 2023 05:41:55 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Ao",
"Tenglong",
""
],
[
"Zhang",
"Zeyi",
""
],
[
"Liu",
"Libin",
""
]
] |
new_dataset
| 0.998019 |
2304.03771
|
Brenda Elizabeth Olivas Padilla MSc
|
Brenda Elizabeth Olivas-Padilla, Alina Glushkova and Sotiris
Manitsaris
|
Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts
for Human Movement Analysis
| null | null |
10.1109/ACCESS.2023.3269581
| null |
cs.RO cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Human movement analysis is a key area of research in robotics, biomechanics,
and data science. It encompasses tracking, posture estimation, and movement
synthesis. While numerous methodologies have evolved over time, a systematic
and quantitative evaluation of these approaches using verifiable ground truth
data of three-dimensional human movement is still required to define the
current state of the art. This paper presents seven datasets recorded using
inertial-based motion capture. The datasets contain professional gestures
carried out by industrial operators and skilled craftsmen performed in real
conditions in-situ. The datasets were created with the intention of being used
for research in human motion modeling, analysis, and generation. The protocols
for data collection are described in detail, and a preliminary analysis of the
collected data is provided as a benchmark. The Gesture Operational Model, a
hybrid stochastic-biomechanical approach based on kinematic descriptors, is
utilized to model the dynamics of the experts' movements and create
mathematical representations of their motion trajectories for analysis and
quantifying their body dexterity. The models allowed the accurate generation of
human professional poses and an intuitive description of how body joints
cooperate and change over time through the performance of the task.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 10:29:24 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Olivas-Padilla",
"Brenda Elizabeth",
""
],
[
"Glushkova",
"Alina",
""
],
[
"Manitsaris",
"Sotiris",
""
]
] |
new_dataset
| 0.999626 |
2304.13337
|
Fabian Birkmann
|
Fabian Birkmann, Stefan Milius and Henning Urbat
|
Nominal Topology for Data Languages
|
Extended version of the corresponding paper accepted for ICALP 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel topological perspective on data languages recognizable by
orbit-finite nominal monoids. For this purpose, we introduce pro-orbit-finite
nominal topological spaces. Assuming globally bounded support sizes, they
coincide with nominal Stone spaces and are shown to be dually equivalent to a
subcategory of nominal boolean algebras. Recognizable data languages are
characterized as topologically clopen sets of pro-orbit-finite words. In
addition, we explore the expressive power of pro-orbit-finite equations by
establishing a nominal version of Reiterman's pseudovariety theorem.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 07:11:44 GMT"
},
{
"version": "v2",
"created": "Wed, 10 May 2023 06:50:37 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Birkmann",
"Fabian",
""
],
[
"Milius",
"Stefan",
""
],
[
"Urbat",
"Henning",
""
]
] |
new_dataset
| 0.997161 |
2305.04434
|
Michael Rabinovich
|
Dallan Goldblatt and Calvin Vuong and Michael Rabinovich
|
On Blowback Traffic on the Internet
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper considers the phenomenon where a single probe to a target
generates multiple, sometimes numerous, packets in response -- which we term
"blowback". Understanding blowback is important because attackers can leverage
it to launch amplified denial of service attacks by redirecting blowback
towards a victim. Blowback also has serious implications for Internet
researchers since their experimental setups must cope with bursts of blowback
traffic. We find that tens of thousands, and in some protocols, hundreds of
thousands, of hosts generate blowback, with orders of magnitude amplification
on average. In fact, some prolific blowback generators produce millions of
response packets in the aftermath of a single probe. We also find that blowback
generators are fairly stable over periods of weeks, so once identified, many of
these hosts can be exploited by attackers for a long time.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 03:08:02 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 22:15:32 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Goldblatt",
"Dallan",
""
],
[
"Vuong",
"Calvin",
""
],
[
"Rabinovich",
"Michael",
""
]
] |
new_dataset
| 0.99753 |
2305.05706
|
Helin Xu
|
Chen Bao, Helin Xu, Yuzhe Qin, Xiaolong Wang
|
DexArt: Benchmarking Generalizable Dexterous Manipulation with
Articulated Objects
|
Accepted to CVPR 2023. Project page: https://www.chenbao.tech/dexart/
Equal contributors: Chen Bao, Helin Xu
| null | null | null |
cs.RO cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
To enable general-purpose robots, we will require the robot to operate everyday
articulated objects as humans do. Current robot manipulation has heavily relied
on using a parallel gripper, which restricts the robot to a limited set of
objects. On the other hand, operating with a multi-finger robot hand will allow
better approximation to human behavior and enable the robot to operate on
diverse articulated objects. To this end, we propose a new benchmark called
DexArt, which involves Dexterous manipulation with Articulated objects in a
physical simulator. In our benchmark, we define multiple complex manipulation
tasks, and the robot hand will need to manipulate diverse articulated objects
within each task. Our main focus is to evaluate the generalizability of the
learned policy on unseen articulated objects. This is very challenging given
the high degrees of freedom of both hands and objects. We use Reinforcement
Learning with 3D representation learning to achieve generalization. Through
extensive studies, we provide new insights into how 3D representation learning
affects decision making in RL with 3D point cloud inputs. More details can be
found at https://www.chenbao.tech/dexart/.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 18:30:58 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Bao",
"Chen",
""
],
[
"Xu",
"Helin",
""
],
[
"Qin",
"Yuzhe",
""
],
[
"Wang",
"Xiaolong",
""
]
] |
new_dataset
| 0.999847 |
2305.05718
|
Yung-Fu Chen
|
Yung-Fu Chen, Kenneth W. Parker, Anish Arora
|
QF-Geo: Capacity Aware Geographic Routing using Bounded Regions of
Wireless Meshes
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Routing in wireless meshes must detour around holes. Extant routing protocols
often underperform in minimally connected networks where holes are larger and
more frequent. Minimal density networks are common in practice due to
deployment cost constraints, mobility dynamics, and/or adversarial jamming.
Protocols that use global search to determine optimal paths incur search
overhead that limits scaling. Conversely, protocols that use local search tend
to find approximately optimal paths at higher densities due to the existence of
geometrically direct routes but underperform as the connectivity lowers and
regional (or global) information is required to address holes. Designing a
routing protocol to achieve high throughput-latency performance across network
densities, mobility, and interference dynamics remains challenging.
This paper shows that, in a probabilistic setting, bounded exploration can be
leveraged to mitigate this challenge. We show, first, that the length of
shortest paths in networks with uniform random node distribution can, with high
probability (whp), be bounded. Thus, whp a shortest path may be found by
limiting exploration to an elliptic region whose size is a function of the
network density and the Euclidean distance between the two endpoints. Second,
we propose a geographic routing protocol that achieves high reliability and
throughput-latency performance by forwarding packets within an ellipse whose
size is bounded similarly and by an estimate of the available capacity. Our
protocol, QF-Geo, selects forwarding relays within the elliptic region,
prioritizing those with sufficient capacity to avoid bottlenecks. Our
simulation results show that QF-Geo achieves high goodput efficiency and
reliability in both static and mobile networks across both low and high
densities, at large scales, with a wide range of concurrent flows, and in the
presence of adversarial jamming.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 19:00:20 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Chen",
"Yung-Fu",
""
],
[
"Parker",
"Kenneth W.",
""
],
[
"Arora",
"Anish",
""
]
] |
new_dataset
| 0.97637 |
2305.05763
|
Nadja Willenborg
|
Nadja Willenborg, Anna-Lena Horlemann and Violetta Weger
|
On the Number of $t$-Lee-Error-Correcting Codes
| null | null | null | null |
cs.IT cs.DM cs.DS math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We consider $t$-Lee-error-correcting codes of length $n$ over the residue
ring $\mathbb{Z}_m := \mathbb{Z}/m\mathbb{Z}$ and determine upper and lower
bounds on the number of $t$-Lee-error-correcting codes. We use two different
methods, namely estimating isolated nodes on bipartite graphs and the graph
container method. The former gives density results for codes of fixed size and
the latter for any size. This confirms some recent density results for linear
Lee metric codes and provides new density results for nonlinear codes. To apply
a variant of the graph container algorithm we also investigate some geometrical
properties of the balls in the Lee metric.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 20:44:34 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Willenborg",
"Nadja",
""
],
[
"Horlemann",
"Anna-Lena",
""
],
[
"Weger",
"Violetta",
""
]
] |
new_dataset
| 0.996735 |
2305.05784
|
Kirill Trapeznikov
|
Brandon B. May, Kirill Trapeznikov, Shengbang Fang, Matthew C. Stamm
|
Comprehensive Dataset of Synthetic and Manipulated Overhead Imagery for
Development and Evaluation of Forensic Tools
| null | null | null | null |
cs.CV cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a first-of-its-kind dataset of overhead imagery for development
and evaluation of forensic tools. Our dataset consists of real, fully synthetic
and partially manipulated overhead imagery generated from a custom diffusion
model trained on two sets of different zoom levels and on two sources of
pristine data. We developed our model to support controllable generation of
multiple manipulation categories including fully synthetic imagery conditioned
on real and generated base maps, and location. We also support partial
in-painted imagery with the same conditioning options and with several types of
manipulated content. The data consist of raw images and ground truth
annotations describing the manipulation parameters. We also report benchmark
performance on several tasks supported by our dataset including detection of
fully and partially manipulated imagery, manipulation localization and
classification.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 22:09:35 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"May",
"Brandon B.",
""
],
[
"Trapeznikov",
"Kirill",
""
],
[
"Fang",
"Shengbang",
""
],
[
"Stamm",
"Matthew C.",
""
]
] |
new_dataset
| 0.999499 |
2305.05858
|
Rahul Aralikatte
|
Rahul Aralikatte, Ziling Cheng, Sumanth Doddapaneni, Jackie Chi Kit
Cheung
|
V\=arta: A Large-Scale Headline-Generation Dataset for Indic Languages
|
Findings of ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present V\=arta, a large-scale multilingual dataset for headline
generation in Indic languages. This dataset includes 41.8 million news articles
in 14 different Indic languages (and English), which come from a variety of
high-quality sources. To the best of our knowledge, this is the largest
collection of curated articles for Indic languages currently available. We use
the data collected in a series of experiments to answer important questions
related to Indic NLP and multilinguality research in general. We show that the
dataset is challenging even for state-of-the-art abstractive models and that
they perform only slightly better than extractive baselines. Owing to its size,
we also show that the dataset can be used to pretrain strong language models
that outperform competitive baselines in both NLU and NLG benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 03:07:17 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Aralikatte",
"Rahul",
""
],
[
"Cheng",
"Ziling",
""
],
[
"Doddapaneni",
"Sumanth",
""
],
[
"Cheung",
"Jackie Chi Kit",
""
]
] |
new_dataset
| 0.999818 |
2305.05928
|
Kenichiro Ando
|
Kenichiro Ando, Satoshi Sekine, Mamoru Komachi
|
WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in
Wikipedia
|
First draft
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Wikipedia can be edited by anyone and thus contains sentences of varying
quality. As a result, Wikipedia includes some poor-quality edits, which are
often marked up by other editors. While editors' reviews enhance the
credibility of Wikipedia, it is hard to check all edited text. Assisting in
this process is very important, but a large and comprehensive dataset for
studying it does not currently exist. Here, we propose WikiSQE, the first
large-scale dataset for sentence quality estimation in Wikipedia. Each sentence
is extracted from the entire revision history of Wikipedia, and the target
quality labels were carefully investigated and selected. WikiSQE has about 3.4
million sentences with 153 quality labels. In the experiment with automatic
classification using competitive machine learning models, sentences that had
problems with citation, syntax/semantics, or propositions were found to be more
difficult to detect. In addition, we conducted automated essay scoring
experiments to evaluate the generalizability of the dataset. We show that the
models trained on WikiSQE perform better than the vanilla model, indicating its
potential usefulness in other domains. WikiSQE is expected to be a valuable
resource for other tasks in NLP.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 06:45:13 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Ando",
"Kenichiro",
""
],
[
"Sekine",
"Satoshi",
""
],
[
"Komachi",
"Mamoru",
""
]
] |
new_dataset
| 0.999825 |
2305.05938
|
Haibao Yu
|
Haibao Yu, Wenxian Yang, Hongzhi Ruan, Zhenwei Yang, Yingjuan Tang, Xu
Gao, Xin Hao, Yifeng Shi, Yifeng Pan, Ning Sun, Juan Song, Jirui Yuan, Ping
Luo, Zaiqing Nie
|
V2X-Seq: A Large-Scale Sequential Dataset for Vehicle-Infrastructure
Cooperative Perception and Forecasting
|
CVPR2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Utilizing infrastructure and vehicle-side information to track and forecast
the behaviors of surrounding traffic participants can significantly improve
decision-making and safety in autonomous driving. However, the lack of
real-world sequential datasets limits research in this area. To address this
issue, we introduce V2X-Seq, the first large-scale sequential V2X dataset,
which includes data frames, trajectories, vector maps, and traffic lights
captured from natural scenery. V2X-Seq comprises two parts: the sequential
perception dataset, which includes more than 15,000 frames captured from 95
scenarios, and the trajectory forecasting dataset, which contains about 80,000
infrastructure-view scenarios, 80,000 vehicle-view scenarios, and 50,000
cooperative-view scenarios captured from 28 intersections' areas, covering 672
hours of data. Based on V2X-Seq, we introduce three new tasks for
vehicle-infrastructure cooperative (VIC) autonomous driving: VIC3D Tracking,
Online-VIC Forecasting, and Offline-VIC Forecasting. We also provide benchmarks
for the introduced tasks. Find data, code, and more up-to-date information at
\href{https://github.com/AIR-THU/DAIR-V2X-Seq}{https://github.com/AIR-THU/DAIR-V2X-Seq}.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 07:20:51 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Yu",
"Haibao",
""
],
[
"Yang",
"Wenxian",
""
],
[
"Ruan",
"Hongzhi",
""
],
[
"Yang",
"Zhenwei",
""
],
[
"Tang",
"Yingjuan",
""
],
[
"Gao",
"Xu",
""
],
[
"Hao",
"Xin",
""
],
[
"Shi",
"Yifeng",
""
],
[
"Pan",
"Yifeng",
""
],
[
"Sun",
"Ning",
""
],
[
"Song",
"Juan",
""
],
[
"Yuan",
"Jirui",
""
],
[
"Luo",
"Ping",
""
],
[
"Nie",
"Zaiqing",
""
]
] |
new_dataset
| 0.999829 |
2305.05957
|
Bohan Li
|
Bohan Li, Diego Dupleich, Guoqing Xia, Huiyu Zhou, Yue Zhang, Pei
Xiao, Lie-Liang Yang
|
MDD-Enabled Two-Tier Terahertz Fronthaul in Indoor Industrial Cell-Free
Massive MIMO
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
To make indoor industrial cell-free massive multiple-input multiple-output
(CF-mMIMO) networks free from wired fronthaul, this paper studies a
multicarrier-division duplex (MDD)-enabled two-tier terahertz (THz) fronthaul
scheme. More specifically, two layers of fronthaul links rely on the mutually
orthogonal subcarreir sets in the same THz band, while access links are
implemented over sub-6G band. The proposed scheme leads to a complicated
mixed-integer nonconvex optimization problem incorporating access point (AP)
clustering, device selection, the assignment of subcarrier sets between two
fronthaul links and the resource allocation at both the central processing unit
(CPU) and APs. In order to address the formulated problem, we first resort to
the low-complexity but efficient heuristic methods thereby relaxing the binary
variables. Then, the overall end-to-end rate is obtained by iteratively
optimizing the assignment of subcarrier sets and the number of AP clusters.
Furthermore, an advanced MDD frame structure consisting of three parallel data
streams is tailored for the proposed scheme. Simulation results demonstrate the
effectiveness of the proposed dynamic AP clustering approach in dealing with
the varying sizes of networks. Moreover, benefiting from the well-designed
frame structure, MDD is capable of outperforming TDD in the two-tier fronthaul
networks. Additionally, the effect of the THz bandwidth on system performance
is analyzed, and it is shown that with sufficient frequency resources, our
proposed two-tier fully-wireless fronthaul scheme can achieve a comparable
performance to the fiber-optic based systems. Finally, the superiority of the
proposed MDD-enabled fronthaul scheme is verified in a practical scenario with
realistic ray-tracing simulations.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 08:00:24 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Li",
"Bohan",
""
],
[
"Dupleich",
"Diego",
""
],
[
"Xia",
"Guoqing",
""
],
[
"Zhou",
"Huiyu",
""
],
[
"Zhang",
"Yue",
""
],
[
"Xiao",
"Pei",
""
],
[
"Yang",
"Lie-Liang",
""
]
] |
new_dataset
| 0.976324 |