id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2301.10559
|
Chamath Abeysinghe
|
Chamath Abeysinghe, Chris Reid, Hamid Rezatofighi and Bernd Meyer
|
Tracking Different Ant Species: An Unsupervised Domain Adaptation
Framework and a Dataset for Multi-object Tracking
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Tracking individuals is a vital part of many experiments conducted to
understand collective behaviour. Ants are the paradigmatic model system for
such experiments but their lack of individually distinguishing visual features
and their high colony densities make it extremely difficult to perform reliable
tracking automatically. Additionally, the wide diversity of their species'
appearances makes a generalized approach even harder. In this paper, we propose
a data-driven multi-object tracker that, for the first time, employs domain
adaptation to achieve the required generalisation. This approach is built upon
a joint-detection-and-tracking framework that is extended by a set of domain
discriminator modules integrating an adversarial training strategy in addition
to the tracking loss. In addition to this novel domain-adaptive tracking
framework, we present a new dataset and a benchmark for the ant tracking
problem. The dataset contains 57 video sequences with full trajectory
annotation, including 30k frames captured from two different ant species moving
on different background patterns. It comprises 33 and 24 sequences for source
and target domains, respectively. We compare our proposed framework against
other domain-adaptive and non-domain-adaptive multi-object tracking baselines
using this dataset and show that incorporating domain adaptation at multiple
levels of the tracking pipeline yields significant improvements. The code and
the dataset are available at https://github.com/chamathabeysinghe/da-tracker.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 13:00:16 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 09:46:02 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Abeysinghe",
"Chamath",
""
],
[
"Reid",
"Chris",
""
],
[
"Rezatofighi",
"Hamid",
""
],
[
"Meyer",
"Bernd",
""
]
] |
new_dataset
| 0.956285 |
2302.10640
|
David Kurniadi Angdinata
|
David Kurniadi Angdinata and Junyan Xu
|
An Elementary Formal Proof of the Group Law on Weierstrass Elliptic
Curves in any Characteristic
|
Submitted to 14th International Conference on Interactive Theorem
Proving (ITP 2023), source code in
https://github.com/alreadydone/mathlib/tree/associativity
| null | null | null |
cs.LO math.AC math.AG math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
Elliptic curves are fundamental objects in number theory and algebraic
geometry, whose points over a field form an abelian group under a geometric
addition law. Any elliptic curve over a field admits a Weierstrass model, but
prior formal proofs that the addition law is associative in this model involve
either advanced algebraic geometry or tedious computation, especially in
characteristic two. We formalise, in the Lean theorem prover, the type of
nonsingular points of a Weierstrass curve over a field of any characteristic,
together with a purely algebraic proof that it forms an abelian group.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 12:57:39 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 22:40:53 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Angdinata",
"David Kurniadi",
""
],
[
"Xu",
"Junyan",
""
]
] |
new_dataset
| 0.988787 |
2302.14859
|
Lior Yariv
|
Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P.
Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall
|
BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis
|
Video and interactive web demo available at
https://bakedsdf.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method for reconstructing high-quality meshes of large unbounded
real-world scenes suitable for photorealistic novel view synthesis. We first
optimize a hybrid neural volume-surface scene representation designed to have
well-behaved level sets that correspond to surfaces in the scene. We then bake
this representation into a high-quality triangle mesh, which we equip with a
simple and fast view-dependent appearance model based on spherical Gaussians.
Finally, we optimize this baked representation to best reproduce the captured
viewpoints, resulting in a model that can leverage accelerated polygon
rasterization pipelines for real-time view synthesis on commodity hardware. Our
approach outperforms previous scene representations for real-time rendering in
terms of accuracy, speed, and power consumption, and produces high-quality
meshes that enable applications such as appearance editing and physical
simulation.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 18:58:03 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 15:01:42 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Yariv",
"Lior",
""
],
[
"Hedman",
"Peter",
""
],
[
"Reiser",
"Christian",
""
],
[
"Verbin",
"Dor",
""
],
[
"Srinivasan",
"Pratul P.",
""
],
[
"Szeliski",
"Richard",
""
],
[
"Barron",
"Jonathan T.",
""
],
[
"Mildenhall",
"Ben",
""
]
] |
new_dataset
| 0.976 |
2303.04178
|
Emily Wenger
|
Cathy Li, Jana Sotáková, Emily Wenger, Mohamed Malhou, Evrard
Garcelon, Francois Charton, Kristin Lauter
|
SALSA PICANTE: a machine learning attack on LWE with binary secrets
|
15 pages, 6 figures, 17 tables
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning with Errors (LWE) is a hard math problem underpinning many proposed
post-quantum cryptographic (PQC) systems. The only PQC Key Encapsulation
Mechanism (KEM) standardized by NIST is based on module LWE, and current publicly
available PQ Homomorphic Encryption (HE) libraries are based on ring LWE. The
security of LWE-based PQ cryptosystems is critical, but certain implementation
choices could weaken them. One such choice is sparse binary secrets, desirable
for PQ HE schemes for efficiency reasons. Prior work, SALSA, demonstrated a
machine learning-based attack on LWE with sparse binary secrets in small
dimensions ($n \le 128$) and low Hamming weights ($h \le 4$). However, this
attack assumes access to millions of eavesdropped LWE samples and fails at
higher Hamming weights or dimensions.
We present PICANTE, an enhanced machine learning attack on LWE with sparse
binary secrets, which recovers secrets in much larger dimensions (up to
$n=350$) and with larger Hamming weights (roughly $n/10$, and up to $h=60$ for
$n=350$). We achieve this dramatic improvement via a novel preprocessing step,
which allows us to generate training data from a linear number of eavesdropped
LWE samples ($4n$) and changes the distribution of the data to improve
transformer training. We also improve the secret recovery methods of SALSA and
introduce a novel cross-attention recovery mechanism allowing us to read off
the secret directly from the trained models. While PICANTE does not threaten
NIST's proposed LWE standards, it demonstrates significant improvement over
SALSA and could scale further, highlighting the need for future investigation
into machine learning attacks on LWE with sparse binary secrets.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 19:01:01 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 20:17:43 GMT"
},
{
"version": "v3",
"created": "Tue, 16 May 2023 15:19:20 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Li",
"Cathy",
""
],
[
"Sotáková",
"Jana",
""
],
[
"Wenger",
"Emily",
""
],
[
"Malhou",
"Mohamed",
""
],
[
"Garcelon",
"Evrard",
""
],
[
"Charton",
"Francois",
""
],
[
"Lauter",
"Kristin",
""
]
] |
new_dataset
| 0.999348 |
2303.08394
|
Srinath Kailasa
|
Srinath Kailasa and Tingyu Wang and Lorena A. Barba and Timo Betcke
|
PyExaFMM: an exercise in designing high-performance software with Python
and Numba
|
10 pages, 3 figures
|
Computing in Science & Engineering, vol. 24, no. 05, pp. 77-84,
2022
|
10.1109/MCSE.2023.3258288
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numba is a game-changing compiler for high-performance computing with Python.
It produces machine code that runs outside of the single-threaded Python
interpreter and that fully utilizes the resources of modern CPUs. This means
support for parallel multithreading and auto vectorization if available, as
with compiled languages such as C++ or Fortran. In this article we document our
experience developing PyExaFMM, a multithreaded Numba implementation of the
Fast Multipole Method, an algorithm with a non-linear data structure and a
large amount of data organization. We find that designing performant Numba code
for complex algorithms can be as challenging as writing in a compiled language.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 06:51:42 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 09:18:47 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Apr 2023 14:43:43 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Kailasa",
"Srinath",
""
],
[
"Wang",
"Tingyu",
""
],
[
"Barba",
"Lorena A.",
""
],
[
"Betcke",
"Timo",
""
]
] |
new_dataset
| 0.995197 |
2304.00793
|
Juan Lagos
|
Juan Lagos, Urho Lempiö and Esa Rahtu
|
FinnWoodlands Dataset
|
Scandinavian Conference on Image Analysis 2023
| null |
10.1007/978-3-031-31435-3_7
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
While the availability of large and diverse datasets has contributed to
significant breakthroughs in autonomous driving and indoor applications,
forestry applications still lag behind, and new forest datasets would
contribute to significant progress in the development of data-driven methods
for forest-like scenarios. This paper introduces a
forest dataset called \textit{FinnWoodlands}, which consists of RGB stereo
images, point clouds, and sparse depth maps, as well as ground truth manual
annotations for semantic, instance, and panoptic segmentation.
\textit{FinnWoodlands} comprises a total of 4226 manually annotated objects,
of which 2562 (60.6\%) correspond to tree trunks classified into
three different instance categories, namely "Spruce Tree", "Birch Tree", and
"Pine Tree". Besides tree trunks, we also annotated "Obstacles" objects as
instances as well as the semantic stuff classes "Lake", "Ground", and "Track".
Our dataset can be used in forestry applications where a holistic
representation of the environment is relevant. We provide an initial benchmark
using three models for instance segmentation, panoptic segmentation, and depth
completion, and illustrate the challenges that such unstructured scenarios
introduce.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 08:28:13 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Lagos",
"Juan",
""
],
[
"Lempiö",
"Urho",
""
],
[
"Rahtu",
"Esa",
""
]
] |
new_dataset
| 0.999812 |
2304.05071
|
RuiYang Ju
|
Rui-Yang Ju, Weiming Cai
|
Fracture Detection in Pediatric Wrist Trauma X-ray Images Using YOLOv8
Algorithm
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hospital emergency departments frequently receive many bone fracture
cases, with pediatric wrist trauma fracture accounting for the majority of
them. Before pediatric surgeons perform surgery, they need to ask patients how
the fracture occurred and analyze the fracture situation by interpreting X-ray
images. The interpretation of X-ray images often requires a combination of
techniques from radiologists and surgeons, which requires time-consuming
specialized training. With the rise of deep learning in the field of computer
vision, network models applied to fracture detection have become an important
research topic. In this paper, we train YOLOv8 (the latest version of You Only
Look Once) model on the GRAZPEDWRI-DX dataset, and use data augmentation to
improve the model performance. The experimental results show that our model
has achieved state-of-the-art (SOTA) real-time model performance.
Specifically, compared to YOLOv8s models, the mean average precision (mAP 50)
of our models improves from 0.604 and 0.625 to 0.612 and 0.631 at input
image sizes of 640 and 1024, respectively. To enable surgeons to use our model
for fracture detection on pediatric wrist trauma X-ray images, we have designed
the application "Fracture Detection Using YOLOv8 App" to assist surgeons in
diagnosing fractures, reducing the probability of analysis errors, and providing
more useful information for surgery. Our implementation code is released at
https://github.com/RuiyangJu/Bone_Fracture_Detection_YOLOv8.
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 09:08:09 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2023 13:05:45 GMT"
},
{
"version": "v3",
"created": "Tue, 16 May 2023 16:10:03 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Ju",
"Rui-Yang",
""
],
[
"Cai",
"Weiming",
""
]
] |
new_dataset
| 0.993815 |
2304.09915
|
Di Wang
|
Di Wang, Jing Zhang, Bo Du, Liangpei Zhang and Dacheng Tao
|
DCN-T: Dual Context Network with Transformer for Hyperspectral Image
Classification
|
Accepted by IEEE TIP. The code will be released at
https://github.com/DotWang/DCN-T
| null |
10.1109/TIP.2023.3270104
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hyperspectral image (HSI) classification is challenging due to spatial
variability caused by complex imaging conditions. Prior methods suffer from
limited representation ability, as they train specially designed networks from
scratch on limited annotated data. We propose a tri-spectral image generation
pipeline that transforms HSI into high-quality tri-spectral images, enabling
the use of off-the-shelf ImageNet pretrained backbone networks for feature
extraction. Motivated by the observation that there are many homogeneous areas
with distinguished semantic and geometric properties in HSIs, which can be used
to extract useful contexts, we propose an end-to-end segmentation network named
DCN-T. It adopts transformers to effectively encode regional adaptation and
global aggregation spatial contexts within and between the homogeneous areas
discovered by similarity-based clustering. To fully exploit the rich spectra
of the HSI, we adopt an ensemble approach where all segmentation results of the
tri-spectral images are integrated into the final prediction through a voting
scheme. Extensive experiments on three public benchmarks show that our proposed
method outperforms state-of-the-art methods for HSI classification.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 18:32:52 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Wang",
"Di",
""
],
[
"Zhang",
"Jing",
""
],
[
"Du",
"Bo",
""
],
[
"Zhang",
"Liangpei",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.997089 |
2304.14211
|
Marcell Tamás Kurbucz
|
Marcell T. Kurbucz, Péter Pósfay, Antal Jakovác
|
LLT: An R package for Linear Law-based Feature Space Transformation
|
15 pages, 5 figures, 1 table
| null | null | null |
cs.LG cs.AI cs.CV cs.MS stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
The goal of the linear law-based feature space transformation (LLT) algorithm
is to assist with the classification of univariate and multivariate time
series. The presented R package, called LLT, implements this algorithm in a
flexible yet user-friendly way. This package first splits the instances into
training and test sets. It then utilizes time-delay embedding and spectral
decomposition techniques to identify the governing patterns (called linear
laws) of each input sequence (initial feature) within the training set.
Finally, it applies the linear laws of the training set to transform the
initial features of the test set. These steps are performed by three separate
functions called trainTest, trainLaw, and testTrans. Their application requires
a predefined data structure; however, for fast calculation, they use only
built-in functions. The LLT R package and a sample dataset with the appropriate
data structure are publicly available on GitHub.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 14:18:29 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 19:26:13 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Kurbucz",
"Marcell T.",
""
],
[
"Pósfay",
"Péter",
""
],
[
"Jakovác",
"Antal",
""
]
] |
new_dataset
| 0.972514 |
2305.08872
|
Junyoung Kim
|
Junyoung Kim, Kenneth Ross, Eric Sedlar, Lukas Stadler
|
AMULET: Adaptive Matrix-Multiplication-Like Tasks
|
15 pages, 19 figures
| null | null | null |
cs.PL cs.DB cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Many useful tasks in data science and machine learning applications can be
written as simple variations of matrix multiplication. However, users have
difficulty performing such tasks because existing matrix/vector libraries support
only a limited class of computations hand-tuned for each unique hardware
platform. Users can alternatively write the task as a simple nested loop but
current compilers are not sophisticated enough to generate fast code for the
task written in this way. To address these issues, we extend an open-source
compiler to recognize and optimize these matrix multiplication-like tasks. Our
framework, called Amulet, uses both database-style and compiler optimization
techniques to generate fast code tailored to its execution environment. We show
through experiments that Amulet achieves speedups on a variety of matrix
multiplication-like tasks compared to existing compilers. For large matrices
Amulet typically performs within 15% of hand-tuned matrix multiplication
libraries, while handling a much broader class of computations.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 17:04:24 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Kim",
"Junyoung",
""
],
[
"Ross",
"Kenneth",
""
],
[
"Sedlar",
"Eric",
""
],
[
"Stadler",
"Lukas",
""
]
] |
new_dataset
| 0.988619 |
2305.09009
|
Junwoo Jang
|
Junwoo Jang, Sangli Teng, Maani Ghaffari
|
Convex Geometric Trajectory Tracking using Lie Algebraic MPC for
Autonomous Marine Vehicles
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Controlling marine vehicles in challenging environments is a complex task due
to the presence of nonlinear hydrodynamics and uncertain external disturbances.
Despite nonlinear model predictive control (MPC) showing potential in
addressing these issues, its practical implementation is often constrained by
computational limitations. In this paper, we propose an efficient controller
for trajectory tracking of marine vehicles by employing a convex error-state
MPC on the Lie group. By leveraging the inherent geometric properties of the
Lie group, we can construct globally valid error dynamics and formulate a
quadratic programming-based optimization problem. Our proposed MPC demonstrates
effectiveness in trajectory tracking through extensive numerical simulations,
including scenarios involving ocean currents. Notably, our method substantially
reduces computation time compared to nonlinear MPC, making it well-suited for
real-time control applications with long prediction horizons or involving small
marine vehicles.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 20:46:32 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Jang",
"Junwoo",
""
],
[
"Teng",
"Sangli",
""
],
[
"Ghaffari",
"Maani",
""
]
] |
new_dataset
| 0.98591 |
2305.09123
|
Weizhao Tang
|
Weizhao Tang, Peiyao Sheng, Pronoy Roy, Xuechao Wang, Giulia Fanti,
and Pramod Viswanath
|
Raft-Forensics: High Performance CFT Consensus with Accountability for
Byzantine Faults
| null | null | null | null |
cs.DC cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crash fault tolerant (CFT) consensus algorithms are commonly used in
scenarios where system components are trusted, such as enterprise settings. CFT
algorithms offer high throughput and low latency, making them an attractive
option for centralized operations that require fault tolerance. However, CFT
consensus is vulnerable to Byzantine faults, which can be introduced by a
single corrupt component. Such faults can break consensus in the system.
Byzantine fault tolerant (BFT) consensus algorithms withstand Byzantine faults,
but they are not as competitive with CFT algorithms in terms of performance. In
this work, we explore a middle ground between BFT and CFT consensus by
exploring the role of accountability in CFT protocols. That is, if a CFT
protocol node breaks protocol and affects consensus safety, we aim to identify
which node was the culprit. Based on Raft, one of the most popular CFT
algorithms, we present Raft-Forensics, which provides accountability over
Byzantine faults. We theoretically prove that if two honest components fail to
reach consensus, the Raft-Forensics auditing algorithm finds the adversarial
component that caused the inconsistency. In an empirical evaluation, we
demonstrate that Raft-Forensics performs similarly to Raft and significantly
better than state-of-the-art BFT algorithms. With 256-byte messages,
Raft-Forensics achieves 87.8% of vanilla Raft's peak throughput at 46% higher
latency, while the state-of-the-art BFT protocol Dumbo-NG achieves only 18.9%
of that peak throughput at nearly $6\times$ higher latency.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 03:09:26 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Tang",
"Weizhao",
""
],
[
"Sheng",
"Peiyao",
""
],
[
"Roy",
"Pronoy",
""
],
[
"Wang",
"Xuechao",
""
],
[
"Fanti",
"Giulia",
""
],
[
"Viswanath",
"Pramod",
""
]
] |
new_dataset
| 0.95508 |
2305.09167
|
Xintao Zhao
|
Xintao Zhao, Shuai Wang, Yang Chao, Zhiyong Wu, Helen Meng
|
Adversarial Speaker Disentanglement Using Unannotated External Data for
Self-supervised Representation Based Voice Conversion
|
Accepted by ICME 2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognition-synthesis-based methods have become quite popular in voice
conversion (VC). By introducing linguistic features with good disentangling
characteristics extracted from an automatic speech recognition (ASR) model,
VC performance has achieved considerable breakthroughs. Recently,
self-supervised learning (SSL) methods trained with a large-scale unannotated
speech corpus have been applied to downstream tasks focusing on the content
information, which is suitable for VC tasks. However, a huge amount of speaker
information in SSL representations degrades timbre similarity and the quality
of converted speech significantly. To address this problem, we proposed a
high-similarity any-to-one voice conversion method with the input of SSL
representations. We incorporated adversarial training mechanisms in the
synthesis module using external unannotated corpora. Two auxiliary
discriminators were trained to distinguish whether a sequence of
mel-spectrograms has been converted by the acoustic model and whether a
sequence of content embeddings contains speaker information from external
corpora. Experimental results show that our proposed method achieves comparable
similarity and higher naturalness than the supervised method, which needs a
huge amount of annotated corpora for training and is applicable to improve
similarity for VC methods with other SSL representations as input.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 04:52:29 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Zhao",
"Xintao",
""
],
[
"Wang",
"Shuai",
""
],
[
"Chao",
"Yang",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Meng",
"Helen",
""
]
] |
new_dataset
| 0.997875 |
2305.09214
|
Nisar Ahmed
|
Nisar Ahmed, Hafiz Muhammad Shahzad Asif, and Hassan Khalid
|
PIQI: Perceptual Image Quality Index based on Ensemble of Gaussian
Process Regression
| null |
Multimed Tools Appl 80, 15677-15700 (2021)
|
10.1007/s11042-020-10286-w
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Digital images contain many redundancies, so compression techniques are
applied to reduce image size without unreasonable loss of image quality. This
becomes even more prominent for videos, which contain image sequences and for
which higher compression ratios are achieved in low-throughput networks.
Assessing image quality in such scenarios is therefore of particular interest.
Subjective evaluation is infeasible in most scenarios, so objective evaluation
is preferred. Among the three objective
quality measures, full-reference and reduced-reference methods require an
original image in some form to calculate the image quality, which is infeasible
in scenarios such as broadcasting, acquisition or enhancement. Therefore, a
no-reference Perceptual Image Quality Index (PIQI) is proposed in this paper to
assess the quality of digital images which calculates luminance and gradient
statistics along with mean subtracted contrast normalized products in multiple
scales and color spaces. These extracted features are provided to a stacked
ensemble of Gaussian Process Regression (GPR) to perform the perceptual quality
evaluation. The performance of the PIQI is evaluated on six benchmark
databases and compared with twelve state-of-the-art methods, achieving
competitive results. The comparison is based on RMSE and on Pearson and
Spearman correlation coefficients between ground-truth and predicted quality
scores. Scores of 0.0552, 0.9802 and 0.9776, respectively, are achieved for
these metrics on the CSIQ database. Two cross-dataset evaluation experiments
are performed to
check the generalization of PIQI.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 06:44:17 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Ahmed",
"Nisar",
""
],
[
"Asif",
"Hafiz Muhammad Shahzad",
""
],
[
"Khalid",
"Hassan",
""
]
] |
new_dataset
| 0.986781 |
2305.09221
|
Nazatul Haque Sultan
|
Nazatul H. Sultan, Shabnam Kasra-Kermanshahi, Yen Tran, Shangqi Lai,
Vijay Varadharajan, Surya Nepal, and Xun Yi
|
A Multi-Client Searchable Encryption Scheme for IoT Environment
|
22 pages, 5 figures, this version was submitted to ESORICS 2023
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The proliferation of connected devices through Internet connectivity presents
both opportunities for smart applications and risks to security and privacy. It
is vital to proactively address these concerns to fully leverage the potential
of the Internet of Things. IoT services where one data owner serves multiple
clients, like smart city transportation, smart building management and
healthcare, can offer benefits but also bring cybersecurity and data privacy
risks. For example, in healthcare, a hospital may collect data from medical
devices and make it available to multiple clients such as researchers and
pharmaceutical companies. This data can be used to improve medical treatments
and research but if not protected, it can also put patients' personal
information at risk. To ensure the benefits of these services, it is important
to implement proper security and privacy measures. In this paper, we propose a
symmetric searchable encryption scheme with dynamic updates on a database that
has a single owner and multiple clients for IoT environments. Our proposed
scheme supports both forward and backward privacy. Additionally, our scheme
supports a decentralized storage environment in which data owners can outsource
data across multiple servers or even across multiple service providers to
improve security and privacy. Further, revoking a client's access to our
system at any time requires minimal effort and cost. The performance
and formal security analyses of the proposed scheme show that our scheme
provides better functionality and security, and is more efficient in terms of
computation and storage than closely related works.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 06:53:39 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Sultan",
"Nazatul H.",
""
],
[
"Kasra-Kermanshahi",
"Shabnam",
""
],
[
"Tran",
"Yen",
""
],
[
"Lai",
"Shangqi",
""
],
[
"Varadharajan",
"Vijay",
""
],
[
"Nepal",
"Surya",
""
],
[
"Yi",
"Xun",
""
]
] |
new_dataset
| 0.987802 |
2305.09243
|
Sami Barrit
|
Sami Barrit (ULB, UPEC Médecine), Alexandre Niset (UCL)
|
LogDoctor: an open and decentralized worker-centered solution for
occupational management in healthcare
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Occupational stress among health workers is a pervasive issue that affects
individual well-being, patient care quality, and healthcare systems'
sustainability. Current time-tracking solutions are mostly employer-driven,
neglecting the unique requirements of health workers. In turn, we propose an
open and decentralized worker-centered solution that leverages machine
intelligence for occupational health and safety monitoring. Its robust
technological stack, including blockchain technology and machine learning,
ensures compliance with legal frameworks for data protection and working time
regulations, while a decentralized autonomous organization bolsters distributed
governance. To tackle implementation challenges, we employ a scalable,
interoperable, and modular architecture while engaging diverse stakeholders
through open beta testing and pilot programs. By bridging an unaddressed
technological gap in healthcare, this approach offers a unique opportunity to
incentivize user adoption and align stakeholders' interests. We aim to empower
health workers to take control of their time, valorize their work, and
safeguard their health while enhancing the care of their patients.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 07:49:20 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Barrit",
"Sami",
"",
"ULB, UPEC Médecine"
],
[
"Niset",
"Alexandre",
"",
"UCL"
]
] |
new_dataset
| 0.998656 |
2305.09249
|
Xiaoyu Shen
|
Xiaoyu Shen, Akari Asai, Bill Byrne and Adrià de Gispert
|
xPQA: Cross-Lingual Product Question Answering across 12 Languages
|
ACL 2023 industry track. Dataset available in
https://github.com/amazon-science/contextual-product-qa
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Product Question Answering (PQA) systems are key in e-commerce applications
to provide responses to customers' questions as they shop for products. While
existing work on PQA focuses mainly on English, in practice there is a need to
support multiple customer languages while leveraging product information
available in English. To study this practical industrial task, we present xPQA,
a large-scale annotated cross-lingual PQA dataset in 12 languages across 9
branches, and report results in (1) candidate ranking, to select the best
English candidate containing the information to answer a non-English question;
and (2) answer generation, to generate a natural-sounding non-English answer
based on the selected English candidate. We evaluate various approaches
involving machine translation at runtime or offline, leveraging multilingual
pre-trained LMs, and including or excluding xPQA training data. We find that
(1) In-domain data is essential as cross-lingual rankers trained on other
domains perform poorly on the PQA task; (2) Candidate ranking often prefers
runtime-translation approaches while answer generation prefers multilingual
approaches; (3) Translating offline to augment multilingual models helps
candidate ranking mainly on languages with non-Latin scripts; and helps answer
generation mainly on languages with Latin scripts. Still, there remains a
significant performance gap between the English and the cross-lingual test
sets.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 07:56:19 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Shen",
"Xiaoyu",
""
],
[
"Asai",
"Akari",
""
],
[
"Byrne",
"Bill",
""
],
[
"de Gispert",
"Adrià",
""
]
] |
new_dataset
| 0.999784 |
2305.09257
|
Menouar Boulif
|
Menouar Boulif, Aghiles Gharbi
|
A new node-shift encoding representation for the travelling salesman
problem
|
6 pages, 5 figures. Accepted in ICL2022, Jeddah, Saudi Arabia
conference (postponed to 2024)
| null | null | null |
cs.NE cs.AI math.OC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents a new genetic algorithm encoding representation to solve
the travelling salesman problem. To assess the performance of the proposed
chromosome structure, we compare it with state-of-the-art encoding
representations. For that purpose, we use 14 benchmarks of different sizes
taken from TSPLIB. Finally, after conducting the experimental study, we report
the obtained results and draw our conclusion.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 08:06:02 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Boulif",
"Menouar",
""
],
[
"Gharbi",
"Aghiles",
""
]
] |
new_dataset
| 0.998895 |
2305.09258
|
Simra Shahid
|
Simra Shahid, Tanay Anand, Nikitha Srikanth, Sumit Bhatia, Balaji
Krishnamurthy, Nikaash Puri
|
HyHTM: Hyperbolic Geometry based Hierarchical Topic Models
|
This paper is accepted in Findings of the Association for
Computational Linguistics (2023)
| null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Hierarchical Topic Models (HTMs) are useful for discovering topic hierarchies
in a collection of documents. However, traditional HTMs often produce
hierarchies where lower-level topics are unrelated and not specific enough to
their higher-level topics. Additionally, these methods can be computationally
expensive. We present HyHTM - a Hyperbolic geometry based Hierarchical Topic
Models - that addresses these limitations by incorporating hierarchical
information from hyperbolic geometry to explicitly model hierarchies in topic
models. Experimental results with four baselines show that HyHTM can better
attend to parent-child relationships among topics. HyHTM produces coherent
topic hierarchies that specialise in granularity from generic higher-level
topics to specific lower-level topics. Further, our model is significantly
faster and leaves a much smaller memory footprint than our best-performing
baseline. We have made the source code for our algorithm publicly accessible.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 08:06:11 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Shahid",
"Simra",
""
],
[
"Anand",
"Tanay",
""
],
[
"Srikanth",
"Nikitha",
""
],
[
"Bhatia",
"Sumit",
""
],
[
"Krishnamurthy",
"Balaji",
""
],
[
"Puri",
"Nikaash",
""
]
] |
new_dataset
| 0.99811 |
2305.09302
|
Di Xu
|
Di Xu, Yang Zhao, Xiang Hao, Xin Meng
|
Pink-Eggs Dataset V1: A Step Toward Invasive Species Management Using
Deep Learning Embedded Solutions
| null | null | null |
02
|
cs.CV cs.AI eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce a novel dataset consisting of images depicting pink eggs that
have been identified as Pomacea canaliculata eggs, accompanied by corresponding
bounding box annotations. The purpose of this dataset is to aid researchers in
the analysis of the spread of Pomacea canaliculata species by utilizing deep
learning techniques, as well as supporting other investigative pursuits that
require visual data pertaining to the eggs of Pomacea canaliculata. It is worth
noting, however, that the identity of the eggs in question is not definitively
established, as other species within the same taxonomic family have been
observed to lay similar-looking eggs in regions of the Americas. Therefore, a
crucial prerequisite to any decision regarding the elimination of these eggs
would be to establish with certainty whether they are exclusively attributable
to invasive Pomacea canaliculata or if other species are also involved. The
dataset is available at https://www.kaggle.com/datasets/deeshenzhen/pinkeggs
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 09:21:56 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Xu",
"Di",
""
],
[
"Zhao",
"Yang",
""
],
[
"Hao",
"Xiang",
""
],
[
"Meng",
"Xin",
""
]
] |
new_dataset
| 0.999374 |
2305.09425
|
Sarah Bee
|
Sarah Bee, Lawrence Bull, Nikolas Dervilis, Keith Worden
|
When is an SHM problem a Multi-Task-Learning problem?
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Multi-task neural networks learn tasks simultaneously to improve individual
task performance. There are three mechanisms of multi-task learning (MTL) which
are explored here for the context of structural health monitoring (SHM): (i)
the natural occurrence of multiple tasks; (ii) using outputs as inputs (both
linked to the recent research in population-based SHM (PBSHM)); and, (iii)
additional loss functions to provide different insights. Each of these problem
settings for MTL is detailed and an example is given.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 13:31:11 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Bee",
"Sarah",
""
],
[
"Bull",
"Lawrence",
""
],
[
"Dervilis",
"Nikolas",
""
],
[
"Worden",
"Keith",
""
]
] |
new_dataset
| 0.998651 |
2305.09433
|
Claudio Anliker
|
Claudio Anliker, Giovanni Camurati, Srdjan Capkun
|
Time for Change: How Clocks Break UWB Secure Ranging
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to its suitability for wireless ranging, Ultra-Wide Band (UWB) has gained
traction over the past years. UWB chips have been integrated into consumer
electronics and considered for security-relevant use cases, such as access
control or contactless payments. However, several publications in the recent
past have shown that it is difficult to protect the integrity of distance
measurements on the physical layer. In this paper, we identify transceiver
clock imperfections as a new, important parameter that has been widely ignored
so far. We present Mix-Down and Stretch-and-Advance, two novel attacks against
the current (IEEE 802.15.4z) and the upcoming (IEEE 802.15.4ab) UWB standard,
respectively. We demonstrate Mix-Down on commercial chips and achieve distance
reduction from 10 m to 0 m. For the Stretch-and-Advance attack, we show
analytically that the current proposal of IEEE 802.15.4ab allows reductions of
over 90 m. In order to prevent the attack, we propose and analyze an effective
countermeasure.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 13:44:09 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Anliker",
"Claudio",
""
],
[
"Camurati",
"Giovanni",
""
],
[
"Capkun",
"Srdjan",
""
]
] |
new_dataset
| 0.995929 |
2305.09452
|
Joseph Chow
|
Gyugeun Yoon, Joseph Y. J. Chow
|
A sequential transit network design algorithm with optimal learning
under correlated beliefs
| null | null | null | null |
cs.AI cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Mobility service route design requires potential demand information to well
accommodate travel demand within the service region. Transit planners and
operators can access various data sources including household travel survey
data and mobile device location logs. However, when implementing a mobility
system with emerging technologies, estimating demand level becomes harder
because of more uncertainties with user behaviors. Therefore, this study
proposes an artificial intelligence-driven algorithm that combines sequential
transit network design with optimal learning. An operator gradually expands its
route system to avoid risks from inconsistency between designed routes and
actual travel demand. At the same time, observed information is archived to
update the knowledge that the operator currently uses. Three learning policies
are compared within the algorithm: multi-armed bandit, knowledge gradient, and
knowledge gradient with correlated beliefs. For validation, a new route system
is designed on an artificial network based on public use microdata areas in New
York City. Prior knowledge is reproduced from the regional household travel
survey data. The results suggest that exploration considering correlations can
achieve better performance compared to greedy choices in general. In future
work, the problem may incorporate more complexities such as demand elasticity
to travel time, no limitations to the number of transfers, and costs for
expansion.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 14:14:51 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Yoon",
"Gyugeun",
""
],
[
"Chow",
"Joseph Y. J.",
""
]
] |
new_dataset
| 0.96446 |
2305.09475
|
Yuanyuan Wei
|
Yuanyuan Wei, Julian Jang-Jaccard, Fariza Sabrina, Wen Xu, Seyit
Camtepe, Aeryn Dunmore
|
Reconstruction-based LSTM-Autoencoder for Anomaly-based DDoS Attack
Detection over Multivariate Time-Series Data
|
13 pages
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Distributed Denial-of-service (DDoS) attack is a malicious attempt to
disrupt the regular traffic of a targeted server, service, or network by
sending a flood of traffic to overwhelm the target or its surrounding
infrastructure. As technology improves, new attacks have been developed by
hackers. Traditional statistical and shallow machine learning techniques can
detect superficial anomalies based on shallow data and feature selection,
however, these approaches cannot detect unseen DDoS attacks. In this context,
we propose a reconstruction-based anomaly detection model named
LSTM-Autoencoder (LSTM-AE) which combines two deep learning-based models for
detecting DDoS attack anomalies. The proposed structure of long short-term
memory (LSTM) networks provides units that work with each other to learn the
long short-term correlation of data within a time series sequence. Autoencoders
are used to identify the optimal threshold based on the reconstruction error
rates evaluated on each sample across all time-series sequences. As such, a
combination model LSTM-AE can not only learn delicate sub-pattern differences
in attacks and benign traffic flows, but also minimize reconstructed benign
traffic to obtain a lower range reconstruction error, with attacks presenting a
larger reconstruction error. In this research, we trained and evaluated our
proposed LSTM-AE model on reflection-based DDoS attacks (DNS, LDAP, and SNMP).
The results of our experiments demonstrate that our method performs better than
other state-of-the-art methods, especially for LDAP attacks, with an accuracy
of over 99%.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 03:56:03 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Wei",
"Yuanyuan",
""
],
[
"Jang-Jaccard",
"Julian",
""
],
[
"Sabrina",
"Fariza",
""
],
[
"Xu",
"Wen",
""
],
[
"Camtepe",
"Seyit",
""
],
[
"Dunmore",
"Aeryn",
""
]
] |
new_dataset
| 0.997414 |
2305.09482
|
Mounika Vanamala
|
Brendan Pelto, Mounika Vanamala, Rushit Dave
|
Your Identity is Your Behavior -- Continuous User Authentication based
on Machine Learning and Touch Dynamics
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The aim of this research paper is to look into the use of continuous
authentication with mobile touch dynamics, using three different algorithms:
Neural Network, Extreme Gradient Boosting, and Support Vector Machine. Mobile
devices are constantly increasing in popularity in the world, today smartphone
subscriptions have surpassed 6 billion. Mobile touch dynamics refer to the
distinct patterns of how a user interacts with their mobile device, this
includes factors such as touch pressure, swipe speed, and touch duration.
Continuous authentication refers to the process of continuously verifying a
user's identity while they are using a device, rather than just at the initial
login. This research used a dataset of touch dynamics collected from 40
subjects using the LG V30+. The participants played four mobile games, PUBG,
Diep.io, Slither, and Minecraft, for 10 minutes each game. The three algorithms
were trained and tested on the extracted dataset, and their performance was
evaluated based on metrics such as accuracy, precision, false negative rate,
and false positive rate. The results of the research showed that all three
algorithms were able to effectively classify users based on their individual
touch dynamics, with accuracy ranging from 80% to 95%. The Neural Network
algorithm performed the best, achieving the highest accuracy and precision
scores, followed closely by XGBoost and SVC. The data shows that continuous
authentication using mobile touch dynamics has the potential to be a useful
method for enhancing security and reducing the risk of unauthorized access to
personal devices. This research also notes the importance of choosing the
correct algorithm for a given dataset and use case, as different algorithms may
have varying levels of performance depending on the specific task.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 13:45:25 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Pelto",
"Brendan",
""
],
[
"Vanamala",
"Mounika",
""
],
[
"Dave",
"Rushit",
""
]
] |
new_dataset
| 0.995861 |
2305.09497
|
Tiziano Piccardi
|
Tiziano Piccardi, Martin Gerlach, Robert West
|
Curious Rhythms: Temporal Regularities of Wikipedia Consumption
| null | null | null | null |
cs.CY cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wikipedia, in its role as the world's largest encyclopedia, serves a broad
range of information needs. Although previous studies have noted that Wikipedia
users' information needs vary throughout the day, there is to date no
large-scale, quantitative study of the underlying dynamics. The present paper
fills this gap by investigating temporal regularities in daily consumption
patterns in a large-scale analysis of billions of timezone-corrected page
requests mined from English Wikipedia's server logs, with the goal of
investigating how context and time relate to the kind of information consumed.
First, we show that even after removing the global pattern of day-night
alternation, the consumption habits of individual articles maintain strong
diurnal regularities. Then, we characterize the prototypical shapes of
consumption patterns, finding a particularly strong distinction between
articles preferred during the evening/night and articles preferred during
working hours. Finally, we investigate topical and contextual correlates of
Wikipedia articles' access rhythms, finding that article topic, reader country,
and access device (mobile vs. desktop) are all important predictors of daily
attention patterns. These findings shed new light on how humans seek
information on the Web by focusing on Wikipedia as one of the largest open
platforms for knowledge and learning, emphasizing Wikipedia's role as a rich
knowledge base that fulfills information needs spread throughout the day, with
implications for understanding information seeking across the globe and for
designing appropriate information systems.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 14:48:08 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Piccardi",
"Tiziano",
""
],
[
"Gerlach",
"Martin",
""
],
[
"West",
"Robert",
""
]
] |
new_dataset
| 0.997891 |
2305.09520
|
Ruoxi Xu
|
Ruoxi Xu, Hongyu Lin, Xinyan Guan, Xianpei Han, Yingfei Sun, Le Sun
|
DLUE: Benchmarking Document Language Understanding
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding documents is central to many real-world tasks but remains a
challenging topic. Unfortunately, there is no well-established consensus on how
to comprehensively evaluate document understanding abilities, which
significantly hinders the fair comparison and measuring the progress of the
field. To benchmark document understanding researches, this paper summarizes
four representative abilities, i.e., document classification, document
structural analysis, document information extraction, and document
transcription. Under the new evaluation framework, we propose \textbf{Document
Language Understanding Evaluation} -- \textbf{DLUE}, a new task suite which
covers a wide-range of tasks in various forms, domains and document genres. We
also systematically evaluate six well-established transformer models on DLUE,
and find that due to the lengthy content, complicated underlying structure and
dispersed knowledge, document understanding is still far from being solved, and
currently there is no neural architecture that dominates all tasks, raising
requirements for a universal document understanding architecture.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 15:16:24 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Xu",
"Ruoxi",
""
],
[
"Lin",
"Hongyu",
""
],
[
"Guan",
"Xinyan",
""
],
[
"Han",
"Xianpei",
""
],
[
"Sun",
"Yingfei",
""
],
[
"Sun",
"Le",
""
]
] |
new_dataset
| 0.998977 |
2305.09523
|
Huan Mao
|
Huan Mao, Yulin Chen, Zongtan Li, Feng Chen, Pingping Chen
|
SCTracker: Multi-object tracking with shape and confidence constraints
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detection-based tracking is one of the main methods of multi-object tracking.
It can obtain good tracking results when using excellent detectors but it may
associate wrong targets when facing overlapping and low-confidence detections.
To address this issue, this paper proposes a multi-object tracker based on
shape constraint and confidence named SCTracker. In the data association stage,
an Intersection of Union distance with shape constraints is applied to
calculate the cost matrix between tracks and detections, which can effectively
avoid the track being matched to the wrong target with a similar position but
inconsistent shape, so as to improve the accuracy of data association.
Additionally, the Kalman Filter based on the detection confidence is used to
update the motion state to improve the tracking performance when the detection
has low confidence. Experimental results on MOT 17 dataset show that the
proposed method can effectively improve the tracking performance of
multi-object tracking.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 15:18:42 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Mao",
"Huan",
""
],
[
"Chen",
"Yulin",
""
],
[
"Li",
"Zongtan",
""
],
[
"Chen",
"Feng",
""
],
[
"Chen",
"Pingping",
""
]
] |
new_dataset
| 0.985813 |
2305.09534
|
Fritz Hohl
|
Fritz Hohl, Nianheng Wu, Martina Galetti, Remi van Trijp
|
MetaSRL++: A Uniform Scheme for Modelling Deeper Semantics
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Despite enormous progress in Natural Language Processing (NLP), our field is
still lacking a common deep semantic representation scheme. As a result, the
problem of meaning and understanding is typically sidestepped through more
simple, approximative methods. This paper argues that in order to arrive at
such a scheme, we also need a common modelling scheme. It therefore introduces
MetaSRL++, a uniform, language- and modality-independent modelling scheme based
on Semantic Graphs, as a step towards a common representation scheme; as well
as a method for defining the concepts and entities that are used in these
graphs. Our output is twofold. First, we illustrate MetaSRL++ through concrete
examples. Secondly, we discuss how it relates to existing work in the field.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 15:26:52 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Hohl",
"Fritz",
""
],
[
"Wu",
"Nianheng",
""
],
[
"Galetti",
"Martina",
""
],
[
"van Trijp",
"Remi",
""
]
] |
new_dataset
| 0.999025 |
2305.09556
|
Liya Wang
|
Liya Wang, Jason Chou, Dave Rouck, Alex Tien, Diane M Baumgartner
|
Adapting Sentence Transformers for the Aviation Domain
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Learning effective sentence representations is crucial for many Natural
Language Processing (NLP) tasks, including semantic search, semantic textual
similarity (STS), and clustering. While multiple transformer models have been
developed for sentence embedding learning, these models may not perform
optimally when dealing with specialized domains like aviation, which has unique
characteristics such as technical jargon, abbreviations, and unconventional
grammar. Furthermore, the absence of labeled datasets makes it difficult to
train models specifically for the aviation domain. To address these challenges,
we propose a novel approach for adapting sentence transformers for the aviation
domain. Our method is a two-stage process consisting of pre-training followed
by fine-tuning. During pre-training, we use Transformers and Sequential
Denoising AutoEncoder (TSDAE) with aviation text data as input to improve the
initial model performance. Subsequently, we fine-tune our models using a
Natural Language Inference (NLI) dataset in the Sentence Bidirectional Encoder
Representations from Transformers (SBERT) architecture to mitigate overfitting
issues. Experimental results on several downstream tasks show that our adapted
sentence transformers significantly outperform general-purpose transformers,
demonstrating the effectiveness of our approach in capturing the nuances of the
aviation domain. Overall, our work highlights the importance of domain-specific
adaptation in developing high-quality NLP solutions for specialized industries
like aviation.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 15:53:24 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Wang",
"Liya",
""
],
[
"Chou",
"Jason",
""
],
[
"Rouck",
"Dave",
""
],
[
"Tien",
"Alex",
""
],
[
"Baumgartner",
"Diane M",
""
]
] |
new_dataset
| 0.973614 |
2305.09592
|
Amin Sarihi
|
Amin Sarihi, Ahmad Patooghy, Peter Jamieson, Abdel-Hameed A. Badawy
|
Trojan Playground: A Reinforcement Learning Framework for Hardware
Trojan Insertion and Detection
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Current Hardware Trojan (HT) detection techniques are mostly developed based
on a limited set of HT benchmarks. Existing HT benchmarks circuits are
generated with multiple shortcomings, i.e., i) they are heavily biased by the
designers' mindset when they are created, and ii) they are created through a
one-dimensional lens, mainly the signal activity of nets. To address these
shortcomings, we introduce the first automated reinforcement learning (RL) HT
insertion and detection framework. In the insertion phase, an RL agent explores
the circuits and finds different locations that are best for keeping inserted
HTs hidden. On the defense side, we introduce a multi-criteria RL-based
detector that generates test vectors to discover the existence of HTs. Using
the proposed framework, one can explore the HT insertion and detection design
spaces to break the human mindset limitations as well as the benchmark issues,
ultimately leading toward the next-generation of innovative detectors. Our HT
toolset is open-source to accelerate research in this field and reduce the
initial setup time for newcomers. We demonstrate the efficacy of our framework
on ISCAS-85 benchmarks and provide the attack and detection success rates and
define a methodology for comparing our techniques.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 16:42:07 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Sarihi",
"Amin",
""
],
[
"Patooghy",
"Ahmad",
""
],
[
"Jamieson",
"Peter",
""
],
[
"Badawy",
"Abdel-Hameed A.",
""
]
] |
new_dataset
| 0.995523 |
2305.09594
|
Bechir Hamdaoui
|
Luke Puppo, Weng-Keen Wong, Bechir Hamdaoui, Abdurrahman Elmaghbub
|
HiNoVa: A Novel Open-Set Detection Method for Automating RF Device
Authentication
| null | null | null | null |
cs.CR cs.LG eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
New capabilities in wireless network security have been enabled by deep
learning, which leverages patterns in radio frequency (RF) data to identify and
authenticate devices. Open-set detection is an area of deep learning that
identifies samples captured from new devices during deployment that were not
part of the training set. Past work in open-set detection has mostly been
applied to independent and identically distributed data such as images. In
contrast, RF signal data present a unique set of challenges as the data forms a
time series with non-linear time dependencies among the samples. We introduce a
novel open-set detection approach based on the patterns of the hidden state
values within a Convolutional Neural Network (CNN) Long Short-Term Memory
(LSTM) model. Our approach greatly improves the Area Under the Precision-Recall
Curve on LoRa, Wireless-WiFi, and Wired-WiFi datasets, and hence, can be used
successfully to monitor and control unauthorized network access of wireless
devices.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 16:47:02 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Puppo",
"Luke",
""
],
[
"Wong",
"Weng-Keen",
""
],
[
"Hamdaoui",
"Bechir",
""
],
[
"Elmaghbub",
"Abdurrahman",
""
]
] |
new_dataset
| 0.977354 |
2305.09615
|
Tarik A. Rashid
|
Azad A. Ameen, Tarik A. Rashid and Shavan Askar
|
CDDO-HS:Child Drawing Development Optimization Harmony Search Algorithm
|
21 pages
|
Applied Sciences, 2023
|
10.3390/app13095795
| null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Child drawing development optimization (CDDO) is a recent example of a
metaheuristic algorithm. The motive for inventing this method is children's
learning behavior and cognitive development, with the golden ratio employed to
optimize their artwork's aesthetic value. Unfortunately, CDDO suffers from low
performance in the exploration phase, and the local best solution stagnates.
Harmony search (HS) is a highly competitive algorithm relative to other
prevalent metaheuristic algorithms, as its exploration phase performance on
unimodal benchmark functions is outstanding. Thus, to avoid these issues, we
present CDDOHS, a hybridization of both standards of CDDO and HS. The
hybridized model proposed consists of two phases. Initially, the pattern size
(PS) is relocated to the algorithm's core and the initial pattern size is set
to 80% of the total population size. Second, the standard harmony search (HS)
is added to the pattern size (PS) for the exploration phase to enhance and
update the solution after each iteration. Experiments are evaluated using two
distinct standard benchmark functions, known as classical test functions,
including 23 common functions and 10 CEC-C06 2019 functions. Additionally, the
suggested CDDOHS is compared to CDDO, HS, and six other widely used algorithms.
Using the Wilcoxon rank-sum test, the results indicate that CDDOHS beats
alternative algorithms.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 06:29:30 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Ameen",
"Azad A.",
""
],
[
"Rashid",
"Tarik A.",
""
],
[
"Askar",
"Shavan",
""
]
] |
new_dataset
| 0.97006 |
2305.09644
|
Jack Collins
|
Jack Collins, Mark Robson, Jun Yamada, Mohan Sridharan, Karol Janik
and Ingmar Posner
|
RAMP: A Benchmark for Evaluating Robotic Assembly Manipulation and
Planning
|
Project website:
https://sites.google.com/oxfordrobotics.institute/ramp
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce RAMP, an open-source robotics benchmark inspired by real-world
industrial assembly tasks. RAMP consists of beams that a robot must assemble
into specified goal configurations using pegs as fasteners. As such it assesses
planning and execution capabilities, and poses challenges in perception,
reasoning, manipulation, diagnostics, fault recovery and goal parsing. RAMP has
been designed to be accessible and extensible. Parts are either 3D printed or
otherwise constructed from materials that are readily obtainable. The part
design and detailed instructions are publicly available. In order to broaden
community engagement, RAMP incorporates fixtures such as April Tags which
enable researchers to focus on individual sub-tasks of the assembly challenge
if desired. We provide a full digital twin as well as rudimentary baselines to
enable rapid progress. Our vision is for RAMP to form the substrate for a
community-driven endeavour that evolves as capability matures.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 17:44:45 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Collins",
"Jack",
""
],
[
"Robson",
"Mark",
""
],
[
"Yamada",
"Jun",
""
],
[
"Sridharan",
"Mohan",
""
],
[
"Janik",
"Karol",
""
],
[
"Posner",
"Ingmar",
""
]
] |
new_dataset
| 0.99988 |
2305.09646
|
Joanna Komorniczak
|
Joanna Komorniczak and Pawel Ksieniewicz
|
torchosr -- a PyTorch extension package for Open Set Recognition models
evaluation in Python
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The article presents the torchosr package - a Python package compatible with
PyTorch library - offering tools and methods dedicated to Open Set Recognition
in Deep Neural Networks. The package offers two state-of-the-art methods in the
field, a set of functions for handling base sets and generation of derived sets
for the Open Set Recognition task (where some classes are considered unknown
and used only in the testing process) and additional tools to handle datasets
and methods. The main goal of the package proposal is to simplify and promote
the correct experimental evaluation, where experiments are carried out on a
large number of derivative sets with various Openness and class-to-category
assignments. The authors hope that state-of-the-art methods available in the
package will become a source of a correct and open-source implementation of the
relevant solutions in the domain.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 17:45:32 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Komorniczak",
"Joanna",
""
],
[
"Ksieniewicz",
"Pawel",
""
]
] |
new_dataset
| 0.999214 |
2305.09647
|
George Eskandar
|
George Eskandar, Mohamed Abdelsamad, Karim Armanious, Shuai Zhang, Bin
Yang
|
Wavelet-based Unsupervised Label-to-Image Translation
|
arXiv admin note: substantial text overlap with arXiv:2109.14715
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic Image Synthesis (SIS) is a subclass of image-to-image translation
where a semantic layout is used to generate a photorealistic image.
State-of-the-art conditional Generative Adversarial Networks (GANs) need a huge
amount of paired data to accomplish this task while generic unpaired
image-to-image translation frameworks underperform in comparison, because they
color-code semantic layouts and learn correspondences in appearance instead of
semantic content. Starting from the assumption that a high quality generated
image should be segmented back to its semantic layout, we propose a new
Unsupervised paradigm for SIS (USIS) that makes use of a self-supervised
segmentation loss and whole image wavelet based discrimination. Furthermore, in
order to match the high-frequency distribution of real images, a novel
generator architecture in the wavelet domain is proposed. We test our
methodology on 3 challenging datasets and demonstrate its ability to bridge the
performance gap between paired and unpaired models.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 17:48:44 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Eskandar",
"George",
""
],
[
"Abdelsamad",
"Mohamed",
""
],
[
"Armanious",
"Karim",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Yang",
"Bin",
""
]
] |
new_dataset
| 0.971429 |
2305.09652
|
Mutian He
|
Mutian He, Philip N. Garner
|
The Interpreter Understands Your Meaning: End-to-end Spoken Language
Understanding Aided by Speech Translation
|
13 pages, 3 figures
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
End-to-end spoken language understanding (SLU) remains elusive even with
current large pretrained language models on text and speech, especially in
multilingual cases. Machine translation has been established as a powerful
pretraining objective on text as it enables the model to capture high-level
semantics of the input utterance and associations between different languages,
which is desired for speech models that work on lower-level acoustic frames.
Motivated particularly by the task of cross-lingual SLU, we demonstrate that
the task of speech translation (ST) is a good means of pretraining speech
models for end-to-end SLU on both monolingual and cross-lingual scenarios.
By introducing ST, our models give higher performance over current baselines
on monolingual and multilingual intent classification as well as spoken
question answering using SLURP, MINDS-14, and NMSQA benchmarks. To verify the
effectiveness of our methods, we also release two new benchmark datasets from
both synthetic and real sources, for the tasks of abstractive summarization
from speech and low-resource or zero-shot transfer from English to French. We
further show the value of preserving knowledge from the pretraining task, and
explore Bayesian transfer learning on pretrained speech models based on
continual learning regularizers for that.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 17:53:03 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"He",
"Mutian",
""
],
[
"Garner",
"Philip N.",
""
]
] |
new_dataset
| 0.955856 |
2305.09657
|
Vamsi Vytla
|
Vamsi K Vytla, Larry Doolittle
|
Newad: A register map automation tool for Verilog
|
Presented at the 3rd Workshop on Open-Source Design Automation
(OSDA), 2023 (arXiv:2303.18024)
| null | null |
OSDA/2023/03
|
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Large scale scientific instrumentation-and-control FPGA gateware designs have
numerous run-time settable parameters. These can be used either for user-level
control or by automated processes (e.g., calibration). The number of such
parameters in a single design can reach on the order of 1000, and keeps
evolving as the gateware and its functionality evolves. One must keep track of
which module the registers belong to, where the registers need to be decoded,
and how to express the properties (or even semantics) of the register to the
next level of user or software. Note that the registers may be embedded anywhere
throughout the module hierarchy. Purely manual handling of these tasks by HDL
developers is considered burdensome and error-prone at this scale. Typically
these registers are writable via an on-chip bus, vaguely VME-like, that is
controlled by an on-chip or off-chip CPU. There have been several attempts in
the community to address this task at different levels. However, we have found
no tool that is able to generate a register map, generate decoders and encoders
with minimal overhead to the developer. So, here we present a tool that scours
native HDL source files and looks for specific language-supported attributes
and automatically generates a register map and bus decoders, respecting
multiple clock domains, and presents a JSON file to the network that maps
register names to addresses.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 17:56:51 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Vytla",
"Vamsi K",
""
],
[
"Doolittle",
"Larry",
""
]
] |
new_dataset
| 0.999012 |
2305.09662
|
Samaneh Azadi
|
Samaneh Azadi, Akbar Shah, Thomas Hayes, Devi Parikh, Sonal Gupta
|
Make-An-Animation: Large-Scale Text-conditional 3D Human Motion
Generation
|
arXiv admin note: text overlap with arXiv:2304.07410
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-guided human motion generation has drawn significant interest because of
its impactful applications spanning animation and robotics. Recently,
application of diffusion models for motion generation has enabled improvements
in the quality of generated motions. However, existing approaches are limited
by their reliance on relatively small-scale motion capture data, leading to
poor performance on more diverse, in-the-wild prompts. In this paper, we
introduce Make-An-Animation, a text-conditioned human motion generation model
which learns more diverse poses and prompts from large-scale image-text
datasets, enabling significant improvement in performance over prior works.
Make-An-Animation is trained in two stages. First, we train on a curated
large-scale dataset of (text, static pseudo-pose) pairs extracted from
image-text datasets. Second, we fine-tune on motion capture data, adding
additional layers to model the temporal dimension. Unlike prior diffusion
models for motion generation, Make-An-Animation uses a U-Net architecture
similar to recent text-to-video generation models. Human evaluation of motion
realism and alignment with input text shows that our model reaches
state-of-the-art performance on text-to-motion generation.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 17:58:43 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Azadi",
"Samaneh",
""
],
[
"Shah",
"Akbar",
""
],
[
"Hayes",
"Thomas",
""
],
[
"Parikh",
"Devi",
""
],
[
"Gupta",
"Sonal",
""
]
] |
new_dataset
| 0.995791 |
2103.09803
|
Alexander Wolff
|
Elena Arseneva, Linda Kleist, Boris Klemz, Maarten L\"offler, Andr\'e
Schulz, Birgit Vogtenhuber, Alexander Wolff
|
Adjacency Graphs of Polyhedral Surfaces
|
The conference version of this paper appeared in Proc. SoCG 2021
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
We study whether a given graph can be realized as an adjacency graph of the
polygonal cells of a polyhedral surface in $\mathbb{R}^3$. We show that every
graph is realizable as a polyhedral surface with arbitrary polygonal cells, and
that this is not true if we require the cells to be convex. In particular, if
the given graph contains $K_5$, $K_{5,81}$, or any nonplanar $3$-tree as a
subgraph, no such realization exists. On the other hand, all planar graphs,
$K_{4,4}$, and $K_{3,5}$ can be realized with convex cells. The same holds for
any subdivision of any graph where each edge is subdivided at least once, and,
by a result from McMullen et al. (1983), for any hypercube.
  Our results have implications for the maximum density of graphs describing
polyhedral surfaces with convex cells: The realizability of hypercubes shows
that the maximum number of edges over all realizable $n$-vertex graphs is in
$\Omega(n \log n)$. From the non-realizability of $K_{5,81}$, we obtain that
any realizable $n$-vertex graph has $O(n^{9/5})$ edges. As such, these graphs
can be considerably denser than planar graphs, but not arbitrarily dense.
|
[
{
"version": "v1",
"created": "Wed, 17 Mar 2021 17:41:13 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 16:05:47 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Arseneva",
"Elena",
""
],
[
"Kleist",
"Linda",
""
],
[
"Klemz",
"Boris",
""
],
[
"Löffler",
"Maarten",
""
],
[
"Schulz",
"André",
""
],
[
"Vogtenhuber",
"Birgit",
""
],
[
"Wolff",
"Alexander",
""
]
] |
new_dataset
| 0.975681 |
2103.10030
|
Tanmay Samak
|
Tanmay Vilas Samak, Chinmay Vilas Samak and Ming Xie
|
AutoDRIVE Simulator: A Simulator for Scaled Autonomous Vehicle Research
and Education
|
Accepted at International Conference on Control, Robotics and
Intelligent System (CCRIS) 2021
|
Proceedings of the 2021 2nd International Conference on Control,
Robotics and Intelligent System (CCRIS '21). Association for Computing
Machinery, New York, NY, USA, 1-5
|
10.1145/3483845.3483846
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
AutoDRIVE is envisioned to be an integrated research and education platform
for scaled autonomous vehicles and related applications. This work is a
stepping-stone towards achieving the greater goal of realizing such a platform.
Particularly, this work introduces the AutoDRIVE Simulator, a high-fidelity
simulator for scaled autonomous vehicles. The proposed simulation ecosystem is
developed atop the Unity game engine, and exploits its features in order to
simulate realistic system dynamics and render photorealistic graphics. It
comprises a scaled vehicle model equipped with a comprehensive sensor suite
for redundant perception, a set of actuators for constrained motion control and
a fully functional lighting system for illumination and signaling. It also
provides a modular environment development kit, which comprises various
environment modules that aid in reconfigurable construction of the scene.
Additionally, the simulator features a communication bridge in order to extend
an interface to the autonomous driving software stack developed independently
by the users. This work describes some of the prominent components of this
simulation system along with some key features that it has to offer in order to
accelerate education and research aimed at autonomous driving.
|
[
{
"version": "v1",
"created": "Thu, 18 Mar 2021 06:04:42 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Aug 2021 10:38:45 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Samak",
"Tanmay Vilas",
""
],
[
"Samak",
"Chinmay Vilas",
""
],
[
"Xie",
"Ming",
""
]
] |
new_dataset
| 0.973243 |
2202.00199
|
Wenqiang Xu
|
Haoyuan Fu, Wenqiang Xu, Ruolin Ye, Han Xue, Zhenjun Yu, Tutian Tang,
Yutong Li, Wenxin Du, Jieyi Zhang and Cewu Lu
|
RFUniverse: A Multiphysics Simulation Platform for Embodied AI
|
Project page: https://sites.google.com/view/rfuniverse
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Multiphysics phenomena, the coupling effects involving different aspects of
physics laws, are pervasive in the real world and can often be encountered when
performing everyday household tasks. Intelligent agents which seek to assist or
replace human laborers will need to learn to cope with such phenomena in
household task settings. To equip agents with such abilities, the research
community needs a simulation environment that can serve as a testbed for
training these intelligent agents and that supports multiphysics coupling
effects. Though mature multiphysics simulation software packages
have been adopted in industrial production, such techniques have not been
applied to robot learning or embodied AI research. To bridge the gap, we
propose a novel simulation environment named RFUniverse. This simulator computes
not only rigid- and multi-body dynamics but also multiphysics coupling
effects commonly observed in daily life, such as air-solid interaction,
fluid-solid interaction, and heat transfer. Because of the unique multiphysics
capacities of this simulator, we can benchmark tasks that involve complex
dynamics due to multiphysics coupling effects in a simulation environment
before deploying to the real world. RFUniverse provides multiple interfaces to
let the users interact with the virtual world in various ways, which is helpful
and essential for learning, planning, and control. We benchmark three tasks
with reinforcement learning, including food cutting, water pushing, and towel
catching. We also evaluate butter pushing with a classic planning-control
paradigm. This simulator offers an enhancement of physics simulation in terms
of the computation of multiphysics coupling effects.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 03:35:13 GMT"
},
{
"version": "v2",
"created": "Sun, 14 May 2023 17:25:58 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Fu",
"Haoyuan",
""
],
[
"Xu",
"Wenqiang",
""
],
[
"Ye",
"Ruolin",
""
],
[
"Xue",
"Han",
""
],
[
"Yu",
"Zhenjun",
""
],
[
"Tang",
"Tutian",
""
],
[
"Li",
"Yutong",
""
],
[
"Du",
"Wenxin",
""
],
[
"Zhang",
"Jieyi",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.969837 |
2203.08580
|
Max Landauer
|
Max Landauer, Florian Skopik, Maximilian Frank, Wolfgang Hotwagner,
Markus Wurzenberger, Andreas Rauber
|
Maintainable Log Datasets for Evaluation of Intrusion Detection Systems
| null |
IEEE Transactions on Dependable and Secure Computing (2022)
|
10.1109/TDSC.2022.3201582
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Intrusion detection systems (IDS) monitor system logs and network traffic to
recognize malicious activities in computer networks. Evaluating and comparing
IDSs with respect to their detection accuracies is thereby essential for their
selection in specific use-cases. Despite a great need, hardly any labeled
intrusion detection datasets are publicly available. As a consequence,
evaluations are often carried out on datasets from real infrastructures, where
analysts cannot control system parameters or generate a reliable ground truth,
or private datasets that prevent reproducibility of results. As a solution, we
present a collection of maintainable log datasets collected in a testbed
representing a small enterprise. Thereby, we employ extensive state machines to
simulate normal user behavior and inject a multi-step attack. For scalable
testbed deployment, we use concepts from model-driven engineering that enable
automatic generation and labeling of an arbitrary number of datasets that
comprise repetitions of attack executions with variations of parameters. In
total, we provide 8 datasets containing 20 distinct types of log files, of
which we label 8 files for 10 unique attack steps. We publish the labeled log
datasets and code for testbed setup and simulation online as open-source to
enable others to reproduce and extend our results.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 12:14:36 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Landauer",
"Max",
""
],
[
"Skopik",
"Florian",
""
],
[
"Frank",
"Maximilian",
""
],
[
"Hotwagner",
"Wolfgang",
""
],
[
"Wurzenberger",
"Markus",
""
],
[
"Rauber",
"Andreas",
""
]
] |
new_dataset
| 0.999847 |
2204.09593
|
Fangyi Zhu
|
Fangyi Zhu, See-Kiong Ng, St\'ephane Bressan
|
COOL, a Context Outlooker, and its Application to Question Answering and
other Natural Language Processing Tasks
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Vision Outlooker improves the performance of vision transformers, which
implement a self-attention mechanism, by adding outlook attention, a form of
local attention.
In natural language processing, as has been the case in computer vision and
other domains, transformer-based models constitute the state-of-the-art for
most processing tasks. In this domain, too, many authors have argued and
demonstrated the importance of local context.
We present an outlook attention mechanism, COOL, for natural language
processing. COOL, added on top of the self-attention layers of a
transformer-based model, encodes local syntactic context considering word
proximity and more pair-wise constraints than dynamic convolution used by
existing approaches.
A comparative empirical performance evaluation of an implementation of COOL
with different transformer-based models confirms the opportunity for
improvement over a baseline using the original models alone for various natural
language processing tasks, including question answering. The proposed approach
achieves competitive performance with existing state-of-the-art methods on some
tasks.
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 07:03:40 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 15:42:37 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Zhu",
"Fangyi",
""
],
[
"Ng",
"See-Kiong",
""
],
[
"Bressan",
"Stéphane",
""
]
] |
new_dataset
| 0.999285 |
2205.07780
|
David Richter
|
David Richter, David Kretzler, Pascal Weisenburger, Guido Salvaneschi,
Sebastian Faust, Mira Mezini
|
Prisma: A Tierless Language for Enforcing Contract-Client Protocols in
Decentralized Applications (Extended Version)
|
This is the extended version including appendices of the paper to be
published in TOPLAS; an extended abstract was published in ECOOP 2022
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Decentralized applications (dApps) consist of smart contracts that run on
blockchains and clients that model collaborating parties. dApps are used to
model financial and legal business functionality. Today, contracts and clients
are written as separate programs -- in different programming languages --
communicating via send and receive operations. This makes distributed program
flow awkward to express and reason about, increasing the potential for
mismatches in the client-contract interface, which can be exploited by
malicious clients, potentially leading to huge financial losses. In this paper,
we present Prisma, a language for tierless decentralized applications, where
the contract and its clients are defined in one unit and pairs of send and
receive actions that "belong together" are encapsulated into a single
direct-style operation, which is executed differently by sending and receiving
parties. This enables expressing distributed program flow via standard control
flow and renders mismatching communication impossible. We prove formally that
our compiler preserves program behavior in presence of an attacker controlling
the client code. We systematically compare Prisma with mainstream and advanced
programming models for dApps and provide empirical evidence for its
expressiveness and performance.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 16:12:52 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 14:33:12 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Richter",
"David",
""
],
[
"Kretzler",
"David",
""
],
[
"Weisenburger",
"Pascal",
""
],
[
"Salvaneschi",
"Guido",
""
],
[
"Faust",
"Sebastian",
""
],
[
"Mezini",
"Mira",
""
]
] |
new_dataset
| 0.983858 |
2207.01105
|
Yun Liao
|
Yun Liao, Seyyed Ali Hashemi, Hengjie Yang, John M. Cioffi
|
Scalable Polar Code Construction for Successive Cancellation List
Decoding: A Graph Neural Network-Based Approach
|
33 pages, 11 figures, submitted to IEEE Transactions on
Communications
| null | null | null |
cs.IT cs.AI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While constructing polar codes for successive-cancellation decoding can be
implemented efficiently by sorting the bit-channels, finding optimal polar
codes for cyclic-redundancy-check-aided successive-cancellation list (CA-SCL)
decoding in an efficient and scalable manner still awaits investigation. This
paper first maps a polar code to a unique heterogeneous graph called the
polar-code-construction message-passing (PCCMP) graph. Next, a heterogeneous
graph-neural-network-based iterative message-passing (IMP) algorithm is
proposed which aims to find a PCCMP graph that corresponds to the polar code
with minimum frame error rate under CA-SCL decoding. This new IMP algorithm's
major advantage lies in its scalability power. That is, the model complexity is
independent of the blocklength and code rate, and a trained IMP model over a
short polar code can be readily applied to a long polar code's construction.
Numerical experiments show that IMP-based polar-code constructions outperform
classical constructions under CA-SCL decoding. In addition, when an IMP model
trained on a length-128 polar code directly applies to the construction of
polar codes with different code rates and blocklengths, simulations show that
these polar code constructions deliver comparable performance to the 5G polar
codes.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 19:27:43 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Dec 2022 20:04:26 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Feb 2023 19:39:22 GMT"
},
{
"version": "v4",
"created": "Sat, 13 May 2023 21:58:58 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Liao",
"Yun",
""
],
[
"Hashemi",
"Seyyed Ali",
""
],
[
"Yang",
"Hengjie",
""
],
[
"Cioffi",
"John M.",
""
]
] |
new_dataset
| 0.951797 |
2207.04156
|
Clive Gomes
|
Clive Gomes, Hyejin Park, Patrick Kollman, Yi Song, Iffanice Houndayi,
Ankit Shah
|
Automated Audio Captioning and Language-Based Audio Retrieval
|
DCASE 2022 Competition (Task 6)
| null | null | null |
cs.SD cs.CL cs.IR eess.AS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This project involved participation in the DCASE 2022 Competition (Task 6)
which had two subtasks: (1) Automated Audio Captioning and (2) Language-Based
Audio Retrieval. The first subtask involved the generation of a textual
description for audio samples, while the goal of the second was to find audio
samples within a fixed dataset that match a given description. For both
subtasks, the Clotho dataset was used. The models were evaluated on BLEU1,
BLEU2, BLEU3, ROUGEL, METEOR, CIDEr, SPICE, and SPIDEr scores for audio
captioning and R1, R5, R10 and mARP10 scores for audio retrieval. We have
conducted a handful of experiments that modify the baseline models for these
tasks. Our final architecture for Automated Audio Captioning is close to the
baseline performance, while our model for Language-Based Audio Retrieval has
surpassed its counterpart.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 23:48:52 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 13:54:28 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Gomes",
"Clive",
""
],
[
"Park",
"Hyejin",
""
],
[
"Kollman",
"Patrick",
""
],
[
"Song",
"Yi",
""
],
[
"Houndayi",
"Iffanice",
""
],
[
"Shah",
"Ankit",
""
]
] |
new_dataset
| 0.999474 |
2208.00455
|
Yirun Wang
|
Yirun Wang, Gongpu Wang, Ruisi He, Bo Ai, and Chintha Tellambura
|
Doppler Shift and Channel Estimation for Intelligent Transparent Surface
Assisted Communication Systems on High-Speed Railways
|
10 pages, 7 figures
|
IEEE Transactions on Communications, 2023 (latest version)
|
10.1109/TCOMM.2023.3275590
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The critical distinction between the emerging intelligent transparent surface
(ITS) and intelligent reflection surface (IRS) is that the incident signals can
penetrate the ITS instead of being reflected, which enables the ITS to combat
the severe signal penetration loss for high-speed railway (HSR) wireless
communications. This paper thus investigates the channel estimation problem for
an ITS-assisted HSR network where the ITS is embedded into the carriage window.
We first formulate the channels as functions of physical parameters, and thus
transform the channel estimation into a parameter recovery problem. Next, we
design the first two pilot blocks within each frame and develop a serial
low-complexity channel estimation algorithm. Specifically, the channel
estimates are initially obtained, and each estimate is further expressed as the
sum of its perfectly known value and the estimation error. By leveraging the
relationship between channels for the two pilot blocks, we recover the Doppler
shifts from the channel estimates, based on which we can further acquire other
channel parameters. Moreover, the Cramer-Rao lower bound (CRLB) for each
parameter is derived as a performance benchmark. Finally, we provide numerical
results to establish the effectiveness of our proposed estimators.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 15:52:48 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Wang",
"Yirun",
""
],
[
"Wang",
"Gongpu",
""
],
[
"He",
"Ruisi",
""
],
[
"Ai",
"Bo",
""
],
[
"Tellambura",
"Chintha",
""
]
] |
new_dataset
| 0.999816 |
2208.09825
|
Lintong Zhang
|
Lintong Zhang, Michael Helmberger, Lanke Frank Tarimo Fu, David Wisth,
Marco Camurri, Davide Scaramuzza, Maurice Fallon
|
Hilti-Oxford Dataset: A Millimetre-Accurate Benchmark for Simultaneous
Localization and Mapping
|
Presented at IEEE Robotics and Automation (ICRA), 2023
|
IEEE Robotics and Automation Letters ( Volume: 8, Issue: 1,
January 2023)
|
10.1109/LRA.2022.3226077
| null |
cs.RO eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Simultaneous Localization and Mapping (SLAM) is being deployed in real-world
applications, however many state-of-the-art solutions still struggle in many
common scenarios. A key necessity in progressing SLAM research is the
availability of high-quality datasets and fair and transparent benchmarking. To
this end, we have created the Hilti-Oxford Dataset, to push state-of-the-art
SLAM systems to their limits. The dataset has a variety of challenges ranging
from sparse and regular construction sites to a 17th century neoclassical
building with fine details and curved surfaces. To encourage multi-modal SLAM
approaches, we designed a data collection platform featuring a lidar, five
cameras, and an IMU (Inertial Measurement Unit). With the goal of benchmarking
SLAM algorithms for tasks where accuracy and robustness are paramount, we
implemented a novel ground truth collection method that enables our dataset to
accurately measure SLAM pose errors with millimeter accuracy. To further ensure
accuracy, the extrinsics of our platform were verified with a
micrometer-accurate scanner, and temporal calibration was managed online using
hardware time synchronization. The multi-modality and diversity of our dataset
attracted a large field of academic and industrial researchers to enter the
second edition of the Hilti SLAM challenge, which concluded in June 2022. The
results of the challenge show that while the top three teams could achieve an
accuracy of 2cm or better for some sequences, the performance dropped off in
more difficult sequences.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 07:11:46 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2023 14:01:24 GMT"
},
{
"version": "v3",
"created": "Mon, 15 May 2023 10:49:18 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Zhang",
"Lintong",
""
],
[
"Helmberger",
"Michael",
""
],
[
"Fu",
"Lanke Frank Tarimo",
""
],
[
"Wisth",
"David",
""
],
[
"Camurri",
"Marco",
""
],
[
"Scaramuzza",
"Davide",
""
],
[
"Fallon",
"Maurice",
""
]
] |
new_dataset
| 0.999834 |
2209.06122
|
Shaochen Wang
|
Shaochen Wang, Wei Zhang, Zhangli Zhou, Jiaxi Cao, Ziyang Chen, Kang
Chen, Bin Li, and Zhen Kan
|
What You See is What You Grasp: User-Friendly Grasping Guided by
Near-eye-tracking
|
6 pages
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents a next-generation human-robot interface that can infer and
realize the user's manipulation intention via sight only. Specifically, we
develop a system that integrates near-eye-tracking and robotic manipulation to
enable user-specified actions (e.g., grasp, pick-and-place, etc), where visual
information is merged with human attention to create a mapping for desired
robot actions. To enable sight guided manipulation, a head-mounted
near-eye-tracking device is developed to track the eyeball movements in
real-time, so that the user's visual attention can be identified. To improve
the grasping performance, a transformer based grasp model is then developed.
Stacked transformer blocks are used to extract hierarchical features where the
volumes of channels are expanded at each stage while squeezing the resolution
of feature maps. Experimental validation demonstrates that the eye-tracking
system yields low gaze estimation error and the grasping system yields
promising results on multiple grasping datasets. This work is a proof of
concept for a gaze-interaction-based assistive robot, which holds great promise
for helping the elderly or people with upper-limb disabilities in their daily lives. A demo video
is available at https://www.youtube.com/watch?v=yuZ1hukYUrM
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 16:14:06 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 12:45:52 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Wang",
"Shaochen",
""
],
[
"Zhang",
"Wei",
""
],
[
"Zhou",
"Zhangli",
""
],
[
"Cao",
"Jiaxi",
""
],
[
"Chen",
"Ziyang",
""
],
[
"Chen",
"Kang",
""
],
[
"Li",
"Bin",
""
],
[
"Kan",
"Zhen",
""
]
] |
new_dataset
| 0.979772 |
2209.06424
|
Kay Hutchinson
|
Kay Hutchinson, Ian Reyes, Zongyu Li, and Homa Alemzadeh
|
COMPASS: A Formal Framework and Aggregate Dataset for Generalized
Surgical Procedure Modeling
|
This preprint has not undergone peer review or any post-submission
improvements or corrections. The Version of Record of this article is
published in the International Journal of Computer Assisted Radiology and
Surgery, and is available online at
https://doi.org/10.1007/s11548-023-02922-1
| null |
10.1007/s11548-023-02922-1
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: We propose a formal framework for the modeling and segmentation of
minimally-invasive surgical tasks using a unified set of motion primitives
(MPs) to enable more objective labeling and the aggregation of different
datasets.
Methods: We model dry-lab surgical tasks as finite state machines,
representing how the execution of MPs as the basic surgical actions results in
the change of surgical context, which characterizes the physical interactions
among tools and objects in the surgical environment. We develop methods for
labeling surgical context based on video data and for automatic translation of
context to MP labels. We then use our framework to create the COntext and
Motion Primitive Aggregate Surgical Set (COMPASS), including six dry-lab
surgical tasks from three publicly-available datasets (JIGSAWS, DESK, and
ROSMA), with kinematic and video data and context and MP labels.
Results: Our context labeling method achieves near-perfect agreement between
consensus labels from crowd-sourcing and expert surgeons. Segmentation of tasks
to MPs results in the creation of the COMPASS dataset that nearly triples the
amount of data for modeling and analysis and enables the generation of separate
transcripts for the left and right tools.
Conclusion: The proposed framework results in high quality labeling of
surgical data based on context and fine-grained MPs. Modeling surgical tasks
with MPs enables the aggregation of different datasets and the separate
analysis of left and right hands for bimanual coordination assessment. Our
formal framework and aggregate dataset can support the development of
explainable and multi-granularity models for improved surgical process
analysis, skill assessment, error detection, and autonomy.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 05:25:19 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 01:45:05 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Apr 2023 14:57:40 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Apr 2023 21:30:10 GMT"
},
{
"version": "v5",
"created": "Mon, 15 May 2023 16:32:23 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Hutchinson",
"Kay",
""
],
[
"Reyes",
"Ian",
""
],
[
"Li",
"Zongyu",
""
],
[
"Alemzadeh",
"Homa",
""
]
] |
new_dataset
| 0.999516 |
2209.09178
|
Yunsheng Ma
|
Yunsheng Ma and Ziran Wang
|
ViT-DD: Multi-Task Vision Transformer for Semi-Supervised Driver
Distraction Detection
|
Accepted at the 2023 IEEE Intelligent Vehicles Symposium (IV)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Ensuring traffic safety and mitigating accidents in modern driving is of
paramount importance, and computer vision technologies have the potential to
significantly contribute to this goal. This paper presents a multi-modal Vision
Transformer for Driver Distraction Detection (termed ViT-DD), which
incorporates inductive information from training signals related to both
distraction detection and driver emotion recognition. Additionally, a
self-learning algorithm is developed, allowing for the seamless integration of
driver data without emotion labels into the multi-task training process of
ViT-DD. Experimental results reveal that the proposed ViT-DD surpasses existing
state-of-the-art methods for driver distraction detection by 6.5\% and 0.9\% on
the SFDDD and AUCDD datasets, respectively. To support reproducibility and
foster further advancements in this critical research area, the source code for
this approach is made publicly available at
https://github.com/PurdueDigitalTwin/ViT-DD.
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 16:56:51 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 16:16:13 GMT"
},
{
"version": "v3",
"created": "Sat, 13 May 2023 02:51:53 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Ma",
"Yunsheng",
""
],
[
"Wang",
"Ziran",
""
]
] |
new_dataset
| 0.997758 |
2210.13088
|
Lin He
|
Long Pan, Jiahai Yang, Lin He, Zhiliang Wang, Leyao Nie, Guanglei
Song, Yaozhong Liu
|
Your Router is My Prober: Measuring IPv6 Networks via ICMP Rate Limiting
Side Channels
| null |
Network and Distributed System Security Symposium (NDSS) 2023
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active Internet measurements face challenges when some measurements require
many remote vantage points. In this paper, we propose a novel technique for
measuring remote IPv6 networks via side channels in ICMP rate limiting, a
required function for IPv6 nodes to limit the rate at which ICMP error messages
are generated. This technique, iVantage, can to some extent use 1.1M remote
routers distributed in 9.5k autonomous systems and 182 countries as our
"vantage points". We apply iVantage to two different, but both challenging
measurement tasks: 1) measuring the deployment of inbound source address
validation (ISAV) and 2) measuring reachability between arbitrary Internet
nodes. We accomplish these two tasks from only one local vantage point without
controlling the targets or relying on other services within the target
networks. Our large-scale ISAV measurements cover ~50% of all IPv6 autonomous
systems and find ~79% of them are vulnerable to spoofing, making this the
largest-scale measurement study of IPv6 ISAV to date. Our method for reachability
measurements achieves over 80% precision and recall in our evaluation. Finally,
we perform an Internet-wide measurement of the ICMP rate limiting
implementations, present a detailed discussion on ICMP rate limiting,
particularly the potential security and privacy risks in the mechanism of ICMP
rate limiting, and provide possible mitigation measures. We make our code
available to the community.
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 10:14:16 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 08:34:43 GMT"
},
{
"version": "v3",
"created": "Sat, 13 May 2023 08:23:47 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Pan",
"Long",
""
],
[
"Yang",
"Jiahai",
""
],
[
"He",
"Lin",
""
],
[
"Wang",
"Zhiliang",
""
],
[
"Nie",
"Leyao",
""
],
[
"Song",
"Guanglei",
""
],
[
"Liu",
"Yaozhong",
""
]
] |
new_dataset
| 0.997404 |
2211.08778
|
Hossein Rezaei
|
Hossein Rezaei, Nandana Rajatheva, Matti Latva-aho
|
A Combinational Multi-Kernel Decoder for Polar Codes
| null | null | null | null |
cs.IT cs.AR cs.SY eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Polar codes have been selected as the channel coding scheme for control
channel in the fifth generation (5G) communication system thanks to their
capacity achieving characteristics. However, the traditional polar codes
support only codes constructed from the binary (2x2) kernel, which limits code
lengths to powers of 2. Multi-kernel polar codes are proposed to achieve
flexible block length. In this paper, the first combinational decoder for
multi-kernel polar codes based on successive cancellation algorithm is
proposed. The proposed decoder can decode pure-binary and binary-ternary (3x3)
mixed polar codes. The architecture is rate-flexible with the capability of
online rate assignment and supports any kernel sequences. The FPGA
implementation results reveal that for a code of length N = 48, the coded
throughput of 812.1 Mbps can be achieved.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 09:09:06 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 07:00:23 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Jan 2023 10:22:13 GMT"
},
{
"version": "v4",
"created": "Sun, 7 May 2023 07:27:58 GMT"
},
{
"version": "v5",
"created": "Sat, 13 May 2023 18:08:08 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Rezaei",
"Hossein",
""
],
[
"Rajatheva",
"Nandana",
""
],
[
"Latva-aho",
"Matti",
""
]
] |
new_dataset
| 0.999621 |
2302.12971
|
Yulong Liu
|
Yulong Liu, Yongqiang Ma, Wei Zhou, Guibo Zhu, Nanning Zheng
|
BrainCLIP: Bridging Brain and Visual-Linguistic Representation Via CLIP
for Generic Natural Visual Stimulus Decoding
| null | null | null | null |
cs.CV cs.AI cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Due to the lack of paired samples and the low signal-to-noise ratio of
functional MRI (fMRI) signals, reconstructing perceived natural images or
decoding their semantic contents from fMRI data are challenging tasks. In this
work, we propose, for the first time, a task-agnostic fMRI-based brain decoding
model, BrainCLIP, which leverages CLIP's cross-modal generalization ability to
bridge the modality gap between brain activity, image, and text. Our
experiments demonstrate that CLIP can act as a pivot for generic brain decoding
tasks, including zero-shot visual categories decoding, fMRI-image/text
matching, and fMRI-to-image generation. Specifically, BrainCLIP aims to train a
mapping network that transforms fMRI patterns into a well-aligned CLIP
embedding space by combining visual and textual supervision. Our experiments
show that this combination can boost the decoding model's performance on
certain tasks like fMRI-text matching and fMRI-to-image generation. On the
zero-shot visual category decoding task, BrainCLIP achieves significantly
better performance than BraVL, a recently proposed multi-modal method
specifically designed for this task. BrainCLIP can also reconstruct visual
stimuli with high semantic fidelity and establishes a new state-of-the-art for
fMRI-based natural image reconstruction in terms of high-level semantic
features.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 03:28:54 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Apr 2023 04:24:15 GMT"
},
{
"version": "v3",
"created": "Mon, 15 May 2023 04:32:59 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Liu",
"Yulong",
""
],
[
"Ma",
"Yongqiang",
""
],
[
"Zhou",
"Wei",
""
],
[
"Zhu",
"Guibo",
""
],
[
"Zheng",
"Nanning",
""
]
] |
new_dataset
| 0.997083 |
2304.02488
|
Yang Fan
|
Fan Yang
|
SCB-dataset: A Dataset for Detecting Student Classroom Behavior
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of deep learning methods for automatic detection of students'
classroom behavior is a promising approach to analyze their class performance
and enhance teaching effectiveness. However, the lack of publicly available
datasets on student behavior poses a challenge for researchers in this field.
To address this issue, we propose a Student Classroom Behavior dataset
(SCB-dataset) that reflects real-life scenarios. Our dataset includes 11,248
labels and 4,003 images, with a focus on hand-raising behavior. We evaluated
the dataset using the YOLOv7 algorithm, achieving a mean average precision
(map) of up to 85.3%. We believe that our dataset can serve as a robust
foundation for future research in the field of student behavior detection and
promote further advancements in this area. Our SCB-dataset can be downloaded
from: https://github.com/Whiffe/SCB-dataset
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 15:02:30 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Yang",
"Fan",
""
]
] |
new_dataset
| 0.999724 |
2304.05642
|
Chi Liu
|
Chi Liu, Haochun Wang, Nuwa Xi, Sendong Zhao, Bing Qin
|
Global Prompt Cell: A Portable Control Module for Effective Prompt
Tuning
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a novel approach to tuning pre-trained models, prompt tuning involves
freezing the parameters in downstream tasks while inserting trainable
embeddings into inputs in the first layer. However, previous methods have
mainly focused on the initialization of prompt embeddings. The strategy of
training and utilizing prompt embeddings in a reasonable way has become a
limiting factor in the effectiveness of prompt tuning. To address this issue,
we introduce the Global Prompt Cell (GPC), a portable control module for prompt
tuning that selectively preserves prompt information across all encoder layers.
Our experimental results demonstrate a 5.8% improvement on SuperGLUE datasets
compared to vanilla prompt tuning.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 06:46:33 GMT"
},
{
"version": "v2",
"created": "Sat, 13 May 2023 07:45:59 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Liu",
"Chi",
""
],
[
"Wang",
"Haochun",
""
],
[
"Xi",
"Nuwa",
""
],
[
"Zhao",
"Sendong",
""
],
[
"Qin",
"Bing",
""
]
] |
new_dataset
| 0.956903 |
2304.07849
|
Ming Yan
|
Junfeng Tian, Hehong Chen, Guohai Xu, Ming Yan, Xing Gao, Jianhai
Zhang, Chenliang Li, Jiayi Liu, Wenshen Xu, Haiyang Xu, Qi Qian, Wei Wang,
Qinghao Ye, Jiejing Zhang, Ji Zhang, Fei Huang, Jingren Zhou
|
ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented
Instruction Tuning for Digital Human
|
36 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present ChatPLUG, a Chinese open-domain dialogue system for
digital human applications that instruction finetunes on a wide range of
dialogue tasks in a unified internet-augmented format. Different from other
open-domain dialogue models that focus on large-scale pre-training and scaling
up model size or dialogue corpus, we aim to build a powerful and practical
dialogue system for digital human with diverse skills and good multi-task
generalization by internet-augmented instruction tuning. To this end, we first
conduct large-scale pre-training on both common document corpus and dialogue
data with curriculum learning, so as to inject various world knowledge and
dialogue abilities into ChatPLUG. Then, we collect a wide range of dialogue
tasks spanning diverse features of knowledge, personality, multi-turn memory,
and empathy, on which we further instruction tune ChatPLUG via unified
natural language instruction templates. External knowledge from an internet
search is also used during instruction finetuning for alleviating the problem
of knowledge hallucinations. We show that ChatPLUG outperforms
state-of-the-art Chinese dialogue systems on both automatic and human
evaluation, and demonstrates strong multi-task generalization on a variety of
text understanding and generation tasks. In addition, we deploy ChatPLUG to
real-world applications such as Smart Speaker and Instant Message applications
with fast inference. Our models and code will be made publicly available on
ModelScope: https://modelscope.cn/models/damo/ChatPLUG-3.7B and Github:
https://github.com/X-PLUG/ChatPLUG .
|
[
{
"version": "v1",
"created": "Sun, 16 Apr 2023 18:16:35 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Apr 2023 15:08:03 GMT"
},
{
"version": "v3",
"created": "Mon, 15 May 2023 16:17:15 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Tian",
"Junfeng",
""
],
[
"Chen",
"Hehong",
""
],
[
"Xu",
"Guohai",
""
],
[
"Yan",
"Ming",
""
],
[
"Gao",
"Xing",
""
],
[
"Zhang",
"Jianhai",
""
],
[
"Li",
"Chenliang",
""
],
[
"Liu",
"Jiayi",
""
],
[
"Xu",
"Wenshen",
""
],
[
"Xu",
"Haiyang",
""
],
[
"Qian",
"Qi",
""
],
[
"Wang",
"Wei",
""
],
[
"Ye",
"Qinghao",
""
],
[
"Zhang",
"Jiejing",
""
],
[
"Zhang",
"Ji",
""
],
[
"Huang",
"Fei",
""
],
[
"Zhou",
"Jingren",
""
]
] |
new_dataset
| 0.998672 |
2305.03306
|
Sonia Sousa
|
Sonia Sousa, Jose Cravino, Paulo Martins, David Lamas
|
Human-centered trust framework: An HCI perspective
| null | null | null | null |
cs.HC cs.AI cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The rationale of this work is based on the current user trust discourse of
Artificial Intelligence (AI). We aim to produce novel HCI approaches that use
trust as a facilitator for the uptake (or appropriation) of current
technologies. We propose a framework (HCTFrame) to guide non-experts to unlock
the full potential of user trust in AI design. Results derived from a data
triangulation of findings from three literature reviews demystify some
misconceptions of user trust in computer science and AI discourse, and three
case studies are conducted to assess the effectiveness of a psychometric scale
in mapping potential users' trust breakdowns and concerns. This work primarily
contributes to the fight against the tendency to design technical-centered
vulnerable interactions, which can eventually lead to additional real and
perceived breaches of trust. The proposed framework can be used to guide system
designers on how to map and define user trust and the socioethical and
organisational needs and characteristics of AI system design. It can also guide
AI system designers on how to develop a prototype and operationalise a solution
that meets user trust requirements. The article ends by providing some user
research tools that can be employed to measure users' trust intentions and
behaviours towards a proposed solution.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 06:15:32 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 06:12:11 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Sousa",
"Sonia",
""
],
[
"Cravino",
"Jose",
""
],
[
"Martins",
"Paulo",
""
],
[
"Lamas",
"David",
""
]
] |
new_dataset
| 0.982157 |
2305.03465
|
Razane Tajeddine
|
David Karpuk and Razane Tajeddine
|
Modular Polynomial Codes for Secure and Robust Distributed Matrix
Multiplication
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Modular Polynomial (MP) Codes for Secure Distributed Matrix
Multiplication (SDMM). The construction is based on the observation that one
can decode certain proper subsets of the coefficients of a polynomial with
fewer evaluations than is necessary to interpolate the entire polynomial. We
also present Generalized Gap Additive Secure Polynomial (GGASP) codes. Both MP
and GGASP codes are shown experimentally to perform favorably in terms of
recovery threshold when compared to other comparable polynomial codes for SDMM
which use the grid partition. Both MP and GGASP codes achieve the recovery
threshold of Entangled Polynomial Codes for robustness against stragglers, but
MP codes can decode below this recovery threshold depending on the set of
worker nodes which fails. The decoding complexity of MP codes is shown to be
lower than other approaches in the literature, due to the user not being tasked
with interpolating an entire polynomial.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 12:13:09 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 07:51:06 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Karpuk",
"David",
""
],
[
"Tajeddine",
"Razane",
""
]
] |
new_dataset
| 0.976494 |
2305.04582
|
Leonhard Hennig
|
Leonhard Hennig, Philippe Thomas, Sebastian M\"oller
|
MultiTACRED: A Multilingual Version of the TAC Relation Extraction
Dataset
|
Accepted at ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Relation extraction (RE) is a fundamental task in information extraction,
whose extension to multilingual settings has been hindered by the lack of
supervised resources comparable in size to large English datasets such as
TACRED (Zhang et al., 2017). To address this gap, we introduce the MultiTACRED
dataset, covering 12 typologically diverse languages from 9 language families,
which is created by machine-translating TACRED instances and automatically
projecting their entity annotations. We analyze translation and annotation
projection quality, identify error categories, and experimentally evaluate
fine-tuned pretrained mono- and multilingual language models in common transfer
learning scenarios. Our analyses show that machine translation is a viable
strategy to transfer RE instances, with native speakers judging more than 83%
of the translated instances to be linguistically and semantically acceptable.
We find monolingual RE model performance to be comparable to the English
original for many of the target languages, and that multilingual models trained
on a combination of English and target language data can outperform their
monolingual counterparts. However, we also observe a variety of translation and
annotation projection errors, both due to the MT systems and linguistic
features of the target languages, such as pronoun-dropping, compounding and
inflection, that degrade dataset quality and RE model performance.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 09:48:21 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 07:24:58 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Hennig",
"Leonhard",
""
],
[
"Thomas",
"Philippe",
""
],
[
"Möller",
"Sebastian",
""
]
] |
new_dataset
| 0.999805 |
2305.05280
|
Han Wu
|
Han Wu, Mingjie Zhan, Haochen Tan, Zhaohui Hou, Ding Liang, and Linqi
Song
|
VCSUM: A Versatile Chinese Meeting Summarization Dataset
|
Findings of ACL 2023 (long paper). GitHub:
https://github.com/hahahawu/VCSum
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Compared to news and chat summarization, the development of meeting
summarization is severely hindered by the limited data. To this end, we
introduce a versatile Chinese meeting summarization dataset, dubbed VCSum,
consisting of 239 real-life meetings, with a total duration of over 230 hours.
We claim our dataset is versatile because we provide the annotations of topic
segmentation, headlines, segmentation summaries, overall meeting summaries, and
salient sentences for each meeting transcript. As such, the dataset can adapt
to various summarization tasks or methods, including segmentation-based
summarization, multi-granularity summarization and retrieval-then-generate
summarization. Our analysis confirms the effectiveness and robustness of VCSum.
We also provide a set of benchmark models regarding different downstream
summarization tasks on VCSum to facilitate further research. The dataset and
code will be released at https://github.com/hahahawu/VCSum.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 09:07:15 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 09:30:39 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Wu",
"Han",
""
],
[
"Zhan",
"Mingjie",
""
],
[
"Tan",
"Haochen",
""
],
[
"Hou",
"Zhaohui",
""
],
[
"Liang",
"Ding",
""
],
[
"Song",
"Linqi",
""
]
] |
new_dataset
| 0.999703 |
2305.07586
|
Sahib Julka
|
Sahib Julka and Michael Granitzer
|
Knowledge distillation with Segment Anything (SAM) model for Planetary
Geological Mapping
| null | null | null | null |
cs.CV cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Planetary science research involves analysing vast amounts of remote sensing
data, which are often costly and time-consuming to annotate and process. One of
the essential tasks in this field is geological mapping, which requires
identifying and outlining regions of interest in planetary images, including
geological features and landforms. However, manually labelling these images is
a complex and challenging task that requires significant domain expertise and
effort. To expedite this endeavour, we propose the use of knowledge
distillation using the recently introduced cutting-edge Segment Anything (SAM)
model. We demonstrate the effectiveness of this prompt-based foundation model
for rapid annotation and quick adaptability to a prime use case of mapping
planetary skylights. Our work reveals that with a small set of annotations
obtained with the right prompts from the model and subsequently training a
specialised domain decoder, we can achieve satisfactory semantic segmentation
on this task. Key results indicate that the use of knowledge distillation can
significantly reduce the effort required by domain experts for manual
annotation and improve the efficiency of image segmentation tasks. This
approach has the potential to accelerate extra-terrestrial discovery by
automatically detecting and segmenting Martian landforms.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 16:30:58 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 12:46:28 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Julka",
"Sahib",
""
],
[
"Granitzer",
"Michael",
""
]
] |
new_dataset
| 0.95578 |
2305.07662
|
Wei Xu
|
Ziqing Yin, Renjie Xie, Wei Xu, Zhaohui Yang, and Xiaohu You
|
Self-information Domain-based Neural CSI Compression with Feature
Coupling
| null | null | null | null |
cs.IT cs.LG eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning (DL)-based channel state information (CSI) feedback methods
compress the CSI matrix by exploiting its delay and angle features
straightforwardly, while the measure in terms of information contained in the
CSI matrix has rarely been considered. Based on this observation, we introduce
self-information as an informative CSI representation from the perspective of
information theory, which reflects the amount of information of the original
CSI matrix in an explicit way. Then, a novel DL-based network is proposed for
temporal CSI compression in the self-information domain, namely SD-CsiNet. The
proposed SD-CsiNet projects the raw CSI onto a self-information matrix in the
newly-defined self-information domain, extracts both temporal and spatial
features of the self-information matrix, and then couples these two features
for effective compression. Experimental results verify the effectiveness of the
proposed SD-CsiNet by exploiting the self-information of CSI. Particularly for
compression ratios 1/8 and 1/16, the SD-CsiNet respectively achieves 7.17 dB
and 3.68 dB performance gains compared to state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 30 Apr 2023 08:02:40 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Yin",
"Ziqing",
""
],
[
"Xie",
"Renjie",
""
],
[
"Xu",
"Wei",
""
],
[
"Yang",
"Zhaohui",
""
],
[
"You",
"Xiaohu",
""
]
] |
new_dataset
| 0.981159 |
2305.07686
|
Dimitrios Tyrovolas
|
Dimitrios Tyrovolas, Sotiris A. Tegos, Vasilis K. Papanikolaou, Yue
Xiao, Prodromos-Vasileios Mekikis, Panagiotis D. Diamantoulakis, Sotiris
Ioannidis, Christos K. Liaskos, George K. Karagiannidis
|
Zero-Energy Reconfigurable Intelligent Surfaces (zeRIS)
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
A primary objective of the forthcoming sixth generation (6G) of wireless
networking is to support demanding applications, while ensuring energy
efficiency. Programmable wireless environments (PWEs) have emerged as a
promising solution, leveraging reconfigurable intelligent surfaces (RISs), to
control wireless propagation and deliver exceptional quality-of-service. In this
paper, we analyze the performance of a network supported by zero-energy RISs
(zeRISs), which harvest energy for their operation and contribute to the
realization of PWEs. Specifically, we investigate joint energy-data rate outage
probability and the energy efficiency of a zeRIS-assisted communication system
by employing three harvest-and-reflect (HaR) methods, i) power splitting, ii)
time switching, and iii) element splitting. Furthermore, we consider two zeRIS
deployment strategies, namely BS-side zeRIS and UE-side zeRIS. Simulation
results validate the provided analysis and examine which HaR method performs
better depending on the zeRIS placement. Finally, valuable insights and
conclusions for the performance of zeRIS-assisted wireless networks are drawn
from the presented results.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 15:14:24 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Tyrovolas",
"Dimitrios",
""
],
[
"Tegos",
"Sotiris A.",
""
],
[
"Papanikolaou",
"Vasilis K.",
""
],
[
"Xiao",
"Yue",
""
],
[
"Mekikis",
"Prodromos-Vasileios",
""
],
[
"Diamantoulakis",
"Panagiotis D.",
""
],
[
"Ioannidis",
"Sotiris",
""
],
[
"Liaskos",
"Christos K.",
""
],
[
"Karagiannidis",
"George K.",
""
]
] |
new_dataset
| 0.999318 |
2305.07713
|
Zhe Liu
|
Zhe Liu, Xiaoqing Ye, Zhikang Zou, Xinwei He, Xiao Tan, Errui Ding,
Jingdong Wang, Xiang Bai
|
Multi-Modal 3D Object Detection by Box Matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal 3D object detection has received growing attention as the
information from different sensors like LiDAR and cameras are complementary.
Most fusion methods for 3D detection rely on an accurate alignment and
calibration between 3D point clouds and RGB images. However, such an assumption
is not reliable in a real-world self-driving system, as the alignment between
different modalities is easily affected by asynchronous sensors and disturbed
sensor placement. We propose a novel {F}usion network by {B}ox {M}atching
(FBMNet) for multi-modal 3D detection, which provides an alternative way for
cross-modal feature alignment by learning the correspondence at the bounding
box level to free up the dependency of calibration during inference. With the
learned assignments between 3D and 2D object proposals, the fusion for
detection can be effectively performed by combining their ROI features. Extensive
experiments on the nuScenes dataset demonstrate that our method is much more
stable in dealing with challenging cases such as asynchronous sensors,
misaligned sensor placement, and degenerated camera images than existing fusion
methods. We hope that our FBMNet could provide an available solution to dealing
with these challenging cases for safety in real autonomous driving scenarios.
Codes will be publicly available at https://github.com/happinesslz/FBMNet.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 18:08:51 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Liu",
"Zhe",
""
],
[
"Ye",
"Xiaoqing",
""
],
[
"Zou",
"Zhikang",
""
],
[
"He",
"Xinwei",
""
],
[
"Tan",
"Xiao",
""
],
[
"Ding",
"Errui",
""
],
[
"Wang",
"Jingdong",
""
],
[
"Bai",
"Xiang",
""
]
] |
new_dataset
| 0.998669 |
2305.07769
|
Homa Nikbakht
|
Homa Nikbakht, Eric Ruzomberka, Mich\`ele Wigger, Shlomo Shamai
(Shitz), H. Vincent Poor
|
Joint Coding of eMBB and URLLC in Vehicle-to-Everything (V2X)
Communications
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
A point-to-point communication is considered where a roadside unit (RSU)
wishes to simultaneously send messages of enhanced mobile broadband (eMBB) and
ultra-reliable low-latency communication (URLLC) services to a vehicle. The
eMBB message arrives at the beginning of a block and its transmission lasts
over the entire block. During each eMBB transmission block, random arrivals of
URLLC messages are assumed. To improve the reliability of the URLLC
transmissions, the RSU reinforces their transmissions by mitigating the
interference of eMBB transmission by means of dirty paper coding (DPC). In the
proposed coding scheme, the eMBB messages are decoded based on two approaches:
treating interference as noise, and successive interference cancellation.
Rigorous bounds are derived for the error probabilities of eMBB and URLLC
transmissions achieved by our scheme. Numerical results illustrate that they
are lower than bounds for standard time-sharing.
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 21:26:10 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Nikbakht",
"Homa",
"",
"Shitz"
],
[
"Ruzomberka",
"Eric",
"",
"Shitz"
],
[
"Wigger",
"Michèle",
"",
"Shitz"
],
[
"Shamai",
"Shlomo",
"",
"Shitz"
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.997518 |
2305.07825
|
Yang Fan
|
Fan Yang and Tao Wang, Xiaofei Wang
|
Student Classroom Behavior Detection based on YOLOv7-BRA and Multi-Model
Fusion
|
arXiv admin note: substantial text overlap with arXiv:2304.02488
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately detecting student behavior in classroom videos can aid in
analyzing their classroom performance and improving teaching effectiveness.
However, the current accuracy rate in behavior detection is low. To address
this challenge, we propose the Student Classroom Behavior Detection system
based on YOLOv7-BRA (YOLOv7 with Bi-level Routing Attention). We
identified eight different behavior patterns, including standing, sitting,
speaking, listening, walking, raising hands, reading, and writing. We
constructed a dataset, which contained 11,248 labels and 4,001 images, with an
emphasis on the common behavior of raising hands in a classroom setting
(Student Classroom Behavior dataset, SCB-Dataset). To improve detection
accuracy, we added the biformer attention module to the YOLOv7 network.
Finally, we fused the results from YOLOv7 CrowdHuman, SlowFast, and DeepSort
models to obtain student classroom behavior data. We conducted experiments on
the SCB-Dataset, and YOLOv7-BRA achieved an mAP@0.5 of 87.1%, resulting in a
2.2% improvement over previous results. Our SCB-dataset can be downloaded from:
https://github.com/Whiffe/SCB-dataset
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 02:46:41 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Yang",
"Fan",
""
],
[
"Wang",
"Tao",
""
],
[
"Wang",
"Xiaofei",
""
]
] |
new_dataset
| 0.960906 |
2305.07842
|
Jasmine Roberts
|
Jasmine Roberts
|
The AR/VR Technology Stack: A Central Repository of Software Development
Libraries, Platforms, and Tools
| null | null |
10.13140/RG.2.2.10465.17769
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
A comprehensive repository of software development libraries, platforms, and
tools specific to the domains of augmented, virtual, and mixed reality.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 05:50:26 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Roberts",
"Jasmine",
""
]
] |
new_dataset
| 0.995504 |
2305.07853
|
Kuanxu Hou
|
Hao Zhuang, Xinjie Huang, Kuanxu Hou, Delei Kong, Chenming Hu, Zheng
Fang
|
EV-MGRFlowNet: Motion-Guided Recurrent Network for Unsupervised
Event-based Optical Flow with Hybrid Motion-Compensation Loss
|
11 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras offer promising properties, such as high temporal resolution
and high dynamic range. These benefits have been utilized in many machine
vision tasks, especially optical flow estimation. Currently, most existing
event-based works use deep learning to estimate optical flow. However, their
networks have not fully exploited prior hidden states and motion flows.
Additionally, their supervision strategy has not fully leveraged the geometric
constraints of event data to unlock the potential of networks. In this paper,
we propose EV-MGRFlowNet, an unsupervised event-based optical flow estimation
pipeline with motion-guided recurrent networks using a hybrid
motion-compensation loss. First, we propose a feature-enhanced recurrent
encoder network (FERE-Net) which fully utilizes prior hidden states to obtain
multi-level motion features. Then, we propose a flow-guided decoder network
(FGD-Net) to integrate prior motion flows. Finally, we design a hybrid
motion-compensation loss (HMC-Loss) to strengthen geometric constraints for the
more accurate alignment of events. Experimental results show that our method
outperforms the current state-of-the-art (SOTA) method on the MVSEC dataset,
with an average reduction of approximately 22.71% in average endpoint error
(AEE). To our knowledge, our method ranks first among unsupervised
learning-based methods.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 07:08:48 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Zhuang",
"Hao",
""
],
[
"Huang",
"Xinjie",
""
],
[
"Hou",
"Kuanxu",
""
],
[
"Kong",
"Delei",
""
],
[
"Hu",
"Chenming",
""
],
[
"Fang",
"Zheng",
""
]
] |
new_dataset
| 0.996477 |
2305.07903
|
Adam Pease
|
Chad Brown, Adam Pease, Josef Urban
|
Translating SUMO-K to Higher-Order Set Theory
|
17 pages including references
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We describe a translation from a fragment of SUMO (SUMO-K) into higher-order
set theory. The translation provides a formal semantics for portions of SUMO
which are beyond first-order and which have previously only had an informal
interpretation. It also for the first time embeds a large common-sense ontology
into a very secure interactive theorem proving system. We further extend our
previous work in finding contradictions in SUMO from first order constructs to
include a portion of SUMO's higher order constructs. Finally, using the
translation, we can create problems that can be proven using higher-order
interactive and automated theorem provers. This is tested in several systems
and can be used to form a corpus of higher-order common-sense reasoning
problems.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 12:03:52 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Brown",
"Chad",
""
],
[
"Pease",
"Adam",
""
],
[
"Urban",
"Josef",
""
]
] |
new_dataset
| 0.994268 |
2305.07930
|
Haekyu Park
|
Haekyu Park, Gonzalo Ramos, Jina Suh, Christopher Meek, Rachel Ng,
Mary Czerwinski
|
FoundWright: A System to Help People Re-find Pages from Their
Web-history
|
26 pages
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Re-finding information is an essential activity; however, it can be difficult
when people struggle to express what they are looking for. Through a
need-finding survey, we first seek opportunities for improving re-finding
experiences, and explore one of these opportunities by implementing the
FoundWright system. The system leverages recent advances in language
transformer models to expand people's ability to express what they are looking
for, through the interactive creation and manipulation of concepts contained
within documents. We use FoundWright as a design probe to understand (1) how
people create and use concepts, (2) how this expanded ability helps re-finding,
and (3) how people engage and collaborate with FoundWright's machine learning
support. Our probe reveals that this expanded way of expressing re-finding
goals helps people with the task, by complementing traditional searching and
browsing. Finally, we present insights and recommendations for future work
aiming at developing systems to support re-finding.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 14:46:44 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Park",
"Haekyu",
""
],
[
"Ramos",
"Gonzalo",
""
],
[
"Suh",
"Jina",
""
],
[
"Meek",
"Christopher",
""
],
[
"Ng",
"Rachel",
""
],
[
"Czerwinski",
"Mary",
""
]
] |
new_dataset
| 0.994166 |
2305.07952
|
Yang Ai
|
Yang Ai, Zhen-Hua Ling
|
APNet: An All-Frame-Level Neural Vocoder Incorporating Direct Prediction
of Amplitude and Phase Spectra
|
Accepted by IEEE/ACM Transactions on Audio, Speech, and Language
Processing. Codes are available
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel neural vocoder named APNet which reconstructs
speech waveforms from acoustic features by predicting amplitude and phase
spectra directly. The APNet vocoder is composed of an amplitude spectrum
predictor (ASP) and a phase spectrum predictor (PSP). The ASP is a residual
convolution network which predicts frame-level log amplitude spectra from
acoustic features. The PSP also adopts a residual convolution network with
acoustic features as input; the network's output is passed through two parallel
linear convolution layers, whose outputs are then combined by a phase
calculation formula to estimate frame-level phase spectra. Finally, the
outputs of ASP and PSP are combined to reconstruct speech waveforms by inverse
short-time Fourier transform (ISTFT). All operations of the ASP and PSP are
performed at the frame level. We train the ASP and PSP jointly and define
multilevel loss functions based on amplitude mean square error, phase
anti-wrapping error, short-time spectral inconsistency error and time domain
reconstruction error. Experimental results show that our proposed APNet vocoder
achieves an approximately 8x faster inference speed than HiFi-GAN v1 on a CPU
due to the all-frame-level operations, while its synthesized speech quality is
comparable to HiFi-GAN v1. The synthesized speech quality of the APNet vocoder
is also better than that of several equally efficient models. Ablation
experiments also confirm that the proposed parallel phase estimation
architecture is essential to phase modeling and the proposed loss functions are
helpful for improving the synthesized speech quality.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 15:51:26 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Ai",
"Yang",
""
],
[
"Ling",
"Zhen-Hua",
""
]
] |
new_dataset
| 0.996789 |
2305.07960
|
Ozer Can Devecioglu
|
Ozer Can Devecioglu, Serkan Kiranyaz, Amer Elhmes, Sadok Sassi, Turker
Ince, Onur Avci, Mohammad Hesam Soleimani-Babakamali, Ertugrul Taciroglu, and
Moncef Gabbouj
|
Sound-to-Vibration Transformation for Sensorless Motor Health Monitoring
| null | null | null | null |
cs.SD cs.HC eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic sensor-based detection of motor failures such as bearing faults is
crucial for predictive maintenance in various industries. Numerous
methodologies have been developed over the years to detect bearing faults.
Although numerous approaches for diagnosing faults
in motors have been proposed, vibration-based methods have become the de facto
standard and the most commonly used techniques. However, acquiring reliable
vibration signals, especially from rotating machinery, can sometimes be
infeasibly difficult due to challenging installation and operational conditions
(e.g., variations on accelerometer locations on the motor body), which will not
only alter the signal patterns significantly but may also induce severe
artifacts. Moreover, sensors are costly and require periodic maintenance to
sustain reliable signal acquisition. To address these drawbacks and avoid the
need for vibration sensors, in this study, we propose a novel
sound-to-vibration transformation method that can synthesize realistic
vibration signals directly from the sound measurements regardless of the
working conditions, fault type, and fault severity. As a result, using this
transformation, the data acquired by a simple sound recorder, e.g., a mobile
phone, can be transformed into the vibration signal, which can then be used for
fault detection by a pre-trained model. The proposed method is extensively
evaluated over the benchmark Qatar University Dual-Machine Bearing Fault
Benchmark dataset (QU-DMBF), which encapsulates sound and vibration data from
two different machines operating under various conditions. Experimental results
show that this novel approach can synthesize such realistic vibration signals
that can directly be used for reliable and highly accurate motor health
monitoring.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 16:37:18 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Devecioglu",
"Ozer Can",
""
],
[
"Kiranyaz",
"Serkan",
""
],
[
"Elhmes",
"Amer",
""
],
[
"Sassi",
"Sadok",
""
],
[
"Ince",
"Turker",
""
],
[
"Avci",
"Onur",
""
],
[
"Soleimani-Babakamali",
"Mohammad Hesam",
""
],
[
"Taciroglu",
"Ertugrul",
""
],
[
"Gabbouj",
"Moncef",
""
]
] |
new_dataset
| 0.989599 |
2305.07972
|
Agam Shah
|
Agam Shah and Suvan Paturi and Sudheer Chava
|
Trillion Dollar Words: A New Financial Dataset, Task & Market Analysis
|
ACL 2023 (main)
| null | null | null |
cs.CL q-fin.CP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Monetary policy pronouncements by Federal Open Market Committee (FOMC) are a
major driver of financial market returns. We construct the largest tokenized
and annotated dataset of FOMC speeches, meeting minutes, and press conference
transcripts in order to understand how monetary policy influences financial
markets. In this study, we develop a novel task of hawkish-dovish
classification and benchmark various pre-trained language models on the
proposed dataset. Using the best-performing model (RoBERTa-large), we construct
a measure of monetary policy stance for the FOMC document release days. To
evaluate the constructed measure, we study its impact on the treasury market,
stock market, and macroeconomic indicators. Our dataset, models, and code are
publicly available on Huggingface and GitHub under CC BY-NC 4.0 license.
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 17:32:39 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Shah",
"Agam",
""
],
[
"Paturi",
"Suvan",
""
],
[
"Chava",
"Sudheer",
""
]
] |
new_dataset
| 0.999846 |
2305.08029
|
Zihao Wang
|
Zihao Wang, Le Ma, Chen Zhang, Bo Han, Yikai Wang, Xinyi Chen, HaoRong
Hong, Wenbo Liu, Xinda Wu, Kejun Zhang
|
SongDriver2: Real-time Emotion-based Music Arrangement with Soft
Transition
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time emotion-based music arrangement, which aims to transform a given
music piece into another one that evokes specific emotional resonance with the
user in real-time, holds significant application value in various scenarios,
e.g., music therapy, video game soundtracks, and movie scores. However,
balancing emotion real-time fit with soft emotion transition is a challenge due
to the fine-grained and mutable nature of the target emotion. Existing studies
mainly focus on achieving emotion real-time fit, while the issue of soft
transition remains understudied, affecting the overall emotional coherence of
the music. In this paper, we propose SongDriver2 to address this balance.
Specifically, we first recognize the last timestep's music emotion and then
fuse it with the current timestep's target input emotion. The fused emotion
then serves as the guidance for SongDriver2 to generate the upcoming music
based on the input melody data. To adjust music similarity and emotion
real-time fit flexibly, we downsample the original melody and feed it into the
generation model. Furthermore, we design four music theory features to leverage
domain knowledge to enhance emotion information and employ semi-supervised
learning to mitigate the subjective bias introduced by manual dataset
annotation. According to the evaluation results, SongDriver2 surpasses the
state-of-the-art methods in both objective and subjective metrics. These
results demonstrate that SongDriver2 achieves real-time fit and soft
transitions simultaneously, enhancing the coherence of the generated music.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 00:09:48 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Wang",
"Zihao",
""
],
[
"Ma",
"Le",
""
],
[
"Zhang",
"Chen",
""
],
[
"Han",
"Bo",
""
],
[
"Wang",
"Yikai",
""
],
[
"Chen",
"Xinyi",
""
],
[
"Hong",
"HaoRong",
""
],
[
"Liu",
"Wenbo",
""
],
[
"Wu",
"Xinda",
""
],
[
"Zhang",
"Kejun",
""
]
] |
new_dataset
| 0.994615 |
2305.08037
|
Ce Zhou
|
Ce Zhou (1), Qiben Yan (1), Zhiyuan Yu (2), Eshan Dixit (1), Ning
Zhang (2), Huacheng Zeng (1), and Alireza Safdari Ghanhdari (3) ((1) Michigan
State University, (2) Washington University in St. Louis, (3) Texas A&M
University )
|
ChargeX: Exploring State Switching Attack on Electric Vehicle Charging
Systems
|
13 pages, 13 figures
| null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electric Vehicle (EV) has become one of the promising solutions to the
ever-evolving environmental and energy crisis. The key to the wide adoption of
EVs is a pervasive charging infrastructure, composed of both private/home
chargers and public/commercial charging stations. The security of EV charging,
however, has not been thoroughly investigated. This paper investigates the
communication mechanisms between the chargers and EVs, and exposes the lack of
protection on the authenticity in the SAE J1772 charging control protocol. To
showcase our discoveries, we propose a new class of attacks, ChargeX, which
aims to manipulate the charging states or charging rates of EV chargers with
the goal of disrupting the charging schedules, causing a denial of service
(DoS), or degrading the battery performance. ChargeX inserts a hardware attack
circuit to strategically modify the charging control signals. We design and
implement multiple attack systems, and evaluate the attacks on a public
charging station and two home chargers using a simulated vehicle load in the
lab environment. Extensive experiments on different types of chargers
demonstrate the effectiveness and generalization of ChargeX. Specifically, we
demonstrate that ChargeX can force the switching of an EV's charging state from
"stand by" to "charging", even when the vehicle is not in the charging state.
We further validate the attacks on a Tesla Model 3 vehicle to demonstrate the
disruptive impacts of ChargeX. If deployed, ChargeX may significantly undermine
people's trust in the EV charging infrastructure.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 00:57:52 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Zhou",
"Ce",
""
],
[
"Yan",
"Qiben",
""
],
[
"Yu",
"Zhiyuan",
""
],
[
"Dixit",
"Eshan",
""
],
[
"Zhang",
"Ning",
""
],
[
"Zeng",
"Huacheng",
""
],
[
"Ghanhdari",
"Alireza Safdari",
""
]
] |
new_dataset
| 0.970023 |
2305.08053
|
Shenghui Zhong
|
Miao Zhang, Yiqing Shen and Shenghui Zhong
|
SCRNet: a Retinex Structure-based Low-light Enhancement Model Guided by
Spatial Consistency
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Images captured under low-light conditions are often plagued by several
challenges, including diminished contrast, increased noise, loss of fine
details, and unnatural color reproduction. These factors can significantly
hinder the performance of computer vision tasks such as object detection and
image segmentation. As a result, improving the quality of low-light images is
of paramount importance for practical applications in the computer vision
domain.To effectively address these challenges, we present a novel low-light
image enhancement model, termed Spatial Consistency Retinex Network (SCRNet),
which leverages the Retinex-based structure and is guided by the principle of
spatial consistency.Specifically, our proposed model incorporates three levels
of consistency: channel level, semantic level, and texture level, inspired by
the principle of spatial consistency.These levels of consistency enable our
model to adaptively enhance image features, ensuring more accurate and visually
pleasing results.Extensive experimental evaluations on various low-light image
datasets demonstrate that our proposed SCRNet outshines existing
state-of-the-art methods, highlighting the potential of SCRNet as an effective
solution for enhancing low-light images.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 03:32:19 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Zhang",
"Miao",
""
],
[
"Shen",
"Yiqing",
""
],
[
"Zhong",
"Shenghui",
""
]
] |
new_dataset
| 0.995504 |
2305.08152
|
Yulun Du
|
Yulun Du and Lydia Chilton
|
STORYWARS: A Dataset and Instruction Tuning Baselines for Collaborative
Story Understanding and Generation
|
ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Collaborative stories, which are texts created through the collaborative
efforts of multiple authors with different writing styles and intentions, pose
unique challenges for NLP models. Understanding and generating such stories
remains an underexplored area due to the lack of open-domain corpora. To
address this, we introduce STORYWARS, a new dataset of over 40,000
collaborative stories written by 9,400 different authors from an online
platform. We design 12 task types, comprising 7 understanding and 5 generation
task types, on STORYWARS, deriving 101 diverse story-related tasks in total as
a multi-task benchmark covering all fully-supervised, few-shot, and zero-shot
scenarios. Furthermore, we present our instruction-tuned model, INSTRUCTSTORY,
for the story tasks showing that instruction tuning, in addition to achieving
superior results in zero-shot and few-shot scenarios, can also obtain the best
performance on the fully-supervised tasks in STORYWARS, establishing strong
multi-task benchmark performances on STORYWARS.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 13:09:27 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Du",
"Yulun",
""
],
[
"Chilton",
"Lydia",
""
]
] |
new_dataset
| 0.999698 |
2305.08173
|
Gaurish Thakkar Mr
|
Gaurish Thakkar, Nives Mikelic Preradovic and Marko Tadi\'c
|
Croatian Film Review Dataset (Cro-FiReDa): A Sentiment Annotated Dataset
of Film Reviews
| null |
LTC 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces Cro-FiReDa, a sentiment-annotated dataset for Croatian
in the domain of movie reviews. The dataset, which contains over 10,000
sentences, has been annotated at the sentence level. In addition to presenting
the overall annotation process, we also present benchmark results based on the
transformer-based fine-tuning approach.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 14:46:12 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Thakkar",
"Gaurish",
""
],
[
"Preradovic",
"Nives Mikelic",
""
],
[
"Tadić",
"Marko",
""
]
] |
new_dataset
| 0.999823 |
2305.08186
|
Tian Feng
|
Lehao Yang, Long Li, Qihao Chen, Jiling Zhang, Tian Feng, Wei Zhang
|
Street Layout Design via Conditional Adversarial Learning
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing high-quality urban street layouts has long been in high demand, but
entangles notable challenges. Conventional methods based on deep generative
models are yet to fill the gap on integrating both natural and socioeconomic
factors in the design loop. In this paper, we propose a novel urban street
layout design method based on conditional adversarial learning. Specifically, a
conditional generative adversarial network trained on a real-world dataset
synthesizes street layout images from the feature map, into which an
autoencoder fuses a set of natural and socioeconomic data for a region of
interest; The following extraction module generates high-quality street layout
graphs corresponding to the synthesized images. Experiments and evaluations
suggest that the proposed method outputs various urban street layouts that are
visually and structurally similar to their real-world counterparts, which can be
used to support the creation of high-quality urban virtual environments.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 15:39:38 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Yang",
"Lehao",
""
],
[
"Li",
"Long",
""
],
[
"Chen",
"Qihao",
""
],
[
"Zhang",
"Jiling",
""
],
[
"Feng",
"Tian",
""
],
[
"Zhang",
"Wei",
""
]
] |
new_dataset
| 0.992703 |
2305.08187
|
Gaurish Thakkar Mr
|
Gaurish Thakkar, Nives Mikelic Preradovi\'c, Marko Tadi\'c
|
CroSentiNews 2.0: A Sentence-Level News Sentiment Corpus
| null |
Slavic NLP 2023
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents a sentence-level sentiment dataset for the Croatian
news domain. In addition to the 3K annotated texts already present, our dataset
contains 14.5K annotated sentence occurrences that have been tagged with 5
classes. We provide baseline scores in addition to the annotation process and
inter-annotator agreement.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 15:53:54 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Thakkar",
"Gaurish",
""
],
[
"Preradović",
"Nives Mikelic",
""
],
[
"Tadić",
"Marko",
""
]
] |
new_dataset
| 0.999857 |
2305.08190
|
Yunong Wu
|
Yunong Wu, Thomas Gilles, Bogdan Stanciulescu, Fabien Moutarde
|
TSGN: Temporal Scene Graph Neural Networks with Projected Vectorized
Representation for Multi-Agent Motion Prediction
|
8 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting future motions of nearby agents is essential for an autonomous
vehicle to take safe and effective actions. In this paper, we propose TSGN, a
framework using Temporal Scene Graph Neural Networks with projected vectorized
representations for multi-agent trajectory prediction. Projected vectorized
representation models the traffic scene as a graph which is constructed by a
set of vectors. These vectors represent agents, road network, and their spatial
relative relationships. All relative features under this representation are
both translation- and rotation-invariant. Based on this representation, TSGN
captures the spatial-temporal features across agents, road network,
interactions among them, and temporal dependencies of temporal traffic scenes.
TSGN can predict multimodal future trajectories for all agents simultaneously,
plausibly, and accurately. Meanwhile, we propose a Hierarchical Lane
Transformer for capturing interactions between agents and road network, which
filters the surrounding road network and only keeps the most probable lane
segments which could have an impact on the future behavior of the target agent.
Without sacrificing the prediction performance, this greatly reduces the
computational burden. Experiments show TSGN achieves state-of-the-art
performance on the Argoverse motion forecasting benchmark.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 15:58:55 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Wu",
"Yunong",
""
],
[
"Gilles",
"Thomas",
""
],
[
"Stanciulescu",
"Bogdan",
""
],
[
"Moutarde",
"Fabien",
""
]
] |
new_dataset
| 0.995427 |
2305.08191
|
Guillaume Berger
|
Antoine Mercier and Guillaume Berger and Sunny Panchal and Florian
Letsch and Cornelius Boehm and Nahua Kang and Ingo Bax and Roland Memisevic
|
Is end-to-end learning enough for fitness activity recognition?
|
9 pages, 4 figures, 4 tables
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
End-to-end learning has taken hold of many computer vision tasks, in
particular, related to still images, with task-specific optimization yielding
very strong performance. Nevertheless, human-centric action recognition is
still largely dominated by hand-crafted pipelines, and only individual
components are replaced by neural networks that typically operate on individual
frames. As a testbed to study the relevance of such pipelines, we present a new
fully annotated video dataset of fitness activities. Any recognition
capabilities in this domain are almost exclusively a function of human poses
and their temporal dynamics, so pose-based solutions should perform well. We
show that, with this labelled data, end-to-end learning on raw pixels can
compete with state-of-the-art action recognition pipelines based on pose
estimation. We also show that end-to-end learning can support temporally
fine-grained tasks such as real-time repetition counting.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 16:00:03 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Mercier",
"Antoine",
""
],
[
"Berger",
"Guillaume",
""
],
[
"Panchal",
"Sunny",
""
],
[
"Letsch",
"Florian",
""
],
[
"Boehm",
"Cornelius",
""
],
[
"Kang",
"Nahua",
""
],
[
"Bax",
"Ingo",
""
],
[
"Memisevic",
"Roland",
""
]
] |
new_dataset
| 0.997486 |
2305.08200
|
Jiyue Jiang
|
Jiyue Jiang, Sheng Wang, Qintong Li, Lingpeng Kong, Chuan Wu
|
A Cognitive Stimulation Dialogue System with Multi-source Knowledge
Fusion for Elders with Cognitive Impairment
|
Accepted by ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When communicating with elders with cognitive impairment, cognitive
stimulation (CS) helps to maintain their cognitive health. Data sparsity
is the main challenge in building CS-based dialogue systems, particularly in
the Chinese language. To fill this gap, we construct a Chinese CS conversation
(CSConv) dataset, which contains about 2.6K groups of dialogues with CS
principles and emotional support strategy labels. Making chit chat while
providing emotional support is overlooked by the majority of existing cognitive
dialogue systems. In this paper, we propose a multi-source knowledge fusion
method for CS dialogue (CSD), to generate open-ended responses guided by the CS
principle and emotional support strategy. We first use a progressive mask
method based on external knowledge to learn encoders as effective classifiers,
which is the prerequisite to predict the CS principle and emotional support
strategy of the target response. Then a decoder interacts with the perceived CS
principle and emotional support strategy to generate responses. Extensive
experiments conducted on the CSConv dataset demonstrate the effectiveness of
the proposed method, while there is still a large space for improvement
compared to human performance.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 16:52:20 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Jiang",
"Jiyue",
""
],
[
"Wang",
"Sheng",
""
],
[
"Li",
"Qintong",
""
],
[
"Kong",
"Lingpeng",
""
],
[
"Wu",
"Chuan",
""
]
] |
new_dataset
| 0.997293 |
2305.08254
|
Mojtaba Eshghie
|
Mojtaba Eshghie, Wolfgang Ahrendt, Cyrille Artho, Thomas Troels
Hildebrandt, Gerardo Schneider
|
CLawK: Monitoring Business Processes in Smart Contracts
| null | null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart contracts embody complex business processes that can be difficult to
analyze statically. In this paper, we present CLawK, a runtime monitoring tool
that leverages business process specifications written in DCR graphs to provide
runtime verification of smart contract execution. We demonstrate how CLawK can
detect and flag deviations from specified behaviors in smart contracts deployed
in the Ethereum network without code instrumentation or any additional gas
costs.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 21:33:19 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Eshghie",
"Mojtaba",
""
],
[
"Ahrendt",
"Wolfgang",
""
],
[
"Artho",
"Cyrille",
""
],
[
"Hildebrandt",
"Thomas Troels",
""
],
[
"Schneider",
"Gerardo",
""
]
] |
new_dataset
| 0.981068 |
2305.08264
|
Santiago Miret
|
Yu Song, Santiago Miret, Bang Liu
|
MatSci-NLP: Evaluating Scientific Language Models on Materials Science
Language Tasks Using Text-to-Schema Modeling
| null | null | null | null |
cs.CL cond-mat.mtrl-sci cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present MatSci-NLP, a natural language benchmark for evaluating the
performance of natural language processing (NLP) models on materials science
text. We construct the benchmark from publicly available materials science text
data to encompass seven different NLP tasks, including conventional NLP tasks
like named entity recognition and relation classification, as well as NLP tasks
specific to materials science, such as synthesis action retrieval which relates
to creating synthesis procedures for materials. We study various BERT-based
models pretrained on different scientific text corpora on MatSci-NLP to
understand the impact of pretraining strategies on understanding materials
science text. Given the scarcity of high-quality annotated data in the
materials science domain, we perform our fine-tuning experiments with limited
training data to encourage generalization across MatSci-NLP tasks. Our
experiments in this low-resource training setting show that language models
pretrained on scientific text outperform BERT trained on general text. MatBERT,
a model pretrained specifically on materials science journals, generally
performs best for most tasks. Moreover, we propose a unified text-to-schema for
multitask learning on MatSci-NLP and compare its performance with traditional
fine-tuning methods. In our analysis of different training methods, we find
that our proposed text-to-schema methods inspired by question-answering
consistently outperform single and multitask NLP fine-tuning methods. The code
and datasets are publicly available at
\url{https://github.com/BangLab-UdeM-Mila/NLP4MatSci-ACL23}.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 22:01:24 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Song",
"Yu",
""
],
[
"Miret",
"Santiago",
""
],
[
"Liu",
"Bang",
""
]
] |
new_dataset
| 0.998695 |
2305.08354
|
Xianhan Tan
|
Xianhan Tan, Junming Zhu, Jianmin Zhang, Yueming Wang, Yu Qi
|
Decoding Chinese phonemes from intracortical brain signals with
hyperbolic-space neural representations
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech brain-computer interfaces (BCIs), which translate brain signals into
spoken words or sentences, have shown significant potential for
high-performance BCI communication. Phonemes are the fundamental units of
pronunciation in most languages. While existing speech BCIs have largely
focused on English, where words contain diverse compositions of phonemes,
Mandarin Chinese is a monosyllabic language, with words typically consisting of
a consonant and a vowel. This feature makes it feasible to develop
high-performance Mandarin speech BCIs by decoding phonemes directly from neural
signals. This study aimed to decode spoken Mandarin phonemes using
intracortical neural signals. We observed that phonemes with similar
pronunciations were often represented by inseparable neural patterns, leading
to confusion in phoneme decoding. This finding suggests that the neural
representation of spoken phonemes has a hierarchical structure. To account for
this, we proposed learning the neural representation of phoneme pronunciation
in a hyperbolic space, where the hierarchical structure could be more naturally
optimized. Experiments with intracortical neural signals from a Chinese
participant showed that the proposed model learned discriminative and
interpretable hierarchical phoneme representations from neural signals,
significantly improving Chinese phoneme decoding performance and achieving
state-of-the-art. The findings demonstrate the feasibility of constructing
high-performance Chinese speech BCIs based on phoneme decoding.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 05:22:00 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Tan",
"Xianhan",
""
],
[
"Zhu",
"Junming",
""
],
[
"Zhang",
"Jianmin",
""
],
[
"Wang",
"Yueming",
""
],
[
"Qi",
"Yu",
""
]
] |
new_dataset
| 0.99932 |
2305.08371
|
Junfeng Jiang
|
Junfeng Jiang, Chengzhang Dong, Akiko Aizawa, Sadao Kurohashi
|
SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation
|
Datasets and codes are available at
https://github.com/Coldog2333/SuperDialseg
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialogue segmentation is a crucial task for dialogue systems allowing a
better understanding of conversational texts. Despite recent progress in
unsupervised dialogue segmentation methods, their performances are limited by
the lack of explicit supervised signals for training. Furthermore, the precise
definition of segmentation points in conversations still remains a
challenging problem, increasing the difficulty of collecting manual
annotations. In this paper, we provide a feasible definition of dialogue
segmentation points with the help of document-grounded dialogues and release a
large-scale supervised dataset called SuperDialseg, containing 9K dialogues
based on two prevalent document-grounded dialogue corpora, also inheriting
their useful dialogue-related annotations. Moreover, we propose two models to
exploit the dialogue characteristics, achieving state-of-the-art performance on
SuperDialseg and showing good generalization ability on the out-of-domain
datasets. Additionally, we provide a benchmark including 20 models across four
categories for the dialogue segmentation task with several proper evaluation
metrics. Based on the analysis of the empirical studies, we also provide some
insights for the task of dialogue segmentation. We believe our work is an
important step forward in the field of dialogue segmentation.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 06:08:01 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Jiang",
"Junfeng",
""
],
[
"Dong",
"Chengzhang",
""
],
[
"Aizawa",
"Akiko",
""
],
[
"Kurohashi",
"Sadao",
""
]
] |
new_dataset
| 0.999661 |
2305.08373
|
Mahdi Javadi
|
Mahdi Javadi, Daniel Harnack, Paula Stocco, Shivesh Kumar, Shubham
Vyas, Daniel Pizzutilo, and Frank Kirchner
|
AcroMonk: A Minimalist Underactuated Brachiating Robot
|
The open-source implementation is available at
https://github.com/dfki-ric-underactuated-lab/acromonk and a video
demonstration of the experiments can be accessed at
https://youtu.be/FIcDNtJo9Jc
|
journal={IEEE Robotics and Automation Letters}, year={2023},
volume={8}, number={6}, pages={3637-3644}
|
10.1109/LRA.2023.3269296
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Brachiation is a dynamic, coordinated swinging maneuver of body and arms used
by monkeys and apes to move between branches. As a unique underactuated mode of
locomotion, it is interesting to study from a robotics perspective since it can
broaden the deployment scenarios for humanoids and animaloids. While several
brachiating robots of varying complexity have been proposed in the past, this
paper presents the simplest possible prototype of a brachiation robot, using
only a single actuator and unactuated grippers. The novel passive gripper
design allows it to snap on and release from monkey bars, while guaranteeing
well-defined start and end poses of the swing. The brachiation behavior is
realized in three different ways, using trajectory optimization via direct
collocation and stabilization by a model-based time-varying linear quadratic
regulator (TVLQR) or model-free proportional derivative (PD) control, as well
as by a reinforcement learning (RL) based control policy. The three control
schemes are compared in terms of robustness to disturbances, mass uncertainty,
and energy consumption. The system design and controllers have been
open-sourced. Due to its minimal and open design, the system can serve as a
canonical underactuated platform for education and research.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 06:18:54 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Javadi",
"Mahdi",
""
],
[
"Harnack",
"Daniel",
""
],
[
"Stocco",
"Paula",
""
],
[
"Kumar",
"Shivesh",
""
],
[
"Vyas",
"Shubham",
""
],
[
"Pizzutilo",
"Daniel",
""
],
[
"Kirchner",
"Frank",
""
]
] |
new_dataset
| 0.999647 |
2305.08380
|
Miroslav Popovic
|
Miroslav Popovic, Marko Popovic, Branislav Kordic, Huibiao Zhu
|
PSTM Transaction Scheduler Verification Based on CSP and Testing
|
18 pages, 5 figures, 5 tables, 4 algorithms
|
In Proceedings of 7th Conference on the Engineering of Computer
Based Systems (ECBS 2021). ACM, New York, NY, USA, 9 pages. 2021
|
10.1145/3459960.3459962
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many online transaction scheduler architectures and algorithms for various
software transactional memories have been designed in order to maintain good
system performance even for high concurrency workloads. Most of these
algorithms were directly implemented in a target programming language, and
experimentally evaluated, without theoretical proofs of correctness and
analysis of their performance. Only a small number of these algorithms were
modeled using formal methods, such as process algebra CSP, in order to verify
that they satisfy properties such as deadlock-freeness and starvation-freeness.
However, as this paper shows, using solely formal methods has its
disadvantages, too. In this paper, we first analyze the previous CSP model of
PSTM transaction scheduler by comparing the model checker PAT results with the
manually derived expected results, for the given test workloads. Next,
according to the results of this analysis, we correct and extend the CSP model.
Finally, based on PAT results for the new CSP model, we analyze the performance
of PSTM online transaction scheduling algorithms from the perspective of
makespan, number of aborts, and throughput. Based on our findings, we may
conclude that for the complete formal verification of trustworthy software,
both formal verification and its testing must be jointly used.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 06:34:50 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Popovic",
"Miroslav",
""
],
[
"Popovic",
"Marko",
""
],
[
"Kordic",
"Branislav",
""
],
[
"Zhu",
"Huibiao",
""
]
] |
new_dataset
| 0.997696 |
2305.08386
|
Jialong Zuo
|
Jialong Zuo, Changqian Yu, Nong Sang, Changxin Gao
|
PLIP: Language-Image Pre-training for Person Representation Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-training has emerged as an effective technique for learning powerful
person representations. Most existing methods have shown that pre-training on
pure-vision large-scale datasets like ImageNet and LUPerson has achieved
remarkable performance. However, solely relying on visual information, the
absence of robust explicit indicators poses a challenge for these methods to
learn discriminative person representations. Drawing inspiration from the
intrinsic fine-grained attribute indicators of person descriptions, we explore
introducing the language modality into person representation learning. To this
end, we propose a novel language-image pre-training framework for person
representation learning, termed PLIP. To explicitly build fine-grained
cross-modal associations, we specifically design three pretext tasks, i.e.,
semantic-fused image colorization, visual-fused attributes prediction, and
vision-language matching. In addition, due to the lack of an appropriate
dataset, we present a large-scale person dataset named SYNTH-PEDES, where the
Stylish Pedestrian Attributes-union Captioning method is proposed to synthesize
diverse textual descriptions. We pre-train PLIP on SYNTH-PEDES and evaluate our
model across downstream tasks such as text-based Re-ID, image-based Re-ID,
and person attribute recognition. Extensive experiments demonstrate that our
model not only significantly improves existing methods on all these tasks, but
also shows great ability in the few-shot and domain generalization settings.
The code, dataset and weights will be released
at https://github.com/Zplusdragon/PLIP
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 06:49:00 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Zuo",
"Jialong",
""
],
[
"Yu",
"Changqian",
""
],
[
"Sang",
"Nong",
""
],
[
"Gao",
"Changxin",
""
]
] |
new_dataset
| 0.997374 |
2305.08389
|
Linli Yao
|
Linli Yao, Yuanmeng Zhang, Ziheng Wang, Xinglin Hou, Tiezheng Ge,
Yuning Jiang and Qin Jin
|
Edit As You Wish: Video Description Editing with Multi-grained Commands
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatically narrating a video with natural language can assist people in
grasping and managing massive videos on the Internet. From the perspective of
video uploaders, they may have varied preferences for writing the desired video
description to attract more potential followers, e.g. catching customers'
attention for product videos. The Controllable Video Captioning task is
therefore proposed to generate a description conditioned on the user demand and
video content. However, existing works suffer from two shortcomings: 1) the
control signal is fixed and can only express single-grained control; 2) the
video description cannot be further edited to meet dynamic user demands. In
this paper, we propose a novel Video Description Editing (VDEdit) task to
automatically revise an existing video description guided by flexible user
requests. Inspired by human writing-revision habits, we design the user command
as an {operation, position, attribute} triplet to cover multi-grained user
requirements, which can express coarse-grained control (e.g. expand the
description) as well as fine-grained control (e.g. add specified details in
specified position) in a unified format. To facilitate the VDEdit task, we
first automatically construct a large-scale benchmark dataset namely VATEX-EDIT
in the open domain describing diverse human activities. Considering the
real-life application scenario, we further manually collect an e-commerce
benchmark dataset called EMMAD-EDIT. We propose a unified framework to convert
the {operation, position, attribute} triplet into a textual control sequence to
handle multi-grained editing commands. For VDEdit evaluation, we adopt
comprehensive metrics to measure three aspects of model performance, including
caption quality, caption-command consistency, and caption-video alignment.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 07:12:19 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Yao",
"Linli",
""
],
[
"Zhang",
"Yuanmeng",
""
],
[
"Wang",
"Ziheng",
""
],
[
"Hou",
"Xinglin",
""
],
[
"Ge",
"Tiezheng",
""
],
[
"Jiang",
"Yuning",
""
],
[
"Jin",
"Qin",
""
]
] |
new_dataset
| 0.991626 |
2305.08408
|
Ding Jiun Huang
|
Ding-Jiun Huang, Yu-Ting Kao, Tieh-Hung Chuang, Ya-Chun Tsai, Jing-Kai
Lou, Shuen-Huei Guan
|
SB-VQA: A Stack-Based Video Quality Assessment Framework for Video
Enhancement
|
CVPR NTIRE 2023
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, several video quality assessment (VQA) methods have been
developed, achieving high performance. However, these methods were not
specifically trained for enhanced videos, which limits their ability to predict
video quality accurately based on human subjective perception. To address this
issue, we propose a stack-based framework for VQA that outperforms existing
state-of-the-art methods on VDPVE, a dataset consisting of enhanced videos. In
addition to proposing the VQA framework for enhanced videos, we also
investigate its application on professionally generated content (PGC). To
address copyright issues with premium content, we create the PGCVQ dataset,
which consists of videos from YouTube. We evaluate our proposed approach and
state-of-the-art methods on PGCVQ, and provide new insights on the results. Our
experiments demonstrate that existing VQA algorithms can be applied to PGC
videos, and we find that VQA performance for PGC videos can be improved by
considering the plot of a play, which highlights the importance of video
semantic understanding.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 07:44:10 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Huang",
"Ding-Jiun",
""
],
[
"Kao",
"Yu-Ting",
""
],
[
"Chuang",
"Tieh-Hung",
""
],
[
"Tsai",
"Ya-Chun",
""
],
[
"Lou",
"Jing-Kai",
""
],
[
"Guan",
"Shuen-Huei",
""
]
] |
new_dataset
| 0.999445 |
2305.08456
|
Jianzhong Su
|
Zibin Zheng, Jianzhong Su, Jiachi Chen, David Lo, Zhijie Zhong and
Mingxi Ye
|
DAppSCAN: Building Large-Scale Datasets for Smart Contract Weaknesses in
DApp Projects
|
Dataset available at https://github.com/InPlusLab/DAppSCAN
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Smart Contract Weakness Classification Registry (SWC Registry) is a
widely recognized list of smart contract weaknesses specific to the Ethereum
platform. In recent years, significant research efforts have been dedicated to
building tools to detect SWC weaknesses. However, evaluating these tools has
proven challenging due to the absence of a large, unbiased, real-world dataset.
To address this issue, we recruited 22 participants and spent 44 person-months
analyzing 1,322 open-source audit reports from 30 security teams. In total, we
identified 10,016 weaknesses and developed two distinct datasets, i.e.,
DAppSCAN-Source and DAppSCAN-Bytecode. The DAppSCAN-Source dataset comprises
25,077 Solidity files, featuring 1,689 SWC vulnerabilities sourced from 1,139
real-world DApp projects. The Solidity files in this dataset may not be
directly compilable. To enable the dataset to be compilable, we developed a
tool capable of automatically identifying dependency relationships within DApps
and completing missing public libraries. By utilizing this tool, we created our
DAPPSCAN-Bytecode dataset, which consists of 8,167 compiled smart contract
bytecode with 895 SWC weaknesses. Based on the second dataset, we conducted an
empirical study to assess the performance of five state-of-the-art smart
contract vulnerability detection tools. The evaluation results revealed subpar
performance for these tools in terms of both effectiveness and success
detection rate, indicating that future development should prioritize real-world
datasets over simplistic toy contracts.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 08:56:13 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Zheng",
"Zibin",
""
],
[
"Su",
"Jianzhong",
""
],
[
"Chen",
"Jiachi",
""
],
[
"Lo",
"David",
""
],
[
"Zhong",
"Zhijie",
""
],
[
"Ye",
"Mingxi",
""
]
] |
new_dataset
| 0.999154 |
2305.08468
|
Chengjun Ying
|
Jianying Wang (1), Tongliang Li (1), Haoze Song (1), Xinjun Yang (1),
Wenchao Zhou (1), Feifei Li (1), Baoyue Yan (1), Qianqian Wu (1), Yukun Liang
(1), Chengjun Ying (1 and 2), Yujie Wang (1), Baokai Chen (1), Chang Cai (1),
Yubin Ruan (1), Xiaoyi Weng (1), Shibin Chen (1), Liang Yin (1), Chengzhong
Yang (1), Xin Cai (1), Hongyan Xing (1), Nanlong Yu (1), Xiaofei Chen (1),
Dapeng Huang (1), Jianling Sun (1 and 2) ((1) Alibaba Group, (2) Zhejiang
University)
|
PolarDB-IMCI: A Cloud-Native HTAP Database System at Alibaba
|
14 pages, 16 figures, to be published in ACM SIGMOD 2023
| null |
10.1145/3589785
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloud-native databases have become the de-facto choice for mission-critical
applications on the cloud due to the need for high availability, resource
elasticity, and cost efficiency. Meanwhile, driven by the increasing
connectivity between data generation and analysis, users prefer a single
database to efficiently process both OLTP and OLAP workloads, which enhances
data freshness and reduces the complexity of data synchronization and the
overall business cost.
In this paper, we summarize five crucial design goals for a cloud-native HTAP
database based on our experience and customers' feedback, i.e., transparency,
competitive OLAP performance, minimal perturbation on OLTP workloads, high data
freshness, and excellent resource elasticity. As our solution to realize these
goals, we present PolarDB-IMCI, a cloud-native HTAP database system designed
and deployed at Alibaba Cloud. Our evaluation results show that PolarDB-IMCI is
able to handle HTAP efficiently on both experimental and production workloads;
notably, it speeds up analytical queries by up to 149x on TPC-H (100 GB).
PolarDB-IMCI introduces low visibility delay and little performance
perturbation on OLTP workloads (< 5%), and resource elasticity can be achieved
by scaling out in tens of seconds.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 09:13:27 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Wang",
"Jianying",
"",
"1 and 2"
],
[
"Li",
"Tongliang",
"",
"1 and 2"
],
[
"Song",
"Haoze",
"",
"1 and 2"
],
[
"Yang",
"Xinjun",
"",
"1 and 2"
],
[
"Zhou",
"Wenchao",
"",
"1 and 2"
],
[
"Li",
"Feifei",
"",
"1 and 2"
],
[
"Yan",
"Baoyue",
"",
"1 and 2"
],
[
"Wu",
"Qianqian",
"",
"1 and 2"
],
[
"Liang",
"Yukun",
"",
"1 and 2"
],
[
"Ying",
"Chengjun",
"",
"1 and 2"
],
[
"Wang",
"Yujie",
"",
"1 and 2"
],
[
"Chen",
"Baokai",
"",
"1 and 2"
],
[
"Cai",
"Chang",
"",
"1 and 2"
],
[
"Ruan",
"Yubin",
"",
"1 and 2"
],
[
"Weng",
"Xiaoyi",
"",
"1 and 2"
],
[
"Chen",
"Shibin",
"",
"1 and 2"
],
[
"Yin",
"Liang",
"",
"1 and 2"
],
[
"Yang",
"Chengzhong",
"",
"1 and 2"
],
[
"Cai",
"Xin",
"",
"1 and 2"
],
[
"Xing",
"Hongyan",
"",
"1 and 2"
],
[
"Yu",
"Nanlong",
"",
"1 and 2"
],
[
"Chen",
"Xiaofei",
"",
"1 and 2"
],
[
"Huang",
"Dapeng",
"",
"1 and 2"
],
[
"Sun",
"Jianling",
"",
"1 and 2"
]
] |
new_dataset
| 0.999063 |
2305.08476
|
Patrick Hochstenbach
|
Patrick Hochstenbach, Jos De Roo, Ruben Verborgh
|
RDF Surfaces: Computer Says No
|
5 pages, position paper for the ESWC2023 TrusDeKW workshop
| null | null | null |
cs.LO cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Logic can define how agents are provided or denied access to resources, how
to interlink resources using mining processes and provide users with choices
for possible next steps in a workflow. These decisions are for the most part
hidden, internal to machines processing data. In order to exchange this
internal logic a portable Web logic is required which the Semantic Web could
provide. Combining logic and data provides insights into the reasoning process
and creates a new level of trust on the Semantic Web. Current Web logics
carry only a fragment of first-order logic (FOL) to keep exchange languages
decidable or easily processable. But, this is at a cost: the portability of
logic. Machines require implicit agreements to know which fragment of logic is
being exchanged and need a strategy for how to cope with the different
fragments. These choices could obscure insights into the reasoning process. We
created RDF Surfaces in order to express the full expressivity of FOL including
saying explicitly 'no'. This vision paper provides basic principles and
compares existing work. Even though support for FOL is semi-decidable, we argue
these problems are surmountable. RDF Surfaces span many use cases, including
describing misuse of information, adding explainability and trust to reasoning,
and providing scope for reasoning over streams of data and queries. RDF
Surfaces provide the direct translation of FOL for the Semantic Web. We hope
this vision paper attracts new implementers and opens the discussion to its
formal specification.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 09:27:46 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Hochstenbach",
"Patrick",
""
],
[
"De Roo",
"Jos",
""
],
[
"Verborgh",
"Ruben",
""
]
] |
new_dataset
| 0.998882 |
2305.08487
|
Chunlan Ma
|
Chunlan Ma, Ayyoob ImaniGooghari, Haotian Ye, Ehsaneddin Asgari and
Hinrich Sch\"utze
|
Taxi1500: A Multilingual Dataset for Text Classification in 1500
Languages
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While natural language processing tools have been developed extensively for
some of the world's languages, a significant portion of the world's over 7000
languages are still neglected. One reason for this is that evaluation datasets
do not yet cover a wide range of languages, including low-resource and
endangered ones. We aim to address this issue by creating a text classification
dataset encompassing a large number of languages, many of which currently have
little to no annotated data available. We leverage parallel translations of the
Bible to construct such a dataset by first developing applicable topics and
employing a crowdsourcing tool to collect annotated data. By annotating the
English side of the data and projecting the labels onto other languages through
aligned verses, we generate text classification datasets for more than 1500
languages. We extensively benchmark several existing multilingual language
models using our dataset. To facilitate the advancement of research in this
area, we will release our dataset and code.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 09:43:32 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Ma",
"Chunlan",
""
],
[
"ImaniGooghari",
"Ayyoob",
""
],
[
"Ye",
"Haotian",
""
],
[
"Asgari",
"Ehsaneddin",
""
],
[
"Schütze",
"Hinrich",
""
]
] |
new_dataset
| 0.99962 |
2305.08502
|
Reut Apel
|
Reut Apel, Tom Braude, Amir Kantor, Eyal Kolman
|
MeeQA: Natural Questions in Meeting Transcripts
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present MeeQA, a dataset for natural-language question answering over
meeting transcripts. It includes real questions asked during meetings by its
participants. The dataset contains 48K question-answer pairs, extracted from
422 meeting transcripts, spanning multiple domains. Questions in transcripts
pose a special challenge as they are not always clear, and considerable context
may be required in order to provide an answer. Further, many questions asked
during meetings are left unanswered. To improve baseline model performance on
this type of question, we also propose a novel loss function, Flat
Hierarchical Loss, designed to enhance performance over questions with no
answer in the text. Our experiments demonstrate the advantage of using our
approach over standard QA models.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 10:02:47 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Apel",
"Reut",
""
],
[
"Braude",
"Tom",
""
],
[
"Kantor",
"Amir",
""
],
[
"Kolman",
"Eyal",
""
]
] |
new_dataset
| 0.999619 |
2305.08511
|
Maurice Funk
|
Balder ten Cate, Maurice Funk, Jean Christoph Jung, Carsten Lutz
|
SAT-Based PAC Learning of Description Logic Concepts
|
19 pages, Long version of paper accepted at IJCAI 2023
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We propose bounded fitting as a scheme for learning description logic
concepts in the presence of ontologies. A main advantage is that the resulting
learning algorithms come with theoretical guarantees regarding their
generalization to unseen examples in the sense of PAC learning. We prove that,
in contrast, several other natural learning algorithms fail to provide such
guarantees. As a further contribution, we present the system SPELL which
efficiently implements bounded fitting for the description logic
$\mathcal{ELH}^r$ based on a SAT solver, and compare its performance to a
state-of-the-art learner.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 10:20:31 GMT"
}
] | 2023-05-16T00:00:00 |
[
[
"Cate",
"Balder ten",
""
],
[
"Funk",
"Maurice",
""
],
[
"Jung",
"Jean Christoph",
""
],
[
"Lutz",
"Carsten",
""
]
] |
new_dataset
| 0.962859 |
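Each row above follows the arXiv metadata schema: an abstract, a `versions` list, and an `authors_parsed` field holding `[last, first, suffix]` triples. As a minimal sketch of consuming one such record (values copied from the 2305.08373 row above, trimmed to two authors for brevity), the JSON can be parsed like this:

```python
import json

# One arXiv-style metadata record, with fields taken from the
# 2305.08373 row above (trimmed to two authors for brevity).
record_json = """
{
  "id": "2305.08373",
  "versions": [{"version": "v1", "created": "Mon, 15 May 2023 06:18:54 GMT"}],
  "authors_parsed": [["Javadi", "Mahdi", ""], ["Harnack", "Daniel", ""]]
}
"""

def format_authors(parsed):
    # Each entry is [last, first, suffix]; join the non-empty parts
    # back into a display name, first name first.
    return [" ".join(part for part in (first, last, suffix) if part)
            for last, first, suffix in parsed]

record = json.loads(record_json)
names = format_authors(record["authors_parsed"])
print(names)  # ['Mahdi Javadi', 'Daniel Harnack']
```

The same pattern extends to streaming the full JSON-lines dump one record at a time instead of embedding a single record as a string.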