id (string, 9–10 chars) | submitter (string, 2–52 chars, ⌀) | authors (string, 4–6.51k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, ⌀) | journal-ref (string, 4–345 chars, ⌀) | doi (string, 11–120 chars, ⌀) | report-no (string, 2–243 chars, ⌀) | categories (string, 5–98 chars) | license (string, 9 classes) | abstract (string, 33–3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
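Before the records themselves, a quick note on access: with the schema above, a table like this can be loaded and iterated in a few lines. A minimal sketch, assuming the table is published as a Hugging Face dataset; the repository id used here is hypothetical.

```python
# Minimal loading sketch; "user/arxiv-new-dataset" is a hypothetical repo id.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset", split="train")

# Each record mirrors the schema above: arXiv metadata plus a model
# prediction ("new_dataset") and its probability.
for row in ds.select(range(3)):
    print(row["id"], row["title"][:60], row["prediction"], row["probability"])
```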
2307.13344
|
Yigit Baran Can
|
Yigit Baran Can, Alexander Liniger, Danda Pani Paudel, Luc Van Gool
|
Prior Based Online Lane Graph Extraction from Single Onboard Camera
Image
|
ITSC 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The local road network information is essential for autonomous navigation.
This information is commonly obtained from offline HD-Maps in terms of lane
graphs. However, the local road network at a given moment can be drastically
different from the one given in the offline maps due to construction works,
accidents, etc. Moreover, the autonomous vehicle might be at a location not
covered by the offline HD-Map. Thus, online estimation of the lane graph is
crucial for widespread and reliable autonomous navigation. In this work, we
tackle online Bird's-Eye-View lane graph extraction from a single onboard
camera image. We propose to use prior information to increase the quality of
the estimates. The prior is extracted from the dataset through a
transformer-based Wasserstein Autoencoder. The autoencoder is then used to
enhance the initial lane graph estimates through optimization of the latent
space vector. The optimization encourages the lane graph estimate to be
plausible by discouraging it from diverging from the prior distribution. We
test the method on two benchmark datasets, NuScenes and Argoverse. The results
show that the proposed method significantly improves performance compared to
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 08:58:26 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Can",
"Yigit Baran",
""
],
[
"Liniger",
"Alexander",
""
],
[
"Paudel",
"Danda Pani",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.980227 |
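The record above describes refining an initial lane graph estimate by optimizing the latent vector of a trained autoencoder so that the decoded graph stays plausible under the learned prior. A minimal sketch of that pattern, assuming a trained `decoder`; the latent dimension, prior term, and optimizer settings are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of prior-based latent-space refinement (not the paper's code).
import torch

def refine_with_prior(initial_estimate, decoder, latent_dim=64,
                      steps=100, lr=1e-2, prior_weight=0.1):
    z = torch.zeros(1, latent_dim, requires_grad=True)   # latent to optimize
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = decoder(z)
        # Stay close to the initial estimate while a simple standard-normal
        # prior on z discourages implausible graphs (assumption: likely
        # graphs correspond to small-norm latents under the trained prior).
        loss = torch.nn.functional.mse_loss(recon, initial_estimate) \
               + prior_weight * z.pow(2).mean()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```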
2307.13346
|
Li Xiao
|
Li Xiao, Xiuping Yang, Xinhong Li, Weiping Tu, Xiong Chen, Weiyan Yi,
Jie Lin, Yuhong Yang, Yanzhen Ren
|
A Snoring Sound Dataset for Body Position Recognition: Collection,
Annotation, and Analysis
|
Accepted to INTERSPEECH 2023
| null | null | null |
cs.SD cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) is a chronic breathing
disorder caused by a blockage in the upper airways. Snoring is a prominent
symptom of OSAHS, and previous studies have attempted to identify the
obstruction site of the upper airways by snoring sounds. Despite some progress,
the classification of the obstruction site remains challenging in real-world
clinical settings due to the influence of sleep body position on upper airways.
To address this challenge, this paper proposes a snore-based sleep body
position recognition dataset (SSBPR) consisting of 7570 snoring recordings
with six distinct labels for sleep body position: supine, supine but left
lateral head, supine but right lateral head, left-side lying, right-side
lying, and prone. Experimental results show that snoring sounds exhibit certain
acoustic features that enable their effective utilization for identifying body
posture during sleep in real-world scenarios.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 09:03:27 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Xiao",
"Li",
""
],
[
"Yang",
"Xiuping",
""
],
[
"Li",
"Xinhong",
""
],
[
"Tu",
"Weiping",
""
],
[
"Chen",
"Xiong",
""
],
[
"Yi",
"Weiyan",
""
],
[
"Lin",
"Jie",
""
],
[
"Yang",
"Yuhong",
""
],
[
"Ren",
"Yanzhen",
""
]
] |
new_dataset
| 0.999804 |
2307.13538
|
Leon Migus
|
Louis Serrano, Leon Migus, Yuan Yin, Jocelyn Ahmed Mazari, Patrick
Gallinari
|
INFINITY: Neural Field Modeling for Reynolds-Averaged Navier-Stokes
Equations
|
ICML 2023 Workshop on Synergy of Scientific and Machine Learning
Modeling
|
ICML 2023 Workshop on Synergy of Scientific and Machine Learning
Modeling
| null | null |
cs.LG physics.flu-dyn
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
For numerical design, the development of efficient and accurate surrogate
models is paramount. They allow us to approximate complex physical phenomena,
thereby reducing the computational burden of direct numerical simulations. We
propose INFINITY, a deep learning model that utilizes implicit neural
representations (INRs) to address this challenge. Our framework encodes
geometric information and physical fields into compact representations and
learns a mapping between them to infer the physical fields. We use an airfoil
design optimization problem as an example task and evaluate our approach on
the challenging AirfRANS dataset, which closely resembles real-world industrial
use-cases. The experimental results demonstrate that our framework achieves
state-of-the-art performance by accurately inferring physical fields throughout
the volume and surface. Additionally, we demonstrate its applicability in
contexts such as design exploration and shape optimization: our model can
correctly predict drag and lift coefficients while adhering to the equations.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 14:35:55 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Serrano",
"Louis",
""
],
[
"Migus",
"Leon",
""
],
[
"Yin",
"Yuan",
""
],
[
"Mazari",
"Jocelyn Ahmed",
""
],
[
"Gallinari",
"Patrick",
""
]
] |
new_dataset
| 0.998077 |
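The INFINITY record above relies on implicit neural representations. As orientation only, here is a generic coordinate-based INR in PyTorch, an MLP mapping query coordinates to field values; it is not the INFINITY architecture, and all dimensions are placeholders.

```python
# Generic coordinate MLP (INR) sketch; not the INFINITY model itself.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    def __init__(self, in_dim=2, hidden=128, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):            # coords: (N, in_dim)
        return self.net(coords)           # field value at each coordinate

field = CoordinateMLP()
xy = torch.rand(1024, 2)                  # toy query points around a geometry
pred = field(xy)                          # predicted physical field values
```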
2307.13571
|
Xinran Liu
|
Xinran Liu, Yikun Bai, Huy Tran, Zhanqi Zhu, Matthew Thorpe, Soheil
Kolouri
|
PT$\mathrm{L}^{p}$: Partial Transport $\mathrm{L}^{p}$ Distances
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Optimal transport and its related problems, including optimal partial
transport, have proven to be valuable tools in machine learning for computing
meaningful distances between probability or positive measures. This success has
led to a growing interest in defining transport-based distances that allow for
comparing signed measures and, more generally, multi-channeled signals.
Transport $\mathrm{L}^{p}$ distances are notable extensions of the optimal
transport framework to signed and possibly multi-channeled signals. In this
paper, we introduce partial transport $\mathrm{L}^{p}$ distances as a new
family of metrics for comparing generic signals, benefiting from the robustness
of partial transport distances. We provide theoretical background such as the
existence of optimal plans and the behavior of the distance in various limits.
Furthermore, we introduce the sliced variation of these distances, which allows
for rapid comparison of generic signals. Finally, we demonstrate the
application of the proposed distances in signal class separability and nearest
neighbor classification.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 15:23:15 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Liu",
"Xinran",
""
],
[
"Bai",
"Yikun",
""
],
[
"Tran",
"Huy",
""
],
[
"Zhu",
"Zhanqi",
""
],
[
"Thorpe",
"Matthew",
""
],
[
"Kolouri",
"Soheil",
""
]
] |
new_dataset
| 0.96543 |
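The sliced variation mentioned in the record above reduces signal comparison to many one-dimensional transport problems. For orientation, below is the classical sliced Wasserstein computation for equal-size point sets; the paper's partial transport L^p distances additionally handle signed, multi-channel signals and partial mass, which this sketch does not.

```python
# Classical sliced Wasserstein-p between equal-size point sets (orientation
# only; this is not the paper's PT-L^p distance).
import numpy as np

def sliced_wasserstein(x, y, n_proj=50, p=2, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=x.shape[1])
        theta /= np.linalg.norm(theta)
        # 1-D optimal transport = match sorted projections.
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean(np.abs(px - py) ** p)
    return (total / n_proj) ** (1.0 / p)

x = np.random.default_rng(1).normal(size=(200, 3))
y = np.random.default_rng(2).normal(loc=0.5, size=(200, 3))
print(sliced_wasserstein(x, y))
```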
2307.13600
|
Muhammad Ali Farooq
|
Muhammad Ali Farooq, Waseem Shariff, Mehdi Sefidgar Dilmaghani, Wang
Yao, Moazam Soomro, and Peter Corcoran
|
Decisive Data using Multi-Modality Optical Sensors for Advanced
Vehicular Systems
|
The Paper is accepted in 25th Irish Machine Vision and Image
Processing Conference (IMVIP23)
| null |
10.5281/zenodo.8160053
| null |
cs.NE cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Optical sensors have played a pivotal role in acquiring real-world data for
critical applications. This data, when integrated with advanced machine
learning algorithms, provides meaningful information, thus enhancing human
vision. This paper focuses on various optical technologies for the design and
development of state-of-the-art out-cabin forward vision systems and in-cabin
driver monitoring systems. The optical sensors covered include Longwave Thermal
Imaging (LWIR) cameras, Near Infrared (NIR) cameras, Neuromorphic/event
cameras, Visible CMOS cameras, and Depth cameras. Further, the paper discusses
potential applications that can exploit the unique strengths of each of these
optical modalities in real-time environments.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 16:03:47 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Farooq",
"Muhammad Ali",
""
],
[
"Shariff",
"Waseem",
""
],
[
"Dilmaghani",
"Mehdi Sefidgar",
""
],
[
"Yao",
"Wang",
""
],
[
"Soomro",
"Moazam",
""
],
[
"Corcoran",
"Peter",
""
]
] |
new_dataset
| 0.981649 |
2307.13603
|
Sukhpal Singh Gill
|
Mohit Kumar, Hritu Raj, Nisha Chaurasia, Sukhpal Singh Gill
|
Blockchain inspired secure and reliable data exchange architecture for
cyber-physical healthcare system 4.0
| null |
Internet of Things and Cyber-Physical Systems, Volume 3, 2023,
Pages 309-322
|
10.1016/j.iotcps.2023.05.006
| null |
cs.CR cs.DC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
A cyber-physical system is a collection of strongly coupled communication
systems and devices that poses numerous security challenges in various
industrial applications, including healthcare. The security and privacy of
patient data remain a major concern because healthcare data is sensitive and
valuable, making it a prime target on the internet. Moreover, from the
industrial perspective, the cyber-physical system plays a crucial role in the
remote exchange of data using sensor nodes in distributed environments. In
the healthcare industry, blockchain technology offers a promising solution to
most security-related issues due to its decentralization, immutability,
and transparency properties. In this paper, a blockchain-inspired secure and
reliable data exchange architecture is proposed for the cyber-physical
healthcare industry 4.0. The proposed system uses BigchainDB, Tendermint,
the Inter-Planetary File System (IPFS), MongoDB, and AES encryption to
improve Healthcare 4.0. Furthermore, a blockchain-enabled secure healthcare
architecture for accessing and managing records between doctors and
patients is introduced. The developed blockchain-based Electronic
Healthcare Record (EHR) exchange system is purely patient-centric: the entire
control of the data rests in the owner's hands, backed by blockchain
for security and privacy. Our experimental results reveal that the proposed
architecture is robust against a range of security attacks and can recover the
data even if two-thirds of the nodes fail. Because the model is patient-centric
and control of the data rests with the patient, security and privacy are
enhanced; even system administrators cannot access data without user permission.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 14:47:59 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Kumar",
"Mohit",
""
],
[
"Raj",
"Hritu",
""
],
[
"Chaurasia",
"Nisha",
""
],
[
"Gill",
"Sukhpal Singh",
""
]
] |
new_dataset
| 0.995489 |
2307.13646
|
Justin Engelmann
|
Justin Engelmann, Amos Storkey, Miguel O. Bernabeu
|
QuickQual: Lightweight, convenient retinal image quality scoring with
off-the-shelf pretrained models
| null | null | null | null |
cs.CV cs.AI q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image quality remains a key problem for both traditional and deep learning
(DL)-based approaches to retinal image analysis, but identifying poor quality
images can be time consuming and subjective. Thus, automated methods for
retinal image quality scoring (RIQS) are needed. The current state-of-the-art
is MCFNet, composed of three Densenet121 backbones each operating in a
different colour space. MCFNet, and the EyeQ dataset released by the same
authors, was a huge step forward for RIQS. We present QuickQual, a simple
approach to RIQS, consisting of a single off-the-shelf ImageNet-pretrained
Densenet121 backbone plus a Support Vector Machine (SVM). QuickQual performs
very well, setting a new state-of-the-art for EyeQ (Accuracy: 88.50% vs 88.00%
for MCFNet; AUC: 0.9687 vs 0.9588). This suggests that RIQS can be solved with
generic perceptual features learned on natural images, as opposed to requiring
DL models trained on large amounts of fundus images. Additionally, we propose a
Fixed Prior linearisation scheme, that converts EyeQ from a 3-way
classification to a continuous logistic regression task. For this task, we
present a second model, QuickQual MEga Minified Estimator (QuickQual-MEME),
that consists of only 10 parameters on top of an off-the-shelf Densenet121 and
can distinguish between gradable and ungradable images with an accuracy of
89.18% (AUC: 0.9537). Code and model are available on GitHub:
https://github.com/justinengelmann/QuickQual . QuickQual is so lightweight
that the entire inference code (and even the parameters for QuickQual-MEME) is
already contained in this paper.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 16:55:13 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Engelmann",
"Justin",
""
],
[
"Storkey",
"Amos",
""
],
[
"Bernabeu",
"Miguel O.",
""
]
] |
new_dataset
| 0.998177 |
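The QuickQual record above spells out its recipe: a frozen ImageNet-pretrained Densenet121 plus an SVM on the pooled features. A hedged sketch of that pipeline follows; preprocessing and hyperparameters are placeholders, not the released QuickQual settings (see the linked GitHub for those).

```python
# Sketch of frozen Densenet121 features + SVM; settings are placeholders.
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Identity()      # expose 1024-d pooled features
backbone.eval()

@torch.no_grad()
def embed(images):                 # images: (N, 3, 224, 224), normalized
    return backbone(images).numpy()

# With features in hand, the classifier is a plain SVM:
# clf = SVC(probability=True).fit(embed(train_images), train_labels)
# quality = clf.predict(embed(test_images))
```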
2307.13657
|
Thomas Mack
|
Thomas Mack, Ketao Zhang, Kaspar Althoefer
|
A Soft Robotic Gripper with Active Palm for In-Hand Object Reorientation
|
Originally written for ICRA2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The human hand has an inherent ability to manipulate and re-orientate objects
without external assistance. As a consequence, we are able to operate tools and
perform an array of actions using just one hand, without having to continuously
re-grasp objects. Emulating this functionality in robotic end-effectors remains
a key area of study with efforts being made to create advanced control systems
that could be used to operate complex manipulators. In this paper, a
three-fingered soft gripper with an active rotary palm is presented as a
simpler alternative method of performing in-hand rotations. The gripper,
complete with its pneumatic suction cup to prevent object slippage, was tested
and found to effectively grasp and rotate a variety of objects both quickly and
precisely.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 17:08:21 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Mack",
"Thomas",
""
],
[
"Zhang",
"Ketao",
""
],
[
"Althoefer",
"Kaspar",
""
]
] |
new_dataset
| 0.99652 |
2307.13681
|
Julia Guerrero-Viu
|
Valentin Deschaintre, Julia Guerrero-Viu, Diego Gutierrez, Tamy
Boubekeur, Belen Masia
|
The Visual Language of Fabrics
| null |
ACM Transactions on Graphics 2023
| null | null |
cs.GR cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce text2fabric, a novel dataset that links free-text descriptions
to various fabric materials. The dataset comprises 15,000 natural language
descriptions associated to 3,000 corresponding images of fabric materials.
Traditionally, material descriptions come in the form of tags/keywords, which
limits their expressivity, requires pre-existing knowledge of the appropriate
vocabulary, and ultimately leads to a fragmented description system. Therefore,
we study the use of free text as a more appropriate way to describe material
appearance, taking the use case of fabrics as a common item that non-experts
often deal with. Based on the analysis of the dataset, we identify a
compact lexicon, a set of attributes, and a key structure that emerge from the
descriptions. This allows us to accurately understand how people describe
fabrics and draw directions for generalization to other types of materials. We
also show that our dataset enables specializing large vision-language models
such as CLIP, creating a meaningful latent space for fabric appearance, and
significantly improving applications such as fine-grained material retrieval
and automatic captioning.
|
[
{
"version": "v1",
"created": "Tue, 25 Jul 2023 17:39:39 GMT"
}
] | 2023-07-26T00:00:00 |
[
[
"Deschaintre",
"Valentin",
""
],
[
"Guerrero-Viu",
"Julia",
""
],
[
"Gutierrez",
"Diego",
""
],
[
"Boubekeur",
"Tamy",
""
],
[
"Masia",
"Belen",
""
]
] |
new_dataset
| 0.999833 |
2009.08820
|
Hossein Amirkhani
|
Hossein Amirkhani, Mohammad AzariJafari, Zohreh Pourjafari, Soroush
Faridan-Jahromi, Zeinab Kouhkan, Azadeh Amirak
|
FarsTail: A Persian Natural Language Inference Dataset
| null |
Soft Computing (2023)
|
10.1007/s00500-023-08959-3
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language inference (NLI) is known as one of the central tasks in
natural language processing (NLP) which encapsulates many fundamental aspects
of language understanding. With the considerable achievements of data-hungry
deep learning methods in NLP tasks, a great amount of effort has been devoted
to developing more diverse datasets for different languages. In this paper, we
present a new dataset for the NLI task in the Persian language, also known as
Farsi, which is one of the dominant languages in the Middle East. This dataset,
named FarsTail, includes 10,367 samples, provided both in the Persian language
and in an indexed format useful for non-Persian researchers. The samples are
generated from 3,539 multiple-choice questions with minimal annotator
intervention, in a way similar to the SciTail dataset. A carefully designed
multi-step process is adopted to ensure the quality of the dataset. We also
present the results of traditional and state-of-the-art methods on FarsTail,
including different embedding methods such as word2vec, fastText, ELMo, BERT,
and LASER, as well as different modeling approaches such as DecompAtt, ESIM,
HBMP, and ULMFiT, to provide a solid baseline for future research. The best
obtained test accuracy is 83.38%, which shows that there is ample room for
improving the current methods to be useful for real-world NLP applications in
different languages. We also investigate the extent to which the models exploit
superficial clues, also known as dataset biases, in FarsTail, and partition the
test set into easy and hard subsets according to the success of biased models.
The dataset is available at https://github.com/dml-qom/FarsTail
|
[
{
"version": "v1",
"created": "Fri, 18 Sep 2020 13:04:04 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jul 2021 15:21:54 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Amirkhani",
"Hossein",
""
],
[
"AzariJafari",
"Mohammad",
""
],
[
"Pourjafari",
"Zohreh",
""
],
[
"Faridan-Jahromi",
"Soroush",
""
],
[
"Kouhkan",
"Zeinab",
""
],
[
"Amirak",
"Azadeh",
""
]
] |
new_dataset
| 0.999866 |
2112.13424
|
Hiram H. L\'opez
|
Eduardo Camps, Hiram H. L\'opez, Gretchen L. Matthews
|
Explicit non-special divisors of small degree, algebraic geometric
hulls, and LCD codes from Kummer extensions
| null | null | null | null |
cs.IT math.AG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider the hull of an algebraic geometry code, meaning
the intersection of the code and its dual. We demonstrate how codes whose hulls
are algebraic geometry codes may be defined using only rational places of
Kummer extensions (and Hermitian function fields in particular). Our primary
tool is explicitly constructing non-special divisors of degrees $g$ and $g-1$
on certain families of function fields with many rational places, accomplished
by appealing to Weierstrass semigroups. We provide explicit algebraic geometry
codes with hulls of specified dimensions, producing along the way linearly
complementary dual algebraic geometric codes from the Hermitian function field
(among others) using only rational places and an answer to an open question
posed by Ballet and Le Brigand for particular function fields. These results
complement earlier work by Mesnager, Tang, and Qi, which uses lower-genus
function fields, as well as instances using higher-degree places of Hermitian
function fields, to construct linearly complementary dual (LCD) codes; they
also complement the work of Carlet, Mesnager, Tang, Qi, and Pellikaan by
providing explicit algebraic geometry codes with the LCD property rather than
obtaining codes via monomial equivalences.
|
[
{
"version": "v1",
"created": "Sun, 26 Dec 2021 17:57:44 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 08:15:32 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Camps",
"Eduardo",
""
],
[
"López",
"Hiram H.",
""
],
[
"Matthews",
"Gretchen L.",
""
]
] |
new_dataset
| 0.992838 |
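For readers outside coding theory, the two objects this record revolves around have short standard definitions (these are textbook facts, not constructions specific to the paper):

```latex
% Hull of a linear code C, and the LCD property (standard definitions).
\[
  \operatorname{Hull}(C) \;=\; C \cap C^{\perp},
  \qquad
  C \text{ is LCD (linear complementary dual)} \iff \operatorname{Hull}(C) = \{0\}.
\]
```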
2202.09268
|
Stephen Montgomery-Smith
|
Stephen Montgomery-Smith and Cecil Shy
|
Using Lie derivatives with dual quaternions for parallel robots
|
Reference update. Other small changes
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the notion of the Lie derivative in the context of dual
quaternions that represent poses and twists. First we define the wrench in
terms of dual quaternions. Then we show how the Lie derivative helps understand
how actuators affect an end effector in parallel robots, and make it explicit
in the case of robot-driven parallel robots. We also show how to use Lie
derivatives with the Newton-Raphson method to solve the forward kinematics
problem for overconstrained parallel actuators. Finally, we derive the
equations of motion of the end effector in dual quaternion form, which include
the effect of inertia in the actuators. A large part of our methods is an
approximation of the normalization of a pure dual quaternion perturbation of
the identity, which shows that it is equal up to the second order to the
exponential of the pure dual quaternion.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 17:29:56 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 07:24:47 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Mar 2022 19:44:05 GMT"
},
{
"version": "v4",
"created": "Sat, 13 Aug 2022 21:30:12 GMT"
},
{
"version": "v5",
"created": "Sun, 23 Jul 2023 22:07:19 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Montgomery-Smith",
"Stephen",
""
],
[
"Shy",
"Cecil",
""
]
] |
new_dataset
| 0.98416 |
2207.06988
|
Andreas Ren\'e Geist
|
A. Ren\'e Geist, Jonathan Fiene, Naomi Tashiro, Zheng Jia, and
Sebastian Trimpe
|
The Wheelbot: A Jumping Reaction Wheel Unicycle
|
Erratum: In the initial publication, Equation (3) was wrong and has
been corrected in this version. Equation (3) relates to the transform from
averaged body rates ${}^{\text{B}}\omega_i$ to Euler rates. Importantly, the
results in this paper are not affected by the wrong transform. More details
are found in the project's GitHub repo:
https://github.com/AndReGeist/wheelbot-v2.5
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Combining off-the-shelf components with 3D-printing, the Wheelbot is a
symmetric reaction wheel unicycle that can jump onto its wheels from any
initial position. With non-holonomic and under-actuated dynamics, as well as
two coupled unstable degrees of freedom, the Wheelbot provides a challenging
platform for nonlinear and data-driven control research. This paper presents
the Wheelbot's mechanical and electrical design, its estimation and control
algorithms, as well as experiments demonstrating both self-erection and
disturbance rejection while balancing.
|
[
{
"version": "v1",
"created": "Thu, 14 Jul 2022 15:16:46 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jul 2023 20:09:42 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Geist",
"A. René",
""
],
[
"Fiene",
"Jonathan",
""
],
[
"Tashiro",
"Naomi",
""
],
[
"Jia",
"Zheng",
""
],
[
"Trimpe",
"Sebastian",
""
]
] |
new_dataset
| 0.999505 |
2208.06868
|
Jaime C\'espedes Sisniega
|
Jaime C\'espedes-Sisniega and \'Alvaro L\'opez-Garc\'ia
|
Frouros: A Python library for drift detection in machine learning
systems
|
11 pages, 1 table
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Frouros is an open-source Python library capable of detecting drift in
machine learning systems. It provides a combination of classical and more
recent algorithms for drift detection: both concept and data drift. We have
designed it with the objective of making it compatible with any machine
learning framework and easily adaptable to real-world use cases. The library is
developed following a set of best development and continuous integration
practices to ensure ease of maintenance and extensibility. The source code is
available at https://github.com/IFCA/frouros.
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 15:25:41 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 10:50:56 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Jul 2023 09:00:57 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Jul 2023 10:36:55 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Céspedes-Sisniega",
"Jaime",
""
],
[
"López-García",
"Álvaro",
""
]
] |
new_dataset
| 0.99943 |
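The Frouros record above is about drift detection. To make the underlying idea concrete, here is a generic data-drift check comparing a reference window against a current window with a two-sample test; this illustrates the concept only and is not the Frouros API (see the linked repository for that).

```python
# Concept sketch of data-drift detection; not the Frouros API.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, current, alpha=0.01):
    """Return indices of features whose distribution appears to have shifted."""
    drifted = []
    for j in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < alpha:
            drifted.append(j)
    return drifted

rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 4))
cur = ref.copy()
cur[:, 2] += 1.0                    # inject drift into feature 2
print(detect_drift(ref, cur))       # -> [2]
```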
2210.15401
|
Zijie Yue
|
Zijie Yue, Miaojing Shi, Shuai Ding
|
Facial Video-based Remote Physiological Measurement via Self-supervised
Learning
|
IEEE Transactions on Pattern Analysis and Machine Intelligence
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial video-based remote physiological measurement aims to estimate remote
photoplethysmography (rPPG) signals from human face videos and then measure
multiple vital signs (e.g. heart rate, respiration frequency) from rPPG
signals. Recent approaches achieve it by training deep neural networks, which
normally require abundant facial videos and synchronously recorded
photoplethysmography (PPG) signals for supervision. However, the collection of
these annotated corpora is not easy in practice. In this paper, we introduce a
novel frequency-inspired self-supervised framework that learns to estimate rPPG
signals from facial videos without the need for ground-truth PPG signals. Given
a video sample, we first augment it into multiple positive/negative samples
which contain similar/dissimilar signal frequencies to the original one.
Specifically, positive samples are generated using spatial augmentation.
Negative samples are generated via a learnable frequency augmentation module,
which performs non-linear signal frequency transformation on the input without
excessively changing its visual appearance. Next, we introduce a local rPPG
expert aggregation module to estimate rPPG signals from augmented samples. It
encodes complementary pulsation information from different face regions and
aggregates it into one rPPG prediction. Finally, we propose a series of
frequency-inspired losses, i.e. frequency contrastive loss, frequency ratio
consistency loss, and cross-video frequency agreement loss, for the
optimization of estimated rPPG signals from multiple augmented video samples
and across temporally neighboring video samples. We conduct rPPG-based heart
rate, heart rate variability and respiration frequency estimation on four
standard benchmarks. The experimental results demonstrate that our method
improves the state of the art by a large margin.
|
[
{
"version": "v1",
"created": "Thu, 27 Oct 2022 13:03:23 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Oct 2022 09:49:10 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Jul 2023 07:21:11 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Yue",
"Zijie",
""
],
[
"Shi",
"Miaojing",
""
],
[
"Ding",
"Shuai",
""
]
] |
new_dataset
| 0.970532 |
2212.07861
|
Yan Xia
|
Yan Xia, Antti Gronow, Arttu Malkam\"aki, Tuomas Yl\"a-Anttila,
Barbara Keller, Mikko Kivel\"a
|
The Russian invasion of Ukraine selectively depolarized the Finnish NATO
discussion
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Russian invasion of Ukraine in 2022 dramatically reshaped the European
security landscape. In Finland, public opinion on NATO had long been polarized
along the left-right partisan axis, but the invasion led to a rapid convergence
of the opinion toward joining NATO. We investigate whether and how this
depolarization took place among polarized actors on Finnish Twitter. By
analyzing retweeting patterns, we find three separated user groups before the
invasion: a pro-NATO, a left-wing anti-NATO, and a conspiracy-charged anti-NATO
group. After the invasion, the left-wing anti-NATO group members broke out of
their retweeting bubble and connected with the pro-NATO group despite their
difference in partisanship, while the conspiracy-charged anti-NATO group mostly
remained a separate cluster. Our content analysis reveals that the left-wing
anti-NATO group and the pro-NATO group were bridged by a shared condemnation of
Russia's actions and shared democratic norms, while the other anti-NATO group,
mainly built around conspiracy theories and disinformation, consistently
demonstrated a clear anti-NATO attitude. We show that an external threat can
bridge partisan divides in issues linked to the threat, but bubbles upheld by
conspiracy theories and disinformation may persist even under dramatic external
threats.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 14:29:11 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Feb 2023 22:37:15 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Jul 2023 12:20:35 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Xia",
"Yan",
""
],
[
"Gronow",
"Antti",
""
],
[
"Malkamäki",
"Arttu",
""
],
[
"Ylä-Anttila",
"Tuomas",
""
],
[
"Keller",
"Barbara",
""
],
[
"Kivelä",
"Mikko",
""
]
] |
new_dataset
| 0.987054 |
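The analysis style in the record above (finding separated user groups from retweeting patterns) can be illustrated with a toy community detection pass over a retweet graph. The clustering method below (greedy modularity from networkx) is a stand-in assumption, not necessarily what the paper used.

```python
# Toy retweet-graph clustering; the method is a stand-in, not the paper's.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# retweets: (retweeter, original_author) pairs
retweets = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("e", "f")]

G = nx.Graph()
G.add_edges_from(retweets)
groups = greedy_modularity_communities(G)
print([sorted(g) for g in groups])   # two separated retweeting clusters
```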
2301.07464
|
Aviad Aberdam
|
Aviad Aberdam, David Bensa\"id, Alona Golts, Roy Ganz, Oren Nuriel,
Royee Tichauer, Shai Mazor, Ron Litman
|
CLIPTER: Looking at the Bigger Picture in Scene Text Recognition
|
Accepted for publication by ICCV 2023
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reading text in real-world scenarios often requires understanding the context
surrounding it, especially when dealing with poor-quality text. However,
current scene text recognizers are unaware of the bigger picture as they
operate on cropped text images. In this study, we harness the representative
capabilities of modern vision-language models, such as CLIP, to provide
scene-level information to the crop-based recognizer. We achieve this by fusing
a rich representation of the entire image, obtained from the vision-language
model, with the recognizer word-level features via a gated cross-attention
mechanism. This component gradually shifts to the context-enhanced
representation, allowing for stable fine-tuning of a pretrained recognizer. We
demonstrate the effectiveness of our model-agnostic framework, CLIPTER (CLIP
TExt Recognition), on leading text recognition architectures and achieve
state-of-the-art results across multiple benchmarks. Furthermore, our analysis
highlights improved robustness to out-of-vocabulary words and enhanced
generalization in low-data regimes.
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 12:16:19 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jul 2023 13:51:34 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Aberdam",
"Aviad",
""
],
[
"Bensaïd",
"David",
""
],
[
"Golts",
"Alona",
""
],
[
"Ganz",
"Roy",
""
],
[
"Nuriel",
"Oren",
""
],
[
"Tichauer",
"Royee",
""
],
[
"Mazor",
"Shai",
""
],
[
"Litman",
"Ron",
""
]
] |
new_dataset
| 0.99936 |
2301.08800
|
Satish Kumar
|
Satish Kumar, Rui Kou, Henry Hill, Jake Lempges, Eric Qian, and Vikram
Jayaram
|
In-situ Water quality monitoring in Oil and Gas operations
|
15 pages, 8 figures, SPIE Defense + Commercial: Algorithms,
Technologies, and Applications for Multispectral and Hyperspectral Imaging
XXIX
| null | null | null |
cs.CV stat.AP stat.CO stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
From agriculture to mining, to energy, surface water quality monitoring is an
essential task. As oil and gas operators work to reduce the consumption of
freshwater, it is increasingly important to actively manage fresh and non-fresh
water resources over the long term. For large-scale monitoring, manual sampling
at many sites has become too time-consuming and unsustainable, given the sheer
number of dispersed ponds, small lakes, playas, and wetlands over a large area.
Therefore, satellite-based environmental monitoring presents great potential.
Many existing satellite-based monitoring studies utilize index-based methods to
monitor large water bodies such as rivers and oceans. However, these existing
methods fail when monitoring small ponds: the reflectance signal received from
small water bodies is too weak to detect. To address this challenge, we propose
a new Water Quality Enhanced Index (WQEI) Model, which is designed to enable
users to determine contamination levels in water bodies with weak reflectance
patterns. Our results show that 1) WQEI is a good indicator of water turbidity
validated with 1200 water samples measured in the laboratory, and 2) by
applying our method to commonly available satellite data (e.g. LandSat8), one
can achieve highly accurate water quality monitoring efficiently over large
regions. This provides a tool for operators to optimize the quality of water
stored within surface storage ponds and to increase the readiness and
availability of non-fresh water.
|
[
{
"version": "v1",
"created": "Fri, 20 Jan 2023 20:56:52 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jul 2023 02:04:40 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Kumar",
"Satish",
""
],
[
"Kou",
"Rui",
""
],
[
"Hill",
"Henry",
""
],
[
"Lempges",
"Jake",
""
],
[
"Qian",
"Eric",
""
],
[
"Jayaram",
"Vikram",
""
]
] |
new_dataset
| 0.994074 |
2302.00391
|
Lala Shakti Swarup Ray
|
Lala Shakti Swarup Ray, Bo Zhou, Sungho Suh, Paul Lukowicz
|
PresSim: An End-to-end Framework for Dynamic Ground Pressure Profile
Generation from Monocular Videos Using Physics-based 3D Simulation
|
Percom2023 workshop(UMUM2023)
|
2023 IEEE International Conference on Pervasive Computing and
Communications Workshops and other Affiliated Events
|
10.1109/PerComWorkshops56833.2023.10150221
| null |
cs.CV cs.AI cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Ground pressure exerted by the human body is a valuable source of information
for human activity recognition (HAR) in unobtrusive pervasive sensing. While
data collection from pressure sensors to develop HAR solutions requires
significant resources and effort, we present a novel end-to-end framework,
PresSim, to synthesize sensor data from videos of human activities to reduce
such effort significantly. PresSim adopts a 3-stage process: first, extract the
3D activity information from videos with computer vision architectures; then
simulate the floor mesh deformation profiles based on the 3D activity
information and gravity-included physics simulation; lastly, generate the
simulated pressure sensor data with deep learning models. We explored two
approaches for the 3D activity information: inverse kinematics with mesh
re-targeting, and volumetric pose and shape estimation. We validated PresSim
with an experimental setup with a monocular camera to provide input and a
pressure-sensing fitness mat (80x28 spatial resolution) to provide the sensor
ground truth, where nine participants performed a set of predefined yoga
sequences.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 12:02:04 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Ray",
"Lala Shakti Swarup",
""
],
[
"Zhou",
"Bo",
""
],
[
"Suh",
"Sungho",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
new_dataset
| 0.999198 |
2302.02427
|
Pramod P Nair
|
Sneha K H, Adhithya Sudeesh, Pramod P Nair, Prashanth Suravajhala
|
Biologically inspired ChaosNet architecture for Hypothetical Protein
Classification
| null | null |
10.1109/ICECCT56650.2023.10179833
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
ChaosNet is an artificial neural network framework developed for
classification problems and inspired by the chaotic properties of the human
brain. Each neuron of the ChaosNet architecture is a one-dimensional chaotic
map called the Generalized Luroth Series (GLS). The use of GLS neurons keeps
the computations in ChaosNet straightforward while exploiting the advantageous
properties of chaos. With substantially less data, ChaosNet has been
demonstrated to solve difficult classification problems on par with or better
than traditional ANNs. In this paper, we use ChaosNet to perform a functional
classification of hypothetical proteins (HPs), a topic of great interest in
bioinformatics. The results, obtained with significantly less training data,
are compared with the standard machine learning techniques used in the
literature.
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 16:48:49 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"H",
"Sneha K",
""
],
[
"Sudeesh",
"Adhithya",
""
],
[
"Nair",
"Pramod P",
""
],
[
"Suravajhala",
"Prashanth",
""
]
] |
new_dataset
| 0.990585 |
2302.09629
|
Md Abir Hossen
|
Md Hafizur Rahman, Md Ali Azam, Md Abir Hossen, Shankarachary Ragi,
and Venkataramana Gadhamshetty
|
BiofilmScanner: A Computational Intelligence Approach to Obtain
Bacterial Cell Morphological Attributes from Biofilm Image
|
Submitted to Pattern Recognition
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Desulfovibrio alaskensis G20 (DA-G20) is utilized as a model for
sulfate-reducing bacteria (SRB) that are associated with corrosion issues
caused by microorganisms. SRB-based biofilms are thought to be responsible for
the billion-dollar-per-year bio-corrosion of metal infrastructure.
Understanding the extraction of the bacterial cells' shape and size properties
in the SRB-biofilm at different growth stages will assist with the design of
anti-corrosion techniques. However, numerous issues affect current approaches,
including time-consuming geometric property extraction, low efficiency, and
high error rates. This paper proposes BiofilmScanner, a Yolact-based deep
learning method integrated with invariant moments to address these problems.
Our approach efficiently detects and segments bacterial cells in an SRB image
while invariant moments simultaneously measure the geometric characteristics of
the segmented cells with low error. The numerical experiments of the proposed
method demonstrate that the BiofilmScanner is 2.1x and 6.8x faster than our
earlier Mask-RCNN and DLv3+ methods for detecting, segmenting, and measuring
the geometric properties of the cell. Furthermore, the BiofilmScanner achieved
an F1-score of 85.28% while Mask-RCNN and DLv3+ obtained F1-scores of 77.67%
and 75.18%, respectively.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 17:15:56 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 12:33:09 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Rahman",
"Md Hafizur",
""
],
[
"Azam",
"Md Ali",
""
],
[
"Hossen",
"Md Abir",
""
],
[
"Ragi",
"Shankarachary",
""
],
[
"Gadhamshetty",
"Venkataramana",
""
]
] |
new_dataset
| 0.998276 |
2303.02401
|
Nguyen Toan
|
Toan Nguyen, Minh Nhat Vu, An Vuong, Dzung Nguyen, Thieu Vo, Ngan Le,
Anh Nguyen
|
Open-Vocabulary Affordance Detection in 3D Point Clouds
|
Accepted at The 2023 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2023)
| null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Affordance detection is a challenging problem with a wide variety of robotic
applications. Traditional affordance detection methods are limited to a
predefined set of affordance labels, hence potentially restricting the
adaptability of intelligent robots in complex and dynamic environments. In this
paper, we present the Open-Vocabulary Affordance Detection (OpenAD) method,
which is capable of detecting an unbounded number of affordances in 3D point
clouds. By simultaneously learning the affordance text and the point feature,
OpenAD successfully exploits the semantic relationships between affordances.
Therefore, our proposed method enables zero-shot detection, detecting
previously unseen affordances without a single annotation example.
Intensive experimental results show that OpenAD works effectively on a wide
range of affordance detection setups and outperforms other baselines by a large
margin. Additionally, we demonstrate the practicality of the proposed OpenAD in
real-world robotic applications with a fast inference speed (~100ms). Our
project is available at https://openad2023.github.io.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 12:26:47 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Jul 2023 04:56:10 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Jul 2023 14:54:03 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Jul 2023 03:21:11 GMT"
},
{
"version": "v5",
"created": "Sun, 23 Jul 2023 08:31:15 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Nguyen",
"Toan",
""
],
[
"Vu",
"Minh Nhat",
""
],
[
"Vuong",
"An",
""
],
[
"Nguyen",
"Dzung",
""
],
[
"Vo",
"Thieu",
""
],
[
"Le",
"Ngan",
""
],
[
"Nguyen",
"Anh",
""
]
] |
new_dataset
| 0.995367 |
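The open-vocabulary mechanism in the OpenAD record above rests on scoring point features against free-form affordance label embeddings. A minimal similarity-scoring sketch follows; the encoders, dimensions, and temperature are placeholders, not the OpenAD implementation.

```python
# Open-vocabulary point labeling via cosine similarity (placeholder dims).
import torch
import torch.nn.functional as F

def label_points(point_feats, text_feats, temperature=0.07):
    """point_feats: (N, D) per-point features; text_feats: (L, D) label embeddings."""
    p = F.normalize(point_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits = p @ t.T / temperature        # (N, L) similarity scores
    return logits.argmax(dim=-1)          # per-point affordance index

points = torch.randn(2048, 512)           # e.g., from a point cloud encoder
labels = torch.randn(10, 512)             # e.g., CLIP-style text embeddings
assignment = label_points(points, labels)
```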
2303.04284
|
Shyam Sundar Kannan
|
Shyam Sundar Kannan, Vishnunandan L. N. Venkatesh, Revanth Krishna
Senthilkumaran, and Byung-Cheol Min
|
UPPLIED: UAV Path Planning for Inspection through Demonstration
|
Accepted for publication in IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2023), Detroit, Michigan, USA
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, a new demonstration-based path-planning framework for the
visual inspection of large structures using UAVs is proposed. We introduce
UPPLIED: UAV Path PLanning for InspEction through Demonstration, which utilizes
a demonstrated trajectory to generate a new trajectory to inspect other
structures of the same kind. The demonstrated trajectory can inspect specific
regions of the structure and the new trajectory generated by UPPLIED inspects
similar regions in the other structure. The proposed method generates
inspection points from the demonstrated trajectory and uses standardization to
translate those inspection points to inspect the new structure. Finally, the
position of these inspection points is optimized to refine their view. Numerous
experiments were conducted with various structures and the proposed framework
was able to generate inspection trajectories of various kinds for different
structures based on the demonstration. The trajectories generated match with
the demonstrated trajectory in geometry and at the same time inspect the
regions inspected by the demonstration trajectory with minimum deviation. The
experimental video of the work can be found at https://youtu.be/YqPx-cLkv04.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 23:06:06 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 17:29:23 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Kannan",
"Shyam Sundar",
""
],
[
"Venkatesh",
"Vishnunandan L. N.",
""
],
[
"Senthilkumaran",
"Revanth Krishna",
""
],
[
"Min",
"Byung-Cheol",
""
]
] |
new_dataset
| 0.998042 |
2303.06147
|
Ameya Velingker
|
Hamed Shirzad, Ameya Velingker, Balaji Venkatachalam, Danica J.
Sutherland, Ali Kemal Sinop
|
Exphormer: Sparse Transformers for Graphs
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph transformers have emerged as a promising architecture for a variety of
graph learning and representation tasks. Despite their successes, though, it
remains challenging to scale graph transformers to large graphs while
maintaining accuracy competitive with message-passing networks. In this paper,
we introduce Exphormer, a framework for building powerful and scalable graph
transformers. Exphormer consists of a sparse attention mechanism based on two
components: virtual global nodes and expander graphs, whose mathematical
characteristics, such as spectral expansion, pseudorandomness, and sparsity,
yield graph transformers with complexity only linear in the size of the graph,
while allowing us to prove desirable theoretical properties of the resulting
transformer models. We show that incorporating Exphormer into the
recently-proposed GraphGPS framework produces models with competitive empirical
results on a wide variety of graph datasets, including state-of-the-art results
on three datasets. We also show that Exphormer can scale to datasets on larger
graphs than shown in previous graph transformer architectures. Code can be
found at \url{https://github.com/hamed1375/Exphormer}.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 18:59:57 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 17:58:45 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Shirzad",
"Hamed",
""
],
[
"Velingker",
"Ameya",
""
],
[
"Venkatachalam",
"Balaji",
""
],
[
"Sutherland",
"Danica J.",
""
],
[
"Sinop",
"Ali Kemal",
""
]
] |
new_dataset
| 0.998848 |
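The Exphormer record above hinges on sparsifying attention with expander graphs plus virtual global nodes. As a concept sketch, the mask construction below restricts attention to the edges of a random regular graph (an expander with high probability) plus one global node; it illustrates the sparsity pattern only, not the Exphormer model.

```python
# Expander-style sparse attention mask (concept only, not Exphormer itself).
import networkx as nx
import numpy as np

def sparse_attention_mask(n, degree=4, seed=0):
    g = nx.random_regular_graph(degree, n, seed=seed)  # expander w.h.p.
    mask = np.zeros((n + 1, n + 1), dtype=bool)        # +1 virtual global node
    for u, v in g.edges:
        mask[u, v] = mask[v, u] = True
    mask[n, :] = mask[:, n] = True                     # global node attends to all
    np.fill_diagonal(mask, True)                       # self-attention
    return mask

mask = sparse_attention_mask(100)
print(mask.sum(), "allowed pairs instead of", 101 * 101)  # linear vs quadratic
```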
2305.13040
|
Shuzheng Si
|
Shuzheng Si, Wentao Ma, Haoyu Gao, Yuchuan Wu, Ting-En Lin, Yinpei
Dai, Hangyu Li, Rui Yan, Fei Huang, Yongbin Li
|
SpokenWOZ: A Large-Scale Speech-Text Benchmark for Spoken Task-Oriented
Dialogue Agents
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Task-oriented dialogue (TOD) models have made significant progress in recent
years. However, previous studies primarily focus on datasets written by
annotators, which has resulted in a gap between academic research and
real-world spoken conversation scenarios. While several small-scale spoken TOD
datasets are proposed to address robustness issues such as ASR errors, they
ignore the unique challenges in spoken conversation. To tackle the limitations,
we introduce SpokenWOZ, a large-scale speech-text dataset for spoken TOD,
containing 8 domains, 203k turns, 5.7k dialogues, and 249 hours of audio from
human-to-human spoken conversations. SpokenWOZ further incorporates common
spoken characteristics such as word-by-word processing and reasoning in spoken
language. Based on these characteristics, we present cross-turn slot and
reasoning slot detection as new challenges. We conduct experiments on various
baselines, including text-modal models, newly proposed dual-modal models, and
LLMs, e.g., ChatGPT. The results show that the current models still have
substantial room for improvement in spoken conversation, where the most
advanced dialogue state tracker only achieves 25.65% in joint goal accuracy and
the SOTA end-to-end model only correctly completes the user request in 52.1% of
dialogues. The dataset, code, and leaderboard are available:
https://spokenwoz.github.io/SpokenWOZ-github.io/.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 13:47:51 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 16:04:30 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Jul 2023 03:31:42 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Si",
"Shuzheng",
""
],
[
"Ma",
"Wentao",
""
],
[
"Gao",
"Haoyu",
""
],
[
"Wu",
"Yuchuan",
""
],
[
"Lin",
"Ting-En",
""
],
[
"Dai",
"Yinpei",
""
],
[
"Li",
"Hangyu",
""
],
[
"Yan",
"Rui",
""
],
[
"Huang",
"Fei",
""
],
[
"Li",
"Yongbin",
""
]
] |
new_dataset
| 0.999865 |
2305.14527
|
Alexander Kapitanov
|
Alexander Kapitanov, Karina Kvanchiani, Alexander Nagaev, Elizaveta
Petrova
|
Slovo: Russian Sign Language Dataset
|
russian sign language recognition dataset, open-source, 11 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
One of the main challenges of the sign language recognition task is the
difficulty of collecting a suitable dataset due to the gap between
hard-of-hearing and hearing societies. In addition, the sign language in each
country differs significantly, which obliges the creation of new data for each
of them. This paper presents the Russian Sign Language (RSL) video dataset
Slovo, produced using crowdsourcing platforms. The dataset contains 20,000
FullHD recordings, divided into 1,000 classes of isolated RSL gestures
performed by 194 signers. We also provide the entire dataset creation pipeline,
from data collection to video annotation, together with a demo application.
Several neural networks are trained and evaluated on Slovo to demonstrate its
usefulness for training. The proposed data and pre-trained models are publicly
available.
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 21:00:42 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 22:32:26 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Kapitanov",
"Alexander",
""
],
[
"Kvanchiani",
"Karina",
""
],
[
"Nagaev",
"Alexander",
""
],
[
"Petrova",
"Elizaveta",
""
]
] |
new_dataset
| 0.999802 |
2306.09224
|
Andre Araujo
|
Thomas Mensink, Jasper Uijlings, Lluis Castrejon, Arushi Goel, Felipe
Cadar, Howard Zhou, Fei Sha, Andr\'e Araujo, Vittorio Ferrari
|
Encyclopedic VQA: Visual questions about detailed properties of
fine-grained categories
|
ICCV'23
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose Encyclopedic-VQA, a large-scale visual question answering (VQA)
dataset featuring visual questions about detailed properties of fine-grained
categories and instances. It contains 221k unique question+answer pairs each
matched with (up to) 5 images, resulting in a total of 1M VQA samples.
Moreover, our dataset comes with a controlled knowledge base derived from
Wikipedia, marking the evidence to support each answer. Empirically, we show
that our dataset poses a hard challenge for large vision+language models as
they perform poorly on our dataset: PaLI [14] is state-of-the-art on OK-VQA
[37], yet it only achieves 13.0% accuracy on our dataset. Moreover, we
experimentally show that progress on answering our encyclopedic questions can
be achieved by augmenting large models with a mechanism that retrieves relevant
information from the knowledge base. An oracle experiment with perfect
retrieval achieves 87.0% accuracy on the single-hop portion of our dataset, and
an automatic retrieval-augmented prototype yields 48.8%. We believe that our
dataset enables future research on retrieval-augmented vision+language models.
It is available at
https://github.com/google-research/google-research/tree/master/encyclopedic_vqa .
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 16:03:01 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 15:05:55 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Mensink",
"Thomas",
""
],
[
"Uijlings",
"Jasper",
""
],
[
"Castrejon",
"Lluis",
""
],
[
"Goel",
"Arushi",
""
],
[
"Cadar",
"Felipe",
""
],
[
"Zhou",
"Howard",
""
],
[
"Sha",
"Fei",
""
],
[
"Araujo",
"André",
""
],
[
"Ferrari",
"Vittorio",
""
]
] |
new_dataset
| 0.999842 |
2306.09264
|
Yan Luo
|
Yan Luo, Yu Tian, Min Shi, Louis R. Pasquale, Lucy Q. Shen, Nazlee
Zebardast, Tobias Elze, Mengyu Wang
|
Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness
Learning and Fair Identity Normalization
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Fairness (also known interchangeably as equity) in machine learning is
important for societal well-being, but limited public datasets hinder its
progress. Currently, no dedicated public medical datasets with imaging data for
fairness learning are available, though minority groups suffer from more health
issues. To address this gap, we introduce Harvard Glaucoma Fairness
(Harvard-GF), a retinal nerve disease dataset with both 2D and 3D imaging data
and balanced racial groups for glaucoma detection. Glaucoma is the leading
cause of irreversible blindness globally, with Black people having twice the
glaucoma prevalence of other races. We also propose a fair identity
normalization (FIN) approach to equalize the feature importance between
different identity groups. Our FIN approach is compared with various
state-of-the-art fairness learning methods and shows superior performance in
the racial, gender, and ethnicity fairness tasks with 2D and 3D imaging data,
which demonstrates the utility of our dataset Harvard-GF for fairness
learning. To facilitate fairness
comparisons between different models, we propose an equity-scaled performance
measure, which can be flexibly used to compare all kinds of performance metrics
in the context of fairness. The dataset and code are publicly accessible via
\url{https://ophai.hms.harvard.edu/datasets/harvard-glaucoma-fairness-3300-samples/}.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 16:39:05 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 05:33:30 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Luo",
"Yan",
""
],
[
"Tian",
"Yu",
""
],
[
"Shi",
"Min",
""
],
[
"Pasquale",
"Louis R.",
""
],
[
"Shen",
"Lucy Q.",
""
],
[
"Zebardast",
"Nazlee",
""
],
[
"Elze",
"Tobias",
""
],
[
"Wang",
"Mengyu",
""
]
] |
new_dataset
| 0.99977 |
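The Harvard-GF record above mentions an equity-scaled performance measure for fairness comparisons. The exact definition is in the paper; the sketch below encodes one plausible reading (shrink the overall score by the summed absolute group gaps) and should be treated as an assumption, not the paper's formula.

```python
# Hedged sketch of an equity-scaled metric; the formula is an assumption.
def equity_scaled(overall_score, group_scores):
    disparity = sum(abs(overall_score - s) for s in group_scores)
    return overall_score / (1.0 + disparity)

# The larger the gaps between identity groups, the more the score shrinks.
print(equity_scaled(0.90, [0.88, 0.92, 0.85]))   # penalized below 0.90
```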
2307.02591
|
Sunjae Kwon
|
Sunjae Kwon, Xun Wang, Weisong Liu, Emily Druhl, Minhee L. Sung, Joel
I. Reisman, Wenjun Li, Robert D. Kerns, William Becker, Hong Yu
|
ODD: A Benchmark Dataset for the NLP-based Opioid Related Aberrant
Behavior Detection
|
Under review
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Opioid related aberrant behaviors (ORAB) present novel risk factors for
opioid overdose. Previously, ORAB have been mainly assessed by survey results
and by monitoring drug administrations. Such methods however, cannot scale up
and do not cover the entire spectrum of aberrant behaviors. On the other hand,
ORAB are widely documented in electronic health record notes. This paper
introduces a novel biomedical natural language processing benchmark dataset
named ODD, for ORAB Detection Dataset. ODD is an expert-annotated dataset
comprising more than 750 publicly available EHR notes. ODD has been designed
to identify ORAB from patients' EHR notes and classify them into nine
categories: 1) Confirmed Aberrant Behavior, 2) Suggested Aberrant Behavior, 3)
Opioids, 4) Indication, 5) Diagnosed Opioid Dependency, 6) Benzodiazepines, 7)
Medication Changes, 8) Central Nervous System-related, and 9) Social
Determinants of Health. We explored two state-of-the-art natural language
processing (NLP) models (finetuning pretrained language models and
prompt-tuning approaches) to identify ORAB. Experimental results show that the
prompt-tuning models outperformed the finetuning models in most categories,
and the gains were especially large among uncommon categories (Suggested
Aberrant Behavior, Diagnosed Opioid Dependency, and Medication Change).
Although the best model achieved 83.92% area under the precision-recall curve,
uncommon classes (Suggested Aberrant Behavior, Diagnosed Opioid Dependence, and
Medication Change) still leave large room for performance improvement.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 18:41:29 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 00:47:23 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Kwon",
"Sunjae",
""
],
[
"Wang",
"Xun",
""
],
[
"Liu",
"Weisong",
""
],
[
"Druhl",
"Emily",
""
],
[
"Sung",
"Minhee L.",
""
],
[
"Reisman",
"Joel I.",
""
],
[
"Li",
"Wenjun",
""
],
[
"Kerns",
"Robert D.",
""
],
[
"Becker",
"William",
""
],
[
"Yu",
"Hong",
""
]
] |
new_dataset
| 0.999744 |
2307.04827
|
Yunlong Tang
|
Siting Xu, Yunlong Tang, Feng Zheng
|
LaunchpadGPT: Language Model as Music Visualization Designer on
Launchpad
|
Accepted by International Computer Music Conference (ICMC) 2023
| null | null | null |
cs.SD cs.CL cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Launchpad is a musical instrument that allows users to create and perform
music by pressing illuminated buttons. To assist and inspire the design of
Launchpad light effects, and to provide a more accessible way for beginners to
create music visualizations with this instrument, we propose the LaunchpadGPT
model to generate music visualization designs on Launchpad automatically. Built
on a language model with strong generative ability, our proposed LaunchpadGPT
takes an audio piece of music as input and outputs the lighting effects of
Launchpad-playing in the form of a video (Launchpad-playing video). We collect
Launchpad-playing videos and process them to obtain music and the corresponding
video frames of Launchpad-playing as prompt-completion pairs to train the
language model. The experimental results show that the proposed method creates
better music visualizations than random generation methods and holds potential
for a broader range of music visualization applications. Our code is available
at https://github.com/yunlong10/LaunchpadGPT/.
|
[
{
"version": "v1",
"created": "Fri, 7 Jul 2023 16:25:59 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jul 2023 10:20:28 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Xu",
"Siting",
""
],
[
"Tang",
"Yunlong",
""
],
[
"Zheng",
"Feng",
""
]
] |
new_dataset
| 0.999547 |
2307.05370
|
Lala Shakti Swarup Ray
|
Lala Shakti Swarup Ray, Daniel Gei{\ss}ler, Bo Zhou, Paul Lukowicz,
Berit Greinke
|
Capafoldable: self-tracking foldable smart textiles with capacitive
sensing
| null | null | null | null |
cs.HC cs.LG eess.IV eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Folding is a unique structural technique for endowing planar materials with
motion or 3D mechanical properties. Textile-based capacitive sensing has been
shown to be sensitive to the geometric deformation and relative motion of conductive
textiles. In this work, we propose a novel self-tracking foldable smart textile
by combining folded fabric structures and capacitive sensing to detect the
structural motions using state-of-the-art sensing circuits and deep learning
technologies. We created two folding patterns, Accordion and Chevron, each with
two layouts of capacitive sensors in the form of thermobonded conductive
textile patches. In an experiment where patches of the folding patterns were
moved manually, we developed a deep neural network to learn and reconstruct the
vision-tracked shape of the patches. Through our approach, the geometry
primitives defining the patch shape can be reconstructed from the capacitive
signals with an R-squared value of up to 95\% and a tracking error of 1 cm for
22.5 cm long patches. With mechanical, electrical and sensing properties, Capafoldable
could enable a new range of smart textile applications.
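To make the evaluation concrete, here is a minimal sketch, on synthetic data, of
regressing geometry primitives from capacitive channels and scoring with the
R-squared metric quoted above; the network shape and all dimensions are our
assumptions, not the authors' architecture.

```python
# Hypothetical sketch (not the authors' code): regress patch geometry
# primitives from capacitive sensor channels and score with R-squared.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_channels, n_primitives = 2000, 4, 6  # assumed dimensions

# Synthetic stand-ins for capacitance readings and vision-tracked targets.
X = rng.normal(size=(n_samples, n_channels))
true_map = rng.normal(size=(n_channels, n_primitives))
Y = np.tanh(X @ true_map) + 0.05 * rng.normal(size=(n_samples, n_primitives))

split = int(0.8 * n_samples)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X[:split], Y[:split])

Y_hat = model.predict(X[split:])
print("R^2:", r2_score(Y[split:], Y_hat))  # the paper reports up to 0.95 on real data
```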
|
[
{
"version": "v1",
"created": "Mon, 3 Jul 2023 13:38:04 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Ray",
"Lala Shakti Swarup",
""
],
[
"Geißler",
"Daniel",
""
],
[
"Zhou",
"Bo",
""
],
[
"Lukowicz",
"Paul",
""
],
[
"Greinke",
"Berit",
""
]
] |
new_dataset
| 0.999766 |
2307.05853
|
Xinbo Yu
|
Bruce X.B. Yu, Zhi Zhang, Yongxu Liu, Sheng-hua Zhong, Yan Liu, Chang
Wen Chen
|
GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human
Pose Estimation from Monocular Video
|
12 pages, Accepted to ICCV 2023, GitHub code:
https://github.com/bruceyo/GLA-GCN
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D human pose estimation has been researched for decades with promising
results. 3D human pose lifting is one of the most promising research directions
toward this task, where both estimated pose and ground truth pose data are used
for training. Existing pose lifting works mainly focus on improving the
performance on estimated pose data, but they usually underperform when tested on
ground truth pose data. We observe that the performance on estimated poses can
be easily improved by preparing good-quality 2D poses, such as fine-tuning the
2D poses or using advanced 2D pose detectors. As such, we concentrate on
improving 3D human pose lifting via ground truth data, laying the groundwork for
future improvements on higher-quality estimated pose data. Towards this goal,
a simple yet effective model called Global-local Adaptive Graph Convolutional
Network (GLA-GCN) is proposed in this work. Our GLA-GCN globally models the
spatiotemporal structure via a graph representation and backtraces local joint
features for 3D human pose estimation via individually connected layers. To
validate our model design, we conduct extensive experiments on three benchmark
datasets: Human3.6M, HumanEva-I, and MPI-INF-3DHP. Experimental results show
that our GLA-GCN implemented with ground truth 2D poses significantly
outperforms state-of-the-art methods (e.g., up to around 3%, 17%, and 14% error
reductions on Human3.6M, HumanEva-I, and MPI-INF-3DHP, respectively). GitHub:
https://github.com/bruceyo/GLA-GCN.
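For readers unfamiliar with graph convolutions over skeletons, the following toy
sketch shows the basic building block that pose-lifting models of this kind rest
on; the adjacency matrix, layer sizes, and activation are illustrative
assumptions, not the GLA-GCN design.

```python
# Illustrative sketch only: a basic graph-convolution layer over a skeleton
# graph, of the kind underlying pose-lifting models such as GLA-GCN.
import torch
import torch.nn as nn

class SkeletonGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).rsqrt().diag()
        self.register_buffer("norm_adj", d @ a @ d)

    def forward(self, x):  # x: (batch, joints, in_dim), e.g., 2D joint coords
        return torch.relu(self.norm_adj @ self.linear(x))

# Toy 3-joint chain; a real model stacks many such layers and regresses 3D pose.
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
layer = SkeletonGraphConv(2, 16, adj)
print(layer(torch.randn(8, 3, 2)).shape)  # -> torch.Size([8, 3, 16])
```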
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 00:13:04 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 01:30:29 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Yu",
"Bruce X. B.",
""
],
[
"Zhang",
"Zhi",
""
],
[
"Liu",
"Yongxu",
""
],
[
"Zhong",
"Sheng-hua",
""
],
[
"Liu",
"Yan",
""
],
[
"Chen",
"Chang Wen",
""
]
] |
new_dataset
| 0.989699 |
2307.08074
|
Longyue Wang
|
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun
Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
|
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language
Modelling
|
Zhaopeng Tu is the corresponding author
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Modeling discourse -- the linguistic phenomena that go beyond individual
sentences -- is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
the evaluation of intra-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate cross-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. In total,
we evaluate 20 general-purpose, in-domain and commercial models based on
Transformer, advanced pretraining architectures and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark; (2) fine-grained pretraining based on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench.
|
[
{
"version": "v1",
"created": "Sun, 16 Jul 2023 15:18:25 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 00:11:24 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Wang",
"Longyue",
""
],
[
"Du",
"Zefeng",
""
],
[
"Liu",
"Donghuai",
""
],
[
"Cai",
"Deng",
""
],
[
"Yu",
"Dian",
""
],
[
"Jiang",
"Haiyun",
""
],
[
"Wang",
"Yan",
""
],
[
"Cui",
"Leyang",
""
],
[
"Shi",
"Shuming",
""
],
[
"Tu",
"Zhaopeng",
""
]
] |
new_dataset
| 0.997925 |
2307.08912
|
Pengcheng Fang
|
Pengcheng and Peng and Yun and Qingzhao and Tao and Dawn and Prateek
and Sanjeev and Zhuotao and Xusheng
|
CONTRACTFIX: A Framework for Automatically Fixing Vulnerabilities in
Smart Contracts
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The increased adoption of smart contracts in many industries has made them an
attractive target for cybercriminals, leading to millions of dollars in losses.
Thus, deploying smart contracts with detected vulnerabilities (known to
developers) is not acceptable, and fixing all the detected vulnerabilities is
needed, which incurs high manual labor costs without effective tool support. To
fill this need, in this paper, we propose ContractFix, a novel framework that
automatically generates security patches for vulnerable smart contracts.
ContractFix is a general framework that can incorporate different fix patterns
for different types of vulnerabilities. Users can use it as a security fix-it
tool that automatically applies patches and verifies the patched contracts
before deploying the contracts. To address the unique challenges in fixing
smart contract vulnerabilities, given an input smart contract, ContractFix conducts
our proposed ensemble identification based on multiple static verification
tools to identify vulnerabilities that are amenable for automatic fix. Then,
ContractFix generates patches using template-based fix patterns and conducts
program analysis (program dependency computation and pointer analysis) for
smart contracts to accurately infer and populate the parameter values for the
fix patterns. Finally, ContractFix performs static verification that guarantees
the patched contract is free of vulnerabilities. Our evaluations on $144$ real
vulnerable contracts demonstrate that ContractFix can successfully fix $94\%$ of the
detected vulnerabilities ($565$ out of $601$) and preserve the expected
behaviors of the smart contracts.
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2023 01:14:31 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Jul 2023 19:48:39 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Pengcheng",
"",
""
],
[
"Peng",
"",
""
],
[
"Yun",
"",
""
],
[
"Qingzhao",
"",
""
],
[
"Tao",
"",
""
],
[
"Dawn",
"",
""
],
[
"Prateek",
"",
""
],
[
"Sanjeev",
"",
""
],
[
"Zhuotao",
"",
""
],
[
"Xusheng",
"",
""
]
] |
new_dataset
| 0.995051 |
2307.09156
|
Monika Dalal
|
Monika Dalal, Sucheta Dutt, Ranjeet Sehmi
|
Reversible cyclic codes over finite chain rings
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, necessary and sufficient conditions for the reversibility of a
cyclic code of arbitrary length over a finite commutative chain ring have been
derived. MDS reversible cyclic codes having length p^s over a finite chain ring
with nilpotency index 2 have been characterized and a few examples of MDS
reversible cyclic codes have been presented. Further, it is shown that the
torsion codes of a reversible cyclic code over a finite chain ring are
reversible. Also, an example of a non-reversible cyclic code for which all its
torsion codes are reversible has been presented to show that the converse of
this statement is not true. The cardinality and Hamming distance of a cyclic
code over a finite commutative chain ring have also been determined.
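For intuition, the classical field case (due to Massey) can be stated as
follows; the paper's contribution generalizes this picture to finite commutative
chain rings.

```latex
% Classical background (field case), stated here for intuition only.
Let $C = \langle g(x) \rangle \subseteq \mathbb{F}_q[x]/(x^n - 1)$ be a cyclic
code with monic generator polynomial $g(x)$ of degree $r$, and define the
(monic) reciprocal polynomial $g^{*}(x) = g(0)^{-1}\, x^{r} g(1/x)$. Then
\[
  C \text{ is reversible} \iff g(x) = g^{*}(x),
\]
where reversibility means
$(c_0, c_1, \ldots, c_{n-1}) \in C \implies (c_{n-1}, \ldots, c_1, c_0) \in C$.
```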
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2023 11:33:14 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jul 2023 06:43:14 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Dalal",
"Monika",
""
],
[
"Dutt",
"Sucheta",
""
],
[
"Sehmi",
"Ranjeet",
""
]
] |
new_dataset
| 0.992463 |
2307.10533
|
Guillermo Colin
|
Guillermo Colin, Joseph Byrnes, Youngwoo Sim, Patrick Wensing, and
Joao Ramos
|
Whole-Body Dynamic Telelocomotion: A Step-to-Step Dynamics Approach to
Human Walking Reference Generation
|
8 pages, 8 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Teleoperated humanoid robots hold significant potential as physical avatars
for humans in hazardous and inaccessible environments, with the goal of
channeling human intelligence and sensorimotor skills through these robotic
counterparts. Precise coordination between humans and robots is crucial for
accomplishing whole-body behaviors involving locomotion and manipulation. To
progress successfully, dynamic synchronization between humans and humanoid
robots must be achieved. This work enhances advancements in whole-body dynamic
telelocomotion, addressing challenges in robustness. By embedding the hybrid
and underactuated nature of bipedal walking into a virtual human walking
interface, we achieve dynamically consistent walking gait generation.
Additionally, we integrate a reactive robot controller into a whole-body
dynamic telelocomotion framework, thus allowing the realization of
telelocomotion behaviors on the full-body dynamics of a bipedal robot.
Real-time telelocomotion simulation experiments validate the effectiveness of
our methods, demonstrating that a trained human pilot can dynamically
synchronize with a simulated bipedal robot, achieving sustained locomotion,
controlling walking speeds within the range of 0.0 m/s to 0.3 m/s, and enabling
backward walking for distances of up to 2.0 m. This research contributes to
advancing teleoperated humanoid robots and paves the way for future
developments in synchronized locomotion between humans and bipedal robots.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 02:21:33 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 23:07:28 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Colin",
"Guillermo",
""
],
[
"Byrnes",
"Joseph",
""
],
[
"Sim",
"Youngwoo",
""
],
[
"Wensing",
"Patrick",
""
],
[
"Ramos",
"Joao",
""
]
] |
new_dataset
| 0.997672 |
2307.11752
|
Stephan Simonis
|
Adrian Kummerl\"ander, Samuel J. Avis, Halim Kusumaatmaja, Fedor
Bukreev, Michael Crocoll, Davide Dapelo, Simon Gro{\ss}mann, Nicolas Hafen,
Shota Ito, Julius Je{\ss}berger, Eliane Kummer, Jan E. Marquardt, Johanna
M\"odl, Tim Pertzel, Franti\v{s}ek Prinz, Florian Raichle, Martin Sadric,
Maximilian Schecher, Dennis Teutscher, Stephan Simonis, Mathias J. Krause
|
OpenLB User Guide: Associated with Release 1.6 of the Code
| null | null | null | null |
cs.MS cs.DC cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
OpenLB is an object-oriented implementation of the lattice Boltzmann method (LBM). It is the first
implementation of a generic platform for LBM programming, which is shared with
the open source community (GPLv2). Since the first release in 2007, the code
has been continuously improved and extended which is documented by thirteen
releases as well as the corresponding release notes which are available on the
OpenLB website (https://www.openlb.net). The OpenLB code is written in C++ and
is used by application programmers as well as developers, with the ability to
implement custom models. OpenLB supports complex data structures that allow
simulations in complex geometries and parallel execution using MPI, OpenMP and
CUDA on high-performance computers. The source code uses the concepts of
interfaces and templates, so that efficient, direct and intuitive
implementations of the LBM become possible. The efficiency and scalability have
been checked and proved by code reviews. This user manual and a source code
documentation by DoxyGen are available on the OpenLB project website.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 22:47:34 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Kummerländer",
"Adrian",
""
],
[
"Avis",
"Samuel J.",
""
],
[
"Kusumaatmaja",
"Halim",
""
],
[
"Bukreev",
"Fedor",
""
],
[
"Crocoll",
"Michael",
""
],
[
"Dapelo",
"Davide",
""
],
[
"Großmann",
"Simon",
""
],
[
"Hafen",
"Nicolas",
""
],
[
"Ito",
"Shota",
""
],
[
"Jeßberger",
"Julius",
""
],
[
"Kummer",
"Eliane",
""
],
[
"Marquardt",
"Jan E.",
""
],
[
"Mödl",
"Johanna",
""
],
[
"Pertzel",
"Tim",
""
],
[
"Prinz",
"František",
""
],
[
"Raichle",
"Florian",
""
],
[
"Sadric",
"Martin",
""
],
[
"Schecher",
"Maximilian",
""
],
[
"Teutscher",
"Dennis",
""
],
[
"Simonis",
"Stephan",
""
],
[
"Krause",
"Mathias J.",
""
]
] |
new_dataset
| 0.99867 |
2307.11804
|
Andrew Eckford
|
Taha Sajjad and Andrew W. Eckford
|
High-Speed Molecular Communication in Vacuum
|
Accepted for publication in IEEE Transactions on Molecular,
Biological, and Multi-Scale Communications
| null | null | null |
cs.ET cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing molecular communication systems, both theoretical and experimental,
are characterized by low information rates. In this paper, inspired by
time-of-flight mass spectrometry (TOFMS), we consider the design of a molecular
communication system in which the channel is a vacuum and demonstrate that this
method has the potential to increase achievable information rates by many
orders of magnitude. We use modelling results from TOFMS to obtain arrival time
distributions for accelerated ions and use them to analyze several species of
ions, including hydrogen, nitrogen, argon, and benzene. We show that the
achievable information rates can be increased using a velocity (Wien) filter,
which reduces uncertainty in the velocity of the ions. Using a simplified
communication model, we show that data rates well above 1 Gbit/s/molecule are
achievable.
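A back-of-the-envelope sketch of the time-of-flight picture behind this design
is given below; the drift length, accelerating voltage, and single-charge
assumption are ours, not parameters from the paper.

```python
# Back-of-the-envelope sketch (our assumptions, not the paper's model):
# time of flight of a singly charged ion accelerated through potential U
# over a drift length L, as in TOFMS-style transport.
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
AMU = 1.66053906660e-27         # atomic mass unit, kg

def time_of_flight(mass_amu: float, accel_voltage: float, drift_m: float) -> float:
    """Drift time, assuming all potential energy converts to kinetic energy."""
    v = math.sqrt(2 * E_CHARGE * accel_voltage / (mass_amu * AMU))
    return drift_m / v

for name, m in [("H+", 1.008), ("N2+", 28.014), ("Ar+", 39.948), ("benzene+", 78.11)]:
    t = time_of_flight(m, accel_voltage=1000.0, drift_m=0.1)
    print(f"{name:9s} ~{t * 1e9:8.1f} ns over 10 cm at 1 kV")
```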
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 13:17:11 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Sajjad",
"Taha",
""
],
[
"Eckford",
"Andrew W.",
""
]
] |
new_dataset
| 0.957971 |
2307.11853
|
Shiyu Sun
|
Shiyu Sun, Shu Wang, Xinda Wang, Yunlong Xing, Elisa Zhang, Kun Sun
|
Exploring Security Commits in Python
|
Accepted to 2023 IEEE International Conference on Software
Maintenance and Evolution (ICSME)
| null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Python has become the most popular programming language as it is friendly to
work with for beginners. However, a recent study has found that most security
issues in Python have not been indexed by CVE and may only be fixed by 'silent'
security commits, which pose a threat to software security and hinder the
security fixes to downstream software. It is critical to identify the hidden
security commits; however, the existing datasets and methods are insufficient
for security commit detection in Python, due to the limited data variety,
non-comprehensive code semantics, and uninterpretable learned features. In this
paper, we construct the first security commit dataset in Python, namely
PySecDB, which consists of three subsets including a base dataset, a pilot
dataset, and an augmented dataset. The base dataset contains the security
commits associated with CVE records provided by MITRE. To increase the variety
of security commits, we build the pilot dataset from GitHub by filtering
keywords within the commit messages. Since not all commits provide commit
messages, we further construct the augmented dataset by understanding the
semantics of code changes. To build the augmented dataset, we propose a new
graph representation named CommitCPG and a multi-attributed graph learning
model named SCOPY to identify the security commit candidates through both
sequential and structural code semantics. The evaluation shows our proposed
algorithms can improve the data collection efficiency by up to 40 percentage
points. After manual verification by three security experts, PySecDB consists
of 1,258 security commits and 2,791 non-security commits. Furthermore, we
conduct an extensive case study on PySecDB and discover four common security
fix patterns that cover over 85% of security commits in Python, providing
insight into secure software maintenance, vulnerability detection, and
automated program repair.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 18:46:45 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Sun",
"Shiyu",
""
],
[
"Wang",
"Shu",
""
],
[
"Wang",
"Xinda",
""
],
[
"Xing",
"Yunlong",
""
],
[
"Zhang",
"Elisa",
""
],
[
"Sun",
"Kun",
""
]
] |
new_dataset
| 0.995558 |
2307.11865
|
Dmitriy Rivkin
|
Nikhil Kakodkar, Dmitriy Rivkin, Bobak H. Baghi, Francois Hogan,
Gregory Dudek
|
CARTIER: Cartographic lAnguage Reasoning Targeted at Instruction
Execution for Robots
| null | null | null | null |
cs.RO cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work explores the capacity of large language models (LLMs) to address
problems at the intersection of spatial planning and natural language
interfaces for navigation. Our focus is on following relatively complex
instructions that are more akin to natural conversation than traditional
explicit procedural directives seen in robotics. Unlike most prior work, where
navigation directives are provided as imperative commands (e.g., go to the
fridge), we examine implicit directives within conversational interactions. We
leverage the 3D simulator AI2Thor to create complex and repeatable scenarios at
scale, and augment it by adding complex language queries for 40 object types.
We demonstrate that a robot can better parse descriptive language queries than
existing methods by using an LLM to interpret the user interaction in the
context of a list of the objects in the scene.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 19:09:37 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Kakodkar",
"Nikhil",
""
],
[
"Rivkin",
"Dmitriy",
""
],
[
"Baghi",
"Bobak H.",
""
],
[
"Hogan",
"Francois",
""
],
[
"Dudek",
"Gregory",
""
]
] |
new_dataset
| 0.991822 |
2307.11914
|
Ruisheng Wang Prof
|
Ruisheng Wang, Shangfeng Huang and Hongxin Yang
|
Building3D: An Urban-Scale Dataset and Benchmarks for Learning Roof
Structures from Point Clouds
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Urban modeling from LiDAR point clouds is an important topic in computer
vision, computer graphics, photogrammetry and remote sensing. 3D city models
have found a wide range of applications in smart cities, autonomous navigation,
urban planning and mapping etc. However, existing datasets for 3D modeling
mainly focus on common objects such as furniture or cars. Lack of building
datasets has become a major obstacle for applying deep learning technology to
specific domains such as urban modeling. In this paper, we present an
urban-scale dataset consisting of more than 160 thousand buildings, along with
corresponding point clouds, mesh and wireframe models, covering 16 cities in
Estonia over about 998 km^2. We extensively evaluate the performance of
state-of-the-art algorithms, including handcrafted- and deep-feature-based
methods. Experimental results indicate that Building3D poses challenges of high
intra-class variance, data imbalance and large-scale noise. Building3D is the
first and largest urban-scale building modeling benchmark, allowing a comparison of supervised
and self-supervised learning methods. We believe that our Building3D will
facilitate future research on urban modeling, aerial path planning, mesh
simplification, and semantic/part segmentation etc.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 21:38:57 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Wang",
"Ruisheng",
""
],
[
"Huang",
"Shangfeng",
""
],
[
"Yang",
"Hongxin",
""
]
] |
new_dataset
| 0.999838 |
2307.11984
|
Mingkui Tan
|
Kunyang Lin, Peihao Chen, Diwei Huang, Thomas H. Li, Mingkui Tan,
Chuang Gan
|
Learning Vision-and-Language Navigation from YouTube Videos
|
Accepted by ICCV 2023
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-and-language navigation (VLN) requires an embodied agent to navigate
in realistic 3D environments using natural language instructions. Existing VLN
methods suffer from training on small-scale environments or unreasonable
path-instruction datasets, limiting the generalization to unseen environments.
Massive numbers of house tour videos on YouTube provide abundant real
navigation experiences and layout information. However, these videos have not
been explored for VLN before. In this paper, we propose to learn an agent from
these videos by creating a large-scale dataset which comprises reasonable
path-instruction pairs from house tour videos and pre-training the agent on it.
To achieve this, we have to tackle the challenges of automatically constructing
path-instruction pairs and exploiting real layout knowledge from raw and
unlabeled videos. To address these, we first leverage an entropy-based method
to construct the nodes of a path trajectory. Then, we propose an action-aware
generator for generating instructions from unlabeled trajectories. Last, we
devise a trajectory judgment pretext task to encourage the agent to mine the
layout knowledge. Experimental results show that our method achieves
state-of-the-art performance on two popular benchmarks (R2R and REVERIE). Code
is available at https://github.com/JeremyLinky/YouTube-VLN
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 05:26:50 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Lin",
"Kunyang",
""
],
[
"Chen",
"Peihao",
""
],
[
"Huang",
"Diwei",
""
],
[
"Li",
"Thomas H.",
""
],
[
"Tan",
"Mingkui",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.980254 |
2307.12004
|
Han Liu
|
Han Liu, Hao Li, Xing Yao, Yubo Fan, Dewei Hu, Benoit Dawant, Vishwesh
Nath, Zhoubing Xu, Ipek Oguz
|
COLosSAL: A Benchmark for Cold-start Active Learning for 3D Medical
Image Segmentation
|
Accepted by MICCAI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Medical image segmentation is a critical task in medical image analysis. In
recent years, deep learning based approaches have shown exceptional performance
when trained on a fully-annotated dataset. However, data annotation is often a
significant bottleneck, especially for 3D medical images. Active learning (AL)
is a promising solution for efficient annotation but requires an initial set of
labeled samples to start active selection. When the entire data pool is
unlabeled, how do we select the samples to annotate as our initial set? This is
also known as the cold-start AL, which permits only one chance to request
annotations from experts without access to previously annotated data.
Cold-start AL is highly relevant in many practical scenarios but has been
under-explored, especially for 3D medical segmentation tasks requiring
substantial annotation effort. In this paper, we present a benchmark named
COLosSAL by evaluating six cold-start AL strategies on five 3D medical image
segmentation tasks from the public Medical Segmentation Decathlon collection.
We perform a thorough performance analysis and explore important open questions
for cold-start AL, such as the impact of budget on different strategies. Our
results show that cold-start AL is still an unsolved problem for 3D
segmentation tasks but some important trends have been observed. The code
repository, data partitions, and baseline results for the complete benchmark
are publicly available at https://github.com/MedICL-VU/COLosSAL.
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 07:19:15 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Liu",
"Han",
""
],
[
"Li",
"Hao",
""
],
[
"Yao",
"Xing",
""
],
[
"Fan",
"Yubo",
""
],
[
"Hu",
"Dewei",
""
],
[
"Dawant",
"Benoit",
""
],
[
"Nath",
"Vishwesh",
""
],
[
"Xu",
"Zhoubing",
""
],
[
"Oguz",
"Ipek",
""
]
] |
new_dataset
| 0.997788 |
2307.12010
|
Jianli Bai
|
Jianli Bai, Xiaowu Zhang, Xiangfu Song, Hang Shao, Qifan Wang, Shujie
Cui, Giovanni Russello
|
CryptoMask : Privacy-preserving Face Recognition
|
18 pages,3 figures, accepted by ICICS2023
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face recognition is a widely-used technique for identification or
verification, where a verifier checks whether a face image matches anyone
stored in a database. However, in scenarios where the database is held by a
third party, such as a cloud server, both parties are concerned about data
privacy. To address this concern, we propose CryptoMask, a privacy-preserving
face recognition system that employs homomorphic encryption (HE) and secure
multi-party computation (MPC). We design a new encoding strategy that leverages
HE properties to reduce communication costs and enable efficient similarity
checks between face images, without expensive homomorphic rotations.
Additionally, CryptoMask leaks less information than existing state-of-the-art
approaches. CryptoMask only reveals whether there is an image matching the
query or not, whereas existing approaches additionally leak sensitive
intermediate distance information. We conduct extensive experiments that
demonstrate CryptoMask's superior performance in terms of computation and
communication. For a database with 100 million 512-dimensional face vectors,
CryptoMask offers ${\thicksim}5 \times$ and ${\thicksim}144 \times$ speed-ups
in terms of computation and communication, respectively.
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 08:10:06 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Bai",
"Jianli",
""
],
[
"Zhang",
"Xiaowu",
""
],
[
"Song",
"Xiangfu",
""
],
[
"Shao",
"Hang",
""
],
[
"Wang",
"Qifan",
""
],
[
"Cui",
"Shujie",
""
],
[
"Russello",
"Giovanni",
""
]
] |
new_dataset
| 0.999597 |
2307.12052
|
Mallikarjun Reddy Dorsala
|
Mallikarjun Reddy Dorsala, V. N. Sastry, Sudhakar Chapram
|
Blockchain-based Cloud Data Deduplication Scheme with Fair Incentives
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
With the rapid development of cloud computing, vast amounts of duplicated
data are being uploaded to the cloud, wasting storage resources. Deduplication
(dedup) is an efficient solution to save storage costs of cloud storage
providers (CSPs) by storing only one copy of the uploaded data. However, cloud
users do not benefit directly from dedup and may be reluctant to dedup their
data. To motivate the cloud users towards dedup, CSPs offer incentives on
storage fees. The problems with the existing dedup schemes are that they do not
consider: (1) correctness - the incentive offered to a cloud user should be
computed correctly without any prejudice. (2) fairness - the cloud user
receives the file link and access rights of the uploaded data if and only if
the CSP receives the storage fee. Meeting these requirements without a trusted
party is non-trivial, and most of the existing dedup schemes do not apply.
Another drawback is that most of the existing schemes emphasize incentives to
cloud users but fail to provide a reliable incentive mechanism.
As public Blockchain networks emulate the properties of trusted parties, in
this paper, we propose a new Blockchain-based dedup scheme to meet the above
requirements. In our scheme, a smart contract computes the incentives on
storage fee, and the fairness rules are encoded into the smart contract for
facilitating fair payments between the CSPs and cloud users. We prove the
correctness and fairness of the proposed scheme. We also design a new incentive
mechanism and show that the scheme is individually rational and incentive
compatible. Furthermore, we conduct experiments by implementing the designed
smart contract on Ethereum local Blockchain network and list the transactional
and financial costs of interacting with the designed smart contract.
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 11:27:05 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Dorsala",
"Mallikarjun Reddy",
""
],
[
"Sastry",
"V. N.",
""
],
[
"Chapram",
"Sudhakar",
""
]
] |
new_dataset
| 0.968114 |
2307.12067
|
Roman Shapovalov
|
Roman Shapovalov, Yanir Kleiman, Ignacio Rocco, David Novotny, Andrea
Vedaldi, Changan Chen, Filippos Kokkinos, Ben Graham, Natalia Neverova
|
Replay: Multi-modal Multi-view Acted Videos for Casual Holography
|
Accepted for ICCV 2023. Roman, Yanir, and Ignacio contributed equally
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Replay, a collection of multi-view, multi-modal videos of humans
interacting socially. Each scene is filmed in high production quality, from
different viewpoints with several static cameras, as well as wearable action
cameras, and recorded with a large array of microphones at different positions
in the room. Overall, the dataset contains over 4000 minutes of footage and
over 7 million timestamped high-resolution frames annotated with camera poses
and partially with foreground masks. The Replay dataset has many potential
applications, such as novel-view synthesis, 3D reconstruction, novel-view
acoustic synthesis, human body and face analysis, and training generative
models. We provide a benchmark for training and evaluating novel-view
synthesis, with two scenarios of different difficulty. Finally, we evaluate
several baseline state-of-the-art methods on the new benchmark.
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 12:24:07 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Shapovalov",
"Roman",
""
],
[
"Kleiman",
"Yanir",
""
],
[
"Rocco",
"Ignacio",
""
],
[
"Novotny",
"David",
""
],
[
"Vedaldi",
"Andrea",
""
],
[
"Chen",
"Changan",
""
],
[
"Kokkinos",
"Filippos",
""
],
[
"Graham",
"Ben",
""
],
[
"Neverova",
"Natalia",
""
]
] |
new_dataset
| 0.99981 |
2307.12128
|
Victor Adewopo
|
Victor Adewopo, Nelly Elsayed, Zag Elsayed, Murat Ozer, Victoria
Wangia-Anderson, Ahmed Abdelgawad
|
AI on the Road: A Comprehensive Analysis of Traffic Accidents and
Accident Detection System in Smart Cities
|
8,8
| null | null | null |
cs.CV cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Accident detection and traffic analysis are critical components of smart city
and autonomous transportation systems that can reduce accident frequency and
severity and improve overall traffic management. This paper presents a
comprehensive analysis of traffic accidents in different regions across the
United States using data from the National Highway Traffic Safety
Administration (NHTSA) Crash Report Sampling System (CRSS). To address the
challenges of accident detection and traffic analysis, this paper proposes a
framework that uses traffic surveillance cameras and action recognition systems
to detect and respond to traffic accidents automatically. Integrating the
proposed framework with emergency services will harness the power of traffic
cameras and machine learning algorithms to create an efficient solution for
responding to traffic accidents and reducing human errors. Advanced
intelligence technologies, such as the proposed accident detection systems in
smart cities, will improve traffic management and reduce traffic accident severity.
Overall, this study provides valuable insights into traffic accidents in the US
and presents a practical solution to enhance the safety and efficiency of
transportation systems.
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 17:08:13 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Adewopo",
"Victor",
""
],
[
"Elsayed",
"Nelly",
""
],
[
"Elsayed",
"Zag",
""
],
[
"Ozer",
"Murat",
""
],
[
"Wangia-Anderson",
"Victoria",
""
],
[
"Abdelgawad",
"Ahmed",
""
]
] |
new_dataset
| 0.998885 |
2307.12145
|
Nathaniel Hanson
|
Nathaniel Hanson, Ahmet Demirkaya, Deniz Erdo\u{g}mu\c{s}, Aron
Stubbins, Ta\c{s}k{\i}n Pad{\i}r, Tales Imbiriba
|
A Vision for Cleaner Rivers: Harnessing Snapshot Hyperspectral Imaging
to Detect Macro-Plastic Litter
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Plastic waste entering riverine systems harms local ecosystems, leading to
negative ecological and economic impacts. Large parcels of plastic waste are
transported from inland to the oceans, leading to a global-scale problem of
floating debris fields. In this context, efficient and automated monitoring of
mismanaged plastic waste is paramount. To address this problem, we analyze the
feasibility of macro-plastic litter detection using computational imaging
approaches in river-like scenarios. We enable near-real-time tracking of
partially submerged plastics by using snapshot Visible-Shortwave Infrared
hyperspectral imaging. Our experiments indicate that imaging strategies
associated with machine learning classification approaches can lead to high
detection accuracy even in challenging scenarios, especially when leveraging
hyperspectral data and nonlinear classifiers. All code, data, and models are
available online:
https://github.com/RIVeR-Lab/hyperspectral_macro_plastic_detection.
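As a minimal stand-in for the classification pipeline described above, the
sketch below trains a nonlinear classifier on synthetic spectral signatures; the
band count, class shapes, and choice of random forests are our assumptions, not
the released models or data.

```python
# Minimal illustration (synthetic data, not the released dataset): pixel-wise
# classification of hyperspectral signatures with a nonlinear classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_bands = 100  # assumed number of VIS-SWIR bands

def synth_spectra(center: float, n: int) -> np.ndarray:
    """Gaussian-bump stand-ins for reflectance spectra, plus noise."""
    bands = np.linspace(0.0, 1.0, n_bands)
    return np.exp(-((bands - center) ** 2) / 0.02) + 0.1 * rng.normal(size=(n, n_bands))

X = np.vstack([synth_spectra(0.3, 500), synth_spectra(0.6, 500)])  # water vs plastic
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```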
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 18:59:27 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Hanson",
"Nathaniel",
""
],
[
"Demirkaya",
"Ahmet",
""
],
[
"Erdoğmuş",
"Deniz",
""
],
[
"Stubbins",
"Aron",
""
],
[
"Padır",
"Taşkın",
""
],
[
"Imbiriba",
"Tales",
""
]
] |
new_dataset
| 0.989607 |
2307.12158
|
Vinicius G. Goecks
|
Ellen Novoseller, Vinicius G. Goecks, David Watkins, Josh Miller,
Nicholas Waytowich
|
DIP-RL: Demonstration-Inferred Preference Learning in Minecraft
|
Paper accepted at The Many Facets of Preference Learning Workshop at
the International Conference on Machine Learning (ICML), Honolulu, Hawaii,
USA, 2023
| null | null | null |
cs.LG cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In machine learning for sequential decision-making, an algorithmic agent
learns to interact with an environment while receiving feedback in the form of
a reward signal. However, in many unstructured real-world settings, such a
reward signal is unknown and humans cannot reliably craft a reward signal that
correctly captures desired behavior. To solve tasks in such unstructured and
open-ended environments, we present Demonstration-Inferred Preference
Reinforcement Learning (DIP-RL), an algorithm that leverages human
demonstrations in three distinct ways, including training an autoencoder,
seeding reinforcement learning (RL) training batches with demonstration data,
and inferring preferences over behaviors to learn a reward function to guide
RL. We evaluate DIP-RL in a tree-chopping task in Minecraft. Results suggest
that the method can guide an RL agent to learn a reward function that reflects
human preferences and that DIP-RL performs competitively relative to baselines.
DIP-RL is inspired by our previous work on combining demonstrations and
pairwise preferences in Minecraft, which was awarded a research prize at the
2022 NeurIPS MineRL BASALT competition, Learning from Human Feedback in
Minecraft. Example trajectory rollouts of DIP-RL and baselines are located at
https://sites.google.com/view/dip-rl.
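To clarify the preference-learning ingredient, here is a hedged sketch of a
Bradley-Terry style loss that trains a reward model to score preferred
trajectory segments higher; the network shape and data are illustrative
assumptions, not DIP-RL's exact formulation.

```python
# Sketch of preference-based reward learning: P(a > b) is modeled from the
# difference of summed segment rewards, trained with cross-entropy.
import torch
import torch.nn as nn

reward_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def preference_loss(seg_a, seg_b, prefer_a):
    """Bradley-Terry: logit = R(a) - R(b), summed over timesteps."""
    r_a = reward_net(seg_a).sum(dim=1).squeeze(-1)
    r_b = reward_net(seg_b).sum(dim=1).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(r_a - r_b, prefer_a)

# Toy batch: 16 pairs of 20-step segments with 32-dim observations.
seg_a, seg_b = torch.randn(16, 20, 32), torch.randn(16, 20, 32)
prefer_a = torch.randint(0, 2, (16,)).float()  # 1 if segment a is preferred

opt.zero_grad()
loss = preference_loss(seg_a, seg_b, prefer_a)
loss.backward()
opt.step()
print(float(loss))
```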
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 20:05:31 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Novoseller",
"Ellen",
""
],
[
"Goecks",
"Vinicius G.",
""
],
[
"Watkins",
"David",
""
],
[
"Miller",
"Josh",
""
],
[
"Waytowich",
"Nicholas",
""
]
] |
new_dataset
| 0.96783 |
2307.12159
|
N\'icolas Barbosa Gomes
|
N\'icolas Barbosa Gomes, Arissa Yoshida, Mateus Roder, Guilherme
Camargo de Oliveira and Jo\~ao Paulo Papa
|
Facial Point Graphs for Amyotrophic Lateral Sclerosis Identification
|
7 pages and 7 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Identifying Amyotrophic Lateral Sclerosis (ALS) in its early stages is
essential for starting treatment promptly, improving the prognosis, and
enhancing the overall well-being of affected individuals. However, early
diagnosis and detection of the disease's signs are not straightforward. A
simpler and cheaper alternative is to analyze the patient's facial expressions
through computational methods. When a patient with ALS engages in specific
actions, e.g., opening their mouth, the movement of specific facial muscles
differs from that observed in a healthy individual. This paper proposes Facial
Point Graphs to learn information from the geometry of facial images to
identify ALS automatically. The experimental outcomes in the Toronto Neuroface
dataset show the proposed approach outperformed state-of-the-art results,
fostering promising developments in the area.
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 20:16:39 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Gomes",
"Nícolas Barbosa",
""
],
[
"Yoshida",
"Arissa",
""
],
[
"Roder",
"Mateus",
""
],
[
"de Oliveira",
"Guilherme Camargo",
""
],
[
"Papa",
"João Paulo",
""
]
] |
new_dataset
| 0.964874 |
2307.12166
|
Sakib Shahriar
|
Kadhim Hayawi, Sakib Shahriar, Sujith Samuel Mathew
|
The Imitation Game: Detecting Human and AI-Generated Texts in the Era of
Large Language Models
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The potential of artificial intelligence (AI)-based large language models
(LLMs) holds considerable promise in revolutionizing education, research, and
practice. However, distinguishing between human-written and AI-generated text
has become a significant challenge. This paper presents a comparative study,
introducing a novel dataset of human-written and LLM-generated texts in
different genres: essays, stories, poetry, and Python code. We employ several
machine learning models to classify the texts. Results demonstrate the efficacy
of these models in discerning between human and AI-generated text, despite the
dataset's limited sample size. However, the task becomes more challenging when
classifying GPT-generated text, particularly in story writing. The results
indicate that the models exhibit superior performance in binary classification
tasks, such as distinguishing human-generated text from a specific LLM,
compared to the more complex multiclass tasks that involve discerning among
human-generated and multiple LLMs. Our findings provide insightful implications
for AI text detection while our dataset paves the way for future research in
this evolving area.
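A toy sketch of the binary human-vs-AI setup follows, using TF-IDF features
with a linear classifier on made-up examples; it is illustrative only and does
not use the paper's dataset or models.

```python
# Toy human-vs-AI text classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["The rain stopped just before dusk, so we walked home.",
               "She laughed at the joke even though she'd heard it before."]
ai_texts = ["In conclusion, the multifaceted topic offers numerous benefits.",
            "Overall, it is important to note that various factors contribute."]

X = human_texts + ai_texts
y = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(X, y)
print(clf.predict(["It is important to note that the benefits are numerous."]))
```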
|
[
{
"version": "v1",
"created": "Sat, 22 Jul 2023 21:00:14 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Hayawi",
"Kadhim",
""
],
[
"Shahriar",
"Sakib",
""
],
[
"Mathew",
"Sujith Samuel",
""
]
] |
new_dataset
| 0.998633 |
2307.12212
|
Srivatsan Sridhar
|
Srivatsan Sridhar, Onur Ascigil, Navin Keizer, Fran\c{c}ois Genon,
S\'ebastien Pierre, Yiannis Psaras, Etienne Rivi\`ere, Micha{\l} Kr\'ol
|
Content Censorship in the InterPlanetary File System
|
15 pages (including references), 15 figures. Accepted to be published
at the Network and Distributed System Security (NDSS) Symposium 2024
| null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The InterPlanetary File System (IPFS) is currently the largest decentralized
storage solution in operation, with thousands of active participants and
millions of daily content transfers. IPFS is used as remote data storage for
numerous blockchain-based smart contracts, Non-Fungible Tokens (NFT), and
decentralized applications.
We present a content censorship attack that can be executed with minimal
effort and cost, and that prevents the retrieval of any chosen content in the
IPFS network. The attack exploits a conceptual issue in a core component of
IPFS, the Kademlia Distributed Hash Table (DHT), which is used to resolve
content IDs to peer addresses. We provide efficient detection and mitigation
mechanisms for this vulnerability. Our mechanisms achieve a 99.6\% detection
rate and mitigate 100\% of the detected attacks with minimal signaling and
computational overhead. We followed responsible disclosure procedures, and our
countermeasures are scheduled for deployment in the future versions of IPFS.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 03:02:32 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Sridhar",
"Srivatsan",
""
],
[
"Ascigil",
"Onur",
""
],
[
"Keizer",
"Navin",
""
],
[
"Genon",
"François",
""
],
[
"Pierre",
"Sébastien",
""
],
[
"Psaras",
"Yiannis",
""
],
[
"Rivière",
"Etienne",
""
],
[
"Król",
"Michał",
""
]
] |
new_dataset
| 0.996372 |
2307.12216
|
Masoud Zabihi
|
Masoud Zabihi, Yanyue Xie, Zhengang Li, Peiyan Dong, Geng Yuan, Olivia
Chen, Massoud Pedram, Yanzhi Wang
|
A Life-Cycle Energy and Inventory Analysis of Adiabatic
Quantum-Flux-Parametron Circuits
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The production process of superconductive integrated circuits is complex and
consumes significant amounts of resources and energy. Therefore, it is crucial
to evaluate the environmental impact of this emerging technology. An attractive
option for the next generation of superconductive technology is the Adiabatic
Quantum-Flux-Parametron (AQFP) device. This study is the first to present a
comprehensive process-based life-cycle assessment (LCA) and inventory analysis
of AQFP integrated circuits. To generate relevant outcomes, we conduct a
comparative LCA that included the bulk CMOS technology. The inventory analysis
considered the manufacturing, assembly, and use phases of the circuits. To
ensure a fair assessment, we choose the 32-bit AQFP RISC-V single-core
processor as the reference functional unit and compare its performance with
that of a CMOS counterpart. Our findings reveal that the AQFP processor
consumes several orders of magnitude less energy during the use phase than its
CMOS counterpart. Consequently, the total life cycle energy (which encompasses
manufacturing and assembly energies) of AQFP integrated circuits improves by at
least two orders of magnitude.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 03:35:24 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Zabihi",
"Masoud",
""
],
[
"Xie",
"Yanyue",
""
],
[
"Li",
"Zhengang",
""
],
[
"Dong",
"Peiyan",
""
],
[
"Yuan",
"Geng",
""
],
[
"Chen",
"Olivia",
""
],
[
"Pedram",
"Massoud",
""
],
[
"Wang",
"Yanzhi",
""
]
] |
new_dataset
| 0.990976 |
2307.12241
|
Monika Gahalawat
|
Monika Gahalawat, Raul Fernandez Rojas, Tanaya Guha, Ramanathan
Subramanian, Roland Goecke
|
Explainable Depression Detection via Head Motion Patterns
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
While depression has been studied via multimodal non-verbal behavioural cues,
head motion behaviour has not received much attention as a biomarker. This
study demonstrates the utility of fundamental head-motion units, termed
\emph{kinemes}, for depression detection by adopting two distinct approaches,
and employing distinctive features: (a) discovering kinemes from head motion
data corresponding to both depressed patients and healthy controls, and (b)
learning kineme patterns only from healthy controls, and computing statistics
derived from reconstruction errors for both the patient and control classes.
Employing machine learning methods, we evaluate depression classification
performance on the \emph{BlackDog} and \emph{AVEC2013} datasets. Our findings
indicate that: (1) head motion patterns are effective biomarkers for detecting
depressive symptoms, and (2) explanatory kineme patterns consistent with prior
findings can be observed for the two classes. Overall, we achieve peak F1
scores of 0.79 and 0.82, respectively, over BlackDog and AVEC2013 for binary
classification over episodic \emph{thin-slices}, and a peak F1 of 0.72 over
videos for AVEC2013.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 06:39:51 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Gahalawat",
"Monika",
""
],
[
"Rojas",
"Raul Fernandez",
""
],
[
"Guha",
"Tanaya",
""
],
[
"Subramanian",
"Ramanathan",
""
],
[
"Goecke",
"Roland",
""
]
] |
new_dataset
| 0.999525 |
2307.12242
|
Zhihan Jiang
|
Zhihan Jiang, Handi Chen, Rui Zhou, Jing Deng, Xinchen Zhang, Running
Zhao, Cong Xie, Yifang Wang, Edith C.H. Ngai
|
HealthPrism: A Visual Analytics System for Exploring Children's Physical
and Mental Health Profiles with Multimodal Data
|
11 pages, 6 figures, Accepted by IEEE VIS23
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The correlation between children's personal and family characteristics (e.g.,
demographics and socioeconomic status) and their physical and mental health
status has been extensively studied across various research domains, such as
public health, medicine, and data science. Such studies can provide insights
into the underlying factors affecting children's health and aid in the
development of targeted interventions to improve their health outcomes.
However, with the availability of multiple data sources, including context data
(i.e., the background information of children) and motion data (i.e., sensor
data measuring activities of children), new challenges have arisen due to the
large-scale, heterogeneous, and multimodal nature of the data. Existing
statistical hypothesis-based and learning model-based approaches have been
inadequate for comprehensively analyzing the complex correlation between
multimodal features and multi-dimensional health outcomes due to the limited
information revealed. In this work, we first distill a set of design
requirements from multiple levels through conducting a literature review and
iteratively interviewing 11 experts from multiple domains (e.g., public health
and medicine). Then, we propose HealthPrism, an interactive visual
analytics system for assisting researchers in exploring the importance and
influence of various context and motion features on children's health status
from multi-level perspectives. Within HealthPrism, a multimodal learning model
with a gate mechanism is proposed for health profiling and cross-modality
feature importance comparison. A set of visualization components is designed
for experts to explore and understand multimodal data freely. We demonstrate
the effectiveness and usability of HealthPrism through quantitative evaluation
of the model performance, case studies, and expert interviews in associated
domains.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 06:41:27 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Jiang",
"Zhihan",
""
],
[
"Chen",
"Handi",
""
],
[
"Zhou",
"Rui",
""
],
[
"Deng",
"Jing",
""
],
[
"Zhang",
"Xinchen",
""
],
[
"Zhao",
"Running",
""
],
[
"Xie",
"Cong",
""
],
[
"Wang",
"Yifang",
""
],
[
"Ngai",
"Edith C. H.",
""
]
] |
new_dataset
| 0.986649 |
2307.12285
|
Sara Jafarbeiki
|
Sara Jafarbeiki, Amin Sakzad, Ron Steinfeld, Shabnam Kasra
Kermanshahi, Chandra Thapa, Yuki Kume
|
ACE: A Consent-Embedded privacy-preserving search on genomic database
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce ACE, a consent-embedded searchable encryption
scheme. ACE enables dynamic consent management by supporting the physical
deletion of associated data at the time of consent revocation. This ensures
instant real deletion of data, aligning with privacy regulations and preserving
individuals' rights. We evaluate ACE in the context of genomic databases,
demonstrating its ability to perform the addition and deletion of genomic
records and related information based on ID, which especially complies with the
requirements of deleting information of a particular data owner. To formally
prove that ACE is secure under non-adaptive attacks, we present two new
definitions of forward and backward privacy. We also define a new hard problem,
which we call D-ACE, that facilitates the proof of our theorem (we formally
prove its hardness by a security reduction from DDH to D-ACE). We finally
present implementation results to evaluate the performance of ACE.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 10:30:37 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Jafarbeiki",
"Sara",
""
],
[
"Sakzad",
"Amin",
""
],
[
"Steinfeld",
"Ron",
""
],
[
"Kermanshahi",
"Shabnam Kasra",
""
],
[
"Thapa",
"Chandra",
""
],
[
"Kume",
"Yuki",
""
]
] |
new_dataset
| 0.995554 |
2307.12292
|
Giulio Turrisi
|
Ilyass Taouil, Giulio Turrisi, Daniel Schleich, Victor Barasuol,
Claudio Semini, Sven Behnke
|
Quadrupedal Footstep Planning using Learned Motion Models of a Black-Box
Controller
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Legged robots are increasingly entering new domains and applications,
including search and rescue, inspection, and logistics. However, for such
systems to be valuable in real-world scenarios, they must be able to
autonomously and robustly navigate irregular terrains. In many cases, robots
that are sold on the market do not provide such abilities, being able to
perform only blind locomotion. Furthermore, their controller cannot be easily
modified by the end-user, requiring a new and time-consuming control synthesis.
In this work, we present a fast local motion planning pipeline that extends the
capabilities of a black-box walking controller that is only able to track
high-level reference velocities. More precisely, we learn a set of motion
models for such a controller that maps high-level velocity commands to Center
of Mass (CoM) and footstep motions. We then integrate these models with a
variant of the A* algorithm to plan the CoM trajectory, footstep sequences,
and corresponding high-level velocity commands based on visual information,
allowing the quadruped to safely traverse irregular terrains on demand.
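For readers unfamiliar with the search component, a generic A* sketch over a toy
grid is shown below; the paper plans over CoM trajectories, footstep sequences,
and velocity commands with learned motion models rather than over a plain grid.

```python
# Generic A* sketch (illustrative only, not the paper's planner).
import heapq

def a_star(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; returns a path as a list of cells."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # detours around the obstacle row
```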
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 11:07:45 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Taouil",
"Ilyass",
""
],
[
"Turrisi",
"Giulio",
""
],
[
"Schleich",
"Daniel",
""
],
[
"Barasuol",
"Victor",
""
],
[
"Semini",
"Claudio",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.956196 |
2307.12301
|
Chen-Han Tsai
|
Chen-Han Tsai, Yu-Shao Peng
|
RANSAC-NN: Unsupervised Image Outlier Detection using RANSAC
|
19 pages, 18 figures
| null | null | null |
cs.CV cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Image outlier detection (OD) is crucial for ensuring the quality and accuracy
of image datasets used in computer vision tasks. The majority of OD algorithms,
however, have not been targeted toward image data. Consequently, the results of
applying such algorithms to images are often suboptimal. In this work, we
propose RANSAC-NN, a novel unsupervised OD algorithm specifically designed for
images. By comparing images in a RANSAC-based approach, our algorithm
automatically predicts the outlier score of each image without additional
training or label information. We evaluate RANSAC-NN against state-of-the-art
OD algorithms on 15 diverse datasets. Without any hyperparameter tuning,
RANSAC-NN consistently performs favorably in contrast to other algorithms in
almost every dataset category. Furthermore, we provide a detailed analysis to
understand each RANSAC-NN component, and we demonstrate its potential
applications in mislabeled image detection. Code for RANSAC-NN is provided at
https://github.com/mxtsai/ransac-nn
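The following is a speculative sketch of one plausible RANSAC-style scorer, not
the authors' exact algorithm: repeatedly sample small candidate inlier sets of
image features, and score each image by its distance to the sampled consensus,
so that images rarely agreeing with any consensus receive high outlier scores.

```python
# Speculative RANSAC-style outlier scoring over image feature vectors.
import numpy as np

def ransac_outlier_scores(feats: np.ndarray, n_trials: int = 200,
                          sample_size: int = 8, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    n = feats.shape[0]
    scores = np.zeros(n)
    for _ in range(n_trials):
        idx = rng.choice(n, size=sample_size, replace=False)
        consensus = feats[idx].mean(axis=0)          # hypothesis from the sample
        scores += np.linalg.norm(feats - consensus, axis=1)
    return scores / n_trials                         # higher = more outlier-like

feats = np.vstack([np.random.default_rng(1).normal(0, 1, (95, 128)),
                   np.random.default_rng(2).normal(5, 1, (5, 128))])  # 5 outliers
print(np.argsort(ransac_outlier_scores(feats))[-5:])  # expects indices 95..99
```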
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 11:50:27 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Tsai",
"Chen-Han",
""
],
[
"Peng",
"Yu-Shao",
""
]
] |
new_dataset
| 0.994053 |
2307.12302
|
Andrzej Murawski
|
Alex Dixon and Andrzej S. Murawski
|
Saturating automata for game semantics
|
Presented at MFPS 2023
| null | null | null |
cs.PL cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Saturation is a fundamental game-semantic property satisfied by strategies
that interpret higher-order concurrent programs. It states that the strategy
must be closed under certain rearrangements of moves, and corresponds to the
intuition that program moves (P-moves) may depend only on moves made by the
environment (O-moves).
We propose an automata model over an infinite alphabet, called saturating
automata, for which all accepted languages are guaranteed to satisfy a closure
property mimicking saturation.
We show how to translate the finitary fragment of Idealized Concurrent Algol
(FICA) into saturating automata, confirming their suitability for modelling
higher-order concurrency. Moreover, we find that, for terms in normal form, the
resultant automaton has linearly many transitions and states with respect to
term size, and can be constructed in polynomial time. This is in contrast to
earlier attempts at finding automata-theoretic models of FICA, which did not
guarantee saturation and involved an exponential blow-up during translation,
even for normal forms.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 12:05:04 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Dixon",
"Alex",
""
],
[
"Murawski",
"Andrzej S.",
""
]
] |
new_dataset
| 0.998374 |
2307.12324
|
Shuo Li
|
Zhijun Ding, Cong He and Shuo Li
|
EnPAC: Petri Net Model Checking for Linear Temporal Logic
|
11 pages, 5 figures
| null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State generation and exploration (counterexample search) are the two cores of
explicit-state Petri net model checking for linear temporal logic (LTL).
Traditional state generation maintains an auxiliary structure to reduce the
computation of all transitions and frequently encodes/decodes to read each
encoded state. We present an optimized on-demand calculation of enabled
transitions via dynamic firesets, which avoids such a structure. We also
propose a direct read/write (DRW) operation on encoded markings, without
decoding and re-encoding, to make state generation faster and reduce memory
consumption. To search counterexamples more quickly under an on-the-fly
framework, we add heuristic information to the Büchi automaton to guide the
exploration in the direction of accepting states.
The above strategies can optimize existing methods for LTL model checking. We
implement these optimization strategies in a Petri net model-checking tool
called EnPAC (Enhanced Petri-net Analyser and Checker) for linear temporal
logic. Then, we evaluate it on the benchmarks of MCC (Model Checking Contest),
which shows a drastic improvement over the existing methods.
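To illustrate the on-demand computation of enabled transitions, here is a
minimal place/transition-net sketch of our own; it is not EnPAC's
implementation and omits the encoding, DRW, and heuristic machinery.

```python
# Minimal place/transition net: compute enabled transitions on demand from the
# current marking instead of maintaining an auxiliary structure.
marking = {"p1": 1, "p2": 0, "p3": 2}

transitions = {
    "t1": {"pre": {"p1": 1}, "post": {"p2": 1}},
    "t2": {"pre": {"p2": 1}, "post": {"p3": 1}},
    "t3": {"pre": {"p3": 2}, "post": {"p1": 1}},
}

def enabled(marking, t):
    """A transition is enabled iff every pre-place holds enough tokens."""
    return all(marking.get(p, 0) >= k for p, k in transitions[t]["pre"].items())

def fire(marking, t):
    """Return the successor marking after firing transition t."""
    m = dict(marking)
    for p, k in transitions[t]["pre"].items():
        m[p] -= k
    for p, k in transitions[t]["post"].items():
        m[p] = m.get(p, 0) + k
    return m

fireset = [t for t in transitions if enabled(marking, t)]
print(fireset)              # ['t1', 't3']
print(fire(marking, "t1"))  # {'p1': 0, 'p2': 1, 'p3': 2}
```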
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 13:39:36 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Ding",
"Zhijun",
""
],
[
"He",
"Cong",
""
],
[
"Li",
"Shuo",
""
]
] |
new_dataset
| 0.997069 |
2307.12332
|
Mohammad Hadi Goldani
|
Mohammad Hadi Goldani, Reza Safabakhsh, and Saeedeh Momtazi
|
X-CapsNet For Fake News Detection
| null | null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
News consumption has significantly increased with the growing popularity and
use of web-based forums and social media. This sets the stage for misinforming
and confusing people. To help reduce the impact of misinformation on users'
potential health-related decisions and other intents, it is desired to have
machine learning models to detect and combat fake news automatically. This
paper proposes a novel transformer-based model using Capsule Neural
Networks (CapsNet), called X-CapsNet. This model includes a CapsNet with a
dynamic routing algorithm parallelized with a size-based classifier for
detecting short and long fake news statements. We use two size-based classifiers, a Deep
Convolutional Neural Network (DCNN) for detecting long fake news statements and
a Multi-Layer Perceptron (MLP) for detecting short news statements. To resolve
the problem of representing short news statements, we use indirect news
features created by concatenating the vector of the news speaker's profile with
a vector of the polarity, sentiment, and word counts of the news statements.
the proposed architecture, we use the Covid-19 and the Liar datasets. The
results, in terms of the F1-score for the Covid-19 dataset and accuracy for the
Liar dataset, show that our models perform better than the state-of-the-art
baselines.
|
[
{
"version": "v1",
"created": "Sun, 23 Jul 2023 13:58:00 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Goldani",
"Mohammad Hadi",
""
],
[
"Safabakhsh",
"Reza",
""
],
[
"Momtazi",
"Saeedeh",
""
]
] |
new_dataset
| 0.992165 |
2307.12465
|
Naman Jain
|
Naman Jain, Shubham Gandhi, Atharv Sonwane, Aditya Kanade, Nagarajan
Natarajan, Suresh Parthasarathy, Sriram Rajamani, and Rahul Sharma
|
StaticFixer: From Static Analysis to Static Repair
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Static analysis tools are traditionally used to detect and flag programs that
violate properties. We show that static analysis tools can also be used to
perturb programs that satisfy a property to construct variants that violate the
property. Using this insight, we can construct datasets of unsafe-safe
program pairs, and learn strategies to automatically repair property
violations. We present a system called \sysname, which automatically repairs
information flow vulnerabilities using this approach. Since information flow
properties are non-local (both to check and repair), \sysname also introduces a
novel domain specific language (DSL) and strategy learning algorithms for
synthesizing non-local repairs. We use \sysname to synthesize strategies for
repairing two types of information flow vulnerabilities, unvalidated dynamic
calls and cross-site scripting, and show that \sysname successfully repairs
several hundred vulnerabilities from open source {\sc JavaScript} repositories,
outperforming neural baselines built using {\sc CodeT5} and {\sc Codex}. Our
datasets can be downloaded from \url{http://aka.ms/StaticFixer}.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 01:29:21 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Jain",
"Naman",
""
],
[
"Gandhi",
"Shubham",
""
],
[
"Sonwane",
"Atharv",
""
],
[
"Kanade",
"Aditya",
""
],
[
"Natarajan",
"Nagarajan",
""
],
[
"Parthasarathy",
"Suresh",
""
],
[
"Rajamani",
"Sriram",
""
],
[
"Sharma",
"Rahul",
""
]
] |
new_dataset
| 0.998511 |
2307.12518
|
Menglin Kong
|
Menglin Kong, Shaojie Zhao, Juan Cheng, Xingquan Li, Ri Su, Muzhou
Hou, Cong Cao
|
FaFCNN: A General Disease Classification Framework Based on Feature
Fusion Neural Networks
| null | null | null | null |
cs.LG cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are two fundamental problems in applying deep learning/machine learning
methods to disease classification tasks: one is the insufficient number and
poor quality of training samples; the other is how to effectively fuse
multiple source features and thus train robust classification models. To
address these problems, inspired by the way humans learn knowledge, we
propose the Feature-aware Fusion Correlation Neural Network (FaFCNN), which
introduces a feature-aware interaction module and a feature alignment module
based on domain adversarial learning. This is a general framework for disease
classification, and FaFCNN improves the way existing methods obtain sample
correlation features. The experimental results show that training using
augmented features obtained by a pre-trained gradient boosting decision tree
yields larger performance gains than random-forest-based methods. On the
low-quality dataset with a large amount of missing data in our setup, FaFCNN
obtains a consistently optimal performance compared to competitive baselines.
In addition, extensive experiments demonstrate the robustness of the proposed
method and the effectiveness of each component of the model. (Accepted at
IEEE SMC 2023.)
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 04:23:08 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Kong",
"Menglin",
""
],
[
"Zhao",
"Shaojie",
""
],
[
"Cheng",
"Juan",
""
],
[
"Li",
"Xingquan",
""
],
[
"Su",
"Ri",
""
],
[
"Hou",
"Muzhou",
""
],
[
"Cao",
"Cong",
""
]
] |
new_dataset
| 0.968825 |
2307.12547
|
Palash Dey
|
Palash Dey, Sudeshna Kolay, and Sipra Singh
|
Knapsack: Connectedness, Path, and Shortest-Path
|
Under review
| null | null | null |
cs.DS cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the knapsack problem with graph theoretic constraints. That is, we
assume that there exists a graph structure on the set of items of knapsack and
the solution also needs to satisfy certain graph theoretic properties on top of
knapsack constraints. In particular, in the connected knapsack problem we need
to compute a connected subset of items of maximum value subject to the
knapsack size constraint. We show that this problem is strongly
NP-complete even for graphs of maximum degree four and NP-complete even for
star graphs. On the other hand, we develop an algorithm running in time
$O\left(2^{tw\log tw}\cdot\text{poly}(\min\{s^2,d^2\})\right)$ where $tw,s,d$
are respectively treewidth of the graph, size, and target value of the
knapsack. We further exhibit a $(1-\epsilon)$ factor approximation algorithm
running in time $O\left(2^{tw\log tw}\cdot\text{poly}(n,1/\epsilon)\right)$ for
every $\epsilon>0$. We show similar results for several other graph theoretic
properties, namely path and shortest-path under the problem names path-knapsack
and shortestpath-knapsack. Our results seem to indicate that
connected-knapsack is the computationally hardest, followed by path-knapsack
and shortestpath-knapsack.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 06:25:58 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Dey",
"Palash",
""
],
[
"Kolay",
"Sudeshna",
""
],
[
"Singh",
"Sipra",
""
]
] |
new_dataset
| 0.998396 |
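As a concrete illustration of the connected knapsack problem defined in the abstract above, the following Python sketch solves tiny instances by exhaustive search. It is exponential and only meant to pin down the problem statement, not the paper's treewidth-based algorithm; the toy graph, values, and sizes are our own.

```python
# Exhaustive-search sketch of the connected knapsack problem: maximum-value
# connected item subset under a size budget. For tiny graphs only.
from itertools import combinations

def is_connected(adj, subset):
    subset = set(subset)
    if not subset:
        return True
    stack, seen = [next(iter(subset))], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(u for u in adj[v] if u in subset)
    return seen == subset

def connected_knapsack(adj, value, size, budget):
    best, best_set = 0, ()
    for r in range(1, len(adj) + 1):
        for sub in combinations(range(len(adj)), r):
            if (sum(size[i] for i in sub) <= budget
                    and is_connected(adj, sub)):
                v = sum(value[i] for i in sub)
                if v > best:
                    best, best_set = v, sub
    return best, best_set

# Toy instance on the path graph 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(connected_knapsack(adj, value=[6, 1, 4, 6], size=[2, 1, 2, 3], budget=5))
# -> (11, (0, 1, 2)); the unconstrained knapsack would pick items 0 and 3
#    (value 12), but they are not connected in the graph.
```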
2307.12573
|
Yuanzhi Liang
|
Yuanzhi Liang, Linchao Zhu, Yi Yang
|
Tachikuma: Understanding Complex Interactions with Multi-Character and
Novel Objects by Large Language Models
|
Preliminary version of an ongoing work
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in natural language processing and Large Language Models (LLMs) have
enabled AI agents to simulate human-like interactions within virtual worlds.
However, these interactions still face limitations in complexity and
flexibility, particularly in scenarios involving multiple characters and novel
objects. Pre-defining all interactable objects in the agent's world model
presents challenges, and conveying implicit intentions to multiple characters
through complex interactions remains difficult. To address these issues, we
propose integrating virtual Game Masters (GMs) into the agent's world model,
drawing inspiration from Tabletop Role-Playing Games (TRPGs). GMs play a
crucial role in overseeing information, estimating players' intentions,
providing environment descriptions, and offering feedback, compensating for
current world model deficiencies. To facilitate future explorations for complex
interactions, we introduce a benchmark named Tachikuma, comprising a Multiple
character and novel Object based interaction Estimation (MOE) task and a
supporting dataset. MOE challenges models to understand characters' intentions
and accurately determine their actions within intricate contexts involving
multi-character and novel object interactions. In addition, the dataset captures
log data from real-time communications during gameplay, providing diverse,
grounded, and complex interactions for further explorations. Finally, we
present a simple prompting baseline and evaluate its performance, demonstrating
its effectiveness in enhancing interaction understanding. We hope that our
dataset and task will inspire further research in complex interactions with
natural language, fostering the development of more advanced AI agents.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 07:40:59 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Liang",
"Yuanzhi",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Yang",
"Yi",
""
]
] |
new_dataset
| 0.997859 |
2307.12588
|
Alireza Ahmadi
|
Alireza Ahmadi, Michael Halstead, and Chris McCool
|
BonnBot-I: A Precise Weed Management and Crop Monitoring Platform
|
2022 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS)
| null | null | null |
cs.RO cs.AR cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Cultivation and weeding are two of the primary tasks performed by farmers
today. A recent challenge for weeding is the desire to reduce herbicide and
pesticide treatments while maintaining crop quality and quantity. In this paper
we introduce BonnBot-I, a precise weed management platform that can also
perform field monitoring. Driven by crop monitoring approaches that can
accurately locate and classify plants (weed and crop), we further improve their
performance by fusing the platform's available GNSS and wheel odometry. This
improves tracking accuracy of our crop monitoring approach from a normalized
average error of 8.3% to 3.5%, evaluated on a new publicly available corn
dataset. We also present a novel arrangement of weeding tools mounted on linear
actuators evaluated in simulated environments. We replicate weed distributions
from a real field, using the results from our monitoring approach, and show the
validity of our work-space division techniques which require significantly less
movement (a 50% reduction) to achieve similar results. Overall, BonnBot-I is a
significant step forward in precise weed management with a novel method of
selectively spraying and controlling weeds in an arable field.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 07:59:54 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Ahmadi",
"Alireza",
""
],
[
"Halstead",
"Michael",
""
],
[
"McCool",
"Chris",
""
]
] |
new_dataset
| 0.995265 |
2307.12591
|
Yuyin Zhou
|
Yiqing Wang, Zihan Li, Jieru Mei, Zihao Wei, Li Liu, Chen Wang,
Shengtian Sang, Alan Yuille, Cihang Xie, Yuyin Zhou
|
SwinMM: Masked Multi-view with Swin Transformers for 3D Medical Image
Segmentation
|
MICCAI 2023; project page: https://github.com/UCSC-VLAA/SwinMM/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in large-scale Vision Transformers have made significant
strides in improving pre-trained models for medical image segmentation.
However, these methods face a notable challenge in acquiring a substantial
amount of pre-training data, particularly within the medical field. To address
this limitation, we present Masked Multi-view with Swin Transformers (SwinMM),
a novel multi-view pipeline for enabling accurate and data-efficient
self-supervised medical image analysis. Our strategy harnesses the potential of
multi-view information by incorporating two principal components. In the
pre-training phase, we deploy a masked multi-view encoder devised to
concurrently train masked multi-view observations through a range of diverse
proxy tasks. These tasks span image reconstruction, rotation, contrastive
learning, and a novel task that employs a mutual learning paradigm. This new
task capitalizes on the consistency between predictions from various
perspectives, enabling the extraction of hidden multi-view information from 3D
medical data. In the fine-tuning stage, a cross-view decoder is developed to
aggregate the multi-view information through a cross-attention block. Compared
with the previous state-of-the-art self-supervised learning method Swin UNETR,
SwinMM demonstrates a notable advantage on several medical image segmentation
tasks. It allows for a smooth integration of multi-view information,
significantly boosting both the accuracy and data-efficiency of the model. Code
and models are available at https://github.com/UCSC-VLAA/SwinMM/.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 08:06:46 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Wang",
"Yiqing",
""
],
[
"Li",
"Zihan",
""
],
[
"Mei",
"Jieru",
""
],
[
"Wei",
"Zihao",
""
],
[
"Liu",
"Li",
""
],
[
"Wang",
"Chen",
""
],
[
"Sang",
"Shengtian",
""
],
[
"Yuille",
"Alan",
""
],
[
"Xie",
"Cihang",
""
],
[
"Zhou",
"Yuyin",
""
]
] |
new_dataset
| 0.994693 |
2307.12593
|
Lidija Stanovnik
|
Lidija Stanovnik, Miha Mo\v{s}kon, Miha Mraz
|
In search of maximum non-overlapping codes
| null | null | null | null |
cs.IT cs.DM math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Non-overlapping codes are block codes that have arisen in diverse contexts of
computer science and biology. Applications typically require finding
non-overlapping codes with large cardinalities, but the maximum size of
non-overlapping codes has been determined only for cases where the codeword
length divides the size of the alphabet, and for codes with codewords of length
two or three. For all other alphabet sizes and codeword lengths no
computationally feasible way to identify non-overlapping codes that attain the
maximum size has been found to date. Herein we characterize maximal
non-overlapping codes. We formulate the maximum non-overlapping code problem as
an integer optimization problem and determine necessary conditions for
optimality of a non-overlapping code. Moreover, we solve several instances of
the optimization problem to show that the hitherto known constructions do not
generate the optimal codes for many alphabet sizes and codeword lengths. We
also evaluate the number of distinct maximum non-overlapping codes.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 08:09:02 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Stanovnik",
"Lidija",
""
],
[
"Moškon",
"Miha",
""
],
[
"Mraz",
"Miha",
""
]
] |
new_dataset
| 0.999109 |
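The definition behind the abstract above can be made concrete with a short Python sketch: a checker for the non-overlapping property and a brute-force backtracking search for a maximum code. This is only feasible for tiny alphabets and codeword lengths, and is not the integer-optimization formulation the paper proposes.

```python
# Brute-force sketch: a length-n code is non-overlapping if no proper prefix
# of any codeword equals a proper suffix of any (possibly the same) codeword.
from itertools import product

def is_non_overlapping(code, n):
    return not any(u[:k] == v[-k:]
                   for u in code for v in code for k in range(1, n))

def max_non_overlapping(alphabet, n):
    words = [''.join(w) for w in product(alphabet, repeat=n)]
    best = []

    def extend(start, current):
        nonlocal best
        if len(current) > len(best):
            best = list(current)
        for i in range(start, len(words)):
            if is_non_overlapping(current + [words[i]], n):
                extend(i + 1, current + [words[i]])

    extend(0, [])
    return best

# Ternary alphabet, codeword length 2; consistent with the known maximum for
# length-2 codes (partition the alphabet into start and end symbols).
print(max_non_overlapping('012', 2))   # -> ['01', '02'], size 2
```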
2307.12609
|
Jordan Samhi
|
Jordan Samhi, Marco Alecci, Tegawend\'e F. Bissyand\'e, Jacques Klein
|
A Dataset of Android Libraries
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Android app developers extensively employ code reuse, integrating many
third-party libraries into their apps. While such integration is practical for
developers, it can be challenging for static analyzers to achieve scalability
and precision when such libraries can account for a large part of the app code.
As a direct consequence, when a static analysis is performed, it is common
practice in the literature to only consider developer code --with the
assumption that the sought issues are in developer code rather than in the
libraries. However, analysts need to precisely distinguish between library code
and developer code in Android apps to ensure the effectiveness of static
analysis. Currently, many static analysis approaches rely on white lists of
libraries. However, these white lists are unreliable, as they are inaccurate
and largely non-comprehensive.
In this paper, we propose a new approach to address the lack of comprehensive
and automated solutions for the production of accurate and "always up to date"
sets of third-party libraries. First, we demonstrate the continued need for a
white list of third-party libraries. Second, we propose an automated approach
to produce an accurate and up-to-date set of third-party libraries in the form
of a dataset called AndroLibZoo. Our dataset, which we make available to the
research community, contains, to date, 20,162 libraries and is meant to evolve.
Third, we illustrate the significance of using AndroLibZoo to filter libraries
in recent apps. Fourth, we demonstrate that AndroLibZoo is more suitable than
the current state-of-the-art list for improved static analysis. Finally, we
show how the use of AndroLibZoo can enhance the performance of existing Android
app static analyzers.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 08:36:38 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Samhi",
"Jordan",
""
],
[
"Alecci",
"Marco",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
],
[
"Klein",
"Jacques",
""
]
] |
new_dataset
| 0.999757 |
2307.12648
|
Nikolai Kosmatov
|
Lo\"ic Buckwell and Olivier Gilles and Daniel Gracia P\'erez and
Nikolai Kosmatov
|
Execution at RISC: Stealth JOP Attacks on RISC-V Applications
|
16 pages. arXiv admin note: text overlap with arXiv:2211.16212
| null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
RISC-V is a recently developed open instruction set architecture gaining a
lot of attention. To achieve lasting security on these systems and design
efficient countermeasures, a better understanding of vulnerabilities to novel
and potential future attacks is mandatory. This paper demonstrates that RISC-V
is susceptible to Jump-Oriented Programming, a class of complex code-reuse
attacks. We provide an analysis of new dispatcher gadgets we discovered, and
show how they can be used together in order to build a stealth attack,
bypassing existing protections. A proof-of-concept attack is implemented on an
embedded web server compiled for RISC-V, in which we introduced a
vulnerability, allowing an attacker to remotely read an arbitrary file from the
host machine.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 09:39:21 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Buckwell",
"Loïc",
""
],
[
"Gilles",
"Olivier",
""
],
[
"Pérez",
"Daniel Gracia",
""
],
[
"Kosmatov",
"Nikolai",
""
]
] |
new_dataset
| 0.978628 |
2307.12664
|
Giulio Turrisi
|
Shafeef Omar, Lorenzo Amatucci, Victor Barasuol, Giulio Turrisi,
Claudio Semini
|
SafeSteps: Learning Safer Footstep Planning Policies for Legged Robots
via Model-Based Priors
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a footstep planning policy for quadrupedal locomotion that is able
to directly take into consideration a-priori safety information in its
decisions. At its core, a learning process analyzes terrain patches,
classifying each landing location by its kinematic feasibility, shin collision,
and terrain roughness. This information is then encoded into a small vector
representation and passed as an additional state to the footstep planning
policy, which furthermore proposes only safe footstep locations by applying a
masked variant of the Proximal Policy Optimization (PPO) algorithm. The
performance of the proposed approach is shown by comparative simulations on an
electric quadruped robot walking in different rough terrain scenarios. We show
that violations of the above safety conditions are greatly reduced both during
training and the successive deployment of the policy, resulting in an
inherently safer footstep planner. Furthermore, we show how, as a byproduct,
fewer reward terms are needed to shape the behavior of the policy, which in
turn is able to achieve both better final performance and sample efficiency.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 10:10:24 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Omar",
"Shafeef",
""
],
[
"Amatucci",
"Lorenzo",
""
],
[
"Barasuol",
"Victor",
""
],
[
"Turrisi",
"Giulio",
""
],
[
"Semini",
"Claudio",
""
]
] |
new_dataset
| 0.981116 |
2307.12698
|
Adrien Bardes
|
Adrien Bardes, Jean Ponce, Yann LeCun
|
MC-JEPA: A Joint-Embedding Predictive Architecture for Self-Supervised
Learning of Motion and Content Features
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning of visual representations has focused on learning
content features, which do not capture object motion or location but instead
identify and differentiate objects in images and videos. On the
other hand, optical flow estimation is a task that does not involve
understanding the content of the images on which it is estimated. We unify the
two approaches and introduce MC-JEPA, a joint-embedding predictive architecture
and self-supervised learning approach to jointly learn optical flow and content
features within a shared encoder, demonstrating that the two associated
objectives (the optical flow estimation objective and the self-supervised
learning objective) benefit from each other and thus learn content features
that incorporate motion information. The proposed approach achieves performance
on-par with existing unsupervised optical flow benchmarks, as well as with
common self-supervised learning approaches on downstream tasks such as semantic
segmentation of images and videos.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 11:27:14 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Bardes",
"Adrien",
""
],
[
"Ponce",
"Jean",
""
],
[
"LeCun",
"Yann",
""
]
] |
new_dataset
| 0.985761 |
2307.12718
|
Davide Di Nucci
|
Davide Di Nucci, Alessandro Simoni, Matteo Tomei, Luca Ciuffreda,
Roberto Vezzani, Rita Cucchiara
|
CarPatch: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle
Components
|
Accepted at ICIAP2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Neural Radiance Fields (NeRFs) have gained widespread recognition as a highly
effective technique for representing 3D reconstructions of objects and scenes
derived from sets of images. Despite their efficiency, NeRF models can pose
challenges in certain scenarios such as vehicle inspection, where the lack of
sufficient data or the presence of challenging elements (e.g. reflections)
strongly impact the accuracy of the reconstruction. To address this, we introduce
CarPatch, a novel synthetic benchmark of vehicles. In addition to a set of
images annotated with their intrinsic and extrinsic camera parameters, the
corresponding depth maps and semantic segmentation masks have been generated
for each view. Global and part-based metrics have been defined and used to
evaluate, compare, and better characterize some state-of-the-art techniques.
The dataset is publicly released at
https://aimagelab.ing.unimore.it/go/carpatch and can be used as an evaluation
guide and as a baseline for future work on this challenging topic.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 11:59:07 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Di Nucci",
"Davide",
""
],
[
"Simoni",
"Alessandro",
""
],
[
"Tomei",
"Matteo",
""
],
[
"Ciuffreda",
"Luca",
""
],
[
"Vezzani",
"Roberto",
""
],
[
"Cucchiara",
"Rita",
""
]
] |
new_dataset
| 0.999139 |
2307.12794
|
Paris Koloveas
|
Paris Koloveas, Serafeim Chatzopoulos, Christos Tryfonopoulos,
Thanasis Vergoulis
|
BIP! NDR (NoDoiRefs): A Dataset of Citations From Papers Without DOIs in
Computer Science Conferences and Workshops
| null | null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
In the field of Computer Science, conference and workshop papers serve as
important contributions, carrying substantial weight in research assessment
processes, compared to other disciplines. However, a considerable number of
these papers are not assigned a Digital Object Identifier (DOI), hence their
citations are not reported in widely used citation datasets like OpenCitations
and Crossref, which limits citation analysis. While the Microsoft
Academic Graph (MAG) previously addressed this issue by providing substantial
coverage, its discontinuation has created a void in available data. BIP! NDR
aims to alleviate this issue and enhance the research assessment processes
within the field of Computer Science. To accomplish this, it leverages a
workflow that identifies and retrieves Open Science papers lacking DOIs from
the DBLP Corpus, and by performing text analysis, it extracts citation
information directly from their full text. The current version of the dataset
contains more than 510K citations made by approximately 60K open access
Computer Science conference or workshop papers that, according to DBLP, do not
have a DOI.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 13:43:54 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Koloveas",
"Paris",
""
],
[
"Chatzopoulos",
"Serafeim",
""
],
[
"Tryfonopoulos",
"Christos",
""
],
[
"Vergoulis",
"Thanasis",
""
]
] |
new_dataset
| 0.99977 |
2307.12813
|
Chi Xie
|
Chi Xie, Zhao Zhang, Yixuan Wu, Feng Zhu, Rui Zhao, Shuang Liang
|
Exposing the Troublemakers in Described Object Detection
|
Preprint. Under review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Detecting objects based on language descriptions is a popular task that
includes Open-Vocabulary object Detection (OVD) and Referring Expression
Comprehension (REC). In this paper, we advance them to a more practical setting
called Described Object Detection (DOD) by expanding category names to flexible
language expressions for OVD and overcoming REC's limitation of only
grounding pre-existing objects. We establish the research foundation for DOD
tasks by constructing a Description Detection Dataset ($D^3$), featuring
flexible language expressions and annotating all described objects without
omission. By evaluating previous SOTA methods on $D^3$, we find some
troublemakers that fail current REC, OVD, and bi-functional methods. REC
methods struggle with confidence scores, rejecting negative instances, and
multi-target scenarios, while OVD methods face constraints with long and
complex descriptions. Recent bi-functional methods also do not work well on DOD
due to their separated training procedures and inference strategies for REC and
OVD tasks. Building upon the aforementioned findings, we propose a baseline
that largely improves REC methods by reconstructing the training data and
introducing a binary classification sub-task, outperforming existing methods.
Data and code are available at https://github.com/shikras/d-cube.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 14:06:54 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Xie",
"Chi",
""
],
[
"Zhang",
"Zhao",
""
],
[
"Wu",
"Yixuan",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Rui",
""
],
[
"Liang",
"Shuang",
""
]
] |
new_dataset
| 0.974313 |
2307.12972
|
Hongyang Li
|
Hongyang Li, Hao Zhang, Zhaoyang Zeng, Shilong Liu, Feng Li, Tianhe
Ren, and Lei Zhang
|
DFA3D: 3D Deformable Attention For 2D-to-3D Feature Lifting
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a new operator, called 3D DeFormable Attention
(DFA3D), for 2D-to-3D feature lifting, which transforms multi-view 2D image
features into a unified 3D space for 3D object detection. Existing feature
lifting approaches, such as Lift-Splat-based and 2D attention-based, either use
estimated depth to get pseudo LiDAR features and then splat them to a 3D space,
which is a one-pass operation without feature refinement, or ignore depth and
lift features by 2D attention mechanisms, which achieve finer semantics while
suffering from a depth ambiguity problem. In contrast, our DFA3D-based method
first leverages the estimated depth to expand each view's 2D feature map to 3D
and then utilizes DFA3D to aggregate features from the expanded 3D feature
maps. With the help of DFA3D, the depth ambiguity problem can be effectively
alleviated from the root, and the lifted features can be progressively refined
layer by layer, thanks to the Transformer-like architecture. In addition, we
propose a mathematically equivalent implementation of DFA3D which can
significantly improve its memory efficiency and computational speed. We
integrate DFA3D into several methods that use 2D attention-based feature
lifting with only a few modifications in code and evaluate on the nuScenes
dataset. The experimental results show a consistent improvement of +1.41% mAP on
average, and up to +15.1% mAP improvement when high-quality depth information
is available, demonstrating the superiority, applicability, and huge potential
of DFA3D. The code is available at
https://github.com/IDEA-Research/3D-deformable-attention.git.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 17:49:11 GMT"
}
] | 2023-07-25T00:00:00 |
[
[
"Li",
"Hongyang",
""
],
[
"Zhang",
"Hao",
""
],
[
"Zeng",
"Zhaoyang",
""
],
[
"Liu",
"Shilong",
""
],
[
"Li",
"Feng",
""
],
[
"Ren",
"Tianhe",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.999701 |
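To make the 2D-to-3D feature lifting described in the DFA3D abstract above more tangible, here is a minimal numpy sketch of the depth-based expansion step only; the 3D deformable attention that follows it in the paper is not shown. Shapes and variable names are illustrative assumptions.

```python
# Depth-weighted expansion of a 2D feature map into a 3D frustum, a sketch of
# the "expand each view's 2D feature map to 3D" step (not the DFA3D operator).
import numpy as np

H, W, C, D = 4, 6, 8, 16          # image height/width, channels, depth bins

feat2d = np.random.randn(H, W, C)             # per-view 2D features
depth_logits = np.random.randn(H, W, D)       # per-pixel depth estimates
depth_probs = np.exp(depth_logits)
depth_probs /= depth_probs.sum(-1, keepdims=True)   # softmax over depth bins

# Every pixel's feature is spread along its depth distribution, giving a
# (H, W, D, C) frustum of depth-weighted features.
feat3d = np.einsum('hwc,hwd->hwdc', feat2d, depth_probs)

assert feat3d.shape == (H, W, D, C)
# A well-localized pixel (peaked depth_probs) keeps its feature concentrated
# in one depth bin, which is how depth evidence reduces ambiguity.
```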
2202.04454
|
Min Ye
|
Guodong Li, Min Ye, Sihuang Hu
|
Adjacent-Bits-Swapped Polar codes: A new code construction to speed up
polarization
|
The implementations of all the algorithms in this paper are available
at https://github.com/PlumJelly/ABS-Polar. We rewrote the whole decoding
section and added detailed explanations in this revision.
|
IEEE Transactions on Information Theory ( Volume: 69, Issue: 4,
April 2023)
|
10.1109/TIT.2022.3228862
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The construction of polar codes with code length $n=2^m$ involves $m$ layers
of polar transforms. In this paper, we observe that after each layer of polar
transforms, one can swap certain pairs of adjacent bits to accelerate the
polarization process. More precisely, if the previous bit is more reliable than
its next bit under the successive decoder, then switching the decoding order of
these two adjacent bits will make the reliable bit even more reliable and the
noisy bit even noisier.
Based on this observation, we propose a new family of codes called the
Adjacent-Bits-Swapped (ABS) polar codes. We add a permutation layer after each
polar transform layer in the construction of the ABS polar codes. In order to
choose which pairs of adjacent bits to swap in the permutation layers, we rely
on a new polar transform that combines two independent channels with $4$-ary
inputs. This new polar transform allows us to track the evolution of every pair
of adjacent bits through different layers of polar transforms, and it also
plays an essential role in the Successive Cancellation List (SCL) decoder for
the ABS polar codes. Extensive simulation results show that ABS polar codes
consistently outperform standard polar codes by 0.15dB--0.3dB when we use
CRC-aided SCL decoder with list size $32$ for both codes. The implementations
of all the algorithms in this paper are available at
https://github.com/PlumJelly/ABS-Polar
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 13:29:30 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Aug 2022 09:38:37 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Li",
"Guodong",
""
],
[
"Ye",
"Min",
""
],
[
"Hu",
"Sihuang",
""
]
] |
new_dataset
| 0.991256 |
2208.00657
|
Amir Mohammadian
|
Amir Mohammadian, Foad Ghaderi
|
SiamixFormer: a fully-transformer Siamese network with temporal Fusion
for accurate building detection and change detection in bi-temporal remote
sensing images
| null |
International Journal of Remote Sensing(2023), 44:12, 3660-3678
|
10.1080/01431161.2023.2225228
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building detection and change detection using remote sensing images can help
urban and rescue planning. Moreover, they can be used for building damage
assessment after natural disasters. Currently, most of the existing models for
building detection use only one image (pre-disaster image) to detect buildings.
This is based on the idea that post-disaster images reduce the model's
performance because of the presence of destroyed buildings. In this paper, we
propose a siamese model, called SiamixFormer, which uses pre- and post-disaster
images as input. Our model has two encoders and has a hierarchical transformer
architecture. The output of each stage in both encoders is given to a temporal
transformer for feature fusion, such that the query is generated from
pre-disaster images and the (key, value) pair is generated from post-disaster
images. In this way, temporal features are also considered in feature fusion. Another
advantage of using temporal transformers in feature fusion is that they can
better maintain large receptive fields generated by transformer encoders
compared with CNNs. Finally, the output of the temporal transformer is given to
a simple MLP decoder at each stage. The SiamixFormer model is evaluated on the
xBD and WHU datasets for building detection, and on the LEVIR-CD and CDD
datasets for change detection, and outperforms the state-of-the-art.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 07:35:45 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 08:39:22 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Mohammadian",
"Amir",
""
],
[
"Ghaderi",
"Foad",
""
]
] |
new_dataset
| 0.996535 |
2209.11489
|
Anouk Neerincx
|
Anouk Neerincx
|
Social Robot Scenarios for Real-World Child and Family Care Settings
through Participatory Design
|
Accepted to workshop on participatory design (PD) in human-robot
interaction in RO-MAN 2022
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper discusses a 5-year PhD project, focused upon the implementation of
social robots for general child and family care settings in the Netherlands.
The project is a collaboration with general Dutch family care organisations as
well as specialized child mental health care organisations. The project adopts
a bottom-up, participatory design approach, where end users are included in all
stages of the project. End users consist of children, parents, and family care
professionals, who all have different needs, regarding the social robot
behaviors as well as the participatory design methods. This paper provides
suggestions to deal with these differences in designing social robots for child
mental support in real-world settings.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2022 09:20:18 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 11:03:23 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Neerincx",
"Anouk",
""
]
] |
new_dataset
| 0.991461 |
2212.09648
|
Samuel Cahyawijaya
|
Samuel Cahyawijaya, Holy Lovenia, Alham Fikri Aji, Genta Indra Winata,
Bryan Wilie, Rahmad Mahendra, Christian Wibisono, Ade Romadhony, Karissa
Vincentio, Fajri Koto, Jennifer Santoso, David Moeljadi, Cahya Wirawan,
Frederikus Hudi, Ivan Halim Parmonangan, Ika Alfina, Muhammad Satrio
Wicaksono, Ilham Firdausi Putra, Samsul Rahmadani, Yulianti Oenang, Ali Akbar
Septiandri, James Jaya, Kaustubh D. Dhole, Arie Ardiyanti Suryani, Rifki
Afina Putri, Dan Su, Keith Stevens, Made Nindyatama Nityasya, Muhammad Farid
Adilazuarda, Ryan Ignatius, Ryandito Diandaru, Tiezheng Yu, Vito Ghifari,
Wenliang Dai, Yan Xu, Dyah Damapuspita, Cuk Tho, Ichwanul Muslim Karo Karo,
Tirana Noor Fatyanosa, Ziwei Ji, Pascale Fung, Graham Neubig, Timothy
Baldwin, Sebastian Ruder, Herry Sujaini, Sakriani Sakti, Ayu Purwarianti
|
NusaCrowd: Open Source Initiative for Indonesian NLP Resources
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present NusaCrowd, a collaborative initiative to collect and unify
existing resources for Indonesian languages, including opening access to
previously non-public resources. Through this initiative, we have brought
together 137 datasets and 118 standardized data loaders. The quality of the
datasets has been assessed manually and automatically, and their value is
demonstrated through multiple experiments. NusaCrowd's data collection enables
the creation of the first zero-shot benchmarks for natural language
understanding and generation in Indonesian and the local languages of
Indonesia. Furthermore, NusaCrowd enables the creation of the first multilingual
automatic speech recognition benchmark in Indonesian and the local languages of
Indonesia. Our work strives to advance natural language processing (NLP)
research for languages that are under-represented despite being widely spoken.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 17:28:22 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2022 02:04:13 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jun 2023 17:17:53 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Jul 2023 14:44:45 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Cahyawijaya",
"Samuel",
""
],
[
"Lovenia",
"Holy",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Wilie",
"Bryan",
""
],
[
"Mahendra",
"Rahmad",
""
],
[
"Wibisono",
"Christian",
""
],
[
"Romadhony",
"Ade",
""
],
[
"Vincentio",
"Karissa",
""
],
[
"Koto",
"Fajri",
""
],
[
"Santoso",
"Jennifer",
""
],
[
"Moeljadi",
"David",
""
],
[
"Wirawan",
"Cahya",
""
],
[
"Hudi",
"Frederikus",
""
],
[
"Parmonangan",
"Ivan Halim",
""
],
[
"Alfina",
"Ika",
""
],
[
"Wicaksono",
"Muhammad Satrio",
""
],
[
"Putra",
"Ilham Firdausi",
""
],
[
"Rahmadani",
"Samsul",
""
],
[
"Oenang",
"Yulianti",
""
],
[
"Septiandri",
"Ali Akbar",
""
],
[
"Jaya",
"James",
""
],
[
"Dhole",
"Kaustubh D.",
""
],
[
"Suryani",
"Arie Ardiyanti",
""
],
[
"Putri",
"Rifki Afina",
""
],
[
"Su",
"Dan",
""
],
[
"Stevens",
"Keith",
""
],
[
"Nityasya",
"Made Nindyatama",
""
],
[
"Adilazuarda",
"Muhammad Farid",
""
],
[
"Ignatius",
"Ryan",
""
],
[
"Diandaru",
"Ryandito",
""
],
[
"Yu",
"Tiezheng",
""
],
[
"Ghifari",
"Vito",
""
],
[
"Dai",
"Wenliang",
""
],
[
"Xu",
"Yan",
""
],
[
"Damapuspita",
"Dyah",
""
],
[
"Tho",
"Cuk",
""
],
[
"Karo",
"Ichwanul Muslim Karo",
""
],
[
"Fatyanosa",
"Tirana Noor",
""
],
[
"Ji",
"Ziwei",
""
],
[
"Fung",
"Pascale",
""
],
[
"Neubig",
"Graham",
""
],
[
"Baldwin",
"Timothy",
""
],
[
"Ruder",
"Sebastian",
""
],
[
"Sujaini",
"Herry",
""
],
[
"Sakti",
"Sakriani",
""
],
[
"Purwarianti",
"Ayu",
""
]
] |
new_dataset
| 0.999451 |
2302.01636
|
Nadezhda Semenova Dr.
|
Tatyana Bogatenko, Konstantin Sergeev, Andrei Slepnev, J\"urgen
Kurths, Nadezhda Semenova
|
Symbiosis of an artificial neural network and models of biological
neurons: training and testing
|
6 pages, 7 figures, 2 tables
| null |
10.1063/5.0152703
| null |
cs.NE nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we show the possibility of creating and identifying the
features of an artificial neural network (ANN) which consists of mathematical
models of biological neurons. The FitzHugh--Nagumo (FHN) system is used as an
example of a model demonstrating simplified neuron activity. First, in order to
reveal how biological neurons can be embedded within an ANN, we train an ANN
with nonlinear neurons to solve a basic image recognition problem on the MNIST
database; next, we describe how FHN systems can be introduced into this
trained ANN. Finally, we show that an ANN with FHN systems inside can be
successfully trained and that its accuracy increases. This work opens up great
opportunities for analog neural networks, in which artificial neurons can be
replaced by biological ones.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 10:06:54 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Bogatenko",
"Tatyana",
""
],
[
"Sergeev",
"Konstantin",
""
],
[
"Slepnev",
"Andrei",
""
],
[
"Kurths",
"Jürgen",
""
],
[
"Semenova",
"Nadezhda",
""
]
] |
new_dataset
| 0.998986 |
2302.04031
|
Xiaoyu Zhao
|
Jun Liu, Yunzhou Zhang, Xiaoyu Zhao and Zhengnan He
|
FR-LIO: Fast and Robust Lidar-Inertial Odometry by Tightly-Coupled
Iterated Kalman Smoother and Robocentric Voxels
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a fast lidar-inertial odometry (LIO) that is robust to
aggressive motion. To achieve robust tracking in aggressive motion scenes, we
exploit the continuous scanning property of lidar to adaptively divide the full
scan into multiple partial scans (named sub-frames) according to the motion
intensity. To avoid the degradation of sub-frames resulting from
insufficient constraints, we propose a robust state estimation method based on
a tightly-coupled iterated error state Kalman smoother (ESKS) framework.
Furthermore, we propose a robocentric voxel map (RC-Vox) to improve the
system's efficiency. The RC-Vox allows efficient maintenance of map points and
k nearest neighbor (k-NN) queries by mapping local map points into a
fixed-size, two-layer 3D array structure. Extensive experiments are conducted
on 27 sequences from 4 public datasets and our own dataset. The results show
that our system can achieve stable tracking in aggressive motion scenes
(angular velocity up to 21.8 rad/s) that cannot be handled by other
state-of-the-art methods, while our system can achieve competitive performance
with these methods in general scenes. Furthermore, thanks to the RC-Vox, our
system is much faster than the most efficient LIO system currently published.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 13:07:35 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 14:29:05 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Jul 2023 23:50:51 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Liu",
"Jun",
""
],
[
"Zhang",
"Yunzhou",
""
],
[
"Zhao",
"Xiaoyu",
""
],
[
"He",
"Zhengnan",
""
]
] |
new_dataset
| 0.997011 |
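The following Python sketch illustrates the flavor of the RC-Vox idea in the FR-LIO abstract above: map points are bucketed into a fixed-size robocentric voxel array so that k-NN queries only visit a small neighbourhood of voxels. The grid size, single-layer bucketing (the paper uses a two-layer structure), and 3x3x3 search radius are our simplifications.

```python
# Illustrative robocentric voxel-bucket map with local k-NN, not FR-LIO's code.
import numpy as np

class VoxelMap:
    def __init__(self, grid=32, voxel=0.5):
        self.grid, self.voxel = grid, voxel
        self.buckets = [[] for _ in range(grid ** 3)]   # fixed-size storage

    def _flat(self, ijk):
        ijk = np.asarray(ijk) + self.grid // 2          # robocentric origin
        if np.any(ijk < 0) or np.any(ijk >= self.grid):
            return None                                 # outside the local map
        i, j, k = ijk
        return (i * self.grid + j) * self.grid + k

    def insert(self, p):
        p = np.asarray(p, float)
        idx = self._flat(np.floor(p / self.voxel).astype(int))
        if idx is not None:
            self.buckets[idx].append(p)

    def knn(self, q, k=5):
        """Visit only the 3x3x3 voxel neighbourhood around the query."""
        q = np.asarray(q, float)
        base = np.floor(q / self.voxel).astype(int)
        cand = []
        for d in np.ndindex(3, 3, 3):
            idx = self._flat(base + np.array(d) - 1)
            if idx is not None:
                cand.extend(self.buckets[idx])
        cand.sort(key=lambda p: np.linalg.norm(p - q))
        return cand[:k]

rng = np.random.default_rng(1)
m = VoxelMap()
for p in rng.uniform(-5, 5, size=(2000, 3)):
    m.insert(p)
print(np.array(m.knn(np.zeros(3), k=3)))
```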
2303.06624
|
Xia Bingyi
|
Bingyi Xia, Hao Luan, Ziqi Zhao, Xuheng Gao, Peijia Xie, Anxing Xiao,
Jiankun Wang, Max Q.-H. Meng
|
Collaborative Trolley Transportation System with Autonomous Nonholonomic
Robots
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cooperative object transportation using multiple robots has been intensively
studied in the control and robotics literature, but most approaches are either
only applicable to omnidirectional robots or lack a complete navigation and
decision-making framework that operates in real time. This paper presents an
autonomous nonholonomic multi-robot system and an end-to-end hierarchical
autonomy framework for collaborative luggage trolley transportation. This
framework finds kinematic-feasible paths, computes online motion plans, and
provides feedback that enables the multi-robot system to handle long lines of
luggage trolleys and navigate obstacles and pedestrians while dealing with
multiple inherently complex and coupled constraints. We demonstrate the
designed collaborative trolley transportation system through practical
transportation tasks, and the experiment results reveal their effectiveness and
reliability in complex and dynamic environments.
|
[
{
"version": "v1",
"created": "Sun, 12 Mar 2023 09:47:38 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 04:57:41 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Apr 2023 03:06:35 GMT"
},
{
"version": "v4",
"created": "Fri, 21 Jul 2023 08:09:16 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Xia",
"Bingyi",
""
],
[
"Luan",
"Hao",
""
],
[
"Zhao",
"Ziqi",
""
],
[
"Gao",
"Xuheng",
""
],
[
"Xie",
"Peijia",
""
],
[
"Xiao",
"Anxing",
""
],
[
"Wang",
"Jiankun",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] |
new_dataset
| 0.99684 |
2304.14133
|
Stefanos-Iordanis Papadopoulos
|
Stefanos-Iordanis Papadopoulos, Christos Koutlis, Symeon Papadopoulos,
Panagiotis C. Petrantonakis
|
VERITE: A Robust Benchmark for Multimodal Misinformation Detection
Accounting for Unimodal Bias
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Multimedia content has become ubiquitous on social media platforms, leading
to the rise of multimodal misinformation (MM) and the urgent need for effective
strategies to detect and prevent its spread. In recent years, the challenge of
multimodal misinformation detection (MMD) has garnered significant attention by
researchers and has mainly involved the creation of annotated, weakly
annotated, or synthetically generated training datasets, along with the
development of various deep learning MMD models. However, the problem of
unimodal bias in MMD benchmarks -- where biased or unimodal methods outperform
their multimodal counterparts on an inherently multimodal task -- has been
overlooked. In this study, we systematically investigate and identify the
presence of unimodal bias in widely-used MMD benchmarks (VMU-Twitter, COSMOS),
raising concerns about their suitability for reliable evaluation. To address
this issue, we introduce the "VERification of Image-TExtpairs" (VERITE)
benchmark for MMD which incorporates real-world data, excludes "asymmetric
multimodal misinformation" and utilizes "modality balancing". We conduct an
extensive comparative study with a Transformer-based architecture that shows
the ability of VERITE to effectively address unimodal bias, rendering it a
robust evaluation framework for MMD. Furthermore, we introduce a new method --
termed Crossmodal HArd Synthetic MisAlignment (CHASMA) -- for generating
realistic synthetic training data that preserve crossmodal relations between
legitimate images and false human-written captions. By leveraging CHASMA in the
training process, we observe consistent and notable improvements in predictive
performance on VERITE; with a 9.2% increase in accuracy. We release our code
at: https://github.com/stevejpapad/image-text-verification
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 12:28:29 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 12:06:17 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Papadopoulos",
"Stefanos-Iordanis",
""
],
[
"Koutlis",
"Christos",
""
],
[
"Papadopoulos",
"Symeon",
""
],
[
"Petrantonakis",
"Panagiotis C.",
""
]
] |
new_dataset
| 0.995722 |
2305.19920
|
Yi Gu
|
Yi Gu, Yoshito Otake, Keisuke Uemura, Masaki Takao, Mazen Soufi, Yuta
Hiasa, Hugues Talbot, Seiji Okata, Nobuhiko Sugano, Yoshinobu Sato
|
MSKdeX: Musculoskeletal (MSK) decomposition from an X-ray image for
fine-grained estimation of lean muscle mass and muscle volume
|
MICCAI 2023 early acceptance (12 pages and 6 figures)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Musculoskeletal diseases such as sarcopenia and osteoporosis are major
obstacles to health during aging. Although dual-energy X-ray absorptiometry
(DXA) and computed tomography (CT) can be used to evaluate musculoskeletal
conditions, frequent monitoring is difficult due to the cost and accessibility
(as well as high radiation exposure in the case of CT). We propose a method
(named MSKdeX) to estimate fine-grained muscle properties from a plain X-ray
image, a low-cost, low-radiation, and highly accessible imaging modality,
through musculoskeletal decomposition leveraging fine-grained segmentation in
CT. We train a multi-channel quantitative image translation model to decompose
an X-ray image into projections of CT of individual muscles to infer the lean
muscle mass and muscle volume. We propose the object-wise intensity-sum loss, a
simple yet surprisingly effective metric invariant to muscle deformation and
projection direction, utilizing information in CT and X-ray images collected
from the same patient. While our method is basically an unpaired image-to-image
translation, we also exploit the nature of the bone's rigidity, which provides
the paired data through 2D-3D rigid registration, adding strong pixel-wise
supervision in unpaired training. Through the evaluation using a 539-patient
dataset, we showed that the proposed method significantly outperformed
conventional methods. The average Pearson correlation coefficient between the
predicted and CT-derived ground truth metrics was increased from 0.460 to
0.863. We believe our approach opens up a new route to musculoskeletal
diagnosis and has the potential to be extended to broader applications in
multi-channel quantitative image translation tasks. Our source code will be released soon.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 14:56:18 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 11:27:30 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Gu",
"Yi",
""
],
[
"Otake",
"Yoshito",
""
],
[
"Uemura",
"Keisuke",
""
],
[
"Takao",
"Masaki",
""
],
[
"Soufi",
"Mazen",
""
],
[
"Hiasa",
"Yuta",
""
],
[
"Talbot",
"Hugues",
""
],
[
"Okata",
"Seiji",
""
],
[
"Sugano",
"Nobuhiko",
""
],
[
"Sato",
"Yoshinobu",
""
]
] |
new_dataset
| 0.99919 |
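A hedged numpy sketch of an object-wise intensity-sum loss in the spirit of the MSKdeX abstract above: per muscle, the summed intensity of the predicted projection is compared with the target's. Because a sum over the projection is unchanged by in-plane deformation, the loss exhibits the invariance the abstract highlights; the exact weighting and normalization here are our assumptions, not the paper's.

```python
# Object-wise intensity-sum loss sketch: compare per-muscle summed intensities.
import numpy as np

def intensity_sum_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """pred, target: (num_muscles, H, W) per-muscle projection images."""
    pred_sums = pred.reshape(pred.shape[0], -1).sum(axis=1)
    target_sums = target.reshape(target.shape[0], -1).sum(axis=1)
    return float(np.abs(pred_sums - target_sums).mean())  # L1 over objects

rng = np.random.default_rng(0)
target = rng.random((3, 64, 64))
pred = np.roll(target, shift=5, axis=2)   # a shifted (deformed) prediction...
print(intensity_sum_loss(pred, target))   # ...still scores 0: sums are invariant
```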
2306.02250
|
Sheshera Mysore
|
Sheshera Mysore, Andrew McCallum, Hamed Zamani
|
Large Language Model Augmented Narrative Driven Recommendations
|
RecSys 2023 Camera-ready
| null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Narrative-driven recommendation (NDR) presents an information access problem
where users solicit recommendations with verbose descriptions of their
preferences and context, for example, travelers soliciting recommendations for
points of interest while describing their likes/dislikes and travel
circumstances. These requests are increasingly important with the rise of
natural language-based conversational interfaces for search and recommendation
systems. However, NDR lacks abundant training data for models, and current
platforms commonly do not support these requests. Fortunately, classical
user-item interaction datasets contain rich textual data, e.g., reviews, which
often describe user preferences and context - this may be used to bootstrap
training for NDR models. In this work, we explore using large language models
(LLMs) for data augmentation to train NDR models. We use LLMs for authoring
synthetic narrative queries from user-item interactions with few-shot prompting
and train retrieval models for NDR on synthetic queries and user-item
interaction data. Our experiments demonstrate that this is an effective
strategy for training small-parameter retrieval models that outperform other
retrieval and LLM baselines for narrative-driven recommendation.
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 03:46:45 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 07:46:03 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Mysore",
"Sheshera",
""
],
[
"McCallum",
"Andrew",
""
],
[
"Zamani",
"Hamed",
""
]
] |
new_dataset
| 0.989555 |
2306.09260
|
Pierre Lavieille
|
Pierre Lavieille and Ismail Alaoui Hassani Atlas
|
IsoEx: an explainable unsupervised approach to process event logs cyber
investigation
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
39 seconds. That is the interval between two consecutive cyber attacks as of
2023. This means that by the time you are done reading this abstract, about 1 or 2
additional cyber attacks would have occurred somewhere in the world. In this
context of highly increased frequency of cyber threats, Security Operation
Centers (SOC) and Computer Emergency Response Teams (CERT) can be overwhelmed.
In order to relieve the cybersecurity teams in their investigative effort and
help them focus on more added-value tasks, machine learning approaches and
methods started to emerge. This paper introduces a novel method, IsoEx, for
detecting anomalous and potentially problematic command lines during the
investigation of contaminated devices. IsoEx is built around a set of features
that leverages the log structure of the command line, as well as its
parent/child relationship, to achieve a greater accuracy than traditional
methods. To detect anomalies, IsoEx resorts to an unsupervised anomaly
detection technique that is both highly sensitive and lightweight. A key
contribution of the paper is its emphasis on interpretability, achieved through
the features themselves and the application of eXplainable Artificial
Intelligence (XAI) techniques and visualizations. This is critical to ensure
the adoption of the method by SOC and CERT teams, as the paper argues that the
current literature on machine learning for log investigation has not adequately
addressed the issue of explainability. The method was proven effective in a
real-life environment, as it was built to support a company's SOC and CERT teams.
|
[
{
"version": "v1",
"created": "Wed, 7 Jun 2023 14:22:41 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 08:18:51 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Lavieille",
"Pierre",
""
],
[
"Atlas",
"Ismail Alaoui Hassani",
""
]
] |
new_dataset
| 0.954997 |
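To ground the IsoEx abstract above, here is a minimal sketch of that style of pipeline: hand-crafted features over command lines (including a parent-process hint) scored by a lightweight unsupervised detector, scikit-learn's IsolationForest. The feature set and the toy log lines are our own stand-ins, not IsoEx's.

```python
# IsoEx-style sketch: simple command-line features + Isolation Forest scoring.
import numpy as np
from sklearn.ensemble import IsolationForest

def featurize(cmd, parent):
    return [
        len(cmd),                                          # raw length
        cmd.count(" "),                                    # argument-count proxy
        sum(c.isdigit() for c in cmd) / max(len(cmd), 1),  # digit ratio
        float("http" in cmd or "-enc" in cmd),             # download/encoded hint
        float(parent in ("winword.exe", "excel.exe")),     # unusual parentage
    ]

# In practice: thousands of (command line, parent process) pairs per device.
logs = [("svchost.exe -k netsvcs", "services.exe")] * 20 \
     + [("explorer.exe", "userinit.exe")] * 20 \
     + [("powershell -enc aQBlAHgAJA...", "winword.exe")]   # suspicious line

X = np.array([featurize(c, p) for c, p in logs])
scores = IsolationForest(random_state=0).fit(X).score_samples(X)
for (cmd, _), s in sorted(zip(logs, scores), key=lambda t: t[1])[:3]:
    print(f"{s:+.3f}  {cmd}")   # the Office-spawned PowerShell ranks most anomalous
```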
2306.09382
|
Minseok Kim
|
Minseok Kim, Jun Hyung Lee, Soonyoung Jung
|
Sound Demixing Challenge 2023 Music Demixing Track Technical Report:
TFC-TDF-UNet v3
|
5 pages, 4 tables
| null | null | null |
cs.SD cs.LG cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this report, we present our award-winning solutions for the Music Demixing
Track of Sound Demixing Challenge 2023. First, we propose TFC-TDF-UNet v3, a
time-efficient music source separation model that achieves state-of-the-art
results on the MUSDB benchmark. We then give full details regarding our
solutions for each Leaderboard, including a loss masking approach for
noise-robust training. Code for reproducing model training and final
submissions is available at github.com/kuielab/sdx23.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 12:59:04 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 17:31:30 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Jul 2023 07:59:06 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Kim",
"Minseok",
""
],
[
"Lee",
"Jun Hyung",
""
],
[
"Jung",
"Soonyoung",
""
]
] |
new_dataset
| 0.97227 |
2306.17519
|
Pawan Kumar Rajpoot
|
Pawan Kumar Rajpoot, Ankur Parikh
|
GPT-FinRE: In-context Learning for Financial Relation Extraction using
Large Language Models
|
arXiv admin note: text overlap with arXiv:2305.02105 by other authors
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Relation extraction (RE) is a crucial task in natural language processing
(NLP) that aims to identify and classify relationships between entities
mentioned in text. In the financial domain, relation extraction plays a vital
role in extracting valuable information from financial documents, such as news
articles, earnings reports, and company filings. This paper describes our
solution to relation extraction on one such dataset, REFinD. The dataset was
released along with a shared task as part of the Fourth Workshop on Knowledge
Discovery from Unstructured Data in Financial Services, co-located with SIGIR
2023. In this paper, we employed OpenAI models under the framework of
in-context learning (ICL). We utilized two retrieval strategies to find top K
relevant in-context learning demonstrations / examples from training data for a
given test example. The first retrieval mechanism we employed is a
learning-free dense retriever, and the other is a learning-based
retriever. We were able to achieve 3rd rank overall. Our best F1-score is
0.718.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 10:12:30 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 06:57:49 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Rajpoot",
"Pawan Kumar",
""
],
[
"Parikh",
"Ankur",
""
]
] |
new_dataset
| 0.999525 |
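A minimal sketch of the learning-free dense-retrieval step described in the GPT-FinRE abstract above: embed the test sentence and all training examples, then select the top-K most similar training examples as in-context demonstrations. The bag-of-characters embedding is a deliberately crude placeholder so the sketch runs standalone; a real system would use a sentence encoder, and the paper's exact retrievers may differ.

```python
# Top-K demonstration selection for in-context learning, with a toy embedder.
import numpy as np

def embed(texts: list) -> np.ndarray:
    """Placeholder embedding (bag of characters), just to make this runnable."""
    vocab = sorted({c for t in texts for c in t})
    idx = {c: i for i, c in enumerate(vocab)}
    out = np.zeros((len(texts), len(vocab)))
    for r, t in enumerate(texts):
        for c in t:
            out[r, idx[c]] += 1
    return out / np.linalg.norm(out, axis=1, keepdims=True)

def top_k_demos(test: str, train: list, k: int = 2) -> list:
    vecs = embed(train + [test])
    sims = vecs[:-1] @ vecs[-1]            # cosine similarity to the test item
    return [train[i] for i in np.argsort(-sims)[:k]]

train = [
    "Apple acquired Beats for $3B.          relation: acquired_by",
    "Tim Cook is the CEO of Apple.          relation: employee_of",
    "Microsoft reported Q3 revenue of $52B. relation: no_relation",
]
demos = top_k_demos("Google acquired Fitbit for $2.1B.", train)
prompt = "\n\n".join(demos) + "\n\nGoogle acquired Fitbit for $2.1B. relation:"
print(prompt)   # demonstrations prepended before the test sentence
```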
2307.07589
|
Mina Huh
|
Mina Huh, Yi-Hao Peng, Amy Pavel
|
GenAssist: Making Image Generation Accessible
|
For accessibility tagged pdf, please refer to the ancillary file
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Blind and low vision (BLV) creators use images to communicate with sighted
audiences. However, creating or retrieving images is challenging for BLV
creators as it is difficult to use authoring tools or assess image search
results. Thus, creators limit the types of images they create or recruit
sighted collaborators. While text-to-image generation models let creators
generate high-fidelity images based on a text description (i.e. prompt), it is
difficult to assess the content and quality of generated images. We present
GenAssist, a system to make text-to-image generation accessible. Using our
interface, creators can verify whether generated image candidates followed the
prompt, access additional details in the image not specified in the prompt, and
skim a summary of similarities and differences between image candidates. To
power the interface, GenAssist uses a large language model to generate visual
questions, vision-language models to extract answers, and a large language
model to summarize the results. Our study with 12 BLV creators demonstrated
that GenAssist enables and simplifies the process of image selection and
generation, making visual authoring more accessible to all.
|
[
{
"version": "v1",
"created": "Fri, 14 Jul 2023 19:29:59 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Huh",
"Mina",
""
],
[
"Peng",
"Yi-Hao",
""
],
[
"Pavel",
"Amy",
""
]
] |
new_dataset
| 0.999199 |
2307.08296
|
Christof A. O. Rauber
|
Christof A. O. Rauber, Lukas Brechtel, and Hans D. Schotten
|
JCAS-Enabled Sensing as a Service in 6th-Generation Mobile Communication
Networks
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The introduction of new types of frequency spectrum in 6G technology
facilitates the convergence of conventional mobile communications and radar
functions. Thus, the mobile network itself becomes a versatile sensor system.
This enables mobile network operators to offer a sensing service in addition to
conventional data and telephony services. The potential benefits are expected
to accrue to various stakeholders, including individuals, the environment, and
society in general. The paper discusses technological development, possible
integration, and use cases, as well as future development areas.
|
[
{
"version": "v1",
"created": "Mon, 17 Jul 2023 07:47:27 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 05:41:27 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Rauber",
"Christof A. O.",
""
],
[
"Brechtel",
"Lukas",
""
],
[
"Schotten",
"Hans D.",
""
]
] |
new_dataset
| 0.999648 |
2307.09004
|
Jinhong Wang
|
Jinhong Wang, Yi Cheng, Jintai Chen, Tingting Chen, Danny Chen and
Jian Wu
|
Ord2Seq: Regarding Ordinal Regression as Label Sequence Prediction
|
Accepted by ICCV2023
| null | null | null |
cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Ordinal regression refers to classifying object instances into ordinal
categories. It has been widely studied in many scenarios, such as medical
disease grading, movie rating, etc. Known methods have focused only on learning
inter-class ordinal relationships, but still incur limitations in
distinguishing adjacent categories. In this paper, we propose a simple
sequence prediction framework for ordinal regression called Ord2Seq, which, for
the first time, transforms each ordinal category label into a special label
sequence and thus regards an ordinal regression task as a sequence prediction
process. In this way, we decompose an ordinal regression task into a series of
recursive binary classification steps, so as to subtly distinguish adjacent
categories. Comprehensive experiments show the effectiveness of distinguishing
adjacent categories for performance improvement and our new approach exceeds
state-of-the-art performances in four different scenarios. Codes are available
at https://github.com/wjh892521292/Ord2Seq.
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2023 06:44:20 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 08:41:23 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Wang",
"Jinhong",
""
],
[
"Cheng",
"Yi",
""
],
[
"Chen",
"Jintai",
""
],
[
"Chen",
"Tingting",
""
],
[
"Chen",
"Danny",
""
],
[
"Wu",
"Jian",
""
]
] |
new_dataset
| 0.959664 |
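The Ord2Seq record above turns each ordinal label into a sequence of binary decisions. A minimal sketch of one plausible such encoding, via recursive bisection of the category range; the paper's exact label-sequence construction may differ, so treat this as an illustration rather than the authors' method:

```python
def label_to_sequence(label: int, num_classes: int) -> list[int]:
    """Encode an ordinal label as a sequence of binary decisions by
    recursively halving the category range (hypothetical encoding)."""
    lo, hi = 0, num_classes - 1
    seq = []
    while lo < hi:
        mid = (lo + hi) // 2
        if label <= mid:
            seq.append(0)   # label lies in the lower half
            hi = mid
        else:
            seq.append(1)   # label lies in the upper half
            lo = mid + 1
    return seq

# e.g. with 4 ordinal classes: 0 -> [0, 0], 1 -> [0, 1],
#                              2 -> [1, 0], 3 -> [1, 1]
```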
2307.09815
|
Yan Yang
|
Hao Yang, Liyuan Pan, Yan Yang, Miaomiao Liu
|
LDP: Language-driven Dual-Pixel Image Defocus Deblurring Network
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recovering sharp images from dual-pixel (DP) pairs with disparity-dependent
blur is a challenging task. Existing blur map-based deblurring methods have
demonstrated promising results. In this paper, we propose, to the best of our
knowledge, the first framework to introduce the contrastive language-image
pre-training framework (CLIP) to achieve accurate blur map estimation from DP
pairs in an unsupervised manner. To this end, we first carefully design text
prompts to enable CLIP to understand blur-related geometric prior knowledge
from the DP pair. Then, we propose a format for feeding the stereo DP pair to
CLIP without any fine-tuning, even though CLIP is pre-trained on monocular
images. Given the
estimated blur map, we introduce a blur-prior attention block, a blur-weighting
loss and a blur-aware loss to recover the all-in-focus image. Our method
achieves state-of-the-art performance in extensive experiments.
|
[
{
"version": "v1",
"created": "Wed, 19 Jul 2023 08:03:53 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jul 2023 07:10:28 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Yang",
"Hao",
""
],
[
"Pan",
"Liyuan",
""
],
[
"Yang",
"Yan",
""
],
[
"Liu",
"Miaomiao",
""
]
] |
new_dataset
| 0.997015 |
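A rough illustration of the CLIP-prompting idea in the record above, using OpenAI's CLIP package (installed from its GitHub repository). Caveats: the prompts below are invented, the code scores a whole image rather than producing the paper's per-pixel blur map, and it ignores the stereo DP input format entirely:

```python
import clip   # OpenAI's CLIP package
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical blur-level prompts; the paper's actual prompt design is
# more elaborate and encodes DP-specific geometric priors.
prompts = ["a sharp, in-focus photo",
           "a slightly defocused photo",
           "a heavily defocused photo"]
text = clip.tokenize(prompts).to(device)

def blur_affinity(image):
    """image: a preprocessed tensor of shape [1, 3, H, W].
    Returns a probability over the blur-level prompts; mass on the
    later prompts indicates stronger defocus blur."""
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        return logits_per_image.softmax(dim=-1)
```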
2307.11181
|
Zeinab Nezami
|
Zeinab Nezami, Evangelos Pournaras, Amir Borzouie, Jie Xu
|
SMOTEC: An Edge Computing Testbed for Adaptive Smart Mobility
Experimentation
|
6 pages and 6 figures
| null | null | null |
cs.DC cs.MA cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart mobility is becoming paramount for meeting net-zero targets. However,
autonomous, self-driving and electric vehicles require, more than ever before,
an efficient, resilient and trustworthy computational offloading backbone that
spans the edge-to-cloud continuum. Utilizing on-demand
heterogeneous computational resources for smart mobility is challenging and
often cost-ineffective. This paper introduces SMOTEC, a novel open-source
testbed for adaptive smart mobility experimentation with edge computing. SMOTEC
provides for the first time a modular end-to-end instrumentation for
prototyping and optimizing placement of intelligence services on edge devices
such as augmented reality and real-time traffic monitoring. SMOTEC supports a
plug-and-play Docker container integration of the SUMO simulator for urban
mobility, Raspberry Pi edge devices communicating via ZeroMQ, and EPOS for
AI-based decentralized load balancing across the edge-to-cloud continuum. All
components are
orchestrated by the K3s lightweight Kubernetes. A proof-of-concept of
self-optimized service placements for traffic monitoring from Munich
demonstrates in practice the applicability and cost-effectiveness of SMOTEC.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 18:49:45 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Nezami",
"Zeinab",
""
],
[
"Pournaras",
"Evangelos",
""
],
[
"Borzouie",
"Amir",
""
],
[
"Xu",
"Jie",
""
]
] |
new_dataset
| 0.993202 |
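To make the record's edge-to-cloud plumbing concrete, a minimal pyzmq publisher in the spirit of SMOTEC's Raspberry Pi nodes. The topic, port, and payload schema are invented for illustration; SMOTEC defines its own wire format:

```python
import json
import time
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUB)
sock.bind("tcp://*:5556")  # port chosen arbitrarily for this sketch

while True:
    reading = {"node": "edge-01", "vehicles": 12, "ts": time.time()}
    # topic frame first, JSON payload second
    sock.send_multipart([b"traffic", json.dumps(reading).encode()])
    time.sleep(1.0)
```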
2307.11194
|
Lindsey Kuper
|
Patrick Redmond, Lindsey Kuper
|
An Exceptional Actor System (Functional Pearl)
|
To appear at Haskell Symposium 2023
| null |
10.1145/3609026.3609728
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Glasgow Haskell Compiler is known for its feature-laden runtime system
(RTS), which includes lightweight threads, asynchronous exceptions, and a slew
of other features. Their combination is powerful enough that a programmer may
complete the same task in many different ways -- some more advisable than
others.
We present a user-accessible actor framework hidden in plain sight within the
RTS and demonstrate it on a classic example from the distributed systems
literature. We then extend both the framework and example to the realm of
dynamic types. Finally, we raise questions about how RTS features intersect and
possibly subsume one another, and suggest that GHC can guide good practice by
constraining the use of some features.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 19:11:54 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Redmond",
"Patrick",
""
],
[
"Kuper",
"Lindsey",
""
]
] |
new_dataset
| 0.99416 |
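The record above hides an actor framework inside GHC's runtime, using asynchronous exceptions for message delivery. For readers unfamiliar with the pattern itself, here is a deliberately ordinary Python analogue built on a thread and a mailbox queue; it illustrates the actor model only, not the paper's exception-based mechanism:

```python
import queue
import threading
import time

class Actor:
    """A minimal mailbox actor: one thread per actor, messages handled
    one at a time. (The paper's Haskell version replaces the explicit
    queue with asynchronous-exception delivery.)"""
    def __init__(self, handler):
        self._mailbox = queue.Queue()
        self._handler = handler
        threading.Thread(target=self._loop, daemon=True).start()

    def send(self, msg):
        self._mailbox.put(msg)

    def _loop(self):
        while True:
            self._handler(self._mailbox.get())

echo = Actor(lambda msg: print("got:", msg))
echo.send("hello")
time.sleep(0.1)  # give the daemon thread a moment to run
```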
2307.11248
|
Apan Qasem
|
Clara Novoa and Apan Qasem
|
GPU-accelerated Parallel Solutions to the Quadratic Assignment Problem
|
25 pages, 9 figures; parts of this work appeared as short papers in
XSEDE14 and XSEDE15 conferences. This version of the paper is a substantial
extension of previous work with optimizations for newer GPU platforms and
extended experimental results
| null | null | null |
cs.DC cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
The Quadratic Assignment Problem (QAP) is an important combinatorial
optimization problem with applications in many areas including logistics and
manufacturing. QAP is known to be NP-hard, a computationally challenging
problem that requires sophisticated heuristics to find acceptable solutions
for most real-world data sets.
In this paper, we present GPU-accelerated implementations of a 2opt and a
tabu search algorithm for solving the QAP. For both algorithms, we extract
parallelism at multiple levels and implement novel code optimization techniques
that fully utilize the GPU hardware. In a series of experiments on the
well-known QAPLIB data sets, our solutions run, on average, an order of
magnitude faster than previous implementations and deliver up to a
factor of 63 speedup on specific instances. The quality of the solutions
produced by our implementations of 2opt and tabu is within 1.03% and 0.15% of
the best known values. The experimental results also provide key insight into
the performance characteristics of accelerated QAP solvers. In particular, the
results reveal that both algorithmic choice and the shape of the input data
sets are key factors in finding efficient implementations.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 21:38:52 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Novoa",
"Clara",
""
],
[
"Qasem",
"Apan",
""
]
] |
new_dataset
| 0.96975 |
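As a companion to the record above, a serial first-improvement 2-opt local search for QAP. It is a sketch only: the paper's contribution is a GPU implementation that evaluates swap deltas incrementally and in parallel, none of which is reproduced here.

```python
import numpy as np

def qap_cost(flow, dist, perm):
    # total cost: sum_{i,j} flow[i, j] * dist[perm[i], perm[j]]
    return float(np.sum(flow * dist[np.ix_(perm, perm)]))

def two_opt(flow, dist, perm):
    """First-improvement 2-opt: keep swapping facility pairs while a
    swap lowers the cost. Recomputes the full objective per swap for
    clarity; real solvers use O(n) incremental deltas."""
    n, improved = len(perm), True
    while improved:
        improved = False
        best = qap_cost(flow, dist, perm)
        for i in range(n - 1):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]
                cost = qap_cost(flow, dist, perm)
                if cost < best:
                    best, improved = cost, True
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # undo swap
    return perm
```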
2307.11256
|
Erbin Qiu
|
Erbin Qiu, Yuan-Hang Zhang, Massimiliano Di Ventra and Ivan K.
Schuller
|
Reconfigurable cascaded thermal neuristors for neuromorphic computing
| null | null | null | null |
cs.ET physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While the complementary metal-oxide semiconductor (CMOS) technology is the
mainstream for the hardware implementation of neural networks, we explore an
alternative route based on a new class of spiking oscillators we call thermal
neuristors, which operate and interact solely via thermal processes. Utilizing
the insulator-to-metal transition in vanadium dioxide, we demonstrate a wide
variety of reconfigurable electrical dynamics mirroring biological neurons.
Notably, inhibitory functionality is achieved in just a single oxide device,
and cascaded information flow is realized exclusively through thermal
interactions. To elucidate the underlying mechanisms of the neuristors, a
detailed theoretical model is developed, which accurately reflects the
experimental results. This study establishes the foundation for scalable and
energy-efficient thermal neural networks, fostering progress in brain-inspired
computing.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 22:12:55 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Qiu",
"Erbin",
""
],
[
"Zhang",
"Yuan-Hang",
""
],
[
"Di Ventra",
"Massimiliano",
""
],
[
"Schuller",
"Ivan K.",
""
]
] |
new_dataset
| 0.997799 |
2307.11261
|
Anita Rau
|
Anita Rau, Sophia Bano, Yueming Jin, Pablo Azagra, Javier Morlana,
Edward Sanderson, Bogdan J. Matuszewski, Jae Young Lee, Dong-Jae Lee, Erez
Posner, Netanel Frank, Varshini Elangovan, Sista Raviteja, Zhengwen Li,
Jiquan Liu, Seenivasan Lalithkumar, Mobarakol Islam, Hongliang Ren, José
M.M. Montiel, Danail Stoyanov
|
SimCol3D -- 3D Reconstruction during Colonoscopy Challenge
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Colorectal cancer is one of the most common cancers in the world. While
colonoscopy is an effective screening technique, navigating an endoscope
through the colon to detect polyps is challenging. A 3D map of the observed
surfaces could enhance the identification of unscreened colon tissue and serve
as a training platform. However, reconstructing the colon from video footage
remains unsolved due to numerous factors such as self-occlusion, reflective
surfaces, lack of texture, and tissue deformation that limit feature-based
methods. Learning-based approaches hold promise as robust alternatives, but
necessitate extensive datasets. By establishing a benchmark, the 2022 EndoVis
sub-challenge SimCol3D aimed to facilitate data-driven depth and pose
prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022
in Singapore. Six teams from around the world and representatives from academia
and industry participated in the three sub-challenges: synthetic depth
prediction, synthetic pose prediction, and real pose prediction. This paper
describes the challenge, the submitted methods, and their results. We show that
depth prediction in virtual colonoscopy is robustly solvable, while pose
estimation remains an open research question.
|
[
{
"version": "v1",
"created": "Thu, 20 Jul 2023 22:41:23 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Rau",
"Anita",
""
],
[
"Bano",
"Sophia",
""
],
[
"Jin",
"Yueming",
""
],
[
"Azagra",
"Pablo",
""
],
[
"Morlana",
"Javier",
""
],
[
"Sanderson",
"Edward",
""
],
[
"Matuszewski",
"Bogdan J.",
""
],
[
"Lee",
"Jae Young",
""
],
[
"Lee",
"Dong-Jae",
""
],
[
"Posner",
"Erez",
""
],
[
"Frank",
"Netanel",
""
],
[
"Elangovan",
"Varshini",
""
],
[
"Raviteja",
"Sista",
""
],
[
"Li",
"Zhengwen",
""
],
[
"Liu",
"Jiquan",
""
],
[
"Lalithkumar",
"Seenivasan",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Ren",
"Hongliang",
""
],
[
"Montiel",
"José M. M.",
""
],
[
"Stoyanov",
"Danail",
""
]
] |
new_dataset
| 0.999041 |
2307.11272
|
Sandipan Choudhuri
|
A. Sen, C. Sumnicht, S. Choudhuri, A. Chang, G. Xue
|
Quantum Communication in 6G Satellite Networks: Entanglement
Distribution Across Changing Topologies
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As LEO/VLEO satellites offer many attractive features, such as low
transmission delay, they are expected to be an integral part of 6G. Global
entanglement distribution over a LEO/VLEO satellite network must reckon with
satellite movement over time. Current studies do not fully capture the dynamic
nature of satellite constellations. We model a dynamic LEO/VLEO satellite
network as a time-varying graph and construct a sequence of static graphs to
represent a dynamic network. We study the entanglement distribution problem
between a set of source-destination node pairs in this dynamic network
utilizing Multi-commodity Flow (MCF). Solving MCF over a sequence of graphs
independently for each graph may produce a completely different set of paths.
Changing the set of paths every time the graph topology changes may involve a
significant amount of overhead, as an established set of paths must be taken
down and a new set of paths established. We propose a technique that will avoid
this overhead by computing only one set of paths P to be used over all the
graphs in the sequence. The degraded performance offered by P may be viewed as
the cost of using P. The benefit of using P is the overhead cost of path
switching that can be avoided. We provide a cost-benefit analysis in a LEO/VLEO
constellation for entanglement distribution between multiple source-destination
pairs. Our extensive experimentation shows that a significant amount of savings
in overhead can be achieved if one is willing to accept a slightly degraded
performance.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 00:02:43 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Sen",
"A.",
""
],
[
"Sumnicht",
"C.",
""
],
[
"Choudhuri",
"S.",
""
],
[
"Chang",
"A.",
""
],
[
"Xue",
"G.",
""
]
] |
new_dataset
| 0.970326 |
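A toy version of the record's "one path set P over all snapshots" idea, with NetworkX shortest paths standing in for the paper's multi-commodity-flow optimization:

```python
import networkx as nx

def persistent_paths(snapshots, pairs):
    """One path set P reused across every topology snapshot -- a
    deliberate simplification of the paper's multi-commodity-flow
    formulation. Keep only edges present in all snapshots, weight each
    by its summed cost over time, then route every pair on a shortest
    path. Assumes a non-empty list of undirected nx.Graph snapshots."""
    common = set(frozenset(e) for e in snapshots[0].edges())
    for g in snapshots[1:]:
        common &= set(frozenset(e) for e in g.edges())
    agg = nx.Graph()
    for e in common:
        u, v = tuple(e)
        agg.add_edge(u, v,
                     weight=sum(g[u][v].get("weight", 1) for g in snapshots))
    # nx.shortest_path raises NetworkXNoPath if a pair is disconnected in
    # the persistent subgraph; a real system would then reroute per snapshot.
    return {(s, t): nx.shortest_path(agg, s, t, weight="weight")
            for s, t in pairs}
```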
2307.11323
|
Kai Lei
|
Kai Lei, Zhan Chen, Shuman Jia, Xiaoteng Zhang
|
HVDetFusion: A Simple and Robust Camera-Radar Fusion Framework
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the field of autonomous driving, 3D object detection is a very important
perception module. Although current SOTA algorithms combine camera and LiDAR
sensors, the high price of LiDAR means that mainstream deployment schemes use
camera-only or camera+radar sensors. In this study,
we propose a new detection algorithm called HVDetFusion, which is a multi-modal
detection algorithm that not only supports pure camera data as input for
detection, but also can perform fusion input of radar data and camera data. The
camera stream does not depend on the input of Radar data, thus addressing the
downside of previous methods. In the pure camera stream, we modify the
framework of Bevdet4D for better perception and more efficient inference, and
this stream produces the full 3D detection output. Further, to incorporate the
benefits of radar signals, we use prior information about object positions to
filter false positives from the raw radar data, and then use the position and
radial velocity information recorded by the radar sensors to supplement and
fuse the BEV features generated by the camera data; the effect is further
improved during fusion training. Finally, HVDetFusion achieves a new
state-of-the-art 67.4% NDS on the challenging nuScenes test set among all
camera-radar 3D
object detectors. The code is available at
https://github.com/HVXLab/HVDetFusion.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 03:08:28 GMT"
}
] | 2023-07-24T00:00:00 |
[
[
"Lei",
"Kai",
""
],
[
"Chen",
"Zhan",
""
],
[
"Jia",
"Shuman",
""
],
[
"Zhang",
"Xiaoteng",
""
]
] |
new_dataset
| 0.997319 |
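A crude stand-in for the radar false-positive filtering step in the record above: keep only radar returns near a camera-predicted object centre. The radius threshold is invented, and the paper additionally exploits radial velocity, which this sketch omits:

```python
import numpy as np

def filter_radar_by_camera_prior(radar_xy, centers_xy, radius=2.0):
    """radar_xy: [N, 2] BEV radar returns; centers_xy: [M, 2] camera-
    predicted object centres. Drop returns far from every centre --
    a crude position prior. `radius` (metres) is a made-up threshold,
    not a value from the paper."""
    if len(centers_xy) == 0:
        return radar_xy[:0]  # no priors: keep nothing in this sketch
    d = np.linalg.norm(radar_xy[:, None, :] - centers_xy[None, :, :], axis=-1)
    return radar_xy[d.min(axis=1) <= radius]
```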