id (string, 9–10 chars) | submitter (string, 2–52 chars, nullable ⌀) | authors (string, 4–6.51k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable ⌀) | journal-ref (string, 4–345 chars, nullable ⌀) | doi (string, 11–120 chars, nullable ⌀) | report-no (string, 2–243 chars, nullable ⌀) | categories (string, 5–98 chars) | license (string, 9 classes) | abstract (string, 33–3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.01221
|
Yue Yu
|
Yue Yu, Shenghui Chen, David Fridovich-Keil, and Ufuk Topcu
|
Cost Design in Atomic Routing Games
| null | null | null | null |
cs.GT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An atomic routing game is a multiplayer game on a directed graph. Each player
in the game chooses a path -- a sequence of links that connect its origin node
to its destination node -- with the lowest cost, where the cost of each link is
a function of all players' choices. We develop a novel numerical method to
design the link cost function in atomic routing games such that the players'
choices at the Nash equilibrium minimize a given smooth performance function.
This method first approximates the nonsmooth Nash equilibrium conditions with
smooth ones, then iteratively improves the link cost function via implicit
differentiation. We demonstrate the application of this method to atomic
routing games that model noncooperative agents navigating in grid worlds.
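For the flavor of the smoothing idea described above, here is a minimal PyTorch sketch: the discrete path choice is relaxed with a softmax (a smoothed best response) and the designable link-cost weights are tuned by gradient descent. The incidence matrix, sizes, and hyperparameters are hypothetical, and gradients here flow through unrolled fixed-point iterations rather than the paper's implicit differentiation.

```python
# Minimal sketch (not the paper's algorithm): relax the discrete path
# choice with a softmax and tune designable link-cost weights by
# gradient descent through unrolled fixed-point iterations.
import torch

A = torch.tensor([[1., 1., 0., 0.],   # hypothetical path-link incidence
                  [0., 1., 1., 0.],
                  [0., 0., 1., 1.]])
theta = torch.ones(4, requires_grad=True)   # designable link-cost weights
target = torch.tensor([0.2, 0.5, 0.3])      # desired path usage

def smoothed_equilibrium(theta, iters=50, tau=0.1):
    x = torch.full((3,), 1.0 / 3.0)         # path-choice distribution
    for _ in range(iters):
        link_load = A.t() @ x               # flow induced on each link
        path_cost = A @ (theta * link_load) # congestion-dependent path cost
        x = torch.softmax(-path_cost / tau, dim=0)  # smoothed best response
    return x

opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = ((smoothed_equilibrium(theta) - target) ** 2).sum()
    loss.backward()
    opt.step()
```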
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 20:32:11 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 02:47:39 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Yu",
"Yue",
""
],
[
"Chen",
"Shenghui",
""
],
[
"Fridovich-Keil",
"David",
""
],
[
"Topcu",
"Ufuk",
""
]
] |
new_dataset
| 0.998428 |
2210.06015
|
Raghavendra Selvan
|
Pedram Bakhtiarifard, Christian Igel, Raghavendra Selvan
|
EC-NAS: Energy Consumption Aware Tabular Benchmarks for Neural
Architecture Search
|
Source code at https://github.com/PedramBakh/EC-NAS-Bench
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Energy consumption from selecting, training and deploying deep learning
models has continued to increase over the past few years. Our goal in this work
is to support the design of energy-efficient deep learning models that are
easier to train with lower compute resources, practical to deploy in real-world
edge/mobile computing settings and environmentally sustainable. Tabular
benchmarks for neural architecture search (NAS) allow the evaluation of NAS
strategies at lower computational cost by providing pre-computed performance
statistics. In this work, we suggest including energy efficiency as an
additional performance criterion to NAS and present an updated tabular
benchmark by including information on energy consumption and carbon footprint
for different architectures. The benchmark called EC-NAS is made available
open-source to support energy consumption-aware NAS research. EC-NAS also
includes a surrogate model for predicting energy consumption, and helps us
reduce the overall energy cost of creating this dataset. We demonstrate the
usefulness of EC-NAS by applying multi-objective optimisation algorithms that
reveal the trade-off between energy consumption and accuracy, showing that it
is possible to discover energy-efficient architectures with little to no loss
in performance.
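The trade-off analysis the abstract describes boils down to extracting a Pareto front over (energy, error) pairs. A small self-contained sketch, with made-up numbers rather than actual EC-NAS entries:

```python
# Pareto-front extraction over (energy, error) pairs; both are
# lower-is-better. The numbers are invented, not EC-NAS entries.
def pareto_front(points):
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))

archs = [(12.3, 0.061), (8.1, 0.074), (25.0, 0.058), (8.0, 0.074), (30.2, 0.058)]
print(pareto_front(archs))  # [(8.0, 0.074), (12.3, 0.061), (25.0, 0.058)]
```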
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 08:39:35 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 09:21:49 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Bakhtiarifard",
"Pedram",
""
],
[
"Igel",
"Christian",
""
],
[
"Selvan",
"Raghavendra",
""
]
] |
new_dataset
| 0.977739 |
2302.10756
|
Feng Qian
|
Feng Qian, Yuehua Yue, Yu He, Hongtao Yu, Yingjie Zhou, Jinliang Tang,
and Guangmin Hu
|
Unsupervised Seismic Footprint Removal With Physical Prior Augmented
Deep Autoencoder
| null |
IEEE Transactions on Geoscience and Remote Sensing,2023
|
10.1109/TGRS.2023.3277973
|
2302.10756
|
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Seismic acquisition footprints appear as faint, dim structures that are fully
spatially coherent, so suppressing them inevitably damages useful signals.
Various footprint removal methods, including
filtering and sparse representation (SR), have been reported to attain
promising results for surmounting this challenge. However, these methods, e.g.,
SR, rely solely on the handcrafted image priors of useful signals, which is
sometimes an unreasonable demand if complex geological structures are contained
in the given seismic data. As an alternative, this article proposes a footprint
removal network (dubbed FR-Net) for the unsupervised suppression of acquired
footprints without any assumptions regarding valuable signals. The key to the
FR-Net is to design a unidirectional total variation (UTV) model for footprint
acquisition according to the intrinsically directional property of noise. By
strongly regularizing a deep convolutional autoencoder (DCAE) using the UTV
model, our FR-Net transforms the DCAE from an entirely data-driven model to a
prior-augmented approach, inheriting the superiority of the
DCAE and our footprint model. Subsequently, the complete separation of the
footprint noise and useful signals is achieved in an unsupervised manner,
specifically by optimizing the FR-Net via the backpropagation (BP) algorithm.
We provide qualitative and quantitative evaluations conducted on three
synthetic and field datasets, demonstrating that our FR-Net surpasses the
previous state-of-the-art (SOTA) methods.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 07:46:28 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Qian",
"Feng",
""
],
[
"Yue",
"Yuehua",
""
],
[
"He",
"Yu",
""
],
[
"Yu",
"Hongtao",
""
],
[
"Zhou",
"Yingjie",
""
],
[
"Tang",
"Jinliang",
""
],
[
"Hu",
"Guangmin",
""
]
] |
new_dataset
| 0.995372 |
2305.01625
|
Uri Alon
|
Amanda Bertsch, Uri Alon, Graham Neubig, Matthew R. Gormley
|
Unlimiformer: Long-Range Transformers with Unlimited Length Input
|
Preprint
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since the proposal of transformers, these models have been limited to bounded
input lengths, because of their need to attend to every token in the input. In
this work, we propose Unlimiformer: a general approach that wraps any existing
pretrained encoder-decoder transformer, and offloads the cross-attention
computation to a single k-nearest-neighbor (kNN) index, while the returned kNN
distances are the attention dot-product scores. This kNN index can be kept on
either the GPU or CPU memory and queried in sub-linear time; this way, we can
index practically unlimited input sequences, while every attention head in
every decoder layer retrieves its top-k keys, instead of attending to every
key. We evaluate Unlimiformer on several long-document and book-summarization
benchmarks, showing that it can process even 500k token-long inputs from the
BookSum dataset, without any input truncation at test time. We demonstrate that
Unlimiformer improves pretrained models such as BART and Longformer by
extending them to unlimited inputs without additional learned weights and
without modifying their code. We make our code and models publicly available at
https://github.com/abertsch72/unlimiformer .
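A toy sketch of the retrieval-augmented cross-attention described above (our illustration, not the released code); exact top-k search stands in for an ANN index, and all shapes are arbitrary:

```python
# Store all encoder keys once, retrieve top-k per query, and attend only
# over the retrieved keys. A real system would replace the exact top-k
# with an approximate-nearest-neighbour index such as faiss.
import torch

def knn_cross_attention(query, keys, values, k=16):
    scores = keys @ query              # dot-product scores over all keys
    top = torch.topk(scores, k)        # kNN retrieval (exact here)
    attn = torch.softmax(top.values, dim=0)
    return attn @ values[top.indices]  # weighted sum of top-k values only

n, d = 100_000, 64                     # long input, bounded attention cost
keys, values = torch.randn(n, d), torch.randn(n, d)
out = knn_cross_attention(torch.randn(d), keys, values)
```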
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 17:35:08 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 17:21:24 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Bertsch",
"Amanda",
""
],
[
"Alon",
"Uri",
""
],
[
"Neubig",
"Graham",
""
],
[
"Gormley",
"Matthew R.",
""
]
] |
new_dataset
| 0.997962 |
2305.07969
|
Yutian Chen
|
Yutian Chen, Hao Kang, Vivian Zhai, Liangze Li, Rita Singh, Bhiksha
Raj
|
GPT-Sentinel: Distinguishing Human and ChatGPT Generated Content
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel approach for detecting ChatGPT-generated vs.
human-written text using language models. To this end, we first collected and
released a pre-processed dataset named OpenGPTText, which consists of rephrased
content generated using ChatGPT. We then designed, implemented, and trained two
different models for text classification, using Robustly Optimized BERT
Pretraining Approach (RoBERTa) and Text-to-Text Transfer Transformer (T5),
respectively. Our models achieved remarkable results, with an accuracy of over
97% on the test dataset, as evaluated through various metrics. Furthermore, we
conducted an interpretability study to showcase our model's ability to extract
and differentiate key features between human-written and ChatGPT-generated
text. Our findings provide important insights into the effective use of
language models to detect generated text.
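A minimal sketch of the RoBERTa-based classifier setup the abstract describes, using generic Hugging Face APIs; `roberta-base` is the stock checkpoint rather than the paper's trained detector, and the label meanings are assumptions:

```python
# Generic Hugging Face usage; the two labels are assumed to mean
# human-written vs. ChatGPT-generated after fine-tuning.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)
batch = tok(["an example passage to classify"],
            return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**batch).logits.softmax(-1)   # [p(human), p(generated)]
```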
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 17:12:11 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 18:21:03 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Chen",
"Yutian",
""
],
[
"Kang",
"Hao",
""
],
[
"Zhai",
"Vivian",
""
],
[
"Li",
"Liangze",
""
],
[
"Singh",
"Rita",
""
],
[
"Raj",
"Bhiksha",
""
]
] |
new_dataset
| 0.998393 |
2305.08018
|
Benjamin Gutteridge
|
Benjamin Gutteridge, Xiaowen Dong, Michael Bronstein, Francesco Di
Giovanni
|
DRew: Dynamically Rewired Message Passing with Delay
|
Accepted at ICML 2023; 16 pages
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Message passing neural networks (MPNNs) have been shown to suffer from the
phenomenon of over-squashing that causes poor performance for tasks relying on
long-range interactions. This can be largely attributed to message passing only
occurring locally, over a node's immediate neighbours. Rewiring approaches
attempting to make graphs 'more connected', and supposedly better suited to
long-range tasks, often lose the inductive bias provided by distance on the
graph since they make distant nodes communicate instantly at every layer. In
this paper we propose a framework, applicable to any MPNN architecture, that
performs a layer-dependent rewiring to ensure gradual densification of the
graph. We also propose a delay mechanism that permits skip connections between
nodes depending on the layer and their mutual distance. We validate our
approach on several long-range tasks and show that it outperforms graph
Transformers and multi-hop MPNNs.
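One way to picture the layer-dependent rewiring (our reading of the abstract, not the authors' code): at layer L, a node also aggregates from nodes up to graph distance L + 1, so the effective neighbourhood densifies gradually with depth.

```python
# Sketch: per-layer neighbourhoods that grow with depth.
import networkx as nx

def neighbours_at_layer(G, node, layer):
    dists = nx.single_source_shortest_path_length(G, node, cutoff=layer + 1)
    return sorted(v for v, d in dists.items() if 1 <= d <= layer + 1)

G = nx.path_graph(6)
for layer in range(3):
    print(layer, neighbours_at_layer(G, 0, layer))
# 0 [1]   1 [1, 2]   2 [1, 2, 3]
```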
|
[
{
"version": "v1",
"created": "Sat, 13 May 2023 22:47:40 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 12:41:56 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Gutteridge",
"Benjamin",
""
],
[
"Dong",
"Xiaowen",
""
],
[
"Bronstein",
"Michael",
""
],
[
"Di Giovanni",
"Francesco",
""
]
] |
new_dataset
| 0.960953 |
2305.08596
|
Youngjin Jin
|
Youngjin Jin, Eugene Jang, Jian Cui, Jin-Woo Chung, Yongjae Lee,
Seungwon Shin
|
DarkBERT: A Language Model for the Dark Side of the Internet
|
9 pages (main paper), 17 pages (including bibliography and appendix),
to appear at the ACL 2023 Main Conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research has suggested that there are clear differences in the
language used in the Dark Web compared to that of the Surface Web. As studies
on the Dark Web commonly require textual analysis of the domain, language
models specific to the Dark Web may provide valuable insights to researchers.
In this work, we introduce DarkBERT, a language model pretrained on Dark Web
data. We describe the steps taken to filter and compile the text data used to
train DarkBERT to combat the extreme lexical and structural diversity of the
Dark Web that may be detrimental to building a proper representation of the
domain. We evaluate DarkBERT and its vanilla counterpart along with other
widely used language models to validate the benefits that a Dark Web
domain-specific model offers in various use cases. Our evaluations show that DarkBERT
outperforms current language models and may serve as a valuable resource for
future research on the Dark Web.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 12:23:10 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 05:02:29 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Jin",
"Youngjin",
""
],
[
"Jang",
"Eugene",
""
],
[
"Cui",
"Jian",
""
],
[
"Chung",
"Jin-Woo",
""
],
[
"Lee",
"Yongjae",
""
],
[
"Shin",
"Seungwon",
""
]
] |
new_dataset
| 0.975948 |
2305.09740
|
Sanyam Jain
|
Sanyam Jain, Raju gautam, Shivani Sharma, and Ravi Tomar
|
Four Factor Authentication with emerging cybersecurity for Mobile
Transactions
| null | null |
10.1007/978-981-16-4149-7_35
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cybersecurity is essential for mobile transactions to complete seamlessly.
Mobile Commerce (Mcom) is the most basic transaction type and is very commonly
used (2 in 5 people use mobile as a transaction medium). To secure it, this
research combines several technologies. The four factors, together forming
Multi-Factor Authentication, are two traditional methods (user login-password
and One Time Password (OTP)) with the addition of geolocation and facial
recognition. All the data is converted to a text file, which is hidden in an
image (using the Babushka algorithm). The end-point then decrypts the image
using the same algorithm.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 18:24:03 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Jain",
"Sanyam",
""
],
[
"gautam",
"Raju",
""
],
[
"Sharma",
"Shivani",
""
],
[
"Tomar",
"Ravi",
""
]
] |
new_dataset
| 0.978598 |
2305.09864
|
Yiping Kang
|
Jason Mars, Yiping Kang, Roland Daynauth, Baichuan Li, Ashish
Mahendra, Krisztian Flautner, Lingjia Tang
|
The Jaseci Programming Paradigm and Runtime Stack: Building Scale-out
Production Applications Easy and Fast
| null | null | null | null |
cs.CL cs.DC cs.PL cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Today's production scale-out applications include many sub-application
components, such as storage backends, logging infrastructure and AI models.
These components have drastically different characteristics, are required to
work in collaboration, and interface with each other as microservices. This
leads to increasingly high complexity in developing, optimizing, configuring,
and deploying scale-out applications, raising the barrier to entry for most
individuals and small teams. We developed a novel co-designed runtime system,
Jaseci, and programming language, Jac, which together aim to reduce this complexity.
The key principle throughout Jaseci's design is to raise the level of
abstraction by moving as much of the scale-out data management, microservice
componentization, and live-update complexity as possible into the runtime stack,
where it is automated and optimized. We use real-world AI applications to
demonstrate Jaseci's benefit for application performance and developer
productivity.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 00:34:36 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Mars",
"Jason",
""
],
[
"Kang",
"Yiping",
""
],
[
"Daynauth",
"Roland",
""
],
[
"Li",
"Baichuan",
""
],
[
"Mahendra",
"Ashish",
""
],
[
"Flautner",
"Krisztian",
""
],
[
"Tang",
"Lingjia",
""
]
] |
new_dataset
| 0.987843 |
2305.09961
|
Sid Chi-Kin Chau
|
Mingyu Hao, Keyang Qian, Sid Chi-Kin Chau
|
Blockchain-enabled Parametric Solar Energy Insurance via Remote Sensing
|
To appear in ACM e-Energy 2023
| null |
10.1145/3575813.3576880
| null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite its popularity, the nature of solar energy is highly uncertain and
weather dependent, affecting the business viability and investment of solar
energy generation, especially for household users. To stabilize the income from
solar energy generation, there have been limited traditional options, such as
using energy storage to pool excessive solar energy in off-peak periods or
financial derivatives from future markets to hedge energy prices. In this
paper, we explore a novel idea of "parametric solar energy insurance", by which
solar panel owners can insure their solar energy generation based on a
verifiable geographically specific index (surface solar irradiation).
Parametric solar energy insurance offers opportunities for financial subsidies
for insufficient solar energy generation and amortizes the fluctuations of
renewable energy generation geographically. Furthermore, we propose to leverage
blockchain and remote sensing (satellite imagery) to provide a publicly
verifiable platform for solar energy insurance, which not only automates the
underwriting and claims of a solar energy insurance policy, but also improves
its accountability and transparency. We utilize the state-of-the-art succinct
zero-knowledge proofs (zk-SNARK) to realize privacy-preserving blockchain-based
solar energy insurance on the real-world permissionless blockchain platform
Ethereum.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 05:41:35 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 00:46:39 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Hao",
"Mingyu",
""
],
[
"Qian",
"Keyang",
""
],
[
"Chau",
"Sid Chi-Kin",
""
]
] |
new_dataset
| 0.998894 |
2305.10395
|
Yoonchang Sung
|
Yoonchang Sung, Peter Stone
|
Motion Planning (In)feasibility Detection using a Prior Roadmap via Path
and Cut Search
|
18 pages, 19 figures, Published in Robotics: Science and Systems
(RSS), 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Motion planning seeks a collision-free path in a configuration space
(C-space), representing all possible robot configurations in the environment.
As it is challenging to construct a C-space explicitly for a high-dimensional
robot, we generally build a graph structure called a roadmap, a discrete
approximation of a complex continuous C-space, to reason about connectivity.
Checking collision-free connectivity in the roadmap requires expensive
edge-evaluation computations, and thus, reducing the number of evaluations has
become a significant research objective. However, in practice, we often face
infeasible problems: those in which there is no collision-free path in the
roadmap between the start and the goal locations. Existing studies often
overlook the possibility of infeasibility, becoming highly inefficient by
performing many edge evaluations.
In this work, we address this oversight in scenarios where a prior roadmap is
available; that is, the edges of the roadmap contain the probability of being a
collision-free edge learned from past experience. To this end, we propose an
algorithm called iterative path and cut finding (IPC) that iteratively searches
for a path and a cut in a prior roadmap to detect infeasibility while reducing
expensive edge evaluations as much as possible. We further improve the
efficiency of IPC by introducing a second algorithm, iterative decomposition
and path and cut finding (IDPC), that leverages the fact that cut-finding
algorithms partition the roadmap into smaller subgraphs. We analyze the
theoretical properties of IPC and IDPC, such as completeness and computational
complexity, and evaluate their performance in terms of completion time and the
number of edge evaluations in large-scale simulations.
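A rough sketch of the path-and-cut alternation under our assumptions (not the authors' IPC implementation): repeatedly search the most probable path, run the expensive edge evaluations only on it, and declare infeasibility once the evaluated-infeasible edges form a cut. `evaluate_edge` is a placeholder for the collision checker.

```python
# Edges of the prior roadmap carry p = probability of being
# collision-free; evaluate_edge stands in for the expensive checker.
import math
import networkx as nx

def ipc_like(G, start, goal, evaluate_edge):
    while True:
        if not nx.has_path(G, start, goal):
            return None                 # infeasible: removed edges form a cut
        path = nx.shortest_path(        # most probable path: min sum of -log p
            G, start, goal, weight=lambda u, v, d: -math.log(d["p"]))
        bad = [(u, v) for u, v in zip(path, path[1:]) if not evaluate_edge(u, v)]
        if not bad:
            return path                 # all edges verified collision-free
        G.remove_edges_from(bad)        # discard and try the next candidate
```

A real implementation would also cache edges already verified as collision-free so they are never re-evaluated.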
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 17:36:22 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 17:33:40 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Sung",
"Yoonchang",
""
],
[
"Stone",
"Peter",
""
]
] |
new_dataset
| 0.979354 |
2305.10433
|
Abeer AlDayel
|
Huriyyah Althunayan, Rahaf Bahlas, Manar Alharbi, Lena Alsuwailem,
Abeer Aldayel, Rehab ALahmadi
|
Toxicity Inspector: A Framework to Evaluate Ground Truth in Toxicity
Detection Through Feedback
|
To appear in Workshop on 2nd Workshop on Novel Evaluation Approaches
for Text Classification Systems (NEATCLasS-2023).ICWSM, AAAI, 2023
| null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Toxic language is difficult to define, as it is not monolithic and has many
variations in perceptions of toxicity. This challenge of detecting toxic
language is increased by the highly contextual nature and subjectivity of its
interpretation, which can degrade the reliability of datasets and negatively
affect detection model performance. To fill this void, this paper introduces a
toxicity inspector framework that incorporates a human-in-the-loop pipeline
with the aim of enhancing the reliability of toxicity benchmark datasets by
centering the evaluator's values through an iterative feedback cycle. The
centerpiece of this framework is the iterative feedback process, which is
guided by two metric types (hard and soft) that provide evaluators and dataset
creators with insightful examination to balance the tradeoff between
performance gains and toxicity avoidance.
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 11:56:42 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Althunayan",
"Huriyyah",
""
],
[
"Bahlas",
"Rahaf",
""
],
[
"Alharbi",
"Manar",
""
],
[
"Alsuwailem",
"Lena",
""
],
[
"Aldayel",
"Abeer",
""
],
[
"ALahmadi",
"Rehab",
""
]
] |
new_dataset
| 0.986176 |
2305.10441
|
Miao Ye
|
Jinqiang Li, Miao Ye, Linqiang Huang, Xiaofang Deng, Hongbing Qiu and
Yong Wang
|
An Intelligent SDWN Routing Algorithm Based on Network Situational
Awareness and Deep Reinforcement Learning
| null | null | null | null |
cs.NI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the highly dynamic changes in wireless network topologies, efficiently
obtaining network status information and flexibly forwarding data to improve
communication quality of service are important challenges. This article
introduces an intelligent routing algorithm (DRL-PPONSA) based on proximal
policy optimization deep reinforcement learning with network situational
awareness under a software-defined wireless networking architecture. First, a
specific data plane is designed for network topology construction and data
forwarding. The control plane collects network traffic information, sends flow
tables, and uses a GCN-GRU prediction mechanism to perceive future traffic
change trends to achieve network situational awareness. Second, a DRL-based
data forwarding mechanism is designed in the knowledge plane. The predicted
network traffic matrix and topology information matrix are treated as the
environment for DRL agents, while next-hop adjacent nodes are treated as
executable actions. Accordingly, action selection strategies are designed for
different network conditions to achieve more intelligent, flexible, and
efficient routing control. The reward function is designed using network link
information and various reward and penalty mechanisms. Additionally, importance
sampling and gradient clipping techniques are employed during gradient updating
to enhance convergence speed and stability. Experimental results show that
DRL-PPONSA outperforms traditional routing methods in network throughput,
delay, packet loss rate, and wireless node distance. Compared to
value-function-based Dueling DQN routing, the convergence speed is
significantly improved, and the convergence effect is more stable.
Simultaneously, its consumption of hardware storage space is reduced, and
efficient routing decisions can be made in real-time using the current network
state information.
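For reference, the two stabilisation tricks named in the abstract, importance sampling with a clipped surrogate and gradient clipping, in their textbook PyTorch form (not the authors' code):

```python
# Textbook PPO clipped surrogate plus gradient-norm clipping.
import torch

def ppo_loss(ratio, advantage, eps=0.2):
    # ratio = pi_new(a|s) / pi_old(a|s), the importance-sampling weight
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.min(ratio * advantage, clipped * advantage).mean()

# after loss.backward(), cap the gradient norm before optimizer.step():
# torch.nn.utils.clip_grad_norm_(policy.parameters(), max_norm=0.5)
```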
|
[
{
"version": "v1",
"created": "Fri, 12 May 2023 14:18:09 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Li",
"Jinqiang",
""
],
[
"Ye",
"Miao",
""
],
[
"Huang",
"Linqiang",
""
],
[
"Deng",
"Xiaofang",
""
],
[
"Qiu",
"Hongbing",
""
],
[
"Wang",
"Yong",
""
]
] |
new_dataset
| 0.998929 |
2305.10462
|
Ying-Tian Liu
|
Ying-Tian Liu, Zhifei Zhang, Yuan-Chen Guo, Matthew Fisher, Zhaowen
Wang, Song-Hai Zhang
|
DualVector: Unsupervised Vector Font Synthesis with Dual-Part
Representation
|
CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic generation of fonts can be an important aid to typeface design.
Many current approaches regard glyphs as pixelated images, which present
artifacts when scaling and inevitable quality losses after vectorization. On
the other hand, existing vector font synthesis methods either fail to represent
the shape concisely or require vector supervision during training. To push the
quality of vector font synthesis to the next level, we propose a novel
dual-part representation for vector glyphs, where each glyph is modeled as a
collection of closed "positive" and "negative" path pairs. The glyph contour is
then obtained by boolean operations on these paths. We first learn such a
representation only from glyph images and devise a subsequent contour
refinement step to align the contour with an image representation to further
enhance details. Our method, named DualVector, outperforms state-of-the-art
methods in vector font synthesis both quantitatively and qualitatively. Our
synthesized vector fonts can be easily converted to common digital font formats
like TrueType Font for practical use. The code is released at
https://github.com/thuliu-yt16/dualvector.
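A toy illustration of the dual-part representation (hypothetical shapes built with Shapely, not the paper's learned paths): the glyph is the union of the positive closed paths minus the union of the negative ones.

```python
# A ring as one "positive" disc minus one "negative" disc.
from shapely.geometry import Point
from shapely.ops import unary_union

positive = [Point(0, 0).buffer(1.0)]    # outer disc
negative = [Point(0, 0).buffer(0.5)]    # hole carved out of it
glyph = unary_union(positive).difference(unary_union(negative))
print(glyph.area)                       # ~ pi * (1.0**2 - 0.5**2)
```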
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 08:18:06 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Liu",
"Ying-Tian",
""
],
[
"Zhang",
"Zhifei",
""
],
[
"Guo",
"Yuan-Chen",
""
],
[
"Fisher",
"Matthew",
""
],
[
"Wang",
"Zhaowen",
""
],
[
"Zhang",
"Song-Hai",
""
]
] |
new_dataset
| 0.952584 |
2305.10507
|
Hao Shao
|
Hao Shao, Letian Wang, Ruobing Chen, Steven L. Waslander, Hongsheng
Li, Yu Liu
|
ReasonNet: End-to-End Driving with Temporal and Global Reasoning
|
CVPR 2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The large-scale deployment of autonomous vehicles is yet to come, and one of
the major remaining challenges lies in urban dense traffic scenarios. In such
cases, it remains challenging to predict the future evolution of the scene and
future behaviors of objects, and to deal with rare adverse events such as the
sudden appearance of occluded objects. In this paper, we present ReasonNet, a
novel end-to-end driving framework that extensively exploits both temporal and
global information of the driving scene. By reasoning on the temporal behavior
of objects, our method can effectively process the interactions and
relationships among features in different frames. Reasoning about the global
information of the scene can also improve overall perception performance and
benefit the detection of adverse events, especially the anticipation of
potential danger from occluded objects. For comprehensive evaluation on
occlusion events, we also release publicly a driving simulation benchmark
DriveOcclusionSim consisting of diverse occlusion events. We conduct extensive
experiments on multiple CARLA benchmarks, where our model outperforms all prior
methods, ranking first on the sensor track of the public CARLA Leaderboard.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 18:24:43 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Shao",
"Hao",
""
],
[
"Wang",
"Letian",
""
],
[
"Chen",
"Ruobing",
""
],
[
"Waslander",
"Steven L.",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Liu",
"Yu",
""
]
] |
new_dataset
| 0.99912 |
2305.10512
|
Denis Kuznetsov
|
Moskvoretskii Viktor, Frolov Anton, Kuznetsov Denis
|
IMAD: IMage-Augmented multi-modal Dialogue
|
Main part contains 6 pages, 4 figures. It was accepted at AINL. We
await the publication and DOI
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Currently, dialogue systems have achieved high performance in processing
text-based communication. However, they have not yet effectively incorporated
visual information, which poses a significant challenge. Furthermore, existing
models that incorporate images in dialogue generation focus on discussing the
image itself. Our proposed approach presents a novel perspective on multi-modal
dialogue systems, which interprets the image in the context of the dialogue. By
doing so, we aim to expand the capabilities of current dialogue systems and
transition them from single modality (text) to multi-modality. However, there
is a lack of validated English datasets that contain both images and dialogue
contexts for this task. Thus, we propose a two-stage approach to automatically
construct a multi-modal dialogue dataset. In the first stage, we utilize
text-to-image similarity and sentence similarity to identify which utterances
could be replaced with an image. In the second stage, we replace those
utterances by selecting a subset of relevant images and filtering them with a
visual question answering model. We used this approach, along with additional
labeling, to create the IMage Augmented multi-modal Dialogue dataset (IMAD),
which can serve as a validated dataset for this task. Furthermore, we propose a
baseline model trained on this dataset, which outperforms both a model trained
on the same data without images and BlenderBot.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 18:38:10 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Viktor",
"Moskvoretskii",
""
],
[
"Anton",
"Frolov",
""
],
[
"Denis",
"Kuznetsov",
""
]
] |
new_dataset
| 0.999528 |
2305.10554
|
Fabio Palmese
|
Fabio Palmese, Alessandro E. C. Redondi
|
Collecting Channel State Information in Wi-Fi Access Points for IoT
Forensics
|
Paper accepted for publication at conference Mediterranean
Communication and Computer Networking Conference (MedComNet 2023)
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet of Things (IoT) has boomed in recent years, with an ever-growing
number of connected devices and a corresponding exponential increase in network
traffic. As a result, IoT devices have become potential witnesses of the
surrounding environment and people living in it, creating a vast new source of
forensic evidence. To address this need, a new field called IoT Forensics has
emerged. In this paper, we present \textit{CSI Sniffer}, a tool that integrates
the collection and management of Channel State Information (CSI) in Wi-Fi
Access Points. CSI is a physical layer indicator that enables human sensing,
including occupancy monitoring and activity recognition. After a description of
the tool architecture and implementation, we demonstrate its capabilities
through two application scenarios that use binary classification techniques to
classify user behavior based on CSI features extracted from IoT traffic. Our
results show that the proposed tool can enhance the capabilities of forensic
investigations by providing additional sources of evidence. Wi-Fi Access Points
integrated with \textit{CSI Sniffer} can be used by ISP or network managers to
facilitate the collection of information from IoT devices and the surrounding
environment. We conclude the work by analyzing the storage requirements of CSI
sample collection and discussing the impact of lossy compression techniques on
classification performance.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 20:14:37 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Palmese",
"Fabio",
""
],
[
"Redondi",
"Alessandro E. C.",
""
]
] |
new_dataset
| 0.963332 |
2305.10580
|
Juncheng Li
|
Juncheng Li, David J. Cappelleri
|
Sim-MEES: Modular End-Effector System Grasping Dataset for Mobile
Manipulators in Cluttered Environments
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present Sim-MEES: a large-scale synthetic dataset that
contains 1,550 objects with varying difficulty levels and physics properties,
as well as 11 million grasp labels for mobile manipulators to plan grasps using
different gripper modalities in cluttered environments. Our dataset generation
process combines analytic models and dynamic simulations of the entire
cluttered environment to provide accurate grasp labels. We provide a detailed
study of our proposed labeling process for both parallel jaw grippers and
suction cup grippers, comparing them with state-of-the-art methods to
demonstrate how Sim-MEES can provide precise grasp labels in cluttered
environments.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 21:40:26 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Li",
"Juncheng",
""
],
[
"Cappelleri",
"David J.",
""
]
] |
new_dataset
| 0.999158 |
2305.10621
|
Yulin Sun
|
Yulin Sun, Qingming Qu, Chenxingyu Zhao, Arvind Krishnamurthy, Hong
Chang, Ying Xiong
|
TSoR: TCP Socket over RDMA Container Network for Cloud Native Computing
| null | null | null | null |
cs.NI cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Cloud-native containerized applications constantly seek high-performance and
easy-to-operate container network solutions. RDMA network is a potential
enabler with higher throughput and lower latency than the standard TCP/IP
network stack. However, several challenges remain in equipping containerized
applications with RDMA network: 1) How to deliver transparent improvements
without modifying application code; 2) How to integrate RDMA-based network
solutions with container orchestration systems; 3) How to efficiently utilize
RDMA for container networks.
In this paper, we present an RDMA-based container network solution, TCP
Socket over RDMA (TSoR), which addresses all the above challenges. To
transparently accelerate applications using POSIX socket interfaces without
modifications, we integrate TSoR with a container runtime that can intercept
system calls for socket interfaces. To be compatible with orchestration systems
like Kubernetes, TSoR implements a container network following the Kubernetes
network model and satisfies all requirements of the model. To leverage RDMA
benefits, TSoR designs a high-performance network stack that efficiently
transfers TCP traffic using RDMA network. Thus, TSoR provides a turn-key
solution for existing Kubernetes clusters to adopt the high-performance RDMA
network with minimal effort.
Our evaluation results show that TSoR provides up to 2.3x higher throughput
and 64% lower latency for existing containerized applications, such as Redis
key-value store and Node.js web server, with no code changes. TSoR code will be
open-sourced.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 00:20:56 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Sun",
"Yulin",
""
],
[
"Qu",
"Qingming",
""
],
[
"Zhao",
"Chenxingyu",
""
],
[
"Krishnamurthy",
"Arvind",
""
],
[
"Chang",
"Hong",
""
],
[
"Xiong",
"Ying",
""
]
] |
new_dataset
| 0.999601 |
2305.10661
|
Yitong Li
|
Yitong Li, Chang Liu, Jie Ma
|
Scribble-Supervised Target Extraction Method Based on Inner
Structure-Constraint for Remote Sensing Images
|
5 pages, 4 figures, 1 table
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weakly supervised learning based on scribble annotations in target extraction
of remote sensing images has drawn much interest due to scribbles' flexibility
in denoting winding objects and the low cost of manual labeling. However,
scribbles are too sparse to identify object structure and detailed information,
bringing great challenges in target localization and boundary description. To
alleviate these problems, in this paper, we construct two inner
structure-constraints, a deformation consistency loss and a trainable active
contour loss, together with a scribble-constraint to supervise the optimization
of the encoder-decoder network without introducing any auxiliary module or
extra operation based on prior cues. Comprehensive experiments demonstrate our
method's superiority over five state-of-the-art algorithms in this field.
Source code is available at https://github.com/yitongli123/ISC-TE.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 02:49:07 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Li",
"Yitong",
""
],
[
"Liu",
"Chang",
""
],
[
"Ma",
"Jie",
""
]
] |
new_dataset
| 0.982137 |
2305.10679
|
Xin-Ye Li
|
Xin-Ye Li, Jiang-Tian Xue, Zheng Xie and Ming Li
|
Think Outside the Code: Brainstorming Boosts Large Language Models in
Code Generation
|
13 pages, 5 figures
| null | null | null |
cs.AI cs.CL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Code generation aims to automatically generate source code from high-level
task specifications, which can significantly increase productivity of software
engineering. Recently, approaches based on large language models (LLMs) have
shown remarkable code generation abilities on simple tasks. However, generating
code for more complex tasks, such as competition-level problems, remains
challenging. In this paper, we introduce the Brainstorm framework for code
generation. It leverages a brainstorming step that generates and selects
diverse thoughts on the problem to facilitate algorithmic reasoning, where each
thought is a possible blueprint for solving the problem. We demonstrate that
Brainstorm significantly enhances the ability of LLMs to solve
competition-level programming problems, resulting in a more than 50% increase
in the pass@$k$ metrics for ChatGPT on the CodeContests benchmark, achieving
state-of-the-art performance. Furthermore, our experiments conducted on
LeetCode contests show that our framework boosts the ability of ChatGPT to a
level comparable to that of human programmers.
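Schematically, the brainstorm-then-generate loop might look as follows; `llm` is a placeholder for any chat-completion call, and the prompts and numeric-scoring scheme are our assumptions, not the paper's:

```python
# Schematic only: generate diverse thoughts, pick the most promising,
# then condition the final code generation on it.
def brainstorm_codegen(problem, llm, n_thoughts=5):
    thoughts = [llm(f"Propose one solution idea for:\n{problem}")
                for _ in range(n_thoughts)]
    best = max(thoughts, key=lambda t: float(
        llm(f"Rate 0-10 how promising this idea is for:\n{problem}\n{t}")))
    return llm(f"Write a program for:\n{problem}\nFollow this plan:\n{best}")
```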
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 03:32:54 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Li",
"Xin-Ye",
""
],
[
"Xue",
"Jiang-Tian",
""
],
[
"Xie",
"Zheng",
""
],
[
"Li",
"Ming",
""
]
] |
new_dataset
| 0.966965 |
2305.10726
|
Tosin Ige
|
Amos Okomayin, Tosin Ige
|
Ambient Technology & Intelligence
|
10 pages
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Today, we have a mixture of young and older individuals, people with special
needs, and people who can care for themselves. Over 1 billion people are
estimated to be disabled; this figure corresponds to about 15% of the world's
population, with 3.8% (approximately 190 million people) accounting for people
aged 15 and up (Organization, 2011). The number of people with disabilities is
trending upward due to the increase in chronic health conditions, among other factors.
These and other factors have made the need for proper care facilities urgent in
today's society. Several care facilities are built to help people with
disabilities live their everyday lives and not be left out of the community.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 05:55:41 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Okomayin",
"Amos",
""
],
[
"Ige",
"Tosin",
""
]
] |
new_dataset
| 0.999357 |
2305.10740
|
Maddie Cusimano
|
Benjamin Hoffman, Maddie Cusimano, Vittorio Baglione, Daniela
Canestrari, Damien Chevallier, Dominic L. DeSantis, Lor\`ene Jeantet, Monique
A. Ladds, Takuya Maekawa, Vicente Mata-Silva, V\'ictor Moreno-Gonz\'alez, Eva
Trapote, Outi Vainio, Antti Vehkaoja, Ken Yoda, Katherine Zacarian, Ari
Friedlaender, Christian Rutz
|
A benchmark for computational analysis of animal behavior, using
animal-borne tags
|
32 pages, 11 figures. For associated code repositories, see
https://github.com/earthspecies/BEBE/ and
https://github.com/earthspecies/BEBE-datasets/ . For data repository, see
https://zenodo.org/record/7947104
| null | null | null |
cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Animal-borne sensors ('bio-loggers') can record a suite of kinematic and
environmental data, which can elucidate animal ecophysiology and improve
conservation efforts. Machine learning techniques are useful for interpreting
the large amounts of data recorded by bio-loggers, but there exists no standard
for comparing the different machine learning techniques in this domain. To
address this, we present the Bio-logger Ethogram Benchmark (BEBE), a collection
of datasets with behavioral annotations, standardized modeling tasks, and
evaluation metrics. BEBE is to date the largest, most taxonomically diverse,
publicly available benchmark of this type, and includes 1654 hours of data
collected from 149 individuals across nine taxa. We evaluate the performance of
ten different machine learning methods on BEBE, and identify key challenges to
be addressed in future work. Datasets, models, and evaluation code are made
publicly available at https://github.com/earthspecies/BEBE, to enable community
use of BEBE as a point of comparison in methods development.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 06:20:45 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Hoffman",
"Benjamin",
""
],
[
"Cusimano",
"Maddie",
""
],
[
"Baglione",
"Vittorio",
""
],
[
"Canestrari",
"Daniela",
""
],
[
"Chevallier",
"Damien",
""
],
[
"DeSantis",
"Dominic L.",
""
],
[
"Jeantet",
"Lorène",
""
],
[
"Ladds",
"Monique A.",
""
],
[
"Maekawa",
"Takuya",
""
],
[
"Mata-Silva",
"Vicente",
""
],
[
"Moreno-González",
"Víctor",
""
],
[
"Trapote",
"Eva",
""
],
[
"Vainio",
"Outi",
""
],
[
"Vehkaoja",
"Antti",
""
],
[
"Yoda",
"Ken",
""
],
[
"Zacarian",
"Katherine",
""
],
[
"Friedlaender",
"Ari",
""
],
[
"Rutz",
"Christian",
""
]
] |
new_dataset
| 0.995429 |
2305.10741
|
Krishna Gopal Benerjee
|
Krishna Gopal Benerjee and Adrish Banerjee
|
Bounds on Size of Homopolymer Free Codes
|
The work is accepted in IEEE International Symposium on Information
Theory (ISIT) 2023
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
For any given alphabet of size $q$, a Homopolymer Free code (HF code) refers
to an $(n, M, d)_q$ code of length $n$, size $M$ and minimum Hamming distance
$d$, where all the codewords are homopolymer free sequences. For any given
alphabet, this work provides upper and lower bounds on the maximum size of any
HF code using Sphere Packing bound and Gilbert-Varshamov bound. Further, upper
and lower bounds on the maximum size of HF codes for various HF code families
are calculated. Also, as a specific case, upper and lower bounds are obtained
on the maximum size of homopolymer free DNA codes.
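Back-of-the-envelope versions of the two bounds are easy to compute: a homopolymer-free string over q symbols has q(q-1)^(n-1) choices, a greedy Gilbert-Varshamov argument lower-bounds M by |S| / V(d-1), and sphere packing upper-bounds it by q^n / V((d-1)/2). These are the standard, looser forms, not the paper's refined bounds.

```python
# Standard, looser forms of the two bounds, not the paper's refined ones.
from math import comb

def hf_count(q, n):            # homopolymer-free strings: q * (q-1)^(n-1)
    return q * (q - 1) ** (n - 1)

def ball(q, n, r):             # Hamming-ball volume in the full space
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def gv_lower(q, n, d):         # greedy/GV-style: M >= |S| / V(d-1)
    return -(-hf_count(q, n) // ball(q, n, d - 1))

def sp_upper(q, n, d):         # sphere packing: M <= q^n / V((d-1)//2)
    return q ** n // ball(q, n, (d - 1) // 2)

print(gv_lower(4, 8, 3), sp_upper(4, 8, 3))   # DNA alphabet, q = 4
```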
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 06:21:05 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Benerjee",
"Krishna Gopal",
""
],
[
"Banerjee",
"Adrish",
""
]
] |
new_dataset
| 0.999674 |
2305.10763
|
Zhenhui Ye
|
Zhenhui Ye, Rongjie Huang, Yi Ren, Ziyue Jiang, Jinglin Liu, Jinzheng
He, Xiang Yin, Zhou Zhao
|
CLAPSpeech: Learning Prosody from Text Context with Contrastive
Language-Audio Pre-training
|
Accepted by ACL 2023 (Main Conference)
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Improving text representation has attracted much attention to achieve
expressive text-to-speech (TTS). However, existing works only implicitly learn
the prosody with masked token reconstruction tasks, which leads to low training
efficiency and difficulty in prosody modeling. We propose CLAPSpeech, a
cross-modal contrastive pre-training framework that explicitly learns the
prosody variance of the same text token under different contexts. Specifically,
1) We encourage the model to connect the text context with its corresponding
prosody pattern in the joint multi-modal space with the elaborate design of the
encoder inputs and contrastive loss; 2) We introduce a multi-scale pre-training
pipeline to capture prosody patterns in multiple levels. We show how to
incorporate CLAPSpeech into existing TTS models for better prosody. Experiments
on three datasets not only show that CLAPSpeech could improve the prosody
prediction for existing TTS methods, but also demonstrate its generalization
ability to adapt to multiple languages and multi-speaker TTS. We also deeply
analyze the principle behind the performance of CLAPSpeech. Ablation studies
demonstrate the necessity of each component in our method. Source code and
audio samples are available at https://clapspeech.github.io.
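The contrastive objective underlying this kind of cross-modal pre-training is the standard CLIP-style InfoNCE loss; a generic PyTorch sketch (not the authors' code, and the embedding dimensions are arbitrary):

```python
# Symmetric InfoNCE between text and prosody embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb, prosody_emb, temperature=0.07):
    text = F.normalize(text_emb, dim=-1)      # (B, D)
    pros = F.normalize(prosody_emb, dim=-1)   # (B, D)
    logits = text @ pros.t() / temperature    # (B, B) similarities
    labels = torch.arange(len(text))          # matched pairs on the diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```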
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 07:07:04 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Ye",
"Zhenhui",
""
],
[
"Huang",
"Rongjie",
""
],
[
"Ren",
"Yi",
""
],
[
"Jiang",
"Ziyue",
""
],
[
"Liu",
"Jinglin",
""
],
[
"He",
"Jinzheng",
""
],
[
"Yin",
"Xiang",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.996566 |
2305.10785
|
Bo Lin
|
Bo Lin, Shangwen Wang, Zhongxin Liu, Yepang Liu, Xin Xia and Xiaoguang
Mao
|
CCT5: A Code-Change-Oriented Pre-Trained Model
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software is constantly changing, requiring developers to perform several
derived tasks in a timely manner, such as writing a description for the
intention of the code change, or identifying the defect-prone code changes.
Considering that the cost of dealing with these tasks can account for a large
proportion (typically around 70 percent) of the total development expenditure,
automating such processes will significantly lighten the burdens of developers.
To achieve such a target, existing approaches mainly rely on training deep
learning models from scratch or fine-tuning existing pretrained models on such
tasks, both of which have weaknesses. Specifically, the former uses
comparatively small-scale labelled data for training, making it difficult to
learn and exploit the domain knowledge of programming languages hidden in the
large amount of unlabelled code in the wild; the latter struggles to fully leverage
the learned knowledge of the pre-trained model, as existing pre-trained models
are designed to encode a single code snippet rather than a code change (i.e.,
the difference between two code snippets). We propose to pre-train a model
specially designed for code changes to better support developers in software
maintenance. To this end, we first collect a large-scale dataset containing
1.5M+ pairwise data of code changes and commit messages. Based on these data,
we curate five different tasks for pre-training, which equip the model with
diverse domain knowledge about code changes. We fine-tune the pre-trained
model, CCT5, on three widely-studied tasks incurred by code changes and two
tasks specific to the code review process. Results show that CCT5 outperforms
both conventional deep learning approaches and existing pre-trained models on
these tasks.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 07:55:37 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Lin",
"Bo",
""
],
[
"Wang",
"Shangwen",
""
],
[
"Liu",
"Zhongxin",
""
],
[
"Liu",
"Yepang",
""
],
[
"Xia",
"Xin",
""
],
[
"Mao",
"Xiaoguang",
""
]
] |
new_dataset
| 0.997348 |
2305.10791
|
Yiling He
|
Yu Chen, Yiling He
|
BrutePrint: Expose Smartphone Fingerprint Authentication to Brute-force
Attack
| null | null | null | null |
cs.CR cs.SY eess.SY
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Fingerprint authentication has been widely adopted on smartphones to
complement traditional password authentication, making it a tempting target for
attackers. The smartphone industry is fully aware of existing threats, and
especially for the presentation attack studied by most prior works, the threats
are nearly eliminated by liveness detection and attempt limit. In this paper,
we study the seemingly impossible fingerprint brute-force attack on
off-the-shelf smartphones and propose a generic attack framework. We implement
BrutePrint to automate the attack, which acts as a middleman to bypass the attempt
limit and hijack fingerprint images. Specifically, the bypassing exploits two
zero-day vulnerabilities in smartphone fingerprint authentication (SFA)
framework, and the hijacking leverages the simplicity of SPI protocol.
Moreover, we consider a practical cross-device attack scenario and tackle the
liveness and matching problems with neural style transfer (NST), which we also
use to generate valid brute-forcing inputs from arbitrary fingerprint images. A
case study shows that we always bypass liveness detection and the attempt limit
while 71% of spoofs are accepted. We evaluate BrutePrint on 10 representative
smartphones from top-5 vendors and 3 typical types of applications involving
screen lock, payment, and privacy. As all of them are vulnerable to some
extent, the fingerprint brute-force attack is validated on all devices except
iPhone, where the shortest time to unlock
the smartphone without prior knowledge about the victim is estimated at 40
minutes. Furthermore, we suggest software and hardware mitigation measures.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 08:04:20 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Chen",
"Yu",
""
],
[
"He",
"Yiling",
""
]
] |
new_dataset
| 0.997921 |
2305.10854
|
Xiyu Zhang
|
Xiyu Zhang, Jiaqi Yang, Shikun Zhang and Yanning Zhang
|
3D Registration with Maximal Cliques
|
CVPR 2023 best paper candidate
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
As a fundamental problem in computer vision, 3D point cloud registration
(PCR) aims to seek the optimal pose to align a point cloud pair. In this paper,
we present a 3D registration method with maximal cliques (MAC). The key insight
is to loosen the previous maximum clique constraint, and mine more local
consensus information in a graph for accurate pose hypotheses generation: 1) A
compatibility graph is constructed to render the affinity relationship between
initial correspondences. 2) We search for maximal cliques in the graph, each of
which represents a consensus set. We perform node-guided clique selection then,
where each node corresponds to the maximal clique with the greatest graph
weight. 3) Transformation hypotheses are computed for the selected cliques by
the SVD algorithm and the best hypothesis is used to perform registration.
Extensive experiments on U3M, 3DMatch, 3DLoMatch and KITTI demonstrate that MAC
effectively increases registration accuracy, outperforms various
state-of-the-art methods and boosts the performance of deep-learned methods.
MAC combined with deep-learned methods achieves state-of-the-art registration
recall of 95.7% / 78.9% on 3DMatch / 3DLoMatch.
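The pipeline's two main primitives are easy to sketch (our illustration, not the released code): maximal-clique enumeration on a boolean compatibility matrix, and the SVD/Kabsch solve that turns a consensus set of matched points into a rigid transform.

```python
# Maximal cliques on a compatibility graph + Kabsch rigid-transform solve.
import numpy as np
import networkx as nx

def maximal_cliques(compat):             # compat: boolean (n, n) matrix
    comp = compat.copy().astype(int)
    np.fill_diagonal(comp, 0)            # drop self-loops before clique search
    return list(nx.find_cliques(nx.from_numpy_array(comp)))

def kabsch(P, Q):                        # P, Q: (m, 3) matched 3D points
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                   # proper rotation (det = +1)
    return R, Q.mean(0) - R @ P.mean(0)  # rotation and translation
```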
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 10:15:44 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Zhang",
"Xiyu",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Zhang",
"Shikun",
""
],
[
"Zhang",
"Yanning",
""
]
] |
new_dataset
| 0.951375 |
2305.10866
|
Wei Xiang
|
Wei Xiang and Chao Liang and Bang Wang
|
TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse
Relation Recognition
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Implicit Discourse Relation Recognition (IDRR) aims at classifying the
relation sense between two arguments without an explicit connective. Recently,
the ConnPrompt model (Xiang et al., COLING 2022) has leveraged the powerful prompt
learning for IDRR based on the fusion of multi-prompt decisions from three
different yet much similar connective prediction templates. Instead of
multi-prompt ensembling, we propose to design auxiliary tasks with enlightened
prompt learning for the IDRR task. Although an auxiliary task is not used to
directly output final prediction, we argue that during the joint training some
of its learned features can be useful to boost the main task. In light of such
motivations, we propose a task enlightenment prompt learning model, called
TEPrompt, to fuse learned features from three related tasks for IDRR. In
particular, the TEPrompt contains three tasks, viz., Discourse Relation
Recognition (DRR), Sense Semantics Classification (SSC) and Annotated
Connective Prediction (ACP), each with a unique prompt template and an answer
space. In the training phase, we jointly train three prompt learning tasks with
shared argument representation. In the testing phase, we only take the DRR
output with fused features as the final IDRR decision. Experiments with the
same conditions have shown that the proposed TEPrompt outperforms the
ConnPrompt. This can be attributed to the promoted decision features and
language models benefiting from the joint training of auxiliary tasks.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 10:38:06 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Xiang",
"Wei",
""
],
[
"Liang",
"Chao",
""
],
[
"Wang",
"Bang",
""
]
] |
new_dataset
| 0.962432 |
2305.10872
|
Vitaly Aksenov
|
Vitaly Aksenov, Dmitry Ivanov, Ravil Galiev
|
Benchmark Framework with Skewed Workloads
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present a new benchmarking suite with new real-life inspired
skewed workloads to test the performance of concurrent index data structures.
We started this project to prepare workloads specifically for self-adjusting
data structures, i.e., they handle more frequent requests faster, and, thus,
should perform better than their standard counterparts. We surveyed the
commonly used suites for testing the performance of concurrent indices
(Synchrobench, Setbench, YCSB, and TPC) in search of inspiration, and we found
several issues with them.
The major problem is that they are not flexible: it is difficult to introduce
new workloads, it is difficult to set the duration of the experiments, and it
is difficult to change the parameters. We decided to solve this issue by
presenting a new suite based on Synchrobench.
Finally, we highlight the problem of measuring performance of data
structures. We show that the relative performance of data structures highly
depends on the workload: it is not clear which data structure is best. For
that, we take three state-of-the-art concurrent binary search trees and run
them on the workloads from our benchmarking suite. As a result, we get six
experiments with all possible relative performance of the chosen data
structures.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 10:54:24 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Aksenov",
"Vitaly",
""
],
[
"Ivanov",
"Dmitry",
""
],
[
"Galiev",
"Ravil",
""
]
] |
new_dataset
| 0.983745 |
2305.10892
|
Marco Rovera
|
Marco Rovera
|
EventNet-ITA: Italian Frame Parsing for Events
|
9 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces EventNet-ITA, a large, multi-domain corpus annotated
with event frames for Italian, and presents an efficient approach for
multi-label Frame Parsing. The approach is then evaluated on the dataset.
Covering a wide range of individual, social and historical phenomena, the main
contribution of EventNet-ITA is to provide the research community with a
resource for textual event mining and a novel and extensive tool for Frame
Parsing in Italian.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 11:41:56 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Rovera",
"Marco",
""
]
] |
new_dataset
| 0.994562 |
2305.10899
|
Deyi Ji
|
Deyi Ji, Feng Zhao, Hongtao Lu, Mingyuan Tao, Jieping Ye
|
Ultra-High Resolution Segmentation with Ultra-Rich Context: A Novel
Benchmark
|
Accepted to CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing interest and rapid development of methods for Ultra-High
Resolution (UHR) segmentation, a large-scale benchmark covering a wide range of
scenes with full fine-grained dense annotations is urgently needed to
facilitate the field. To this end, the URUR dataset is introduced, in the
meaning of Ultra-High Resolution dataset with Ultra-Rich Context. As the name
suggests, URUR contains amounts of images with high enough resolution (3,008
images of size 5,120x5,120), a wide range of complex scenes (from 63 cities),
rich-enough context (1 million instances with 8 categories) and fine-grained
annotations (about 80 billion manually annotated pixels), which is far superior
to all the existing UHR datasets, including DeepGlobe, Inria Aerial, and UDD.
Moreover, we also propose WSDNet, a more efficient and effective framework for
UHR segmentation, especially with ultra-rich context. Specifically, a multi-level
Discrete Wavelet Transform (DWT) is naturally integrated to relieve the computation
burden while preserving more spatial details, along with a Wavelet Smooth Loss
(WSL) to reconstruct the original structured context and texture under a smoothness
constraint. Experiments on several UHR datasets demonstrate its state-of-the-art
performance. The dataset is available at https://github.com/jankyee/URUR.
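The property WSDNet exploits, that a multi-level 2D DWT shrinks spatial size while the detail coefficients retain the discarded structure, can be seen with PyWavelets (a sketch unrelated to the authors' implementation):

```python
# Three DWT levels shrink a 512x512 image to a 64x64 approximation band.
import numpy as np
import pywt

image = np.random.rand(512, 512).astype(np.float32)
coeffs = pywt.wavedec2(image, wavelet="haar", level=3)
approx = coeffs[0]                       # low-frequency band
print(approx.shape, len(coeffs) - 1)     # (64, 64) and 3 detail levels
```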
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 11:54:13 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Ji",
"Deyi",
""
],
[
"Zhao",
"Feng",
""
],
[
"Lu",
"Hongtao",
""
],
[
"Tao",
"Mingyuan",
""
],
[
"Ye",
"Jieping",
""
]
] |
new_dataset
| 0.999919 |
2305.10945
|
Christian Becker-Asano
|
Marcel Heisler and Christian Becker-Asano
|
An Android Robot Head as Embodied Conversational Agent
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper describes, how current Machine Learning (ML) techniques combined
with simple rule-based animation routines make an android robot head an
embodied conversational agent with ChatGPT as its core component. The android
robot head is described, technical details are given of how lip-sync animation
is being achieved, and general software design decisions are presented. A
public presentation of the system revealed improvement opportunities that are
reported and that guide our iterative implementation approach.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 13:05:10 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Heisler",
"Marcel",
""
],
[
"Becker-Asano",
"Christian",
""
]
] |
new_dataset
| 0.996919 |
2305.10960
|
Eric Rosen
|
Eric Rosen, Devesh K. Jha
|
A Virtual Reality Teleoperation Interface for Industrial Robot
Manipulators
|
7 pages, 6 figures
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We address the problem of teleoperating an industrial robot manipulator via a
commercially available Virtual Reality (VR) interface. Previous works on VR
teleoperation for robot manipulators focus primarily on collaborative or
research robot platforms (whose dynamics and constraints differ from industrial
robot arms), or only address tasks where the robot's dynamics are not as
important (e.g., pick and place tasks). We investigate the use of commercially
available VR interfaces for effectively teleoperating industrial robot
manipulators in a variety of contact-rich manipulation tasks. We find that
applying standard practices for VR control of robot arms is challenging for
industrial platforms because torque and velocity control are not exposed, and
position control is mediated through a black-box controller. To mitigate these
problems, we propose a simplified filtering approach to process command signals
that enables operators to effectively teleoperate industrial robot arms with VR
interfaces in dexterous manipulation tasks. We hope our findings will help
robot practitioners implement and set up effective VR teleoperation interfaces
for robot manipulators. The proposed method is demonstrated on a variety of
contact-rich manipulation tasks which can also involve very precise movement of
the robot during execution (videos can be found at
https://www.youtube.com/watch?v=OhkCB9mOaBc)
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 13:26:23 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Rosen",
"Eric",
""
],
[
"Jha",
"Devesh K.",
""
]
] |
new_dataset
| 0.993107 |
2305.10963
|
Yulin Sun
|
Yulin Sun, Deepak Vij, Fenge Li, Wenjian Guo, Ying Xiong
|
Hibernate Container: A Deflated Container Mode for Fast Startup and
High-density Deployment in Serverless Computing
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Serverless computing is a popular cloud computing paradigm that requires low
response latency to handle on-demand user requests. There are two prominent
techniques employed for reducing the response latency: keeping fully
initialized containers alive (Warm Container) or reducing the new container
startup (cold start) latency.
This paper presents a third container startup mode: Hibernate Container,
which starts faster than the cold start container mode and consumes less memory
than the Warm Container mode. Hibernate Container is essentially a "deflated"
Warm Container: its application memory is swapped out to disk, the freed memory
is reclaimed, and file-based mmap memory is cleaned up. The Hibernate
Container's deflated memory is inflated in response to user requests. As a
Hibernate Container's application is fully initialized, its response latency is
lower than in the cold start mode; and as the application memory is deflated,
its memory consumption is lower than in the Warm Container mode. Additionally,
when a Hibernate Container is "woken up" to process a request, the Woken-up
Container has response latency similar to a Warm Container but lower memory
consumption, because not all the deflated memory needs to be inflated. We
implemented the Hibernate technique as part of the open-source Quark secure
container runtime project, and our tests demonstrated that Hibernate Container
consumes about 7\% to 25\% of the Warm Container memory. All of this results in
a higher
deployment density, lower latency and appreciable improvements in the overall
system performance.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 13:29:44 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Sun",
"Yulin",
""
],
[
"Vij",
"Deepak",
""
],
[
"Li",
"Fenge",
""
],
[
"Guo",
"Wenjian",
""
],
[
"Xiong",
"Ying",
""
]
] |
new_dataset
| 0.992196 |
2305.10982
|
Stefano Di Carlo
|
A. Arelakis, J.M. Arnau, J. L. Berral, A. Call, R. Canal, S. Di Carlo,
J. Costa, D. Gizopoulos, V. Karakostas, F. Lubrano, K. Nikas, Y.
Nikolakopoulos, B. Otero, G. Papadimitriou, I. Papaefstathiou, D.
Pnevmatikatos, D. Raho, A. Rigo, E. Rodr\'iguez, A. Savino, A. Scionti, N.
Tampouratzis, A. Torregrosa
|
Vitamin-V: Virtual Environment and Tool-boxing for Trustworthy
Development of RISC-V based Cloud Services
|
Paper accepted and presented at the RISC-V Summit Europe, Barcelona,
5-9th June 2023. arXiv admin note: substantial text overlap with
arXiv:2305.01983
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vitamin-V is a 2023-2025 Horizon Europe project that aims to develop a
complete RISC-V open-source software stack for cloud services, with performance
comparable to the cloud-dominant x86 counterpart, and a powerful virtual
execution environment for software development, validation, verification, and
testing that considers the relevant RISC-V ISA extensions for cloud deployment.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 13:54:42 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Arelakis",
"A.",
""
],
[
"Arnau",
"J. M.",
""
],
[
"Berral",
"J. L.",
""
],
[
"Call",
"A.",
""
],
[
"Canal",
"R.",
""
],
[
"Di Carlo",
"S.",
""
],
[
"Costa",
"J.",
""
],
[
"Gizopoulos",
"D.",
""
],
[
"Karakostas",
"V.",
""
],
[
"Lubrano",
"F.",
""
],
[
"Nikas",
"K.",
""
],
[
"Nikolakopoulos",
"Y.",
""
],
[
"Otero",
"B.",
""
],
[
"Papadimitriou",
"G.",
""
],
[
"Papaefstathiou",
"I.",
""
],
[
"Pnevmatikatos",
"D.",
""
],
[
"Raho",
"D.",
""
],
[
"Rigo",
"A.",
""
],
[
"Rodríguez",
"E.",
""
],
[
"Savino",
"A.",
""
],
[
"Scionti",
"A.",
""
],
[
"Tampouratzis",
"N.",
""
],
[
"Torregrosa",
"A.",
""
]
] |
new_dataset
| 0.958299 |
2305.10985
|
Elisa Bassignana
|
Elisa Bassignana, Filip Ginter, Sampo Pyysalo, Rob van der Goot, and
Barbara Plank
|
Multi-CrossRE A Multi-Lingual Multi-Domain Dataset for Relation
Extraction
|
Accepted at NoDaLiDa 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Most research in Relation Extraction (RE) involves the English language,
mainly due to the lack of multi-lingual resources. We propose Multi-CrossRE,
the broadest multi-lingual dataset for RE, including 26 languages in addition
to English, and covering six text domains. Multi-CrossRE is a machine
translated version of CrossRE (Bassignana and Plank, 2022), with a sub-portion
including more than 200 sentences in seven diverse languages checked by native
speakers. We run a baseline model over the 26 new datasets and--as a sanity
check--over the 26 back-translations to English. Results on the back-translated
data are consistent with those on the original English CrossRE, indicating the
high quality of the translation and of the resulting dataset.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 14:01:33 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Bassignana",
"Elisa",
""
],
[
"Ginter",
"Filip",
""
],
[
"Pyysalo",
"Sampo",
""
],
[
"van der Goot",
"Rob",
""
],
[
"Plank",
"Barbara",
""
]
] |
new_dataset
| 0.999783 |
2305.11012
|
Hyungseob Shin
|
Hyungseob Shin, Hyeongyu Kim, Sewon Kim, Yohan Jun, Taejoon Eo, Dosik
Hwang
|
SDC-UDA: Volumetric Unsupervised Domain Adaptation Framework for
Slice-Direction Continuous Cross-Modality Medical Image Segmentation
|
10 pages, 7 figures, CVPR 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in deep learning-based medical image segmentation studies
achieve nearly human-level performance in a fully supervised manner. However,
acquiring pixel-level expert annotations is extremely expensive and laborious
in medical imaging fields. Unsupervised domain adaptation (UDA) can alleviate
this problem, which makes it possible to use annotated data in one imaging
modality to train a network that can successfully perform segmentation on
target imaging modality with no labels. In this work, we propose SDC-UDA, a
simple yet effective volumetric UDA framework for slice-direction continuous
cross-modality medical image segmentation which combines intra- and inter-slice
self-attentive image translation, uncertainty-constrained pseudo-label
refinement, and volumetric self-training. Our method is distinguished from
previous methods on UDA for medical image segmentation in that it can obtain
continuous segmentation in the slice direction, thereby ensuring higher
accuracy and potential in clinical practice. We validate SDC-UDA with multiple
publicly available cross-modality medical image segmentation datasets and
achieve state-of-the-art segmentation performance, not to mention the superior
slice-direction continuity of prediction compared to previous studies.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 14:44:27 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Shin",
"Hyungseob",
""
],
[
"Kim",
"Hyeongyu",
""
],
[
"Kim",
"Sewon",
""
],
[
"Jun",
"Yohan",
""
],
[
"Eo",
"Taejoon",
""
],
[
"Hwang",
"Dosik",
""
]
] |
new_dataset
| 0.97847 |
2305.11013
|
Zhifu Gao
|
Zhifu Gao, Zerui Li, Jiaming Wang, Haoneng Luo, Xian Shi, Mengzhe
Chen, Yabin Li, Lingyun Zuo, Zhihao Du, Zhangyu Xiao, Shiliang Zhang
|
FunASR: A Fundamental End-to-End Speech Recognition Toolkit
|
5 pages, 3 figures, accepted by INTERSPEECH 2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces FunASR, an open-source speech recognition toolkit
designed to bridge the gap between academic research and industrial
applications. FunASR offers models trained on large-scale industrial corpora
and the ability to deploy them in applications. The toolkit's flagship model,
Paraformer, is a non-autoregressive end-to-end speech recognition model that
has been trained on a manually annotated Mandarin speech recognition dataset
that contains 60,000 hours of speech. To improve the performance of Paraformer,
we have added timestamp prediction and hotword customization capabilities to
the standard Paraformer backbone. In addition, to facilitate model deployment,
we have open-sourced a voice activity detection model based on the Feedforward
Sequential Memory Network (FSMN-VAD) and a text post-processing punctuation
model based on the controllable time-delay Transformer (CT-Transformer), both
of which were trained on industrial corpora. These functional modules provide a
solid foundation for building high-precision long audio speech recognition
services. Compared to other models trained on open datasets, Paraformer
demonstrates superior performance.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 14:45:09 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Gao",
"Zhifu",
""
],
[
"Li",
"Zerui",
""
],
[
"Wang",
"Jiaming",
""
],
[
"Luo",
"Haoneng",
""
],
[
"Shi",
"Xian",
""
],
[
"Chen",
"Mengzhe",
""
],
[
"Li",
"Yabin",
""
],
[
"Zuo",
"Lingyun",
""
],
[
"Du",
"Zhihao",
""
],
[
"Xiao",
"Zhangyu",
""
],
[
"Zhang",
"Shiliang",
""
]
] |
new_dataset
| 0.999529 |
2305.11023
|
Harshil Shah
|
Harshil Shah, Arthur Wilcke, Marius Cobzarenco, Cristi Cobzarenco,
Edward Challis, David Barber
|
Generalized Multiple Intent Conditioned Slot Filling
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Natural language understanding includes the tasks of intent detection
(identifying a user's objectives) and slot filling (extracting the entities
relevant to those objectives). Prior slot filling methods assume that each
intent type cannot occur more than once within a message; however, this is
often not a valid assumption in real-world settings. In this work, we
generalize slot filling by removing the constraint of unique intents in a
message. We cast
this as a JSON generation task and approach it using a language model. We
create a pre-training dataset by combining DBpedia and existing slot filling
datasets that we convert for JSON generation. We also generate an in-domain
dataset using GPT-3. We train T5 models for this task (with and without
exemplars in the prompt) and find that both training datasets improve
performance, and that the model is able to generalize to intent types not seen
during training.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 15:04:52 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Shah",
"Harshil",
""
],
[
"Wilcke",
"Arthur",
""
],
[
"Cobzarenco",
"Marius",
""
],
[
"Cobzarenco",
"Cristi",
""
],
[
"Challis",
"Edward",
""
],
[
"Barber",
"David",
""
]
] |
new_dataset
| 0.98563 |
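The generalized slot filling record above casts the task as JSON generation, dropping the unique-intent constraint. A minimal sketch of what the target representation and its parsing could look like; the intent and slot names here are hypothetical illustrations, not taken from the paper.

import json

# Hypothetical target output for "book a taxi to the airport and book a
# taxi home": the same intent type appears twice, which classic
# one-intent-per-type slot filling cannot represent.
generated = '''
[
  {"intent": "book_taxi", "slots": {"destination": "the airport"}},
  {"intent": "book_taxi", "slots": {"destination": "home"}}
]
'''

def parse_prediction(text):
    """Parse a model-generated JSON string into (intent, slots) pairs,
    tolerating duplicate intent types."""
    try:
        items = json.loads(text)
    except json.JSONDecodeError:
        return []  # malformed generations count as errors at evaluation time
    return [(it.get("intent"), it.get("slots", {})) for it in items
            if isinstance(it, dict)]

print(parse_prediction(generated))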
2305.11074
|
Tong Ye
|
Tong Ye, Lingfei Wu, Tengfei Ma, Xuhong Zhang, Yangkai Du, Peiyu Liu,
Wenhai Wang, Shouling Ji
|
Tram: A Token-level Retrieval-augmented Mechanism for Source Code
Summarization
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Source code summarization aims to automatically generate human-readable text
describing the functionality of a program. Although neural language models
achieve significant performance in this field, an emerging trend is to combine
neural models with external knowledge. Most previous approaches rely
on the sentence-level retrieval and combination paradigm (retrieval of similar
code snippets and use of the corresponding code and summary pairs) on the
encoder side. However, this paradigm is coarse-grained and cannot directly take
advantage of the high-quality retrieved summary tokens on the decoder side. In
this paper, we explore a fine-grained token-level retrieval-augmented mechanism
on the decoder side to help the vanilla neural model generate a better code
summary. Furthermore, to mitigate the limitation of token-level retrieval on
capturing contextual code semantics, we propose to integrate code semantics
into summary tokens. Extensive experiments and human evaluation reveal that our
token-level retrieval-augmented approach significantly improves performance and
is more interpretable.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 16:02:04 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Ye",
"Tong",
""
],
[
"Wu",
"Lingfei",
""
],
[
"Ma",
"Tengfei",
""
],
[
"Zhang",
"Xuhong",
""
],
[
"Du",
"Yangkai",
""
],
[
"Liu",
"Peiyu",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Ji",
"Shouling",
""
]
] |
new_dataset
| 0.968274 |
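The Tram record above augments the decoder at the token level with retrieved summary tokens. A minimal sketch of one way such a mechanism can work, in the style of kNN-LM interpolation; the datastore layout, sizes, and blending rule are assumptions for illustration, not the paper's exact design.

import numpy as np

rng = np.random.default_rng(0)

# Toy datastore: hidden states from reference summaries, each paired with
# the summary token that followed it. Sizes are illustrative.
datastore_keys = rng.normal(size=(10_000, 64)).astype(np.float32)
datastore_tokens = rng.integers(0, 5_000, size=10_000)
vocab_size = 5_000

def retrieval_augmented_probs(hidden, model_probs, k=16, lam=0.3, tau=10.0):
    """Blend the decoder's distribution with a token distribution built
    from the k nearest datastore entries (softmax over -distance/tau)."""
    d = np.linalg.norm(datastore_keys - hidden, axis=1)
    idx = np.argpartition(d, k)[:k]        # k nearest datastore entries
    w = np.exp(-d[idx] / tau)
    w /= w.sum()
    retrieved = np.zeros(vocab_size)
    np.add.at(retrieved, datastore_tokens[idx], w)
    return (1 - lam) * model_probs + lam * retrieved

hidden = rng.normal(size=64).astype(np.float32)
model_probs = np.full(vocab_size, 1.0 / vocab_size)
probs = retrieval_augmented_probs(hidden, model_probs)
print(probs.sum())  # ~1.0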
2305.11094
|
Sicheng Yang
|
Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong
Bao, Haolin Zhuang
|
QPGesture: Quantization-Based and Phase-Guided Motion Matching for
Natural Speech-Driven Gesture Generation
|
15 pages, 12 figures, CVPR 2023 Highlight
| null | null | null |
cs.HC cs.CV cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech-driven gesture generation is highly challenging due to the random
jitters of human motion. In addition, there is an inherent asynchronous
relationship between human speech and gestures. To tackle these challenges, we
introduce a novel quantization-based and phase-guided motion-matching
framework. Specifically, we first present a gesture VQ-VAE module to learn a
codebook to summarize meaningful gesture units. With each code representing a
unique gesture, random jittering problems are alleviated effectively. We then
use Levenshtein distance to align diverse gestures with different speech.
Using Levenshtein distance over the quantized audio as a similarity metric for
the speech corresponding to gestures helps match more appropriate gestures to
the speech and effectively solves the speech-gesture alignment problem.
Moreover, we introduce phase to guide the optimal gesture matching based on the
semantics of the context or the rhythm of the audio. Phase guides when
text-based or speech-based gestures should be performed to make the generated
gestures more natural.
Extensive experiments show that our method outperforms recent approaches on
speech-driven gesture generation. Our code, database, pre-trained models, and
demos are available at https://github.com/YoungSeng/QPGesture.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 16:31:25 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Yang",
"Sicheng",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Li",
"Minglei",
""
],
[
"Zhang",
"Zhensong",
""
],
[
"Hao",
"Lei",
""
],
[
"Bao",
"Weihong",
""
],
[
"Zhuang",
"Haolin",
""
]
] |
new_dataset
| 0.980562 |
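The QPGesture record above matches gestures to speech via Levenshtein distance over quantized audio. A minimal sketch of that idea, assuming the speech has already been encoded as discrete VQ codebook indices; the candidate names and code values are illustrative, not from the paper.

def levenshtein(a, b):
    """Edit distance between two sequences of discrete codebook indices."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # deletion
                        dp[j - 1] + 1,                   # insertion
                        prev + (a[i - 1] != b[j - 1]))   # substitution
            prev = cur
    return dp[n]

# Quantized audio token sequences (VQ codebook indices); the candidate
# gesture whose associated speech codes are closest is preferred.
query = [3, 7, 7, 2, 9]
candidates = {"wave": [3, 7, 2, 9], "point": [5, 5, 1, 0, 8]}
best = min(candidates, key=lambda k: levenshtein(query, candidates[k]))
print(best)  # "wave"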
2305.11101
|
Lihui Qian
|
Lihui Qian, Xintong Han, Faqiang Wang, Hongyu Liu, Haoye Dong, Zhiwen
Li, Huawei Wei, Zhe Lin and Cheng-Bin Jin
|
XFormer: Fast and Accurate Monocular 3D Body Capture
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present XFormer, a novel human mesh and motion capture method that
achieves real-time performance on consumer CPUs given only monocular images as
input. The proposed network architecture contains two branches: a keypoint
branch that estimates 3D human mesh vertices given 2D keypoints, and an image
branch that makes predictions directly from the RGB image features. At the core
of our method is a cross-modal transformer block that allows information to
flow across these two branches by modeling the attention between 2D keypoint
coordinates and image spatial features. Our architecture is carefully designed,
enabling us to train on various types of datasets, including images with
2D/3D annotations, images with 3D pseudo-labels, and motion capture datasets
that do not have associated images. This effectively improves the accuracy and
generalization ability of our system. Built on a lightweight backbone
(MobileNetV3), our method runs blazingly fast (over 30 fps on a single CPU
core) and still yields competitive accuracy. Furthermore, with an HRNet
backbone, XFormer delivers state-of-the-art performance on the Human3.6M and
3DPW datasets.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 16:45:26 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Qian",
"Lihui",
""
],
[
"Han",
"Xintong",
""
],
[
"Wang",
"Faqiang",
""
],
[
"Liu",
"Hongyu",
""
],
[
"Dong",
"Haoye",
""
],
[
"Li",
"Zhiwen",
""
],
[
"Wei",
"Huawei",
""
],
[
"Lin",
"Zhe",
""
],
[
"Jin",
"Cheng-Bin",
""
]
] |
new_dataset
| 0.993603 |
2007.00394
|
Yizhak Ben-Shabat
|
Yizhak Ben-Shabat, Xin Yu, Fatemeh Sadat Saleh, Dylan Campbell,
Cristian Rodriguez-Opazo, Hongdong Li, Stephen Gould
|
The IKEA ASM Dataset: Understanding People Assembling Furniture through
Actions, Objects and Pose
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The availability of a large labeled dataset is a key requirement for applying
deep learning methods to solve various computer vision tasks. In the context of
understanding human activities, existing public datasets, while large in size,
are often limited to a single RGB camera and provide only per-frame or per-clip
action annotations. To enable richer analysis and understanding of human
activities, we introduce IKEA ASM -- a three million frame, multi-view,
furniture assembly video dataset that includes depth, atomic actions, object
segmentation, and human pose. Additionally, we benchmark prominent methods for
video action recognition, object segmentation and human pose estimation tasks
on this challenging dataset. The dataset enables the development of holistic
methods, which integrate multi-modal and multi-view data to better perform on
these tasks.
|
[
{
"version": "v1",
"created": "Wed, 1 Jul 2020 11:34:46 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 07:56:52 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Ben-Shabat",
"Yizhak",
""
],
[
"Yu",
"Xin",
""
],
[
"Saleh",
"Fatemeh Sadat",
""
],
[
"Campbell",
"Dylan",
""
],
[
"Rodriguez-Opazo",
"Cristian",
""
],
[
"Li",
"Hongdong",
""
],
[
"Gould",
"Stephen",
""
]
] |
new_dataset
| 0.999817 |
2112.01050
|
Yizhak Ben-Shabat
|
Adi Mesika, Yizhak Ben-Shabat and Ayellet Tal
|
CloudWalker: Random walks for 3D point cloud shape analysis
| null |
Computers & Graphics Volume 106, August 2022, Pages 110-118
|
10.1016/j.cag.2022.06.001
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point clouds are gaining prominence as a method for representing 3D shapes,
but their irregular structure poses a challenge for deep learning methods. In
this paper we propose CloudWalker, a novel method for learning 3D shapes using
random walks. Previous works attempt to adapt Convolutional Neural Networks
(CNNs) or to impose a grid or mesh structure on 3D point clouds. This work
presents a different approach for representing and learning the shape from a
given point set. The key idea is to impose structure on the point set by
multiple random walks through the cloud for exploring different regions of the
3D object. Then we learn a per-point and per-walk representation and aggregate
multiple walk predictions at inference. Our approach achieves state-of-the-art
results for two 3D shape analysis tasks: classification and retrieval.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 08:24:01 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Dec 2021 13:41:21 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jul 2022 07:36:24 GMT"
},
{
"version": "v4",
"created": "Wed, 17 May 2023 07:02:03 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Mesika",
"Adi",
""
],
[
"Ben-Shabat",
"Yizhak",
""
],
[
"Tal",
"Ayellet",
""
]
] |
new_dataset
| 0.995318 |
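The CloudWalker record above imposes structure on a point set via random walks through the cloud. A minimal sketch of sampling such walks over a k-nearest-neighbour graph with NumPy and SciPy; the paper's actual walk policy (e.g., how revisited points are handled) may differ.

import numpy as np
from scipy.spatial import cKDTree

def random_walks(points, n_walks=4, walk_len=32, k=8, seed=0):
    """Sample random walks over a point cloud's k-NN graph.

    Returns an array of shape (n_walks, walk_len) of point indices.
    """
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)
    # k + 1 because each point's nearest neighbour is itself.
    _, nbrs = tree.query(points, k=k + 1)
    walks = np.empty((n_walks, walk_len), dtype=np.int64)
    for w in range(n_walks):
        cur = rng.integers(len(points))
        for t in range(walk_len):
            walks[w, t] = cur
            cur = rng.choice(nbrs[cur, 1:])  # step to a random neighbour
    return walks

pts = np.random.rand(1024, 3).astype(np.float32)
walks = random_walks(pts)
# Each walk yields an ordered sequence of 3D points that a sequence model
# (e.g. an RNN) can consume; per-walk predictions are then aggregated.
print(pts[walks].shape)  # (4, 32, 3)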
2202.09554
|
Yichuan Deng
|
Rui Duan, Hui Deng, Mao Tian, Yichuan Deng, Jiarui Lin
|
SODA: Site Object Detection dAtaset for Deep Learning in Construction
| null |
Automation in Construction, 2022
|
10.1016/j.autcon.2022.104499
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Computer vision-based deep learning object detection algorithms have become
sufficiently powerful to recognize a wide variety of objects. Although general
datasets for object detection currently exist, there is still a lack of
large-scale, open-source datasets for the construction industry, which limits
the development of object detection algorithms, as they tend to be data-hungry.
Therefore, this paper develops a new large-scale image
dataset specifically collected and annotated for the construction site, called
Site Object Detection dAtaset (SODA), which contains 15 kinds of object classes
categorized by workers, materials, machines, and layout. Firstly, more than
20,000 images were collected from multiple construction sites in different site
conditions, weather conditions, and construction phases, which covered
different angles and perspectives. After careful screening and processing,
19,846 images including 286,201 objects were then obtained and annotated with
labels in accordance with predefined categories. Statistical analysis shows
that the developed dataset is advantageous in terms of diversity and volume.
Further evaluation with two widely-adopted object detection algorithms based on
deep learning (YOLO v3/ YOLO v4) also illustrates the feasibility of the
dataset for typical construction scenarios, achieving a maximum mAP of 81.47%.
In this manner, this research contributes a large-scale image dataset for the
development of deep learning-based object detection methods in the construction
industry and sets up a performance benchmark for further evaluation of
corresponding algorithms in this area.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 09:09:23 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Duan",
"Rui",
""
],
[
"Deng",
"Hui",
""
],
[
"Tian",
"Mao",
""
],
[
"Deng",
"Yichuan",
""
],
[
"Lin",
"Jiarui",
""
]
] |
new_dataset
| 0.99984 |
2204.05727
|
Hui Kong
|
Banghe Wu, Chengzhong Xu, Hui Kong
|
LiDAR Road-Atlas: An Efficient Map Representation for General 3D Urban
Environment
| null |
Field Robotics, 2023
|
10.55417/fr.2023014
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose the LiDAR Road-Atlas, a compact and efficient 3D
map representation for autonomous robot or vehicle navigation in general urban
environments. The LiDAR Road-Atlas can be generated by an online mapping
framework based on incrementally merging local 2D occupancy grid maps (2D-OGM).
Specifically, the contributions of our LiDAR Road-Atlas representation are
threefold. First, we solve the challenging problem of creating local 2D-OGM in
non-structured urban scenes based on a real-time delimitation of traversable
and curb regions in LiDAR point cloud. Second, we achieve accurate 3D mapping
in multi-layer urban road scenarios by a probabilistic fusion scheme. Third,
we achieve a very efficient 3D map representation of general environments
thanks to the automatic local-OGM-induced traversable-region labeling and a
sparse probabilistic local point-cloud encoding. Given the LiDAR Road-Atlas,
one can
achieve accurate vehicle localization, path planning and some other tasks. Our
map representation is insensitive to dynamic objects which can be filtered out
in the resulting map based on a probabilistic fusion. Empirically, we compare
our map representation with a couple of popular map representation methods in
robotics and autonomous driving societies, and our map representation is more
favorable in terms of efficiency, scalability and compactness. In addition, we
also evaluate localization accuracy extensively given the created LiDAR
Road-Atlas representations on several public benchmark datasets. With a
16-channel LiDAR sensor, our method achieves average global localization
errors of 0.26 m (translation) and 1.07 degrees (rotation) on the Apollo
dataset, and 0.89 m (translation) and 1.29 degrees (rotation) on the MulRan
dataset, respectively, at 10Hz, which validates the promising performance of
our map representation for autonomous driving.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 11:46:09 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 07:16:04 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Wu",
"Banghe",
""
],
[
"Xu",
"Chengzhong",
""
],
[
"Kong",
"Hui",
""
]
] |
new_dataset
| 0.99915 |
2204.11184
|
Xiangyu Zhu
|
Xiangyu Zhu, Tingting Liao, Jiangjing Lyu, Xiang Yan, Yunfeng Wang,
Kan Guo, Qiong Cao, Stan Z. Li, and Zhen Lei
|
MVP-Human Dataset for 3D Human Avatar Reconstruction from Unconstrained
Frames
|
Accepted by IEEE Transactions on Biometrics, Behavior, and Identity
Science (TBIOM)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider a novel problem of reconstructing a 3D human
avatar from multiple unconstrained frames, independent of assumptions on camera
calibration, capture space, and constrained actions. The problem should be
addressed by a framework that takes multiple unconstrained images as inputs,
and generates a shape-with-skinning avatar in the canonical space, finished in
one feed-forward pass. To this end, we present 3D Avatar Reconstruction in the
wild (ARwild), which first reconstructs the implicit skinning fields in a
multi-level manner, by which the image features from multiple images are
aligned and integrated to estimate a pixel-aligned implicit function that
represents the clothed shape. To enable the training and testing of the new
framework, we contribute a large-scale dataset, MVP-Human (Multi-View and
multi-Pose 3D Human), which contains 400 subjects, each of which has 15 scans
in different poses and 8-view images for each pose, providing 6,000 3D scans
and 48,000 images in total. Overall, benefiting from the specific network
architecture and the diverse data, the trained model enables 3D avatar
reconstruction from unconstrained frames and achieves state-of-the-art
performance.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 03:57:59 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 10:58:37 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Zhu",
"Xiangyu",
""
],
[
"Liao",
"Tingting",
""
],
[
"Lyu",
"Jiangjing",
""
],
[
"Yan",
"Xiang",
""
],
[
"Wang",
"Yunfeng",
""
],
[
"Guo",
"Kan",
""
],
[
"Cao",
"Qiong",
""
],
[
"Li",
"Stan Z.",
""
],
[
"Lei",
"Zhen",
""
]
] |
new_dataset
| 0.999846 |
2206.08722
|
J\"ames M\'en\'etrey
|
J\"ames M\'en\'etrey, Marcelo Pasin, Pascal Felber, Valerio Schiavoni
|
WaTZ: A Trusted WebAssembly Runtime Environment with Remote Attestation
for TrustZone
|
This publication incorporates results from the VEDLIoT project, which
received funding from the European Union's Horizon 2020 research and
innovation programme under grant agreement No 957197
|
ICDCS'22: Proceedings of the 42nd IEEE International Conference on
Distributed Computing Systems, July 2022
|
10.1109/ICDCS54860.2022.00116
| null |
cs.CR cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
WebAssembly (Wasm) is a novel low-level bytecode format that swiftly gained
popularity for its efficiency, versatility and security, with near-native
performance. Besides, trusted execution environments (TEEs) shield critical
software assets against compromised infrastructures. However, TEEs do not
guarantee the code to be trustworthy or that it was not tampered with. Instead,
one relies on remote attestation to assess the code before execution. This
paper describes WaTZ, which is (i) an efficient and secure runtime for trusted
execution of Wasm code for Arm's TrustZone TEE, and (ii) a lightweight remote
attestation system optimised for Wasm applications running in TrustZone, as it
lacks built-in mechanisms for attestation. The remote attestation protocol is
formally verified using a state-of-the-art analyser and model checker. Our
extensive evaluation on Arm-based hardware uses synthetic and real-world
benchmarks, illustrating typical tasks that IoT devices perform. WaTZ's
execution
speed is on par with Wasm runtimes in the normal world and reaches roughly half
the speed of native execution, which is compensated by the additional security
guarantees and the interoperability offered by Wasm. WaTZ is open-source and
available on GitHub along with instructions to reproduce our experiments.
|
[
{
"version": "v1",
"created": "Fri, 17 Jun 2022 12:19:48 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 15:04:34 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Ménétrey",
"Jämes",
""
],
[
"Pasin",
"Marcelo",
""
],
[
"Felber",
"Pascal",
""
],
[
"Schiavoni",
"Valerio",
""
]
] |
new_dataset
| 0.998445 |
2209.05722
|
Kasun Weerakoon Kulathun Mudiyanselage
|
Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Jing Liang, Tianrui Guan,
Utsav Patel, Dinesh Manocha
|
GrASPE: Graph based Multimodal Fusion for Robot Navigation in
Unstructured Outdoor Environments
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel trajectory traversability estimation and planning
algorithm for robot navigation in complex outdoor environments. We incorporate
multimodal sensory inputs from an RGB camera, 3D LiDAR, and the robot's
odometry sensor to train a prediction model to estimate candidate trajectories'
success probabilities based on partially reliable multi-modal sensor
observations. We encode high-dimensional multi-modal sensory inputs to
low-dimensional feature vectors using encoder networks and represent them as a
connected graph. The graph is then used to train an attention-based Graph
Neural Network (GNN) to predict trajectory success probabilities. We further
analyze the number of features in the image (corners) and point cloud data
(edges and planes) separately to quantify their reliability to augment the
weights of the feature graph representation used in our GNN. During runtime,
our model utilizes multi-sensor inputs to predict the success probabilities of
the trajectories generated by a local planner to avoid potential collisions and
failures. Our algorithm demonstrates robust predictions when one or more sensor
modalities are unreliable or unavailable in complex outdoor environments. We
evaluate our algorithm's navigation performance using a Spot robot in
real-world outdoor environments. We observe an increase of 10-30% in terms of
navigation success rate and a 13-15% decrease in false positive estimations
compared to the state-of-the-art navigation methods.
|
[
{
"version": "v1",
"created": "Tue, 13 Sep 2022 04:16:31 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2022 03:46:26 GMT"
},
{
"version": "v3",
"created": "Tue, 16 May 2023 19:07:43 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Weerakoon",
"Kasun",
""
],
[
"Sathyamoorthy",
"Adarsh Jagan",
""
],
[
"Liang",
"Jing",
""
],
[
"Guan",
"Tianrui",
""
],
[
"Patel",
"Utsav",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.988472 |
2212.10621
|
Nan Jiang
|
Nan Jiang, Tengyu Liu, Zhexuan Cao, Jieming Cui, Zhiyuan zhang, Yixin
Chen, He Wang, Yixin Zhu, Siyuan Huang
|
Full-Body Articulated Human-Object Interaction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine-grained capturing of 3D HOI boosts human activity understanding and
facilitates downstream visual tasks, including action recognition, holistic
scene reconstruction, and human motion synthesis. Despite its significance,
existing works mostly assume that humans interact with rigid objects using only
a few body parts, limiting their scope. In this paper, we address the
challenging problem of f-AHOI, wherein the whole human bodies interact with
articulated objects, whose parts are connected by movable joints. We present
CHAIRS, a large-scale motion-captured f-AHOI dataset, consisting of 16.2 hours
of versatile interactions between 46 participants and 81 articulated and rigid
sittable objects. CHAIRS provides 3D meshes of both humans and articulated
objects during the entire interactive process, as well as realistic and
physically plausible full-body interactions. We show the value of CHAIRS with
object pose estimation. By learning the geometrical relationships in HOI, we
devise the very first model that leverage human pose estimation to tackle the
estimation of articulated object poses and shapes during whole-body
interactions. Given an image and an estimated human pose, our model first
reconstructs the pose and shape of the object, then optimizes the
reconstruction according to a learned interaction prior. Under both evaluation
settings (e.g., with or without the knowledge of objects'
geometries/structures), our model significantly outperforms baselines. We hope
CHAIRS will promote the community towards finer-grained interaction
understanding. We will make the data/code publicly available.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 19:50:54 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 19:32:31 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Jiang",
"Nan",
""
],
[
"Liu",
"Tengyu",
""
],
[
"Cao",
"Zhexuan",
""
],
[
"Cui",
"Jieming",
""
],
[
"zhang",
"Zhiyuan",
""
],
[
"Chen",
"Yixin",
""
],
[
"Wang",
"He",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Huang",
"Siyuan",
""
]
] |
new_dataset
| 0.999139 |
2212.11533
|
Fei Wu
|
Fei Wu and Luoyu Chen
|
Multi Lane Detection
|
this work builds on and optimizes other work, thus we want to
withdraw it
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lane detection is a long-standing task and a basic module in autonomous
driving. The task is to detect the lane of the current driving road, and
provide relevant information such as the ID, direction, curvature, width,
length, with visualization. Our work, based on the CNN backbone DLA-34 along
with Affinity Fields, aims to achieve robust detection of various lanes without
assuming a fixed number of lanes. Besides, we investigate novel decoding
methods to achieve a more efficient lane detection algorithm.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 08:20:08 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 02:28:50 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2023 10:09:33 GMT"
},
{
"version": "v4",
"created": "Sat, 15 Apr 2023 08:32:42 GMT"
},
{
"version": "v5",
"created": "Wed, 17 May 2023 04:22:59 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Wu",
"Fei",
""
],
[
"Chen",
"Luoyu",
""
]
] |
new_dataset
| 0.995445 |
2302.10197
|
Ettore Randazzo
|
Ettore Randazzo, Alexander Mordvintsev and Craig Fouts
|
Growing Steerable Neural Cellular Automata
|
7 pages. Code base available at
https://github.com/google-research/self-organising-systems/tree/master/isotropic_nca
| null | null | null |
cs.NE cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Neural Cellular Automata (NCA) models have shown remarkable capacity for
pattern formation and complex global behaviors stemming from local
coordination. However, in the original implementation of NCA, cells are
incapable of adjusting their own orientation, and it is the responsibility of
the model designer to orient them externally. A recent isotropic variant of NCA
(Growing Isotropic Neural Cellular Automata) makes the model
orientation-independent - cells can no longer tell up from down, nor left from
right - by removing its dependency on perceiving the gradient of spatial states
in its neighborhood. In this work, we revisit NCA with a different approach: we
make each cell responsible for its own orientation by allowing it to "turn" as
determined by an adjustable internal state. The resulting Steerable NCA
contains cells of varying orientation embedded in the same pattern. We observe
how, while Isotropic NCA are orientation-agnostic, Steerable NCA have
chirality: they have a predetermined left-right symmetry. We therefore show
that we can train Steerable NCA in similar but simpler ways than their
Isotropic variant by: (1) breaking symmetries using only two seeds, or (2)
introducing a rotation-invariant training objective and relying on asynchronous
cell updates to break the up-down symmetry of the system.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 09:45:46 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 15:34:32 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Randazzo",
"Ettore",
""
],
[
"Mordvintsev",
"Alexander",
""
],
[
"Fouts",
"Craig",
""
]
] |
new_dataset
| 0.991 |
2303.07033
|
Cong Wang
|
Cong Wang and Jinshan Pan and Wanyu Lin and Jiangxin Dong and
Xiao-Ming Wu
|
SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This work presents an effective depth-consistency self-prompt Transformer for
image dehazing. It is motivated by the observation that the estimated depths of
an image with haze residuals and of its clear counterpart differ. Enforcing the
depth consistency of dehazed images with clear ones, therefore, is essential
for dehazing. For this purpose, we develop a prompt based on the features of
depth differences between the hazy input images and corresponding clear
counterparts that can guide dehazing models for better restoration.
Specifically, we first apply deep features extracted from the input images to
the depth difference features for generating the prompt that contains the haze
residual information in the input. Then we propose a prompt embedding module
that is designed to perceive the haze residuals, by linearly adding the prompt
to the deep features. Further, we develop an effective prompt attention module
to pay more attention to haze residuals for better removal. By incorporating
the prompt, prompt embedding, and prompt attention into an encoder-decoder
network based on VQGAN, we can achieve better perception quality. As the depths
of clear images are not available at inference, and the dehazed images with
one-time feed-forward execution may still contain a portion of haze residuals,
we propose a new continuous self-prompt inference that can iteratively correct
the dehazing model towards better haze-free image generation. Extensive
experiments show that our method performs favorably against the
state-of-the-art approaches on both synthetic and real-world datasets in terms
of perception metrics including NIQE, PI, and PIQE.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 11:47:24 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 06:09:01 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Wang",
"Cong",
""
],
[
"Pan",
"Jinshan",
""
],
[
"Lin",
"Wanyu",
""
],
[
"Dong",
"Jiangxin",
""
],
[
"Wu",
"Xiao-Ming",
""
]
] |
new_dataset
| 0.977854 |
2304.00788
|
Yuheng Lu
|
Yuheng Lu, Chenfeng Xu, Xiaobao Wei, Xiaodong Xie, Masayoshi Tomizuka,
Kurt Keutzer, Shanghang Zhang
|
Open-Vocabulary Point-Cloud Object Detection without 3D Annotation
|
I want to update this manuscript
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of open-vocabulary detection is to identify novel objects based on
arbitrary textual descriptions. In this paper, we address open-vocabulary 3D
point-cloud detection by a dividing-and-conquering strategy, which involves: 1)
developing a point-cloud detector that can learn a general representation for
localizing various objects, and 2) connecting textual and point-cloud
representations to enable the detector to classify novel object categories
based on text prompting. Specifically, we resort to rich image pre-trained
models, by which the point-cloud detector learns localizing objects under the
supervision of predicted 2D bounding boxes from 2D pre-trained detectors.
Moreover, we propose a novel de-biased triplet cross-modal contrastive learning
to connect the modalities of image, point-cloud and text, thereby enabling the
point-cloud detector to benefit from vision-language pre-trained
models, i.e., CLIP. The novel use of image and vision-language pre-trained
models
for point-cloud detectors allows for open-vocabulary 3D object detection
without the need for 3D annotations. Experiments demonstrate that the proposed
method improves at least 3.03 points and 7.47 points over a wide range of
baselines on the ScanNet and SUN RGB-D datasets, respectively. Furthermore, we
provide a comprehensive analysis to explain why our approach works.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 08:22:02 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 02:09:03 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Lu",
"Yuheng",
""
],
[
"Xu",
"Chenfeng",
""
],
[
"Wei",
"Xiaobao",
""
],
[
"Xie",
"Xiaodong",
""
],
[
"Tomizuka",
"Masayoshi",
""
],
[
"Keutzer",
"Kurt",
""
],
[
"Zhang",
"Shanghang",
""
]
] |
new_dataset
| 0.999326 |
2304.12620
|
Junde Wu
|
Junde Wu and Yu Zhang and Rao Fu and Huihui Fang and Yuanpei Liu and
Zhaowei Wang and Yanwu Xu and Yueming Jin
|
Medical SAM Adapter: Adapting Segment Anything Model for Medical Image
Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Segment Anything Model (SAM) has recently gained popularity in the field
of image segmentation. Thanks to its impressive capabilities in all-round
segmentation tasks and its prompt-based interface, SAM has sparked intensive
discussion within the community. It has even been said by many prestigious
experts that the image segmentation task has been "finished" by SAM. However,
medical image segmentation, although an important branch of the image
segmentation family, seems not to be included in the scope of Segmenting
"Anything". Many individual experiments and recent studies have shown that SAM
underperforms in medical
image segmentation. A natural question is how to find the missing piece of the
puzzle to extend the strong segmentation capability of SAM to medical image
segmentation. In this paper, instead of fine-tuning the SAM model, we propose
Med SAM Adapter, which integrates medical-specific domain knowledge into the
segmentation model via a simple yet effective adaptation technique. Although
this work is still one of only a few to transfer the popular NLP Adapter
technique to computer vision, this simple implementation shows surprisingly
good performance on medical image segmentation. A medical-image-adapted SAM,
which
we have dubbed Medical SAM Adapter (MSA), shows superior performance on 19
medical image segmentation tasks with various image modalities including CT,
MRI, ultrasound, fundus, and dermoscopic images. MSA outperforms a wide range
of state-of-the-art (SOTA) medical image segmentation methods, such as nnUNet,
TransUNet, UNetr, and MedSegDiff, and also outperforms the fully fine-tuned
MedSAM by a considerable performance gap. Code will be released at:
https://github.com/WuJunde/Medical-SAM-Adapter.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 07:34:22 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Apr 2023 13:20:17 GMT"
},
{
"version": "v3",
"created": "Wed, 3 May 2023 08:25:22 GMT"
},
{
"version": "v4",
"created": "Thu, 4 May 2023 04:03:33 GMT"
},
{
"version": "v5",
"created": "Thu, 11 May 2023 12:07:35 GMT"
},
{
"version": "v6",
"created": "Sat, 13 May 2023 08:00:39 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Wu",
"Junde",
""
],
[
"Zhang",
"Yu",
""
],
[
"Fu",
"Rao",
""
],
[
"Fang",
"Huihui",
""
],
[
"Liu",
"Yuanpei",
""
],
[
"Wang",
"Zhaowei",
""
],
[
"Xu",
"Yanwu",
""
],
[
"Jin",
"Yueming",
""
]
] |
new_dataset
| 0.99706 |
2305.03688
|
Zeqi Tan
|
Zeqi Tan, Shen Huang, Zixia Jia, Jiong Cai, Yinghui Li, Weiming Lu,
Yueting Zhuang, Kewei Tu, Pengjun Xie, Fei Huang and Yong Jiang
|
DAMO-NLP at SemEval-2023 Task 2: A Unified Retrieval-augmented System
for Multilingual Named Entity Recognition
|
Accepted to SemEval 2023, winners for 9 out of 13 tracks, performance
beyond ChatGPT
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The MultiCoNER \RNum{2} shared task aims to tackle multilingual named entity
recognition (NER) in fine-grained and noisy scenarios, and it inherits the
semantic ambiguity and low-context setting of the MultiCoNER \RNum{1} task. To
cope with these problems, the previous top systems in the MultiCoNER \RNum{1}
either incorporate knowledge bases or gazetteers. However, they still suffer
from insufficient knowledge, limited context length, and a single retrieval
strategy. In this paper, our team \textbf{DAMO-NLP} proposes a unified
retrieval-augmented system (U-RaNER) for fine-grained multilingual NER. We
perform error analysis on the previous top systems and reveal that their
performance bottleneck lies in insufficient knowledge. Also, we discover that
the limited context length causes the retrieval knowledge to be invisible to
the model. To enhance the retrieval context, we incorporate the entity-centric
Wikidata knowledge base, while utilizing the infusion approach to broaden the
contextual scope of the model. Also, we explore various search strategies and
refine the quality of retrieval knowledge. Our system\footnote{We will release
the dataset, code, and scripts of our system at {\small
\url{https://github.com/modelscope/AdaSeq/tree/master/examples/U-RaNER}}.} wins
9 out of 13 tracks in the MultiCoNER \RNum{2} shared task. Additionally, we
compared our system with ChatGPT, one of the large language models which have
unlocked strong capabilities on many tasks. The results show that there is
still much room for improvement for ChatGPT on the extraction task.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 16:59:26 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 03:43:10 GMT"
},
{
"version": "v3",
"created": "Wed, 17 May 2023 03:14:00 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Tan",
"Zeqi",
""
],
[
"Huang",
"Shen",
""
],
[
"Jia",
"Zixia",
""
],
[
"Cai",
"Jiong",
""
],
[
"Li",
"Yinghui",
""
],
[
"Lu",
"Weiming",
""
],
[
"Zhuang",
"Yueting",
""
],
[
"Tu",
"Kewei",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Huang",
"Fei",
""
],
[
"Jiang",
"Yong",
""
]
] |
new_dataset
| 0.958303 |
2305.04772
|
Pooya Rostami Mazrae
|
Mairieli Wessel, Tom Mens, Alexandre Decan, Pooya Rostami Mazrae
|
The GitHub Development Workflow Automation Ecosystems
|
31 pages, 7 figures, Chapter of the book "Software Ecosystems:
Tooling and Analytics", ISBN number: 978-3-031-36059-6, Reproduced with
permission of Springer Nature
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Large-scale software development has become a highly collaborative and
geographically distributed endeavour, especially in open-source software
development ecosystems and their associated developer communities. It has given
rise to modern development processes (e.g., pull-based development) that
involve a wide range of activities such as issue and bug handling, code
reviewing, coding, testing, and deployment. These often very effort-intensive
activities are supported by a wide variety of tools such as version control
systems, bug and issue trackers, code reviewing systems, code quality analysis
tools, test automation, dependency management, and vulnerability detection
tools. To reduce the complexity of the collaborative development process, many
of the repetitive human activities that are part of the development workflow
are being automated by CI/CD tools that help to increase the productivity and
quality of software projects. Social coding platforms aim to integrate all this
tooling and workflow automation in a single encompassing environment. These
social coding platforms gave rise to the emergence of development bots,
facilitating the integration with external CI/CD tools and enabling the
automation of many other development-related tasks. GitHub, the most popular
social coding platform, has introduced GitHub Actions to automate workflows in
its hosted software development repositories since November 2019. This chapter
explores the ecosystems of development bots and GitHub Actions and their
interconnection. It provides an extensive survey of the state-of-the-art in
this domain, discusses the opportunities and threats that these ecosystems
entail, and reports on the challenges and future perspectives for researchers
as well as software practitioners.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 15:24:23 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 17:08:46 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Wessel",
"Mairieli",
""
],
[
"Mens",
"Tom",
""
],
[
"Decan",
"Alexandre",
""
],
[
"Mazrae",
"Pooya Rostami",
""
]
] |
new_dataset
| 0.997379 |
2305.06335
|
Jungyeul Park
|
Eunkyul Leah Jo and Kyuwon Kim and Xihan Wu and KyungTae Lim and
Jungyeul Park and Chulwoo Park
|
K-UniMorph: Korean Universal Morphology and its Feature Schema
|
Findings of the Association for Computational Linguistics: ACL 2023
(Camera-ready)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present in this work a new Universal Morphology dataset for Korean.
Previously, the Korean language had been underrepresented in the field of
morphological paradigms amongst hundreds of diverse world languages. Hence, we
propose Universal Morphology paradigms for the Korean language that
preserve its distinct characteristics. For our K-UniMorph dataset, we outline
each grammatical criterion in detail for the verbal endings, clarify how to
extract inflected forms, and demonstrate how we generate the morphological
schemata. This dataset adopts morphological feature schema from Sylak-Glassman
et al. (2015) and Sylak-Glassman (2016) for the Korean language as we extract
inflected verb forms from the Sejong morphologically analyzed corpus that is
one of the largest annotated corpora for Korean. During the data creation, our
methodology also includes investigating the correctness of the conversion from
the Sejong corpus. Furthermore, we carry out the inflection task using three
different Korean word forms: letters, syllables and morphemes. Finally, we
discuss and describe future perspectives on Korean morphological paradigms and
the dataset.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 17:44:01 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 14:35:29 GMT"
},
{
"version": "v3",
"created": "Wed, 17 May 2023 08:29:58 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Jo",
"Eunkyul Leah",
""
],
[
"Kim",
"Kyuwon",
""
],
[
"Wu",
"Xihan",
""
],
[
"Lim",
"KyungTae",
""
],
[
"Park",
"Jungyeul",
""
],
[
"Park",
"Chulwoo",
""
]
] |
new_dataset
| 0.999776 |
2305.08279
|
Noah Bagazinski
|
Noah J. Bagazinski and Faez Ahmed
|
Ship-D: Ship Hull Dataset for Design Optimization using Machine Learning
| null | null | null | null |
cs.LG cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning has recently made significant strides in reducing design
cycle time for complex products. Ship design, which currently involves
years-long cycles and small-batch production, could greatly benefit from these
advancements. By developing a machine learning tool for ship design that learns
from the design of many different types of ships, tradeoffs in ship design
could be identified and optimized. However, the lack of publicly available ship
design datasets currently limits the potential for leveraging machine learning
in generalized ship design. To address this gap, this paper presents a large
dataset of thirty thousand ship hulls, each with design and functional
performance information, including parameterization, mesh, point cloud, and
image representations, as well as thirty-two hydrodynamic drag measures under
different operating conditions. The dataset is structured to allow human input
and is also designed for computational methods. Additionally, the paper
introduces a set of twelve ship hulls from publicly available CAD repositories
to showcase the proposed parameterization's ability to accurately reconstruct
existing hulls. A surrogate model was developed to predict the thirty-two wave
drag coefficients, which was then used in a genetic algorithm case study
to reduce the total drag of a hull by sixty percent while maintaining the shape
of the hull's cross-section and the length of the parallel midbody. Our work
provides a comprehensive dataset and application examples for other researchers
to use in advancing data-driven ship design.
|
[
{
"version": "v1",
"created": "Sun, 14 May 2023 23:47:20 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 18:57:55 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Bagazinski",
"Noah J.",
""
],
[
"Ahmed",
"Faez",
""
]
] |
new_dataset
| 0.999754 |
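The Ship-D record above couples a learned drag surrogate with a genetic algorithm. A minimal sketch of that loop, with a stand-in quadratic function in place of the paper's learned surrogate; the population size, operators, and constraint handling are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def surrogate_drag(params):
    """Stand-in for a learned drag surrogate: maps a hull parameter
    vector to a scalar drag estimate. Purely illustrative."""
    return np.sum((params - 0.3) ** 2, axis=-1)

def evolve(pop_size=64, n_params=16, n_gens=50, mut=0.05):
    pop = rng.random((pop_size, n_params))
    for _ in range(n_gens):
        fitness = surrogate_drag(pop)
        elite = pop[np.argsort(fitness)[: pop_size // 4]]   # selection
        parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
        children = parents.mean(axis=1)                     # crossover
        children += rng.normal(0.0, mut, children.shape)    # mutation
        # Design constraints (e.g. a fixed cross-section or midbody
        # length) would be enforced here by clipping or projection.
        pop = np.clip(children, 0.0, 1.0)
    best = pop[np.argmin(surrogate_drag(pop))]
    return best, surrogate_drag(best)

best, drag = evolve()
print(drag)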
2305.08322
|
Junxian He
|
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang,
Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu,
Maosong Sun, Junxian He
|
C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for
Foundation Models
|
Website: https://cevalbenchmark.com
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
New NLP benchmarks are urgently needed to align with the rapid development of
large language models (LLMs). We present C-Eval, the first comprehensive
Chinese evaluation suite designed to assess advanced knowledge and reasoning
abilities of foundation models in a Chinese context. C-Eval comprises
multiple-choice questions across four difficulty levels: middle school, high
school, college, and professional. The questions span 52 diverse disciplines,
ranging from humanities to science and engineering. C-Eval is accompanied by
C-Eval Hard, a subset of very challenging subjects in C-Eval that requires
advanced reasoning abilities to solve. We conduct a comprehensive evaluation of
the most advanced LLMs on C-Eval, including both English- and Chinese-oriented
models. Results indicate that only GPT-4 could achieve an average accuracy of
over 60%, suggesting that there is still significant room for improvement for
current LLMs. We anticipate C-Eval will help analyze important strengths and
shortcomings of foundation models, and foster their development and growth for
Chinese users.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 03:20:19 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 01:11:35 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Huang",
"Yuzhen",
""
],
[
"Bai",
"Yuzhuo",
""
],
[
"Zhu",
"Zhihao",
""
],
[
"Zhang",
"Junlei",
""
],
[
"Zhang",
"Jinghan",
""
],
[
"Su",
"Tangjun",
""
],
[
"Liu",
"Junteng",
""
],
[
"Lv",
"Chuancheng",
""
],
[
"Zhang",
"Yikai",
""
],
[
"Lei",
"Jiayi",
""
],
[
"Fu",
"Yao",
""
],
[
"Sun",
"Maosong",
""
],
[
"He",
"Junxian",
""
]
] |
new_dataset
| 0.999227 |
2305.08953
|
Mark Endo
|
Mark Endo, Joy Hsu, Jiaman Li, Jiajun Wu
|
Motion Question Answering via Modular Motion Programs
|
In ICML 2023; first two authors contributed equally to this work
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to build artificial intelligence systems that can perceive and
reason with human behavior in the real world, we must first design models that
conduct complex spatio-temporal reasoning over motion sequences. Moving towards
this goal, we propose the HumanMotionQA task to evaluate complex, multi-step
reasoning abilities of models on long-form human motion sequences. We generate
a dataset of question-answer pairs that require detecting motor cues in small
portions of motion sequences, reasoning temporally about when events occur, and
querying specific motion attributes. In addition, we propose NSPose, a
neuro-symbolic method for this task that uses symbolic reasoning and a modular
design to ground motion through learning motion concepts, attribute neural
operators, and temporal relations. We demonstrate the suitability of NSPose for
the HumanMotionQA task, outperforming all baseline methods.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 18:45:55 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 17:18:35 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Endo",
"Mark",
""
],
[
"Hsu",
"Joy",
""
],
[
"Li",
"Jiaman",
""
],
[
"Wu",
"Jiajun",
""
]
] |
new_dataset
| 0.9986 |
2305.09222
|
Samuel Z\"uhlke
|
Samuel Z\"uhlke, Andreas St\"ockl, David C. Schedl
|
Touch Sensing on Semi-Elastic Textiles with Border-Based Sensors
|
8 pages, 3 figures, submitted to IHSED 2023
| null | null | null |
cs.LG cs.CV cs.HC cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This study presents a novel approach for touch sensing using semi-elastic
textile surfaces that does not require the placement of additional sensors in
the sensing area, instead relying on sensors located on the border of the
textile. The proposed approach is demonstrated through experiments involving an
elastic Jersey fabric and a variety of machine-learning models. The performance
of one particular border-based sensor design is evaluated in depth. By using
visual markers, the best-performing visual sensor arrangement predicts a single
touch point with a mean squared error of 1.36 mm on an area of 125 mm by 125
mm. We built a textile-only prototype that is able to classify touch at three
indent levels (0, 15, and 20 mm) with an accuracy of 82.85%. Our results
suggest that this approach has potential applications in wearable technology
and smart textiles, making it a promising avenue for further exploration in
these fields.
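  As an illustrative sketch only (the paper's actual models and sensor data
differ), the regression setup can be outlined with scikit-learn, using
synthetic stand-ins for the 11 border-sensor readings and 2-D touch points:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(2000, 11))   # placeholder border-sensor readings
    y = rng.uniform(0.0, 125.0, size=(2000, 2))  # placeholder touch points (mm)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    print("test MSE (mm^2):", mean_squared_error(y_te, model.predict(X_te)))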
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 06:58:11 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 06:35:55 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Zühlke",
"Samuel",
""
],
[
"Stöckl",
"Andreas",
""
],
[
"Schedl",
"David C.",
""
]
] |
new_dataset
| 0.995511 |
2305.09559
|
Anoubhav Agarwaal
|
Anoubhav Agarwaal, Prabhat Kanaujia, Sartaki Sinha Roy, Susmita Ghose
|
Robust and lightweight audio fingerprint for Automatic Content
Recognition
| null | null | null | null |
cs.SD cs.IR eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This research paper presents a novel audio fingerprinting system for
Automatic Content Recognition (ACR). By using signal processing techniques and
statistical transformations, our proposed method generates compact fingerprints
of audio segments that are robust to noise degradations present in real-world
audio. The system is designed to be highly scalable, with the ability to
identify thousands of hours of content using fingerprints generated from
millions of TVs. The fingerprint's high temporal correlation and utilization of
existing GPU-compatible Approximate Nearest Neighbour (ANN) search algorithms
make this possible. Furthermore, the fingerprint generation can run on
low-power devices with limited compute, making it accessible to a wide range of
applications. Experimental results show improvements in our proposed system
compared to a min-hash-based audio fingerprint on all evaluated metrics,
including accuracy on proprietary ACR datasets, retrieval speed, memory usage,
and robustness to various noises. For similar retrieval accuracy, our system is
30x faster and uses 6x fewer fingerprints than the min-hash method.
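  For intuition only, a classic sign-of-band-energy-difference binary
fingerprint (Haitsma-Kalker style) with a brute-force Hamming-distance lookup
can be sketched as below; the paper's actual fingerprint design and GPU ANN
index are not reproduced here:

    import numpy as np

    def fingerprint(frames):
        """frames: (T, B) array of per-frame band energies -> (T-1, B-1) bits."""
        d = np.diff(frames, axis=0)   # temporal difference
        dd = np.diff(d, axis=1)       # band difference
        return (dd > 0).astype(np.uint8)

    def match(query_bits, db):
        """Index of the database fingerprint with the smallest Hamming distance."""
        dists = [np.count_nonzero(query_bits ^ ref) for ref in db]
        return int(np.argmin(dists))

    rng = np.random.default_rng(1)
    db = [fingerprint(rng.random((64, 33))) for _ in range(100)]
    noisy = db[42] ^ (rng.random(db[42].shape) < 0.05)  # simulate 5% bit flips
    print(match(noisy, db))  # -> 42 with high probability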
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 15:55:03 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 06:28:30 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Agarwaal",
"Anoubhav",
""
],
[
"Kanaujia",
"Prabhat",
""
],
[
"Roy",
"Sartaki Sinha",
""
],
[
"Ghose",
"Susmita",
""
]
] |
new_dataset
| 0.959135 |
2305.09669
|
Nur Imtiazul Haque
|
Nur Imtiazul Haque, Maurice Ngouen, Mohammad Ashiqur Rahman, Selcuk
Uluagac, and Laurent Njilla
|
SHATTER: Control and Defense-Aware Attack Analytics for Activity-Driven
Smart Home Systems
|
13 pages, 2023 IEEE/IFIP DSN Conference
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Modern smart home control systems utilize real-time occupancy and activity
monitoring to ensure control efficiency, occupants' comfort, and optimal energy
consumption. Moreover, adopting machine learning-based anomaly detection models
(ADMs) enhances security and reliability. However, sufficient system knowledge
allows adversaries/attackers to alter sensor measurements through stealthy
false data injection (FDI) attacks. Although ADMs limit attack scopes, the
availability of information like occupants' location, conducted activities, and
alteration capability of smart appliances increases the attack surface.
Therefore, performing an attack space analysis of modern home control systems
is crucial to design robust defense solutions. However, state-of-the-art
analyzers do not consider contemporary control and defense solutions and
generate trivial attack vectors. To address this, we propose a novel control-
and defense-aware attack analysis framework for a modern smart home control
system, efficiently extracting ADM rules. We verify and validate our framework
using a state-of-the-art dataset and a prototype testbed.
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 20:29:21 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Haque",
"Nur Imtiazul",
""
],
[
"Ngouen",
"Maurice",
""
],
[
"Rahman",
"Mohammad Ashiqur",
""
],
[
"Uluagac",
"Selcuk",
""
],
[
"Njilla",
"Laurent",
""
]
] |
new_dataset
| 0.95718 |
2305.09678
|
Alireza Dehlaghi Ghadim
|
Alireza Dehlaghi-Ghadim, Mahshid Helali Moghadam, Ali Balador, Hans
Hansson
|
Anomaly Detection Dataset for Industrial Control Systems
| null | null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past few decades, Industrial Control Systems (ICSs) have been
targeted by cyberattacks and are becoming increasingly vulnerable as more ICSs
are connected to the internet. Using Machine Learning (ML) for Intrusion
Detection Systems (IDS) is a promising approach for ICS cyber protection, but
the lack of suitable datasets for evaluating ML algorithms is a challenge.
Although there are a few commonly used datasets, they may not reflect realistic
ICS network data, lack necessary features for effective anomaly detection, or
be outdated. This paper presents the 'ICS-Flow' dataset, which offers network
data and process state variables logs for supervised and unsupervised ML-based
IDS assessment. The network data includes normal and anomalous network packets
and flows captured from simulated ICS components and emulated networks. The
anomalies were injected into the system through various attack techniques
commonly used by hackers to modify network traffic and compromise ICSs. We also
propose an open-source tool, 'ICSFlowGenerator', for generating network flow
parameters from raw network packets. The final dataset comprises over
25,000,000 raw network packets, network flow records, and process variable
logs. The paper describes the methodology used to collect and label the dataset
and provides a detailed data analysis. Finally, we implement several ML models,
including the decision tree, random forest, and artificial neural network to
detect anomalies and attacks, demonstrating that our dataset can be used
effectively for training intrusion detection ML models.
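  A minimal sketch of the kind of flow-based anomaly classifier the dataset is
meant to support, shown on synthetic stand-in features (the placeholder
columns and labels below are not the actual ICS-Flow schema):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 12))            # 12 flow features (placeholder)
    y = (X[:, 0] + X[:, 3] > 1.5).astype(int)  # synthetic "attack" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))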
|
[
{
"version": "v1",
"created": "Thu, 11 May 2023 14:52:19 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Dehlaghi-Ghadim",
"Alireza",
""
],
[
"Moghadam",
"Mahshid Helali",
""
],
[
"Balador",
"Ali",
""
],
[
"Hansson",
"Hans",
""
]
] |
new_dataset
| 0.999832 |
2305.09736
|
Sanyam Jain
|
Sanyam Jain
|
ADDSL: Hand Gesture Detection and Sign Language Recognition on Annotated
Danish Sign Language
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a long time, detecting hand gestures and recognizing them as letters or
numbers has been a challenging task. This creates communication barriers for
individuals with disabilities. This paper introduces a new dataset, the
Annotated Dataset for Danish Sign Language (ADDSL). Annotations for the
dataset were made using the open-source tool LabelImg in the YOLO format. Using
this dataset, a one-stage object detector model (YOLOv5) was trained with the
CSP-DarkNet53 backbone and YOLOv3 head to recognize letters (A-Z) and numbers
(0-9) using only seven unique images per class (without augmentation). Five
models were trained for 350 epochs, resulting in an average inference time of
9.02 ms per image and a best accuracy of 92% when compared to previous
research. Our results show that the modified model is efficient and more
accurate than existing work in the same field. The code repository for our model is
available at the GitHub repository https://github.com/s4nyam/pvt-addsl.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 18:08:24 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Jain",
"Sanyam",
""
]
] |
new_dataset
| 0.999782 |
2305.09748
|
Alireza Vahid
|
Tiep M. Hoang, Alireza Vahid, Hoang Duong Tuan, Lajos Hanzo
|
Physical Layer Authentication and Security Design in the Machine
Learning Era
| null | null | null | null |
cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security at the physical layer (PHY) is a salient research topic in wireless
systems, and machine learning (ML) is emerging as a powerful tool for providing
new data-driven security solutions. Therefore, the application of ML techniques
to PHY security is of crucial importance in the landscape of increasingly
data-driven wireless services. In this context, we first summarize the family
of bespoke ML algorithms that are eminently suitable for wireless security.
Then, we review the recent progress in ML-aided PHY security, where the term
"PHY security" is classified into two different types: i) PHY authentication
and ii) secure PHY transmission. Moreover, we treat neural networks as special
types of ML and present how to deal with PHY security optimization problems
using neural networks. Finally, we identify some major challenges and
opportunities in tackling PHY security challenges by applying carefully
tailored ML tools.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 18:45:34 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Hoang",
"Tiep M.",
""
],
[
"Vahid",
"Alireza",
""
],
[
"Tuan",
"Hoang Duong",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.990771 |
2305.09750
|
Shangbang Long
|
Shangbang Long, Siyang Qin, Dmitry Panteleev, Alessandro Bissacco,
Yasuhisa Fujii, Michalis Raptis
|
ICDAR 2023 Competition on Hierarchical Text Detection and Recognition
|
ICDAR 2023 competition report by organizers (accepted and to be
published officially later)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We organize a competition on hierarchical text detection and recognition. The
competition aims to promote research into deep learning models and systems
that can jointly perform text detection and recognition and geometric layout
analysis. We present details of the proposed competition organization,
including tasks, datasets, evaluations, and schedule. During the competition
period (from January 2nd 2023 to April 1st 2023), at least 50 submissions from
more than 20 teams were made in the 2 proposed tasks. Considering the number of
teams and submissions, we conclude that the HierText competition has been
successfully held. In this report, we will also present the competition results
and insights from them.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 18:56:12 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Long",
"Shangbang",
""
],
[
"Qin",
"Siyang",
""
],
[
"Panteleev",
"Dmitry",
""
],
[
"Bissacco",
"Alessandro",
""
],
[
"Fujii",
"Yasuhisa",
""
],
[
"Raptis",
"Michalis",
""
]
] |
new_dataset
| 0.998238 |
2305.09761
|
Javier Yu
|
Javier Yu and Jun En Low and Keiko Nagami and Mac Schwager
|
NerfBridge: Bringing Real-time, Online Neural Radiance Field Training to
Robotics
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This work was presented at the IEEE International Conference on Robotics and
Automation 2023 Workshop on Unconventional Spatial Representations.
Neural radiance fields (NeRFs) are a class of implicit scene representations
that model 3D environments from color images. NeRFs are expressive, and can
model the complex and multi-scale geometry of real world environments, which
potentially makes them a powerful tool for robotics applications. Modern NeRF
training libraries can generate a photo-realistic NeRF from a static data set
in just a few seconds, but are designed for offline use and require a slow pose
optimization pre-computation step.
In this work we propose NerfBridge, an open-source bridge between the Robot
Operating System (ROS) and the popular Nerfstudio library for real-time, online
training of NeRFs from a stream of images. NerfBridge enables rapid development
of research on applications of NeRFs in robotics by providing an extensible
interface to the efficient training pipelines and model libraries provided by
Nerfstudio. As an example use case, we outline a hardware setup that can be
used with NerfBridge to train a NeRF from images captured by a camera mounted
to a quadrotor in both indoor and outdoor environments.
  For an accompanying video, see https://youtu.be/EH0SLn-RcDg; code is
available at https://github.com/javieryu/nerf_bridge.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 19:27:17 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Yu",
"Javier",
""
],
[
"Low",
"Jun En",
""
],
[
"Nagami",
"Keiko",
""
],
[
"Schwager",
"Mac",
""
]
] |
new_dataset
| 0.992577 |
2305.09778
|
He Chen
|
He Chen, Elie Diaz, Cem Yuksel
|
Shortest Path to Boundary for Self-Intersecting Meshes
| null | null | null | null |
cs.GR cs.CG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce a method for efficiently computing the exact shortest path to
the boundary of a mesh from a given internal point in the presence of
self-intersections. We provide a formal definition of shortest boundary paths
for self-intersecting objects and present a robust algorithm for computing the
actual shortest boundary path. The resulting method offers an effective
solution for collision and self-collision handling while simulating deformable
volumetric objects, using fast simulation techniques that provide no guarantees
on collision resolution. Our evaluation includes complex self-collision
scenarios with a large number of active contacts, showing that our method can
successfully handle them by introducing a relatively minor computational
overhead.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 20:05:10 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Chen",
"He",
""
],
[
"Diaz",
"Elie",
""
],
[
"Yuksel",
"Cem",
""
]
] |
new_dataset
| 0.994679 |
2305.09788
|
Rafaela Scaciota
|
Malith Gallage, Rafaela Scaciota, Sumudu Samarakoon and Mehdi Bennis
|
Codesign of Edge Intelligence and Automated Guided Vehicle Control
|
3 pages, 3 figures, 2023 IEEE International Conference on Pervasive
Computing and Communications Workshops and other Affiliated Events (PerCom
Workshops): Demos
| null | null | null |
cs.CV cs.AI cs.LG cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a harmonic design of automated guided vehicle (AGV)
control, edge intelligence, and human input to enable autonomous transportation
in industrial environments. The AGV has the capability to navigate between a
source and destinations and to pick/place objects. The human input implicitly
provides preferences for the destination and the exact drop point, which are
derived from an artificial intelligence (AI) module at the network edge and
shared with the AGV over a wireless network. The demonstration indicates that
the proposed integrated design of hardware, software, and AI achieves a
technology readiness level (TRL) in the range of 4-5.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 12:15:35 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Gallage",
"Malith",
""
],
[
"Scaciota",
"Rafaela",
""
],
[
"Samarakoon",
"Sumudu",
""
],
[
"Bennis",
"Mehdi",
""
]
] |
new_dataset
| 0.997563 |
2305.09821
|
Shenjie Huang
|
Shenjie Huang, Danial Chitnis, Cheng Chen, Harald Haas, Mohammad-Ali
Khalighi, Robert K. Henderson, and Majid Safari
|
Single-Photon Counting Receivers for 6G Optical Wireless Communications
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Optical wireless communication (OWC) offers several complementary advantages
to radio-frequency (RF) wireless networks such as its massive available
spectrum; hence, it is widely anticipated that OWC will assume a pivotal role
in the forthcoming sixth generation (6G) wireless communication networks.
Although significant progress has been achieved in OWC over the past decades,
the outage induced by occasionally low received optical power continues to pose
a key limiting factor for its deployment. In this work, we discuss the
potential role of single-photon counting (SPC) receivers as a promising
solution to overcome this limitation. We provide an overview of the
state-of-the-art of OWC systems utilizing SPC receivers and identify several
critical areas of open problems that warrant further research in the future.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 21:53:52 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Huang",
"Shenjie",
""
],
[
"Chitnis",
"Danial",
""
],
[
"Chen",
"Cheng",
""
],
[
"Haas",
"Harald",
""
],
[
"Khalighi",
"Mohammad-Ali",
""
],
[
"Henderson",
"Robert K.",
""
],
[
"Safari",
"Majid",
""
]
] |
new_dataset
| 0.994438 |
2305.09857
|
Vipul Raheja
|
Vipul Raheja, Dhruv Kumar, Ryan Koo, Dongyeop Kang
|
CoEdIT: Text Editing by Task-Specific Instruction Tuning
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Text editing or revision is an essential function of the human writing
process. Understanding the capabilities of LLMs for making high-quality
revisions and collaborating with human writers is a critical step toward
building effective writing assistants. With the prior success of LLMs and
instruction tuning, we leverage instruction-tuned LLMs for text revision to
improve the quality of user-generated text and improve the efficiency of the
process. We introduce CoEdIT, a state-of-the-art text editing model for writing
assistance. CoEdIT takes instructions from the user specifying the attributes
of the desired text, such as "Make the sentence simpler" or "Write it in a more
neutral style," and outputs the edited text. We present a large language model
fine-tuned on a diverse collection of task-specific instructions for text
editing (a total of 82K instructions). Our model (1) achieves state-of-the-art
performance on various text editing benchmarks, (2) is competitive with
publicly available largest-sized LLMs trained on instructions while being
$\sim$60x smaller, (3) is capable of generalizing to unseen edit instructions,
and (4) exhibits compositional comprehension abilities to generalize to
instructions containing different combinations of edit actions. Through
extensive qualitative and quantitative analysis, we show that writers prefer
the edits suggested by CoEdIT, relative to other state-of-the-art text editing
models. Our code and dataset are publicly available.
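  A hedged usage sketch with Hugging Face transformers follows; the checkpoint
identifier is assumed from the public release and may differ:

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    name = "grammarly/coedit-large"  # assumed checkpoint id
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSeq2SeqLM.from_pretrained(name)

    prompt = ("Make the sentence simpler: Their behaviour was not amenable "
              "to modification.")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))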
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 00:05:24 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Raheja",
"Vipul",
""
],
[
"Kumar",
"Dhruv",
""
],
[
"Koo",
"Ryan",
""
],
[
"Kang",
"Dongyeop",
""
]
] |
new_dataset
| 0.970737 |
2305.09905
|
Karim Eldefrawy
|
Aysajan Abidin, Karim Eldefrawy, Dave Singelee
|
Entanglement-based Mutual Quantum Distance Bounding
|
23 pages
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Mutual distance bounding (DB) protocols enable two distrusting parties to
establish an upper-bound on the distance between them. DB has been so far
mainly considered in classical settings and for classical applications,
especially in wireless settings, e.g., to prevent relay attacks in wireless
authentication and access control systems, and for secure localization. While
recent research has started exploring DB in quantum settings, all current
quantum DB (QDB) protocols employ quantum-bits (qubits) in the rapid-bit
exchange phase and only perform one-way DB. Specifically, the latest QDB
proposals improve the initial ones by adding resistance to photon number
splitting attacks, and improving round complexity by avoiding communication
from the prover to the verifier in the last authentication phase. This paper
presents two new QDB protocols that differ from previously proposed protocols
in several aspects: (1) to the best of our knowledge, our protocols are the
first to utilize entangled qubits in the rapid-bit exchange phase, previous
protocols relied on sending individual qubits, not those from a pair of
entangled ones; (2) our second protocol can perform mutual QDB between two
parties in one execution, previous QDB protocols had to be executed twice with
the prover and verifier roles reversed in each execution; (3) the use of
entangled qubits in our protocols thwarts attacks that previous QDB protocols
were prone to; (4) and finally, our protocols also eliminate the need for
communication from the prover to the verifier in the last authentication phase,
which was necessary in some previous QDB protocols. Our work paves the way for
several interesting research directions, which we briefly discuss in the
appendix.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 02:28:00 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Abidin",
"Aysajan",
""
],
[
"Eldefrawy",
"Karim",
""
],
[
"Singelee",
"Dave",
""
]
] |
new_dataset
| 0.967134 |
2305.09924
|
Hao Zheng
|
Hao Zheng, Jinbao Wang, Xiantong Zhen, Hong Chen, Jingkuan Song, Feng
Zheng
|
CageViT: Convolutional Activation Guided Efficient Vision Transformer
|
9 pages, 3 figures, NeurIPS conference
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Transformers have emerged as the go-to architecture for both vision
and language modeling tasks, but their computational efficiency is limited by
the length of the input sequence. To address this, several efficient variants
of Transformers have been proposed to accelerate computation or reduce memory
consumption while preserving performance. This paper presents an efficient
vision Transformer, called CageViT, that is guided by convolutional activation
to reduce computation. Our CageViT, unlike current Transformers, utilizes a new
encoder to handle the rearranged tokens, bringing several technical
contributions: 1) Convolutional activation is used to pre-process the token
after patchifying the image to select and rearrange the major tokens and minor
tokens, which substantially reduces the computation cost through an additional
fusion layer. 2) Instead of using the class activation map of the convolutional
model directly, we design a new weighted class activation to lower the model
requirements. 3) To facilitate communication between major tokens and fusion
tokens, Gated Linear SRA is proposed to further integrate fusion tokens into
the attention mechanism. We perform a comprehensive validation of CageViT on
the image classification challenge.
Experimental results demonstrate that the proposed CageViT outperforms the
most recent state-of-the-art backbones by a large margin in terms of
efficiency, while maintaining a comparable level of accuracy (e.g., a
moderate-sized 43.35M model trained solely on 224 x 224 ImageNet-1K achieves a
Top-1 accuracy of 83.4%).
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 03:19:18 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Zheng",
"Hao",
""
],
[
"Wang",
"Jinbao",
""
],
[
"Zhen",
"Xiantong",
""
],
[
"Chen",
"Hong",
""
],
[
"Song",
"Jingkuan",
""
],
[
"Zheng",
"Feng",
""
]
] |
new_dataset
| 0.99848 |
2305.09928
|
Ahmed J. Afifi
|
Ahmed J. Afifi, Samuel T. Thiele, Sandra Lorenz, Pedram Ghamisi,
Raimon Tolosana-Delgado, Moritz Kirsch, Richard Gloaguen, Michael Heizmann
|
Tinto: Multisensor Benchmark for 3D Hyperspectral Point Cloud
Segmentation in the Geosciences
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The increasing use of deep learning techniques has reduced interpretation
time and, ideally, reduced interpreter bias by automatically deriving
geological maps from digital outcrop models. However, accurate validation of
these automated mapping approaches is a significant challenge due to the
subjective nature of geological mapping and the difficulty in collecting
quantitative validation data. Additionally, many state-of-the-art deep learning
methods are limited to 2D image data, which is insufficient for 3D digital
outcrops, such as hyperclouds. To address these challenges, we present Tinto, a
multi-sensor benchmark digital outcrop dataset designed to facilitate the
development and validation of deep learning approaches for geological mapping,
especially for non-structured 3D data like point clouds. Tinto comprises two
complementary sets: 1) a real digital outcrop model from Corta Atalaya (Spain),
with spectral attributes and ground-truth data, and 2) a synthetic twin that
uses latent features in the original datasets to reconstruct realistic spectral
data (including sensor noise and processing artifacts) from the ground-truth.
The point cloud is dense and contains 3,242,964 labeled points. We used these
datasets to explore the abilities of different deep learning approaches for
automated geological mapping. By making Tinto publicly available, we hope to
foster the development and adaptation of new deep learning tools for 3D
applications in Earth sciences. The dataset can be accessed through this link:
https://doi.org/10.14278/rodare.2256.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 03:24:08 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Afifi",
"Ahmed J.",
""
],
[
"Thiele",
"Samuel T.",
""
],
[
"Lorenz",
"Sandra",
""
],
[
"Ghamisi",
"Pedram",
""
],
[
"Tolosana-Delgado",
"Raimon",
""
],
[
"Kirsch",
"Moritz",
""
],
[
"Gloaguen",
"Richard",
""
],
[
"Heizmann",
"Michael",
""
]
] |
new_dataset
| 0.999314 |
2305.09954
|
Zhaoji Zhang
|
Zhaoji Zhang, Yuhao Chi, Qinghua Guo, Ying Li, Guanghui Song, Chongwen
Huang
|
Asynchronous Grant-Free Random Access: Receiver Design with Partially
Uni-Directional Message Passing and Interference Suppression Analysis
|
submitted to IEEE IoTJ
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Massive Machine-Type Communications (mMTC) features a massive number of
low-cost user equipments (UEs) with sparse activity. Tailor-made for these
features, grant-free random access (GF-RA) serves as an efficient access
solution for mMTC. However, most existing GF-RA schemes rely on strict
synchronization, which incurs an excessive coordination burden for the low-cost
UEs. In this work, we propose a receiver design for asynchronous GF-RA, and
address the joint user activity detection (UAD) and channel estimation (CE)
problem in the presence of asynchronization-induced inter-symbol interference.
Specifically, the delay profile is exploited at the receiver to distinguish
different UEs. However, a sample correlation problem in this receiver design
impedes the factorization of the joint likelihood function, which complicates
the UAD and CE problem. To address this correlation problem, we design a
partially uni-directional (PUD) factor graph representation for the joint
likelihood function. Building on this PUD factor graph, we further propose a
PUD message passing based sparse Bayesian learning (SBL) algorithm for
asynchronous UAD and CE (PUDMP-SBL-aUADCE). Our theoretical analysis shows that
the PUDMP-SBL-aUADCE algorithm exhibits higher signal-to-interference-and-noise
ratio (SINR) in the asynchronous case than in the synchronous case, i.e., the
proposed receiver design can exploit asynchronization to suppress multi-user
interference. In addition, considering potential timing error from the low-cost
UEs, we investigate the impacts of imperfect delay profile, and reveal the
advantages of adopting the SBL method in this case. Finally, extensive
simulation results are provided to demonstrate the performance of the
PUDMP-SBL-aUADCE algorithm.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 05:23:21 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Zhang",
"Zhaoji",
""
],
[
"Chi",
"Yuhao",
""
],
[
"Guo",
"Qinghua",
""
],
[
"Li",
"Ying",
""
],
[
"Song",
"Guanghui",
""
],
[
"Huang",
"Chongwen",
""
]
] |
new_dataset
| 0.974615 |
2305.09975
|
Shaoguang Mao
|
Chenshuo Wang, Shaoguang Mao, Tao Ge, Wenshan Wu, Xun Wang, Yan Xia,
Jonathan Tien, Dongyan Zhao
|
Smart Word Suggestions for Writing Assistance
|
Accepted by Findings of ACL23
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enhancing word usage is a desired feature for writing assistance. To further
advance research in this area, this paper introduces the "Smart Word
Suggestions" (SWS) task and benchmark. Unlike other works, SWS emphasizes
end-to-end
evaluation and presents a more realistic writing assistance scenario. This task
involves identifying words or phrases that require improvement and providing
substitution suggestions. The benchmark includes human-labeled data for
testing, a large distantly supervised dataset for training, and the framework
for evaluation. The test data includes 1,000 sentences written by English
learners, accompanied by over 16,000 substitution suggestions annotated by 10
native speakers. The training dataset comprises over 3.7 million sentences and
12.7 million suggestions generated through rules. Our experiments with seven
baselines demonstrate that SWS is a challenging task. Based on experimental
analysis, we suggest potential directions for future research on SWS. The
dataset and related code are available at
https://github.com/microsoft/SmartWordSuggestions.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 06:15:41 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Wang",
"Chenshuo",
""
],
[
"Mao",
"Shaoguang",
""
],
[
"Ge",
"Tao",
""
],
[
"Wu",
"Wenshan",
""
],
[
"Wang",
"Xun",
""
],
[
"Xia",
"Yan",
""
],
[
"Tien",
"Jonathan",
""
],
[
"Zhao",
"Dongyan",
""
]
] |
new_dataset
| 0.99968 |
2305.10084
|
Talha Ilyas
|
Talha Ilyas, Dewa Made Sri Arsa, Khubaib Ahmad, Yong Chae Jeong, Okjae
Won, Jong Hoon Lee, Hyongsuk Kim
|
CWD30: A Comprehensive and Holistic Dataset for Crop Weed Recognition in
Precision Agriculture
|
15 pages, 14 figures, journal research article
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growing demand for precision agriculture necessitates efficient and
accurate crop-weed recognition and classification systems. Current datasets
often lack the sample size, diversity, and hierarchical structure needed to
develop robust deep learning models for discriminating crops and weeds in
agricultural fields. Moreover, the similar external structure and phenomics of
crops and weeds complicate recognition tasks. To address these issues, we
present the CWD30 dataset, a large-scale, diverse, holistic, and hierarchical
dataset tailored for crop-weed recognition tasks in precision agriculture.
CWD30 comprises over 219,770 high-resolution images of 20 weed species and 10
crop species, encompassing various growth stages, multiple viewing angles, and
environmental conditions. The images were collected from diverse agricultural
fields across different geographic locations and seasons, ensuring a
representative dataset. The dataset's hierarchical taxonomy enables
fine-grained classification and facilitates the development of more accurate,
robust, and generalizable deep learning models. We conduct extensive baseline
experiments to validate the efficacy of the CWD30 dataset. Our experiments
reveal that the dataset poses significant challenges due to intra-class
variations, inter-class similarities, and data imbalance. Additionally, we
demonstrate that minor training modifications like using CWD30 pretrained
backbones can significantly enhance model performance and reduce convergence
time, saving training resources on several downstream tasks. These challenges
provide valuable insights and opportunities for future research in crop-weed
recognition. We believe that the CWD30 dataset will serve as a benchmark for
evaluating crop-weed recognition algorithms, promoting advancements in
precision agriculture, and fostering collaboration among researchers in the
field.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 09:39:01 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Ilyas",
"Talha",
""
],
[
"Arsa",
"Dewa Made Sri",
""
],
[
"Ahmad",
"Khubaib",
""
],
[
"Jeong",
"Yong Chae",
""
],
[
"Won",
"Okjae",
""
],
[
"Lee",
"Jong Hoon",
""
],
[
"Kim",
"Hyongsuk",
""
]
] |
new_dataset
| 0.999828 |
2305.10108
|
Thomas Erlebach
|
Banu Baklan \c{S}en, \"Oznur Ya\c{s}ar Diner, Thomas Erlebach
|
List 3-Coloring on Comb-Convex and Caterpillar-Convex Bipartite Graphs
| null | null | null | null |
cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a graph $G=(V, E)$ and a list of available colors $L(v)$ for each
vertex $v\in V$, where $L(v) \subseteq \{1, 2, \ldots, k\}$, List $k$-Coloring
refers to the problem of assigning colors to the vertices of $G$ so that each
vertex receives a color from its own list and no two neighboring vertices
receive the same color. The decision version of the problem List $3$-Coloring
is NP-complete even for bipartite graphs, and its complexity on comb-convex
bipartite graphs has been an open problem. We give a polynomial-time algorithm
to solve List $3$-Coloring for caterpillar-convex bipartite graphs, a
superclass of comb-convex bipartite graphs. We also give a polynomial-time
recognition algorithm for the class of caterpillar-convex bipartite graphs.
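  To make the problem definition concrete, a brute-force reference
implementation (exponential time, so only usable on tiny instances) can be
written as:

    from itertools import product

    def list_3_coloring(vertices, edges, lists):
        """Return a valid assignment {vertex: color} or None."""
        for colors in product(*(lists[v] for v in vertices)):
            assign = dict(zip(vertices, colors))
            if all(assign[u] != assign[v] for u, v in edges):
                return assign
        return None

    # A 4-cycle with restricted lists:
    V = ["a", "b", "c", "d"]
    E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
    L = {"a": {1}, "b": {1, 2}, "c": {2, 3}, "d": {2, 3}}
    print(list_3_coloring(V, E, L))  # {'a': 1, 'b': 2, 'c': 3, 'd': 2}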
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 10:15:44 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Şen",
"Banu Baklan",
""
],
[
"Diner",
"Öznur Yaşar",
""
],
[
"Erlebach",
"Thomas",
""
]
] |
new_dataset
| 0.990799 |
2305.10122
|
Atul Kr. Ojha Dr
|
Swapnil Fadte, Edna Vaz, Atul Kr. Ojha, Ramdas Karmali, Jyoti D. Pawar
|
Empirical Analysis of Oral and Nasal Vowels of Konkani
|
The Proceedings of the Human Language Technologies as a Challenge for
Computer Science and Linguistics-2023 (LTC-2023)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Konkani is a highly nasalised language which makes it unique among Indo-Aryan
languages. This work investigates the acoustic-phonetic properties of Konkani
oral and nasal vowels. For this study, speech samples from six speakers (3 male
and 3 female) were collected. A total of 74 unique sentences were used as a
part of the recording script, 37 each for oral and nasal vowels, respectively.
The final data set consisted of 1135 vowel phonemes. A comparative F1-F2 plot
of Konkani oral and nasal vowels is presented with an experimental result and
formant analysis. The average F1, F2 and F3 values are also reported for the
first time through experimentation for all nasal and oral vowels. This study
can be helpful for linguistic research on vowels and for speech synthesis
systems specific to the Konkani language.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 11:01:38 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Fadte",
"Swapnil",
""
],
[
"Vaz",
"Edna",
""
],
[
"Ojha",
"Atul Kr.",
""
],
[
"Karmali",
"Ramdas",
""
],
[
"Pawar",
"Jyoti D.",
""
]
] |
new_dataset
| 0.980821 |
2305.10212
|
Ramtin Zand
|
Joseph Lindsay, Ramtin Zand
|
A Novel Stochastic LSTM Model Inspired by Quantum Machine Learning
| null | null | null | null |
cs.LG cs.ET quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Works in quantum machine learning (QML) over the past few years indicate that
QML algorithms can function just as well as their classical counterparts, and
even outperform them in some cases. Among the corpus of recent work, many
current QML models take advantage of variational quantum algorithm (VQA)
circuits, given that their scale is typically small enough to be compatible
with NISQ devices and the method of automatic differentiation for optimizing
circuit parameters is familiar to machine learning (ML). While the results bear
interesting promise for an era when quantum machines are more readily
accessible, if one can achieve similar results through non-quantum methods then
there may be a more near-term advantage available to practitioners. To this
end, this work investigates the utilization of stochastic
methods inspired by a variational quantum version of the long short-term memory
(LSTM) model in an attempt to approach the reported successes in performance
and rapid convergence. By analyzing the performance of classical, stochastic,
and quantum methods, this work aims to elucidate if it is possible to achieve
some of QML's major reported benefits on classical machines by incorporating
aspects of its stochasticity.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 13:44:25 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Lindsay",
"Joseph",
""
],
[
"Zand",
"Ramtin",
""
]
] |
new_dataset
| 0.983384 |
2305.10254
|
Zihao Wu
|
Xiao Yang, Haixing Dai, Zihao Wu, Ramesh Bist, Sachin Subedi, Jin Sun,
Guoyu Lu, Changying Li, Tianming Liu, Lilong Chai
|
SAM for Poultry Science
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, the agricultural industry has witnessed significant
advancements in artificial intelligence (AI), particularly with the development
of large-scale foundational models. Among these foundation models, the Segment
Anything Model (SAM), introduced by Meta AI Research, stands out as a
groundbreaking solution for object segmentation tasks. While SAM has shown
success in various agricultural applications, its potential in the poultry
industry, specifically in the context of cage-free hens, remains relatively
unexplored. This study aims to assess the zero-shot segmentation performance of
SAM on representative chicken segmentation tasks, including part-based
segmentation and the use of infrared thermal images, and to explore
chicken-tracking tasks by using SAM as a segmentation tool. The results
demonstrate SAM's superior performance compared to SegFormer and SETR in both
whole and part-based chicken segmentation. SAM-based object tracking also
provides valuable data on the behavior and movement patterns of broiler birds.
The findings of this study contribute to a better understanding of SAM's
potential in poultry science and lay the foundation for future advancements in
chicken segmentation and tracking.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 14:43:05 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Yang",
"Xiao",
""
],
[
"Dai",
"Haixing",
""
],
[
"Wu",
"Zihao",
""
],
[
"Bist",
"Ramesh",
""
],
[
"Subedi",
"Sachin",
""
],
[
"Sun",
"Jin",
""
],
[
"Lu",
"Guoyu",
""
],
[
"Li",
"Changying",
""
],
[
"Liu",
"Tianming",
""
],
[
"Chai",
"Lilong",
""
]
] |
new_dataset
| 0.998485 |
2305.10260
|
Xiaoman Peng
|
Jianfeng Dong, Xiaoman Peng, Zhe Ma, Daizong Liu, Xiaoye Qu, Xun Yang,
Jixiang Zhu, Baolong Liu
|
From Region to Patch: Attribute-Aware Foreground-Background Contrastive
Learning for Fine-Grained Fashion Retrieval
|
This paper has been published as a full paper at SIGIR 2023
| null |
10.1145/3539618.3591690
| null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Attribute-specific fashion retrieval (ASFR) is a challenging information
retrieval task, which has attracted increasing attention in recent years.
Different from traditional fashion retrieval which mainly focuses on optimizing
holistic similarity, the ASFR task concentrates on attribute-specific
similarity, resulting in more fine-grained and interpretable retrieval results.
As the attribute-specific similarity typically corresponds to the specific
subtle regions of images, we propose a Region-to-Patch Framework (RPF) that
consists of a region-aware branch and a patch-aware branch to extract
fine-grained attribute-related visual features for precise retrieval in a
coarse-to-fine manner. In particular, the region-aware branch is first
utilized to locate the potential regions related to the semantics of the given
attribute. Then, considering that the located region is coarse and still
contains the background visual contents, the patch-aware branch is proposed to
capture patch-wise attribute-related details from the previous amplified
region. Such a hybrid architecture strikes a proper balance between region
localization and feature extraction. Besides, different from previous works
that solely focus on discriminating the attribute-relevant foreground visual
features, we argue that the attribute-irrelevant background features are also
crucial for distinguishing the detailed visual contexts in a contrastive
manner. Therefore, a novel E-InfoNCE loss based on the foreground and
background representations is further proposed to improve the discrimination of
attribute-specific representation. Extensive experiments on three datasets
demonstrate the effectiveness of our proposed framework, and also show a decent
generalization of our RPF on out-of-domain fashion images. Our source code is
available at https://github.com/HuiGuanLab/RPF.
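  For background, the standard InfoNCE loss that E-InfoNCE builds on can be
sketched in PyTorch as below; the paper's specific extension to foreground and
background representations is not reproduced here:

    import torch
    import torch.nn.functional as F

    def info_nce(anchors, positives, temperature=0.07):
        """anchors, positives: (N, D) L2-normalised embeddings; row i of each
        is a positive pair, all other rows serve as negatives."""
        logits = anchors @ positives.t() / temperature  # (N, N) similarities
        targets = torch.arange(anchors.size(0))         # diagonal = positives
        return F.cross_entropy(logits, targets)

    a = F.normalize(torch.randn(8, 128), dim=1)
    p = F.normalize(a + 0.1 * torch.randn(8, 128), dim=1)
    print(info_nce(a, p).item())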
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 14:49:20 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Dong",
"Jianfeng",
""
],
[
"Peng",
"Xiaoman",
""
],
[
"Ma",
"Zhe",
""
],
[
"Liu",
"Daizong",
""
],
[
"Qu",
"Xiaoye",
""
],
[
"Yang",
"Xun",
""
],
[
"Zhu",
"Jixiang",
""
],
[
"Liu",
"Baolong",
""
]
] |
new_dataset
| 0.999064 |
2305.10338
|
Yuanxin Wu
|
Maoran Zhu and Yuanxin Wu
|
Inertial-based Navigation by Polynomial Optimization: Inertial-Magnetic
Attitude Estimation
|
12 pages, 15 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inertial-based navigation refers to the navigation methods or systems that
have inertial information or sensors as the core part and integrate a spectrum
of other kinds of sensors for enhanced performance. Through a series of papers,
the authors attempt to explore information blending of inertial-based
navigation by a polynomial optimization method. The basic idea is to model
rigid motions as finite-order polynomials and then attacks the involved
navigation problems by optimally solving their coefficients, taking into
considerations the constraints posed by inertial sensors and others. In the
current paper, a continuous-time attitude estimation approach is proposed,
which transforms the attitude estimation into a constant parameter
determination problem by the polynomial optimization. Specifically, the
continuous attitude is first approximated by a Chebyshev polynomial, of which
the unknown Chebyshev coefficients are determined by minimizing the weighted
residuals of initial conditions, dynamics and measurements. We apply the
derived estimator to the attitude estimation with the magnetic and inertial
sensors. Simulation and field tests show that the estimator has much better
stability and faster convergence than the traditional extended Kalman filter
does, especially in the challenging large initial state error scenarios.
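  A much-simplified one-dimensional analogue of this idea, fitting Chebyshev
coefficients to noisy measurements by least squares (the paper's full
formulation also weights initial-condition and dynamics residuals), can be
sketched as:

    import numpy as np
    from numpy.polynomial import chebyshev as C

    t = np.linspace(-1.0, 1.0, 200)               # normalised time
    theta_true = 0.8 * np.sin(2.5 * t) + 0.2 * t  # ground-truth angle (rad)
    meas = theta_true + 0.05 * np.random.default_rng(0).normal(size=t.size)

    coeffs = C.chebfit(t, meas, deg=8)            # minimise measurement residuals
    theta_hat = C.chebval(t, coeffs)
    print("RMS error:", np.sqrt(np.mean((theta_hat - theta_true) ** 2)))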
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 16:22:08 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Zhu",
"Maoran",
""
],
[
"Wu",
"Yuanxin",
""
]
] |
new_dataset
| 0.991098 |
2305.10404
|
Om Prakash
|
Om Prakash, Shikha Patel and Habibul Islam
|
$\mathbb{F}_q\mathcal{R}$-skew cyclic codes and their application to
quantum codes
|
17 pages. This paper is under revision at Quantum Information
Processing
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Let $p$ be a prime and $\mathbb{F}_q$ be the finite field of order $q=p^m$.
In this paper, we study $\mathbb{F}_q\mathcal{R}$-skew cyclic codes where
$\mathcal{R}=\mathbb{F}_q+u\mathbb{F}_q$ with $u^2=u$. To characterize
$\mathbb{F}_q\mathcal{R}$-skew cyclic codes, we first establish their algebraic
structure and then discuss the dual-containing properties by considering a
non-degenerate inner product. Further, we define a Gray map over
$\mathbb{F}_q\mathcal{R}$ and obtain their $\mathbb{F}_q$-Gray images. As an
application, we apply the CSS (Calderbank-Shor-Steane) construction on Gray
images of dual containing $\mathbb{F}_q\mathcal{R}$-skew cyclic codes and
obtain many quantum codes with better parameters than the best-known codes
available in the literature.
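For reference, the standard CSS construction invoked here states that if $C$
is a classical $[n,k,d]_q$ code with $C^{\perp} \subseteq C$, then there exists
a quantum code with parameters
\[
  [[\,n,\; 2k-n,\; \geq d\,]]_q,
\]
so the dual-containing Gray images directly yield quantum codes.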
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 17:46:57 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Prakash",
"Om",
""
],
[
"Patel",
"Shikha",
""
],
[
"Islam",
"Habibul",
""
]
] |
new_dataset
| 0.999522 |
2305.10412
|
Stefania Druga Dr.
|
Stefania Druga and Amy J. Ko
|
AI Friends: A Design Framework for AI-Powered Creative Programming for
Youth
|
18 pages
| null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
What role can AI play in supporting and constraining creative coding by
families? To investigate these questions, we built a Wizard of Oz platform to
help families engage in creative coding in partnership with a
researcher-operated AI Friend. We designed a 3-week series of programming
activities with ten children, 7 to 12 years old, and nine parents. Using a
creative self-efficacy lens, we observe that families found it easier to
generate game ideas when prompted with questions by the AI Friend; parents
played a unique role in guiding children in more complex programming tasks when
the AI Friend failed to help; and children were more encouraged to write code
for novel ideas with the AI Friend's help. These findings suggest that
AI-supported platforms should highlight unique family-AI interactions focused
on children's agency and creative self-efficacy.
|
[
{
"version": "v1",
"created": "Wed, 17 May 2023 17:48:32 GMT"
}
] | 2023-05-18T00:00:00 |
[
[
"Druga",
"Stefania",
""
],
[
"Ko",
"Amy J.",
""
]
] |
new_dataset
| 0.990217 |
2012.15621
|
Won Ik Cho
|
Won Ik Cho, Sangwhan Moon, Youngsook Song
|
Open Korean Corpora: A Practical Report
|
Published in NLP-OSS @EMNLP2020; May 2023 version added with new
datasets
| null |
10.18653/v1/2020.nlposs-1.12
| null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Korean is often referred to as a low-resource language in the research
community. While this claim is partially true, it is also because the available
resources are inadequately advertised and curated. This work curates and
reviews a list of Korean corpora, first describing institution-level resource
development, then further iterating through a list of current open datasets for
different types of tasks. We then propose a direction for how open-source
dataset construction and releases should be done for less-resourced languages
to promote research.
|
[
{
"version": "v1",
"created": "Thu, 31 Dec 2020 14:23:55 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 17:08:24 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Cho",
"Won Ik",
""
],
[
"Moon",
"Sangwhan",
""
],
[
"Song",
"Youngsook",
""
]
] |
new_dataset
| 0.999776 |
2107.09863
|
Ziqi Xu
|
Ziqi Xu, Jingcheng Li, Yanjun Pan, Loukas Lazos, Ming Li, Nirnimesh
Ghose
|
PoF: Proof-of-Following for Vehicle Platoons
|
18 pages, 23 figures, 1 table
| null |
10.14722/ndss.2022.23077
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cooperative vehicle platooning significantly improves highway safety, fuel
efficiency, and traffic flow. In this model, a set of vehicles move in line
formation and coordinate acceleration, braking, and steering using a
combination of physical sensing and vehicle-to-vehicle (V2V) messaging. The
authenticity and integrity of the V2V messages are paramount to safety. For
this reason, recent V2V and V2X standards support the integration of a PKI.
However, a PKI cannot bind a vehicle's digital identity to the vehicle's
physical state (location, velocity, etc.). As a result, a vehicle with valid
cryptographic credentials can impact platoons from a remote location.
In this paper, we seek to provide the missing link between the physical and
the digital world in the context of vehicle platooning. We propose a new access
control protocol we call Proof-of-Following (PoF) that verifies the following
distance between a candidate and a verifier. The main idea is to draw security
from the common, but constantly changing environment experienced by the closely
traveling vehicles. We use the large-scale fading effect of ambient RF signals
as a common source of randomness to construct a PoF primitive. The
correlation of large-scale fading is an ideal candidate for the mobile outdoor
environment because it exponentially decays with distance and time. We evaluate
our PoF protocol on an experimental platoon of two vehicles in freeway,
highway, and urban driving conditions. We demonstrate that the PoF withstands
both the pre-recording and following attacks with overwhelming probability.
|
[
{
"version": "v1",
"created": "Wed, 21 Jul 2021 03:05:20 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Sep 2021 23:06:51 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Nov 2021 21:25:44 GMT"
},
{
"version": "v4",
"created": "Mon, 15 May 2023 21:57:11 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Xu",
"Ziqi",
""
],
[
"Li",
"Jingcheng",
""
],
[
"Pan",
"Yanjun",
""
],
[
"Lazos",
"Loukas",
""
],
[
"Li",
"Ming",
""
],
[
"Ghose",
"Nirnimesh",
""
]
] |
new_dataset
| 0.993746 |
2202.13401
|
Juan M. Gandarias
|
Mattia Leonori, Juan M. Gandarias, Arash Ajoudani
|
MOCA-S: A Sensitive Mobile Collaborative Robotic Assistant exploiting
Low-Cost Capacitive Tactile Cover and Whole-Body Control
|
8 pages, 7 figures, submitted to IEEE Robotics and Automation Letters
and IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS
2022). Video: https://youtu.be/IX6fn8ODSt8
| null |
10.1109/LRA.2022.3186053
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safety is one of the most fundamental aspects of robotics, especially when it
comes to collaborative robots (cobots) that are expected to physically interact
with humans. Although a large body of literature has focused on safety-related
aspects for fixed-base cobots, little effort has been put into developing
collaborative mobile manipulators. In response to this need, this work presents
MOCA-S, i.e., Sensitive Mobile Collaborative Robotic Assistant, that integrates
a low-cost, capacitive tactile cover to measure interaction forces applied to
the robot base. The tactile cover comprises a set of 11 capacitive large-area
tactile sensors distributed as a 1-D tactile array around the base.
Characterization of the tactile sensors with different materials is included.
Moreover, two expanded whole-body controllers that exploit the platform's
tactile cover and the loco-manipulation features are proposed. These
controllers are tested in two experiments, demonstrating the potential of
MOCA-S for safe physical Human-Robot Interaction (pHRI). Finally, an experiment
is carried out in which an undesired collision occurs between MOCA-S and a
human during a loco-manipulation task. The results demonstrate the intrinsic
safety of MOCA-S and the proposed controllers, suggesting a new step towards
creating safe mobile manipulators.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 17:24:45 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Leonori",
"Mattia",
""
],
[
"Gandarias",
"Juan M.",
""
],
[
"Ajoudani",
"Arash",
""
]
] |
new_dataset
| 0.999287 |
2203.02001
|
Jean Roberto Ponciano
|
Lucas E. Resck, Jean R. Ponciano, Luis Gustavo Nonato, Jorge Poco
|
LegalVis: Exploring and Inferring Precedent Citations in Legal Documents
|
13 pages (paper) + 2 pages (appendix). 9 figures (paper) + 3 figures
(appendix)
|
IEEE TVCG 29 (2023) 3105-3120
|
10.1109/TVCG.2022.3152450
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To reduce the number of pending cases and conflicting rulings in the
Brazilian Judiciary, the National Congress amended the Constitution, allowing
the Brazilian Supreme Court (STF) to create binding precedents (BPs), i.e., a
set of understandings that both Executive and lower Judiciary branches must
follow. The STF's justices frequently cite the 58 existing BPs in their
decisions, and it is of primary relevance that judicial experts could identify
and analyze such citations. To assist in this problem, we propose LegalVis, a
web-based visual analytics system designed to support the analysis of legal
documents that cite or could potentially cite a BP. We model the problem of
identifying potential citations (i.e., non-explicit) as a classification
problem. However, a simple score is not enough to explain the results; that is
why we use an interpretability machine learning method to explain the reason
behind each identified citation. For a compelling visual exploration of
documents and BPs, LegalVis comprises three interactive visual components: the
first presents an overview of the data showing temporal patterns, the second
allows filtering and grouping relevant documents by topic, and the last one
shows a document's text aiming to interpret the model's output by pointing out
which paragraphs are likely to mention the BP, even if not explicitly
specified. We evaluated our identification model and obtained an accuracy of
96%; we also made a quantitative and qualitative analysis of the results. The
usefulness and effectiveness of LegalVis were evaluated through two usage
scenarios and feedback from six domain experts.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 20:33:36 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Resck",
"Lucas E.",
""
],
[
"Ponciano",
"Jean R.",
""
],
[
"Nonato",
"Luis Gustavo",
""
],
[
"Poco",
"Jorge",
""
]
] |
new_dataset
| 0.994545 |
2205.07347
|
Utkarsh R. Patel
|
Utkarsh R. Patel, Yiqian Mao, and Eric Michielssen
|
Wigner-Smith Time Delay Matrix for Acoustic Scattering: Theory and
Phenomenology
|
Submitted to The Journal of the Acoustical Society of America
| null |
10.1121/10.0017826
| null |
cs.CE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The Wigner-Smith (WS) time delay matrix relates a lossless system's
scattering matrix to its frequency derivative. First proposed in the realm of
quantum mechanics to characterize time delays experienced by particles during a
collision, this article extends the use of WS time delay techniques to acoustic
scattering problems governed by the Helmholtz equation. Expression for the
entries of the WS time delay matrix involving renormalized volume integrals of
energy densities are derived, and shown to hold true independent of the
scatterer's geometry, boundary condition (sound-soft or sound-hard), and
excitation. Numerical examples show that the eigenmodes of the WS time delay
matrix describe distinct scattering phenomena characterized by well-defined
time delays.
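For reference, the defining relation alluded to in the first sentence, in one
common engineering ($e^{j\omega t}$) convention (sign and phase conventions
vary across the literature), is
\[
  \mathbf{Q} \;=\; j\,\mathbf{S}^{\dagger}\,
  \frac{\partial \mathbf{S}}{\partial \omega},
\]
where $\mathbf{S}$ is the frequency-dependent scattering matrix; for a lossless
system $\mathbf{Q}$ is Hermitian and its eigenvalues are real time delays.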
|
[
{
"version": "v1",
"created": "Sun, 15 May 2022 18:02:07 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Patel",
"Utkarsh R.",
""
],
[
"Mao",
"Yiqian",
""
],
[
"Michielssen",
"Eric",
""
]
] |
new_dataset
| 0.975236 |
2207.03815
|
Juan M. Gandarias
|
Marta Lorenzini (1), Juan M. Gandarias (1), Luca Fortini (1), Wansoo
Kim (2), Arash Ajoudani (1) ((1) Human-Robot Interfaces and Physical
Interaction, Istituto Italiano di Tecnologia, Genoa, Italy) ((2) Robotics
Department, Hanyang University ERICA, Republic of Korea)
|
ErgoTac-Belt: Anticipatory Vibrotactile Feedback to Lead Centre of
Pressure during Walking
|
6 pages, 6 figures, accepted at IEEE International Conference on
Biomedical Robotics and Biomechatronics (BioRob). For associated video, see
https://youtu.be/Hz42KohAgso
| null |
10.1109/BioRob52689.2022.9925563
| null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Balance and gait disorders are the second leading cause of falls, which,
along with consequent injuries, are reported as major public health problems
all over the world. For patients who do not require mechanical support,
vibrotactile feedback interfaces have proven to be a successful approach in
restoring balance. Most of the existing strategies assess trunk or head tilt
and velocity or plantar forces, and are limited to the analysis of stance. On
the other hand, central to balance control is the need to maintain the body's
centre of pressure (CoP) within feasible limits of the support polygon (SP), as
in standing, or on track to a new SP, as in walking. Hence, this paper proposes
an exploratory study to investigate whether vibrotactile feedback can be
employed to lead human CoP during walking. The ErgoTac-Belt vibrotactile device
is introduced to instruct the users about the direction to take, both in the
antero-posterior and medio-lateral axes. An anticipatory strategy is adopted
here, to give the users enough time to react to the stimuli. Experiments on ten
healthy subjects demonstrated the promising capability of the proposed device
to guide the users' CoP along a predefined reference path, with similar
performance as the one achieved with visual feedback. Future developments will
investigate our strategy and device in guiding the CoP of elderly individuals
or those with vestibular impairments, who may not be aware of, or able to
figure out, a safe and ergonomic CoP path.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 10:50:17 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 13:19:26 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Lorenzini",
"Marta",
""
],
[
"Gandarias",
"Juan M.",
""
],
[
"Fortini",
"Luca",
""
],
[
"Kim",
"Wansoo",
""
],
[
"Ajoudani",
"Arash",
""
]
] |
new_dataset
| 0.999142 |
2210.09163
|
Tobias Deußer
|
Tobias Deußer, Syed Musharraf Ali, Lars Hillebrand, Desiana
Nurchalifah, Basil Jacob, Christian Bauckhage, Rafet Sifa
|
KPI-EDGAR: A Novel Dataset and Accompanying Metric for Relation
Extraction from Financial Documents
|
Accepted at ICMLA 2022, 6 pages, 5 tables
| null |
10.1109/ICMLA55696.2022.00254
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce KPI-EDGAR, a novel dataset for Joint Named Entity Recognition
and Relation Extraction building on financial reports uploaded to the
Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, where the
main objective is to extract Key Performance Indicators (KPIs) from financial
documents and link them to their numerical values and other attributes. We
further provide four accompanying baselines for benchmarking potential future
research. Additionally, we propose a new way of measuring the success of said
extraction process by incorporating a word-level weighting scheme into the
conventional F1 score to better model the inherently fuzzy borders of the
entity pairs of a relation in this domain.
|
[
{
"version": "v1",
"created": "Mon, 17 Oct 2022 15:06:20 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Deußer",
"Tobias",
""
],
[
"Ali",
"Syed Musharraf",
""
],
[
"Hillebrand",
"Lars",
""
],
[
"Nurchalifah",
"Desiana",
""
],
[
"Jacob",
"Basil",
""
],
[
"Bauckhage",
"Christian",
""
],
[
"Sifa",
"Rafet",
""
]
] |
new_dataset
| 0.999823 |
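A minimal sketch of the kind of word-level weighted F1 the KPI-EDGAR abstract describes, rewarding partial overlap rather than exact span matches; the set-of-words representation and the weighting function are hypothetical stand-ins, not the paper's exact metric.

```python
def word_level_f1(pred_words, gold_words, weight=lambda w: 1.0):
    """Word-level weighted F1 between a predicted and a gold entity span.

    `pred_words` / `gold_words` are sets of tokens; `weight` maps a token
    to its importance (uniform by default).
    """
    overlap = sum(weight(w) for w in pred_words & gold_words)
    pred_mass = sum(weight(w) for w in pred_words)
    gold_mass = sum(weight(w) for w in gold_words)
    if pred_mass == 0 or gold_mass == 0:
        return 0.0
    precision = overlap / pred_mass
    recall = overlap / gold_mass
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a prediction missing one word of the gold KPI still scores > 0,
# modeling the "inherently fuzzy borders" mentioned in the abstract.
print(word_level_f1({"net", "revenue"}, {"net", "revenue", "growth"}))  # 0.8
```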
2212.07796
|
Zixian Ma
|
Zixian Ma, Jerry Hong, Mustafa Omer Gul, Mona Gandhi, Irena Gao,
Ranjay Krishna
|
CREPE: Can Vision-Language Foundation Models Reason Compositionally?
|
Updated figures and numbers
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A fundamental characteristic common to both human vision and natural language
is their compositional nature. Yet, despite the performance gains contributed
by large vision and language pretraining, we find that, across 7 architectures
trained with 4 algorithms on massive datasets, they struggle with
compositionality. To arrive at this conclusion, we introduce a new
compositionality evaluation benchmark, CREPE, which measures two important
aspects of compositionality identified by cognitive science literature:
systematicity and productivity. To measure systematicity, CREPE consists of a
test dataset containing over $370K$ image-text pairs and three different
seen-unseen splits. The three splits are designed to test models trained on
three popular training datasets: CC-12M, YFCC-15M, and LAION-400M. We also
generate $325K$, $316K$, and $309K$ hard negative captions for a subset of the
pairs. To test productivity, CREPE contains $17K$ image-text pairs with nine
different complexities plus $183K$ hard negative captions with atomic, swapping
and negation foils. The datasets are generated by repurposing the Visual Genome
scene graphs and region descriptions and applying handcrafted templates and
GPT-3. For systematicity, we find that model performance decreases consistently
when novel compositions dominate the retrieval set, with Recall@1 dropping by
up to $12\%$. For productivity, models' retrieval success decays as complexity
increases, frequently nearing random chance at high complexity. These results
hold regardless of model and training dataset size.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 19:17:36 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Jan 2023 07:57:12 GMT"
},
{
"version": "v3",
"created": "Tue, 16 May 2023 16:27:08 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Ma",
"Zixian",
""
],
[
"Hong",
"Jerry",
""
],
[
"Gul",
"Mustafa Omer",
""
],
[
"Gandhi",
"Mona",
""
],
[
"Gao",
"Irena",
""
],
[
"Krishna",
"Ranjay",
""
]
] |
new_dataset
| 0.999484 |
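Recall@1, the retrieval metric behind CREPE's reported up-to-12% drop, reduces to a few lines. This is a generic sketch, not the released evaluation code; it assumes a precomputed query-by-item similarity matrix in which query i's ground-truth item sits at index i.

```python
import numpy as np

def recall_at_k(similarity, k=1):
    """Fraction of queries whose ground-truth item (index i for query i)
    appears among the top-k items of row i of `similarity`."""
    topk = np.argsort(-similarity, axis=1)[:, :k]
    hits = (topk == np.arange(similarity.shape[0])[:, None]).any(axis=1)
    return hits.mean()

# Example: 3 queries vs. 3 candidates; query 2's true caption ranks 2nd,
# so Recall@1 = 2/3 while Recall@2 = 1.
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.5],
                [0.3, 0.6, 0.4]])
print(recall_at_k(sim, k=1))  # ~0.667
```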
2212.09577
|
Martin Funkquist
|
Martin Funkquist, Ilia Kuznetsov, Yufang Hou and Iryna Gurevych
|
CiteBench: A benchmark for Scientific Citation Text Generation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Science progresses by incrementally building upon the prior body of knowledge
documented in scientific publications. The acceleration of research across many
fields makes it hard to stay up-to-date with the recent developments and to
summarize the ever-growing body of prior work. To target this issue, the task
of citation text generation aims to produce accurate textual summaries given a
set of papers-to-cite and the citing paper context. Existing studies in
citation text generation are based upon widely diverging task definitions,
which makes it hard to study this task systematically. To address this
challenge, we propose CiteBench: a benchmark for citation text generation that
unifies multiple diverse datasets and enables standardized evaluation of
citation text generation models across task designs and domains. Using the new
benchmark, we investigate the performance of multiple strong baselines, test
their transferability between the datasets, and deliver new insights into the
task definition and evaluation to guide future research in citation text
generation. We make the code for CiteBench publicly available at
https://github.com/UKPLab/citebench.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 16:10:56 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 12:29:55 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Funkquist",
"Martin",
""
],
[
"Kuznetsov",
"Ilia",
""
],
[
"Hou",
"Yufang",
""
],
[
"Gurevych",
"Iryna",
""
]
] |
new_dataset
| 0.998683 |
2212.10545
|
Jie Huang
|
Chenzhengyi Liu and Jie Huang and Kerui Zhu and Kevin Chen-Chuan Chang
|
DimonGen: Diversified Generative Commonsense Reasoning for Explaining
Concept Relationships
|
ACL 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose DimonGen, which aims to generate diverse sentences
describing concept relationships in various everyday scenarios. To support
this, we first create a benchmark dataset for this task by adapting the
existing CommonGen dataset. We then propose a two-stage model called MoREE to
generate the target sentences. MoREE consists of a mixture of retrievers model
that retrieves diverse context sentences related to the given concepts, and a
mixture of generators model that generates diverse sentences based on the
retrieved contexts. We conduct experiments on the DimonGen task and show that
MoREE outperforms strong baselines in terms of both the quality and diversity
of the generated sentences. Our results demonstrate that MoREE is able to
generate diverse sentences that reflect different relationships between
concepts, leading to a comprehensive understanding of concept relationships.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 18:50:29 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 06:32:51 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Liu",
"Chenzhengyi",
""
],
[
"Huang",
"Jie",
""
],
[
"Zhu",
"Kerui",
""
],
[
"Chang",
"Kevin Chen-Chuan",
""
]
] |
new_dataset
| 0.999431 |
2212.14131
|
Zhaoshuo Li
|
Zhaoshuo Li, Hongchao Shu, Ruixing Liang, Anna Goodridge, Manish Sahu,
Francis X. Creighton, Russell H. Taylor, Mathias Unberath
|
TAToo: Vision-based Joint Tracking of Anatomy and Tool for Skull-base
Surgery
|
IPCAI/IJCARS 2023, code available at:
https://github.com/mli0603/TAToo
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: Tracking the 3D motion of the surgical tool and the patient anatomy
is a fundamental requirement for computer-assisted skull-base surgery. The
estimated motion can be used both for intra-operative guidance and for
downstream skill analysis. Recovering such motion solely from surgical videos
is desirable, as it is compliant with current clinical workflows and
instrumentation.
Methods: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks
the rigid 3D motion of patient skull and surgical drill from stereo microscopic
videos. TAToo estimates motion via an iterative optimization process in an
end-to-end differentiable form. For robust tracking performance, TAToo adopts a
probabilistic formulation and enforces geometric constraints on the object
level.
Results: We validate TAToo both on simulation data, where ground truth motion
is available, and on anthropomorphic phantom data, where optical
tracking provides a strong baseline. We report sub-millimeter and millimeter
inter-frame tracking accuracy for skull and drill, respectively, with rotation
errors below 1°. We further illustrate how TAToo may be used in a surgical
navigation setting.
Conclusion: We present TAToo, which simultaneously tracks the surgical tool
and the patient anatomy in skull-base surgery. TAToo directly predicts the
motion from surgical videos, without the need of any markers. Our results show
that the performance of TAToo compares favorably to competing approaches.
Future work will include fine-tuning of our depth network to reach a 1 mm
clinical accuracy goal desired for surgical applications in the skull base.
|
[
{
"version": "v1",
"created": "Thu, 29 Dec 2022 00:28:55 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 14:59:50 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Li",
"Zhaoshuo",
""
],
[
"Shu",
"Hongchao",
""
],
[
"Liang",
"Ruixing",
""
],
[
"Goodridge",
"Anna",
""
],
[
"Sahu",
"Manish",
""
],
[
"Creighton",
"Francis X.",
""
],
[
"Taylor",
"Russell H.",
""
],
[
"Unberath",
"Mathias",
""
]
] |
new_dataset
| 0.966789 |
2301.03840
|
Laurent Najman
|
Gilles Bertrand (LIGM), Nicolas Boutry (LRDE), Laurent Najman (LIGM)
|
Discrete Morse Functions and Watersheds
| null | null | null | null |
cs.DM math.AT math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Any watershed, when defined on a stack on a normal pseudomanifold of
dimension d, is a pure (d-1)-subcomplex that satisfies a drop-of-water
principle. In this paper, we introduce Morse stacks, a class of functions that
are equivalent to discrete Morse functions. We show that the watershed of a
Morse stack on a normal pseudomanifold is uniquely defined, and can be obtained
with a linear-time algorithm relying on a sequence of collapses. Last, we prove
that such a watershed is the cut of the unique minimum spanning forest, rooted
in the minima of the Morse stack, of the facet graph of the pseudomanifold.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 08:16:38 GMT"
},
{
"version": "v2",
"created": "Tue, 16 May 2023 08:19:02 GMT"
}
] | 2023-05-17T00:00:00 |
[
[
"Bertrand",
"Gilles",
"",
"LIGM"
],
[
"Boutry",
"Nicolas",
"",
"LRDE"
],
[
"Najman",
"Laurent",
"",
"LIGM"
]
] |
new_dataset
| 0.998 |
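The abstract's characterization of the watershed as the cut of the minimum spanning forest rooted in the minima admits a compact Kruskal-style sketch. The plain edge-weighted-graph encoding below is an assumption standing in for the facet graph of the pseudomanifold, and it ignores plateau subtleties handled in the paper.

```python
def msf_cut(num_nodes, edges, minima):
    """Kruskal-style minimum spanning forest rooted in `minima`.

    `edges` is a list of (weight, u, v); `minima` are root nodes. An edge
    joining two trees that each already contain a root is refused and
    reported as part of the watershed cut.
    """
    parent = list(range(num_nodes))
    has_root = [False] * num_nodes
    for m in minima:
        has_root[m] = True

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    forest, cut = [], []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if has_root[ru] and has_root[rv]:
            cut.append((u, v))  # separates two catchment basins
        else:
            parent[ru] = rv
            has_root[rv] = has_root[ru] or has_root[rv]
            forest.append((u, v))
    return forest, cut

# Example: a 4-node path with minima at both ends; the highest middle
# edge becomes the watershed cut.
print(msf_cut(4, [(1, 0, 1), (5, 1, 2), (2, 2, 3)], minima=[0, 3]))
# -> ([(0, 1), (2, 3)], [(1, 2)])
```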