id (stringlengths 9-10) | submitter (stringlengths 2-52, nullable) | authors (stringlengths 4-6.51k) | title (stringlengths 4-246) | comments (stringlengths 1-523, nullable) | journal-ref (stringlengths 4-345, nullable) | doi (stringlengths 11-120, nullable) | report-no (stringlengths 2-243, nullable) | categories (stringlengths 5-98) | license (stringclasses, 9 values) | abstract (stringlengths 33-3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses, 1 value) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
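Each record below follows this schema: standard arXiv metadata fields plus a classifier output (`prediction`, here always `new_dataset`) and its `probability`. As a minimal sketch of how such a dump could be consumed (the file name `arxiv_predictions.jsonl` and the JSON-lines layout are assumptions, not part of this preview), the rows could be loaded and filtered like this:

```python
# Hedged sketch: read a JSON-lines dump with the schema above and keep
# high-confidence rows. The file name is a placeholder assumption.
import json

rows = []
with open("arxiv_predictions.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        # Keep only records labeled "new_dataset" with probability >= 0.95.
        if rec.get("prediction") == "new_dataset" and rec.get("probability", 0.0) >= 0.95:
            rows.append({
                "id": rec["id"],
                "title": " ".join(rec["title"].split()),  # collapse line breaks
                "categories": rec["categories"].split(),  # e.g. ["cs.CV", "cs.AI"]
                "probability": rec["probability"],
            })

# Most confident predictions first.
rows.sort(key=lambda r: r["probability"], reverse=True)
for r in rows[:5]:
    print(f'{r["id"]}  {r["probability"]:.4f}  {r["title"]}')
```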
2303.08747
|
Akhil Meethal
|
Akhil Meethal, Eric Granger, Marco Pedersoli
|
Cascaded Zoom-in Detector for High Resolution Aerial Images
|
12 pages, 7 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Detecting objects in aerial images is challenging because they are typically
composed of crowded small objects distributed non-uniformly over
high-resolution images. Density cropping is a widely used method to improve
this small object detection where the crowded small object regions are
extracted and processed in high resolution. However, this is typically
accomplished by adding other learnable components, thus complicating the
training and inference over a standard detection process. In this paper, we
propose an efficient Cascaded Zoom-in (CZ) detector that re-purposes the
detector itself for density-guided training and inference. During training,
density crops are located, labeled as a new class, and employed to augment the
training dataset. During inference, the density crops are first detected along
with the base class objects, and then input for a second stage of inference.
This approach is easily integrated into any detector and, like the uniform cropping approach popular in aerial image detection, introduces no significant change to the standard detection process. Experimental results on the aerial
images of the challenging VisDrone and DOTA datasets verify the benefits of the
proposed approach. The proposed CZ detector also provides state-of-the-art
results over uniform cropping and other density cropping methods on the
VisDrone dataset, increasing the detection mAP of small objects by more than 3
points.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 16:39:21 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Meethal",
"Akhil",
""
],
[
"Granger",
"Eric",
""
],
[
"Pedersoli",
"Marco",
""
]
] |
new_dataset
| 0.995014 |
2303.08778
|
Jesse Hagenaars
|
Federico Paredes-Vall\'es, Jesse Hagenaars, Julien Dupeyroux, Stein
Stroobants, Yingfu Xu, Guido de Croon
|
Fully neuromorphic vision and control for autonomous drone flight
| null | null | null | null |
cs.RO cs.AI cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biological sensing and processing is asynchronous and sparse, leading to
low-latency and energy-efficient perception and action. In robotics,
neuromorphic hardware for event-based vision and spiking neural networks
promises to exhibit similar characteristics. However, robotic implementations
have been limited to basic tasks with low-dimensional sensory inputs and motor
actions due to the restricted network size in current embedded neuromorphic
processors and the difficulties of training spiking neural networks. Here, we
present the first fully neuromorphic vision-to-control pipeline for controlling
a freely flying drone. Specifically, we train a spiking neural network that
accepts high-dimensional raw event-based camera data and outputs low-level
control actions for performing autonomous vision-based flight. The vision part
of the network, consisting of five layers and 28.8k neurons, maps incoming raw
events to ego-motion estimates and is trained with self-supervised learning on
real event data. The control part consists of a single decoding layer and is
learned with an evolutionary algorithm in a drone simulator. Robotic
experiments show a successful sim-to-real transfer of the fully learned
neuromorphic pipeline. The drone can accurately follow different ego-motion
setpoints, allowing for hovering, landing, and maneuvering
sideways$\unicode{x2014}$even while yawing at the same time. The neuromorphic
pipeline runs on board on Intel's Loihi neuromorphic processor with an
execution frequency of 200 Hz, spending only 27 $\unicode{x00b5}$J per
inference. These results illustrate the potential of neuromorphic sensing and
processing for enabling smaller, more intelligent robots.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 17:19:45 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Paredes-Vallés",
"Federico",
""
],
[
"Hagenaars",
"Jesse",
""
],
[
"Dupeyroux",
"Julien",
""
],
[
"Stroobants",
"Stein",
""
],
[
"Xu",
"Yingfu",
""
],
[
"de Croon",
"Guido",
""
]
] |
new_dataset
| 0.98957 |
2303.08789
|
Andrey Kolobov
|
Garrett Thomas, Ching-An Cheng, Ricky Loynd, Vibhav Vineet, Mihai
Jalobeanu, Andrey Kolobov
|
PLEX: Making the Most of the Available Data for Robotic Manipulation
Pretraining
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
A rich representation is key to general robotic manipulation, but existing
model architectures require a lot of data to learn it. Unfortunately, ideal
robotic manipulation training data, which comes in the form of expert
visuomotor demonstrations for a variety of annotated tasks, is scarce. In this
work we propose PLEX, a transformer-based architecture that learns from
task-agnostic visuomotor trajectories accompanied by a much larger amount of
task-conditioned object manipulation videos -- a type of robotics-relevant data
available in quantity. The key insight behind PLEX is that the trajectories
with observations and actions help induce a latent feature space and train a
robot to execute task-agnostic manipulation routines, while a diverse set of
video-only demonstrations can efficiently teach the robot how to plan in this
feature space for a wide variety of tasks. In contrast to most works on robotic
manipulation pretraining, PLEX learns a generalizable sensorimotor multi-task
policy, not just an observational representation. We also show that using
relative positional encoding in PLEX's transformers further increases its data
efficiency when learning from human-collected demonstrations. Experiments
showcase PLEX's generalization on the Meta-World-v2 benchmark and establish
state-of-the-art performance in challenging Robosuite environments.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 17:31:37 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Thomas",
"Garrett",
""
],
[
"Cheng",
"Ching-An",
""
],
[
"Loynd",
"Ricky",
""
],
[
"Vineet",
"Vibhav",
""
],
[
"Jalobeanu",
"Mihai",
""
],
[
"Kolobov",
"Andrey",
""
]
] |
new_dataset
| 0.997722 |
2303.08810
|
Lei Zhu
|
Lei Zhu and Xinjiang Wang and Zhanghan Ke and Wayne Zhang and Rynson
Lau
|
BiFormer: Vision Transformer with Bi-Level Routing Attention
|
CVPR 2023 camera-ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the core building block of vision transformers, attention is a powerful
tool to capture long-range dependency. However, such power comes at a cost: it
incurs a huge computation burden and heavy memory footprint as pairwise token
interaction across all spatial locations is computed. A series of works attempt
to alleviate this problem by introducing handcrafted and content-agnostic
sparsity into attention, such as restricting the attention operation to be
inside local windows, axial stripes, or dilated windows. In contrast to these
approaches, we propose a novel dynamic sparse attention via bi-level routing to
enable a more flexible allocation of computations with content awareness.
Specifically, for a query, irrelevant key-value pairs are first filtered out at
a coarse region level, and then fine-grained token-to-token attention is
applied in the union of remaining candidate regions (i.e., routed regions). We
provide a simple yet effective implementation of the proposed bi-level routing
attention, which utilizes the sparsity to save both computation and memory
while involving only GPU-friendly dense matrix multiplications. Built with the
proposed bi-level routing attention, a new general vision transformer, named
BiFormer, is then presented. As BiFormer attends to a small subset of relevant
tokens in a \textbf{query adaptive} manner without distraction from other
irrelevant ones, it enjoys both good performance and high computational
efficiency, especially in dense prediction tasks. Empirical results across
several computer vision tasks such as image classification, object detection,
and semantic segmentation verify the effectiveness of our design. Code is
available at \url{https://github.com/rayleizhu/BiFormer}.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 17:58:46 GMT"
}
] | 2023-03-16T00:00:00 |
[
[
"Zhu",
"Lei",
""
],
[
"Wang",
"Xinjiang",
""
],
[
"Ke",
"Zhanghan",
""
],
[
"Zhang",
"Wayne",
""
],
[
"Lau",
"Rynson",
""
]
] |
new_dataset
| 0.994769 |
2010.12669
|
Prasun Roy
|
Prasun Roy, Saumik Bhattacharya, Partha Pratim Roy, Umapada Pal
|
Position and Rotation Invariant Sign Language Recognition from 3D Kinect
Data with Recurrent Neural Networks
|
10 pages
| null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Sign language is a gesture-based symbolic communication medium among speech
and hearing impaired people. It also serves as a communication bridge between
non-impaired and impaired populations. Unfortunately, in most situations, a
non-impaired person is not well conversant in such symbolic languages
restricting the natural information flow between these two categories.
Therefore, an automated translation mechanism that seamlessly translates sign
language into natural language can be highly advantageous. In this paper, we
attempt to perform recognition of 30 basic Indian sign gestures. Gestures are
represented as temporal sequences of 3D maps (RGB + depth), each consisting of
3D coordinates of 20 body joints captured by the Kinect sensor. A recurrent
neural network (RNN) is employed as the classifier. To improve the classifier's
performance, we use geometric transformation for the alignment correction of
depth frames. In our experiments, the model achieves 84.81% accuracy.
|
[
{
"version": "v1",
"created": "Fri, 23 Oct 2020 21:07:40 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 15:20:15 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Roy",
"Prasun",
""
],
[
"Bhattacharya",
"Saumik",
""
],
[
"Roy",
"Partha Pratim",
""
],
[
"Pal",
"Umapada",
""
]
] |
new_dataset
| 0.992244 |
2203.02304
|
Sensen Liu
|
Sensen Liu, Zhaoying Wang, Xinjun Sheng and Wei Dong
|
Hitchhiker: A Quadrotor Aggressively Perching on a Moving Inclined
Surface Using Compliant Suction Cup Gripper
|
This paper has been submitted to IEEE Transactions on Automation Science and Engineering on 22 January 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perching on the surface of moving objects, like vehicles, could extend the flight time and range of quadrotors. Suction cups are usually adopted for surface attachment due to their durability and large adhesive force. To seal on a surface, suction cups must be aligned with the surface and possess a proper relative tangential velocity. However, quadrotors' attitude and relative velocity errors would become significant when the object surface is moving and inclined. To address this problem, we propose a real-time trajectory planning algorithm. The time-optimal aggressive trajectory is efficiently generated through multimodal search in a dynamic time-domain, and the velocity errors relative to the moving surface are alleviated. To further adapt to the residual errors, we design a compliant gripper using self-sealing cups. Multiple cups in different directions are integrated into a wheel-like mechanism to increase the tolerance to attitude errors. The wheel mechanism also eliminates the requirement of matching the attitude and tangential velocity. Extensive tests are conducted to perch on static and moving surfaces at various inclinations. Results demonstrate that our proposed system enables a quadrotor to reliably perch on moving inclined surfaces (up to $1.07m/s$ and $90^\circ$) with a success rate of $70\%$ or higher. The efficacy of the trajectory planner is also validated. Our gripper has larger adaptability to attitude errors and tangential velocities than conventional suction cup grippers. The success rate increases by 45\% in dynamic perches.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 13:21:46 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 03:07:27 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Liu",
"Sensen",
""
],
[
"Wang",
"Zhaoying",
""
],
[
"Sheng",
"Xinjun",
""
],
[
"Dong",
"Wei",
""
]
] |
new_dataset
| 0.991869 |
2204.10416
|
Ahmet-Serdar Karakaya
|
Ahmet-Serdar Karakaya and Thomas Ritter and Felix Biessmann and David
Bermbach
|
CycleSense: Detecting Near Miss Incidents in Bicycle Traffic from Mobile
Motion Sensors
| null | null |
10.1016/j.pmcj.2023.101779
| null |
cs.LG cs.CY eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In cities worldwide, cars cause health and traffic problems which could be partly mitigated through an increased modal share of bicycles. Many people, however, avoid cycling due to a lack of perceived safety. For city planners, addressing this is hard as they lack insights into where cyclists feel safe and where they do not. To gain such insights, we have in previous work proposed the crowdsourcing platform SimRa, which allows cyclists to record their rides and report near miss incidents via a smartphone app. In this paper, we present CycleSense, a combination of signal processing and Machine Learning techniques, which partially automates the detection of near miss incidents, thus making the reporting of near miss incidents easier. Using the SimRa data set, we evaluate CycleSense by comparing it to a baseline method used by SimRa and show that it significantly improves incident detection.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 21:43:23 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 12:49:22 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Karakaya",
"Ahmet-Serdar",
""
],
[
"Ritter",
"Thomas",
""
],
[
"Biessmann",
"Felix",
""
],
[
"Bermbach",
"David",
""
]
] |
new_dataset
| 0.999495 |
2205.06971
|
Yiwei Tao
|
Yi Fang, Yiwei Tao, Huan Ma, Yonghui Li, Mohsen Guizani
|
Design of a Reconfigurable Intelligent Surface-Assisted FM-DCSK-SWIPT
Scheme with Non-linear Energy Harvesting Model
|
accepted by IEEE Transactions on Communications
| null |
10.1109/TCOMM.2023.3239647
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a reconfigurable intelligent surface (RIS)-assisted
frequency-modulated (FM) differential chaos shift keying (DCSK) scheme with
simultaneous wireless information and power transfer (SWIPT), called
RIS-FM-DCSK-SWIPT scheme, for low-power, low-cost, and high-reliability
wireless communication networks. In particular, the proposed scheme is
developed under a non-linear energy-harvesting (EH) model which can accurately
characterize the practical situation. The proposed RIS-FM-DCSK-SWIPT scheme has
an appealing feature that it does not require channel state information, thus
avoiding the complex channel estimation. We further derive the closed-form
theoretical expressions for the energy shortage probability and bit error rate
(BER) of the proposed scheme over the multipath Rayleigh fading channel. In
addition, we investigate the influence of key parameters on the performance of
the proposed transmission scheme in two different scenarios, i.e., RIS-assisted
access point (RIS-AP) and dual-hop communication (RIS-DH). Finally, we carry
out various Monte-Carlo experiments to verify the accuracy of the theoretical
derivation, illustrate the performance advantage of the proposed scheme, and
give some design insights for future study.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 05:15:07 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 01:42:34 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Mar 2023 09:25:59 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Fang",
"Yi",
""
],
[
"Tao",
"Yiwei",
""
],
[
"Ma",
"Huan",
""
],
[
"Li",
"Yonghui",
""
],
[
"Guizani",
"Mohsen",
""
]
] |
new_dataset
| 0.998662 |
2205.10852
|
Ningyu Zhang
|
Zhen Bi, Siyuan Cheng, Jing Chen, Xiaozhuan Liang, Ningyu Zhang, Qiang
Chen, Feiyu Xiong, Wei Guo, Huajun Chen
|
Relphormer: Relational Graph Transformer for Knowledge Graph
Representations
|
Work in progress
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers have achieved remarkable performance in widespread fields,
including natural language processing, computer vision and graph mining.
However, vanilla Transformer architectures have not yielded promising
improvements in the Knowledge Graph (KG) representations, where the
translational distance paradigm dominates this area. Note that vanilla
Transformer architectures struggle to capture the intrinsically heterogeneous
structural and semantic information of knowledge graphs. To this end, we
propose a new variant of Transformer for knowledge graph representations dubbed
Relphormer. Specifically, we introduce Triple2Seq which can dynamically sample
contextualized sub-graph sequences as the input to alleviate the heterogeneity
issue. We propose a novel structure-enhanced self-attention mechanism to encode
the relational information and keep the semantic information within entities
and relations. Moreover, we utilize masked knowledge modeling for general
knowledge graph representation learning, which can be applied to various
KG-based tasks including knowledge graph completion, question answering, and
recommendation. Experimental results on six datasets show that Relphormer can
obtain better performance compared with baselines. Code is available in
https://github.com/zjunlp/Relphormer.
|
[
{
"version": "v1",
"created": "Sun, 22 May 2022 15:30:18 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 15:43:01 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Oct 2022 10:00:24 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Oct 2022 10:22:46 GMT"
},
{
"version": "v5",
"created": "Tue, 14 Mar 2023 10:28:49 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Bi",
"Zhen",
""
],
[
"Cheng",
"Siyuan",
""
],
[
"Chen",
"Jing",
""
],
[
"Liang",
"Xiaozhuan",
""
],
[
"Zhang",
"Ningyu",
""
],
[
"Chen",
"Qiang",
""
],
[
"Xiong",
"Feiyu",
""
],
[
"Guo",
"Wei",
""
],
[
"Chen",
"Huajun",
""
]
] |
new_dataset
| 0.966436 |
2208.00086
|
Francisco Monteiro
|
Sahar Allahkaram, Francisco A. Monteiro, Ioannis Chatzigeorgiou
|
URLLC with Coded Massive MIMO via Random Linear Codes and GRAND
| null |
Proc. of IEEE 96th Vehicular Technology Conference (VTC2022-Fall),
London, United Kingdom, Sep. 2022
|
10.1109/VTC2022-Fall57202.2022.10012803
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A present challenge in wireless communications is the assurance of
ultra-reliable and low-latency communication (URLLC). While the reliability
aspect is well known to be improved by channel coding with long codewords, this
usually implies using interleavers, which introduce undesirable delay. Using
short codewords is a needed change to minimizing the decoding delay. This work
proposes the combination of a coding and decoding scheme to be used along with
spatial signal processing as a means to provide URLLC over a fading channel.
The paper advocates the use of random linear codes (RLCs) over a massive MIMO
(mMIMO) channel with standard zero-forcing detection and guessing random
additive noise decoding (GRAND). The performance of several schemes is assessed
over a mMIMO flat fading channel. The proposed scheme greatly outperforms the
equivalent scheme using 5G's polar encoding and decoding for signal-to-noise
ratios (SNR) of interest. While the complexity of the polar code is constant at
all SNRs, using RLCs with GRAND achieves much faster decoding times for most of
the SNR range, further reducing latency.
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 21:57:38 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 15:06:29 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Allahkaram",
"Sahar",
""
],
[
"Monteiro",
"Francisco A.",
""
],
[
"Chatzigeorgiou",
"Ioannis",
""
]
] |
new_dataset
| 0.998927 |
2209.13388
|
Saeideh Nabipour
|
Saeideh Nabipour, Javad Javidan, Gholamreza Zare Fatin
|
Efficient Fault Detection Architecture of Bit-Parallel Multiplier in
Polynomial Basis of GF(2m) Using BCH Code
|
There are some errors in simulation results
| null | null | null |
cs.IT cs.HC math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The finite field multiplier is mainly used in many of today's state of the
art digital systems and its hardware implementation for bit parallel operation
may require millions of logic gates. Natural causes or soft errors in digital
design could cause some of these gates to malfunction in the field, which could
cause the multiplier to produce incorrect outputs. To ensure that they are not
susceptible to error, it is crucial to use a finite field multiplier
implementation that is effective and has a high fault detection capability. In
this paper, we propose a novel fault detection scheme for a recent bit-parallel
polynomial basis multiplier over GF(2m), where the proposed method aims at obtaining high fault detection performance for finite field multipliers while maintaining a low-complexity implementation, which is favored in resource-constrained applications such as smart cards. The proposed method is based on BCH error correction codes, with an area-delay efficient architecture. The experimental results show that for a 45-bit multiplier with 5-bit errors, the proposed error detection and correction architecture results in 37% and 49% reductions in critical path delay compared to the existing method in [18]. Moreover, the area overhead for a 45-bit multiplier with 5 errors is within 80%, which is significantly lower than that of the best existing BCH-based fault detection method for finite field multipliers [18].
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 13:46:39 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2022 19:15:59 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Nov 2022 14:35:55 GMT"
},
{
"version": "v4",
"created": "Tue, 14 Mar 2023 16:41:51 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Nabipour",
"Saeideh",
""
],
[
"Javidan",
"Javad",
""
],
[
"Fatin",
"Gholamreza Zare",
""
]
] |
new_dataset
| 0.965674 |
2209.13679
|
Hao Xiang
|
Hao Xiang, Runsheng Xu, Xin Xia, Zhaoliang Zheng, Bolei Zhou, Jiaqi Ma
|
V2XP-ASG: Generating Adversarial Scenes for Vehicle-to-Everything
Perception
|
ICRA 2023, see https://github.com/XHwind/V2XP-ASG
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent advancements in Vehicle-to-Everything communication technology have
enabled autonomous vehicles to share sensory information to obtain better
perception performance. With the rapid growth of autonomous vehicles and
intelligent infrastructure, the V2X perception systems will soon be deployed at
scale, which raises a safety-critical question: \textit{how can we evaluate and
improve its performance under challenging traffic scenarios before the
real-world deployment?} Collecting diverse large-scale real-world test scenes
seems to be the most straightforward solution, but it is expensive and
time-consuming, and the collections can only cover limited scenarios. To this
end, we propose the first open adversarial scene generator V2XP-ASG that can
produce realistic, challenging scenes for modern LiDAR-based multi-agent
perception systems. V2XP-ASG learns to construct an adversarial collaboration
graph and simultaneously perturb multiple agents' poses in an adversarial and
plausible manner. The experiments demonstrate that V2XP-ASG can effectively
identify challenging scenes for a large range of V2X perception systems.
Meanwhile, by training on the limited number of generated challenging scenes,
the accuracy of V2X perception systems can be further improved by 12.3\% on
challenging and 4\% on normal scenes. Our code will be released at
https://github.com/XHwind/V2XP-ASG.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 20:34:41 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Mar 2023 18:39:49 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Mar 2023 04:58:08 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Xiang",
"Hao",
""
],
[
"Xu",
"Runsheng",
""
],
[
"Xia",
"Xin",
""
],
[
"Zheng",
"Zhaoliang",
""
],
[
"Zhou",
"Bolei",
""
],
[
"Ma",
"Jiaqi",
""
]
] |
new_dataset
| 0.999382 |
2210.04067
|
Lara Bruderm\"uller
|
Julius Jankowski, Lara Bruderm\"uller, Nick Hawes, Sylvain Calinon
|
VP-STO: Via-point-based Stochastic Trajectory Optimization for Reactive
Robot Behavior
|
*Authors contributed equally
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Achieving reactive robot behavior in complex dynamic environments is still
challenging as it relies on being able to solve trajectory optimization
problems quickly enough, such that we can replan the future motion at
frequencies which are sufficiently high for the task at hand. We argue that
current limitations in Model Predictive Control (MPC) for robot manipulators
arise from inefficient, high-dimensional trajectory representations and the
negligence of time-optimality in the trajectory optimization process.
Therefore, we propose a motion optimization framework that optimizes jointly
over space and time, generating smooth and timing-optimal robot trajectories in
joint-space. While being task-agnostic, our formulation can incorporate
additional task-specific requirements, such as collision avoidance, and yet
maintain real-time control rates, demonstrated in simulation and real-world
robot experiments on closed-loop manipulation. For additional material, please
visit https://sites.google.com/oxfordrobotics.institute/vp-sto.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 17:28:57 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 17:13:29 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Jankowski",
"Julius",
""
],
[
"Brudermüller",
"Lara",
""
],
[
"Hawes",
"Nick",
""
],
[
"Calinon",
"Sylvain",
""
]
] |
new_dataset
| 0.992057 |
2212.03346
|
Shrutarv Awasthi
|
Shrutarv Awasthi, Nils Gramse, Dr. Christopher Reining, Dr. Moritz
Roidl
|
UAVs for Industries and Supply Chain Management
|
Accepted at the XXIV INTERNATIONAL CONFERENCE ON "MATERIAL HANDLING,
CONSTRUCTIONS AND LOGISTICS"
| null |
10.48550/arXiv.2212.03346
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work aims at showing that it is feasible and safe to use a swarm of
Unmanned Aerial Vehicles (UAVs) indoors alongside humans. UAVs are increasingly
being integrated under the Industry 4.0 framework. UAV swarms are primarily
deployed outdoors in civil and military applications, but the opportunities for
using them in manufacturing and supply chain management are immense. There is
extensive research on UAV technology, e.g., localization, control, and computer
vision, but less research on the practical application of UAVs in industry. UAV
technology could improve data collection and monitoring, enhance
decision-making in an Internet of Things framework and automate time-consuming
and redundant tasks in the industry. However, there is a gap between the
technological developments of UAVs and their integration into the supply chain.
Therefore, this work focuses on automating the task of transporting packages
utilizing a swarm of small UAVs operating alongside humans. A MoCap system, ROS, and Unity are used for localization, inter-process communication, and visualization. Multiple experiments are performed with the UAVs in wander and swarm mode in a warehouse-like environment.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2022 21:54:58 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 09:17:58 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Awasthi",
"Shrutarv",
""
],
[
"Gramse",
"Nils",
""
],
[
"Reining",
"Dr. Christopher",
""
],
[
"Roidl",
"Dr. Moritz",
""
]
] |
new_dataset
| 0.983056 |
2302.01860
|
Lihu Chen
|
Lihu Chen, Ga\"el Varoquaux, Fabian M. Suchanek
|
GLADIS: A General and Large Acronym Disambiguation Benchmark
|
Long paper at EACL 23
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Acronym Disambiguation (AD) is crucial for natural language understanding on
various sources, including biomedical reports, scientific papers, and search
engine queries. However, existing acronym disambiguation benchmarks and tools
are limited to specific domains, and the size of prior benchmarks is rather
small. To accelerate the research on acronym disambiguation, we construct a new
benchmark named GLADIS with three components: (1) a much larger acronym
dictionary with 1.5M acronyms and 6.4M long forms; (2) a pre-training corpus
with 160 million sentences; (3) three datasets that cover the general,
scientific, and biomedical domains. We then pre-train a language model,
\emph{AcroBERT}, on our constructed corpus for general acronym disambiguation,
and show the challenges and values of our new benchmark.
|
[
{
"version": "v1",
"created": "Fri, 3 Feb 2023 17:07:23 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 21:41:39 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Chen",
"Lihu",
""
],
[
"Varoquaux",
"Gaël",
""
],
[
"Suchanek",
"Fabian M.",
""
]
] |
new_dataset
| 0.999715 |
2302.10842
|
Yi Liu
|
Yi Liu
|
DSL-Assembly: A Robust and Safe Assembly Strategy
|
4 pages, 8 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A reinforcement learning (RL) based method that enables the robot to
accomplish the assembly-type task with safety regulations is proposed. The
overall strategy consists of grasping and assembly, and this paper mainly
considers the assembly strategy. In this paper, force feedback is used instead of visual feedback to perceive the shape and direction of the hole.
Furthermore, since the emergency stop is triggered when the force output is too
large, a force-based dynamic safety lock (DSL) is proposed to limit the
pressing force of the robot. Finally, we train and test the robot model with a
simulator and build ablation experiments to illustrate the effectiveness of our
method. The models are independently tested 500 times in the simulator, and we
get an 88.57% success rate with a 4mm gap. These models are transferred to the
real world and deployed on a real robot. We conducted independent tests and
obtained a 79.63% success rate with a 4mm gap. Simulation environments:
https://github.com/0707yiliu/peg-in-hole-with-RL.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 17:49:38 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 15:18:37 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Liu",
"Yi",
""
]
] |
new_dataset
| 0.999373 |
2303.07257
|
Amandine Brunetto
|
Amandine Brunetto, Sascha Hornauer, Stella X. Yu, Fabien Moutarde
|
The Audio-Visual BatVision Dataset for Research on Sight and Sound
|
Dataset can be downloaded at https://forms.gle/W6xtshMgoXGZDwsE7. This version contains an updated link and corrected author names
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Vision research showed remarkable success in understanding our world,
propelled by datasets of images and videos. Sensor data from radar, LiDAR and cameras has supported research in robotics and autonomous driving for at least a decade. However, while visual sensors may fail in some conditions, sound has
recently shown potential to complement sensor data. Simulated room impulse
responses (RIR) in 3D apartment-models became a benchmark dataset for the
community, fostering a range of audiovisual research. In simulation, depth is
predictable from sound, by learning bat-like perception with a neural network.
Concurrently, the same was achieved in reality by using RGB-D images and echoes
of chirping sounds. Biomimicking bat perception is an exciting new direction
but needs dedicated datasets to explore the potential. Therefore, we collected
the BatVision dataset to provide large-scale echoes in complex real-world
scenes to the community. We equipped a robot with a speaker to emit chirps and
a binaural microphone to record their echoes. Synchronized RGB-D images from
the same perspective provide visual labels of traversed spaces. We sampled scenes ranging from modern US office spaces to historic French university grounds, indoor and outdoor, with large architectural variety. This dataset will allow research on robot echolocation, general audio-visual tasks and sound phenomena unavailable
in simulated data. We show promising results for audio-only depth prediction
and show how state-of-the-art work developed for simulated data can also
succeed on our dataset. The data can be downloaded at
https://forms.gle/W6xtshMgoXGZDwsE7
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 16:29:02 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 14:51:19 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Brunetto",
"Amandine",
""
],
[
"Hornauer",
"Sascha",
""
],
[
"Yu",
"Stella X.",
""
],
[
"Moutarde",
"Fabien",
""
]
] |
new_dataset
| 0.999837 |
2303.07401
|
Alexandra Weinberger
|
Oswin Aichholzer, Man-Kwun Chiu, Hung P. Hoang, Michael Hoffmann, Jan
Kyn\v{c}l, Yannic Maus, Birgit Vogtenhuber and Alexandra Weinberger
|
Drawings of Complete Multipartite Graphs Up to Triangle Flips
|
Abstract shortened for arxiv. This work (without appendix) is
available at the 39th International Symposium on Computational Geometry (SoCG
2023)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
For a drawing of a labeled graph, the rotation of a vertex or crossing is the
cyclic order of its incident edges, represented by the labels of their other
endpoints. The extended rotation system (ERS) of the drawing is the collection
of the rotations of all vertices and crossings. A drawing is simple if each
pair of edges has at most one common point. Gioan's Theorem states that for any
two simple drawings of the complete graph $K_n$ with the same crossing edge
pairs, one drawing can be transformed into the other by a sequence of triangle
flips (a.k.a. Reidemeister moves of Type 3). This operation refers to the act
of moving one edge of a triangular cell formed by three pairwise crossing edges
over the opposite crossing of the cell, via a local transformation.
We investigate to what extent Gioan-type theorems can be obtained for wider
classes of graphs. A necessary (but in general not sufficient) condition for
two drawings of a graph to be transformable into each other by a sequence of
triangle flips is that they have the same ERS. As our main result, we show that
for the large class of complete multipartite graphs, this necessary condition
is in fact also sufficient. We present two different proofs of this result, one
of which is shorter, while the other one yields a polynomial time algorithm for
which the number of needed triangle flips for graphs on $n$ vertices is bounded
by $O(n^{16})$. The latter proof uses a Carath\'eodory-type theorem for simple
drawings of complete multipartite graphs, which we believe to be of independent
interest.
Moreover, we show that our Gioan-type theorem for complete multipartite
graphs is essentially tight in the sense that having the same ERS does not
remain sufficient when removing or adding very few edges.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 18:28:04 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Aichholzer",
"Oswin",
""
],
[
"Chiu",
"Man-Kwun",
""
],
[
"Hoang",
"Hung P.",
""
],
[
"Hoffmann",
"Michael",
""
],
[
"Kynčl",
"Jan",
""
],
[
"Maus",
"Yannic",
""
],
[
"Vogtenhuber",
"Birgit",
""
],
[
"Weinberger",
"Alexandra",
""
]
] |
new_dataset
| 0.997475 |
2303.07405
|
Aparajithan Nathamuni Venkatesan
|
Aparajithan Nathamuni-Venkatesan, Ram-Venkat Narayanan, Kishore Pula,
Sundarakumar Muthukumaran and Ranga Vemuri
|
Word-Level Structure Identification In FPGA Designs Using Cell Proximity
Information
|
Paper accepted into proceedings of VLSID2023 conference
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Reverse engineering of FPGA based designs from the flattened LUT level
netlist to high level RTL helps in verification of the design or in
understanding legacy designs. We focus on flattened netlists for FPGA devices
from Xilinx 7 series and Zynq 7000. We propose a design element grouping
algorithm that makes use of the location information of the elements on the
physical device after place and route. The proposed grouping algorithm gives
clusters with average NMI of 0.73 for groupings including all element types.
The benchmarks chosen include a range of designs from communication, arithmetic
units, processors and DSP processing units.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 17:43:20 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Nathamuni-Venkatesan",
"Aparajithan",
""
],
[
"Narayanan",
"Ram-Venkat",
""
],
[
"Pula",
"Kishore",
""
],
[
"Muthukumaran",
"Sundarakumar",
""
],
[
"Vemuri",
"Ranga",
""
]
] |
new_dataset
| 0.985897 |
2303.07406
|
Andrew 'bunnie' Huang PhD
|
Andrew 'bunnie' Huang
|
Infra-Red, In-Situ (IRIS) Inspection of Silicon
|
8 pages, 19 figures
| null | null | null |
cs.AR cs.CR eess.IV physics.app-ph
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper introduces the Infra-Red, In Situ (IRIS) inspection method, which
uses short-wave IR (SWIR) light to non-destructively "see through" the backside
of chips and image them with lightly modified conventional digital CMOS
cameras. With a ~1050 nm light source, IRIS is capable of constraining macro-
and meso-scale features of a chip. This hardens existing micro-scale self-test
verification techniques by ruling out the existence of extra circuitry that can
hide a hardware trojan with a test bypass. Thus, self-test techniques used in
conjunction with IRIS can ensure the correct construction of security-critical
hardware at all size scales.
|
[
{
"version": "v1",
"created": "Sun, 5 Mar 2023 06:39:54 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Huang",
"Andrew 'bunnie'",
""
]
] |
new_dataset
| 0.999316 |
2303.07427
|
Diego S. D'Antonio
|
Diego S. D'Antonio, Subhrajit Bhattacharya, and David Salda\~na
|
Forming and Controlling Hitches in Midair Using Aerial Robots
|
Paper accepted to be published in ICRA 2023
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The use of cables for aerial manipulation has been shown to be a lightweight and versatile way to interact with objects. However, fastening objects using cables is still a challenge and requires a human. In this work, we propose a novel
way to secure objects using hitches. The hitch can be formed and morphed in
midair using a team of aerial robots with cables. The hitch's shape is modeled
as a convex polygon, making it versatile and adaptable to a wide variety of
objects. We propose an algorithm to form the hitch systematically. The steps
can run in parallel, allowing hitches with a large number of robots to be
formed in constant time. We develop a set of actions that include different
actions to change the shape of the hitch. We demonstrate our methods using a
team of aerial robots via simulation and actual experiments.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 19:05:18 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"D'Antonio",
"Diego S.",
""
],
[
"Bhattacharya",
"Subhrajit",
""
],
[
"Saldaña",
"David",
""
]
] |
new_dataset
| 0.951551 |
2303.07451
|
Malay Joshi
|
Malay Joshi, Aditi Shukla, Jayesh Srivastava, Manya Rastogi
|
DRISHTI: Visual Navigation Assistant for Visually Impaired
|
Paper presented at the International Conference on Advancements and Key Challenges in Green Energy and Computing (AKGEC 2023); accepted to be published in the proceedings of the Journal of Physics
| null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In today's society, where independent living is becoming increasingly
important, it can be extremely constricting for those who are blind. Blind and
visually impaired (BVI) people face challenges because they need manual support
to prompt information about their environment. In this work, we took our first
step towards developing an affordable and high-performing eye wearable
assistive device, DRISHTI, to provide visual navigation assistance for BVI
people. This system comprises a camera module, ESP32 processor, Bluetooth
module, smartphone and speakers. Using artificial intelligence, this system is
proposed to detect and understand the nature of the users' path and obstacles
ahead of the user in that path and then inform BVI users about it via audio
output to enable them to acquire directions by themselves on their journey.
This first step discussed in this paper involves establishing a
proof-of-concept of achieving the right balance of affordability and
performance by testing an initial software integration of a currency detection
algorithm on a low-cost embedded arrangement. This work will lay the foundation
for our upcoming works toward achieving the goal of assisting the maximum of
BVI people around the globe in moving independently.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 20:10:44 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Joshi",
"Malay",
""
],
[
"Shukla",
"Aditi",
""
],
[
"Srivastava",
"Jayesh",
""
],
[
"Rastogi",
"Manya",
""
]
] |
new_dataset
| 0.99732 |
2303.07510
|
Hannah Kirkland
|
Hannah Kirkland, Sanjeev J. Koppal
|
Schr\"odinger's Camera: First Steps Towards a Quantum-Based Privacy
Preserving Camera
| null | null | null | null |
cs.CV quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Privacy-preserving vision must overcome the dual challenge of utility and
privacy. Too much anonymity renders the images useless, but too little privacy
does not protect sensitive data. We propose a novel design for privacy
preservation, where the imagery is stored in quantum states. In the future,
this will be enabled by quantum imaging cameras, and, currently, storing very
low resolution imagery in quantum states is possible. Quantum state imagery has
the advantage of being both private and non-private till the point of
measurement. This occurs even when images are manipulated, since every quantum
action is fully reversible. We propose a control algorithm, based on double
deep Q-learning, to learn how to anonymize the image before measurement. After
learning, the RL weights are fixed, and new attack neural networks are trained
from scratch to break the system's privacy. Although all our results are in
simulation, we demonstrate, with these first steps, that it is possible to
control both privacy and utility in a quantum-based manner.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 22:44:02 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Kirkland",
"Hannah",
""
],
[
"Koppal",
"Sanjeev J.",
""
]
] |
new_dataset
| 0.979027 |
2303.07525
|
Mst Akter
|
Mst Shapna Akter, Hossain Shahriar, and Zakirul Alam Bhuiya
|
Automated Vulnerability Detection in Source Code Using Quantum Natural
Language Processing
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. These flaws are highly likely to be exploited and lead to system compromise, data leakage, or denial of service. C and C++ open source code are now available in order to create a large-scale, classical machine-learning and quantum machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We created an efficient and scalable vulnerability detection method based on a deep neural network model Long Short Term Memory (LSTM), and a quantum machine learning model Long Short Term Memory (QLSTM), that can learn features extracted from the source codes. The source code is first converted into a minimal intermediate representation to remove the pointless components and shorten the dependency. Therefore, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into the classical and quantum convolutional neural networks to classify the possible vulnerabilities. To measure the performance, we used evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time. We made a comparison between the results derived from the classical LSTM and quantum LSTM using basic feature representation as well as semantic and syntactic representation. We found that the QLSTM with semantic and syntactic features detects vulnerabilities significantly more accurately and runs faster than its classical counterpart.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 23:27:42 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Akter",
"Mst Shapna",
""
],
[
"Shahriar",
"Hossain",
""
],
[
"Bhuiya",
"Zakirul Alam",
""
]
] |
new_dataset
| 0.999361 |
2303.07538
|
N Shashaank
|
N Shashaank, Berker Banar, Mohammad Rasool Izadi, Jeremy Kemmerer,
Shuo Zhang, Chuan-Che Huang
|
HiSSNet: Sound Event Detection and Speaker Identification via
Hierarchical Prototypical Networks for Low-Resource Headphones
| null | null | null | null |
cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Modern noise-cancelling headphones have significantly improved users'
auditory experiences by removing unwanted background noise, but they can also
block out sounds that matter to users. Machine learning (ML) models for sound
event detection (SED) and speaker identification (SID) can enable headphones to
selectively pass through important sounds; however, implementing these models
for a user-centric experience presents several unique challenges. First, most
people spend limited time customizing their headphones, so the sound detection
should work reasonably well out of the box. Second, the models should be able
to learn over time the specific sounds that are important to users based on
their implicit and explicit interactions. Finally, such models should have a
small memory footprint to run on low-power headphones with limited on-chip
memory. In this paper, we propose addressing these challenges using HiSSNet
(Hierarchical SED and SID Network). HiSSNet is an SEID (SED and SID) model that
uses a hierarchical prototypical network to detect both general and specific
sounds of interest and characterize both alarm-like and speech sounds. We show
that HiSSNet outperforms an SEID model trained using non-hierarchical
prototypical networks by 6.9 - 8.6 percent. When compared to state-of-the-art
(SOTA) models trained specifically for SED or SID alone, HiSSNet achieves
similar or better performance while reducing the memory footprint required to
support multiple capabilities on-device.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 23:49:09 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Shashaank",
"N",
""
],
[
"Banar",
"Berker",
""
],
[
"Izadi",
"Mohammad Rasool",
""
],
[
"Kemmerer",
"Jeremy",
""
],
[
"Zhang",
"Shuo",
""
],
[
"Huang",
"Chuan-Che",
""
]
] |
new_dataset
| 0.997504 |
2303.07539
|
Xiang 'Anthony' Chen
|
Xiang 'Anthony' Chen
|
HCI Papers Cite HCI Papers, Increasingly So
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We propose X-index -- the proportion of papers' citations coming from outside
their research field -- and use this metric to analyze citations of CHI, UIST,
and CSCW papers between 2010 and 2022. We found an overall decreasing X-index
by several measures, indicating that HCI papers have been more and more likely
to be cited by HCI papers rather than by non-HCI papers.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 23:51:33 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Chen",
"Xiang 'Anthony'",
""
]
] |
new_dataset
| 0.999446 |
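The X-index proposed in the preceding abstract is a simple per-paper ratio: citations coming from outside the paper's research field divided by all of its citations. A hedged illustration follows; the field labels and counts are made-up assumptions, not data from the paper:

```python
# Sketch of the X-index from the abstract above: the proportion of a paper's
# citations that come from outside its own research field.
# The citing-field labels below are illustrative assumptions, not real data.
def x_index(citing_fields, own_field="HCI"):
    """Fraction of citations whose citing paper lies outside `own_field`."""
    if not citing_fields:
        return 0.0
    outside = sum(1 for field in citing_fields if field != own_field)
    return outside / len(citing_fields)

# Example: a CHI paper cited 6 times by HCI venues and 2 times by non-HCI venues.
print(x_index(["HCI"] * 6 + ["ML", "Robotics"]))  # -> 0.25
```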
2303.07547
|
Tae Eun Choe
|
Tae Eun Choe, Jane Wu, Xiaolin Lin, Karen Kwon, Minwoo Park
|
HazardNet: Road Debris Detection by Augmentation of Synthetic Models
|
11 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an algorithm to detect unseen road debris using a small set of
synthetic models. Early detection of road debris is critical for safe
autonomous or assisted driving, yet the development of a robust road debris
detection model has not been widely discussed. There are two main challenges to
building a road debris detector: first, data collection of road debris is
challenging since hazardous objects on the road are rare to encounter in real
driving scenarios; second, the variability of road debris is broad, ranging
from a very small brick to a large fallen tree. To overcome these challenges,
we propose a novel approach to few-shot learning of road debris that uses
semantic augmentation and domain randomization to augment real road images with
synthetic models. We constrain the problem domain to uncommon objects on the
road and allow the deep neural network, HazardNet, to learn the semantic
meaning of road debris to eventually detect unseen road debris. Our results
demonstrate that HazardNet is able to accurately detect real road debris when
only trained on synthetic objects in augmented images.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 00:30:24 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Choe",
"Tae Eun",
""
],
[
"Wu",
"Jane",
""
],
[
"Lin",
"Xiaolin",
""
],
[
"Kwon",
"Karen",
""
],
[
"Park",
"Minwoo",
""
]
] |
new_dataset
| 0.999587 |
2303.07578
|
Rohan Badlani
|
Rohan Badlani, Akshit Arora, Subhankar Ghosh, Rafael Valle, Kevin J.
Shih, Jo\~ao Felipe Santos, Boris Ginsburg, Bryan Catanzaro
|
VANI: Very-lightweight Accent-controllable TTS for Native and Non-native
speakers with Identity Preservation
|
Presentation accepted at ICASSP 2023
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce VANI, a very lightweight multi-lingual accent controllable
speech synthesis system. Our model builds upon disentanglement strategies
proposed in RADMMM and supports explicit control of accent, language, speaker
and fine-grained $F_0$ and energy features for speech synthesis. We utilize the
Indic languages dataset, released for LIMMITS 2023 as part of ICASSP Signal
Processing Grand Challenge, to synthesize speech in 3 different languages. Our
model supports transferring the language of a speaker while retaining their
voice and the native accent of the target language. We utilize the
large-parameter RADMMM model for Track $1$ and lightweight VANI model for Track
$2$ and $3$ of the competition.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 01:55:41 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Badlani",
"Rohan",
""
],
[
"Arora",
"Akshit",
""
],
[
"Ghosh",
"Subhankar",
""
],
[
"Valle",
"Rafael",
""
],
[
"Shih",
"Kevin J.",
""
],
[
"Santos",
"João Felipe",
""
],
[
"Ginsburg",
"Boris",
""
],
[
"Catanzaro",
"Bryan",
""
]
] |
new_dataset
| 0.999753 |
2303.07595
|
Jiangtao Gong
|
Lishuang Zhan, Yancheng Cao, Qitai Chen, Haole Guo, Jiasi Gao, Yiyue
Luo, Shihui Guo, Guyue Zhou, and Jiangtao Gong
|
Enable Natural Tactile Interaction for Robot Dog based on Large-format
Distributed Flexible Pressure Sensors
|
7 pages, 5 figures
|
ICRA 2023
| null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Touch is an important channel for human-robot interaction, while it is
challenging for robots to recognize human touch accurately and make appropriate
responses. In this paper, we design and implement a set of large-format
distributed flexible pressure sensors on a robot dog to enable natural
human-robot tactile interaction. Through a heuristic study, we sorted out 81
tactile gestures commonly used when humans interact with real dogs and 44 dog
reactions. A gesture classification algorithm based on ResNet is proposed to
recognize these 81 human gestures, and the classification accuracy reaches
98.7%. In addition, an action prediction algorithm based on Transformer is
proposed to predict dog actions from human gestures, reaching a 1-gram BLEU
score of 0.87. Finally, we compare the tactile interaction with the voice
interaction during a free-form human-robot-dog interactive play study. The
results show that tactile interaction plays a more significant role in
alleviating user anxiety, stimulating user excitement and improving the
acceptability of robot dogs.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 02:35:04 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Zhan",
"Lishuang",
""
],
[
"Cao",
"Yancheng",
""
],
[
"Chen",
"Qitai",
""
],
[
"Guo",
"Haole",
""
],
[
"Gao",
"Jiasi",
""
],
[
"Luo",
"Yiyue",
""
],
[
"Guo",
"Shihui",
""
],
[
"Zhou",
"Guyue",
""
],
[
"Gong",
"Jiangtao",
""
]
] |
new_dataset
| 0.996853 |
2303.07598
|
Xiao Wang
|
Xiao Wang, Ying Wang, Ziwei Xuan, Guo-Jun Qi
|
AdPE: Adversarial Positional Embeddings for Pretraining Vision
Transformers via MAE+
|
9 pages, 5 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Unsupervised learning of vision transformers seeks to pretrain an encoder via
pretext tasks without labels. Among them is the Masked Image Modeling (MIM)
aligned with pretraining of language transformers by predicting masked patches
as a pretext task. A criterion in unsupervised pretraining is that the pretext task
needs to be sufficiently hard to prevent the transformer encoder from learning
trivial low-level features not generalizable well to downstream tasks. For this
purpose, we propose an Adversarial Positional Embedding (AdPE) approach -- It
distorts the local visual structures by perturbing the position encodings so
that the learned transformer cannot simply use the locally correlated patches
to predict the missing ones. We hypothesize that it forces the transformer
encoder to learn more discriminative features in a global context with stronger
generalizability to downstream tasks. We will consider both absolute and
relative positional encodings, where adversarial positions can be imposed both
in the embedding mode and the coordinate mode. We will also present a new MAE+
baseline that brings the performance of the MIM pretraining to a new level with
the AdPE. The experiments demonstrate that our approach can improve the
fine-tuning accuracy of MAE by $0.8\%$ and $0.4\%$ over 1600 epochs of
pretraining ViT-B and ViT-L on Imagenet1K. For the transfer learning task, it
outperforms the MAE with the ViT-B backbone by $2.6\%$ in mIoU on ADE20K, and
by $3.2\%$ in AP$^{bbox}$ and $1.6\%$ in AP$^{mask}$ on COCO, respectively.
These results are obtained with the AdPE being a pure MIM approach that does
not use any extra models or external datasets for pretraining. The code is
available at https://github.com/maple-research-lab/AdPE.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 02:42:01 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Wang",
"Xiao",
""
],
[
"Wang",
"Ying",
""
],
[
"Xuan",
"Ziwei",
""
],
[
"Qi",
"Guo-Jun",
""
]
] |
new_dataset
| 0.998703 |
2303.07600
|
Leila Ismail Prof.
|
Leila Ismail, Huned Materwala, Alain Hennebelle
|
Forecasting COVID-19 Infections in Gulf Cooperation Council (GCC)
Countries using Machine Learning
|
9 pages, Proceedings of the 13th International Conference on Computer
Modeling and Simulation, ICCMS 2021, Autoregressive integrated moving
average, ARIMA, Coronavirus, COVID-19, Damped Trend, Holt Linear Trend,
Machine learning, Pandemic, Time series
| null |
10.1145/3474963.3475844
| null |
cs.LG cs.AI cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
COVID-19 has infected more than 68 million people worldwide since it was
first detected about a year ago. Machine learning time series models have been
implemented to forecast COVID-19 infections. In this paper, we develop time
series models for the Gulf Cooperation Council (GCC) countries using the public
COVID-19 dataset from Johns Hopkins. The dataset includes the one-year cumulative COVID-19 cases between 22/01/2020 and 22/01/2021. We developed
different models for the countries under study based on the spatial
distribution of the infection data. Our experimental results show that the
developed models can forecast COVID-19 infections with high precision.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 02:46:42 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Ismail",
"Leila",
""
],
[
"Materwala",
"Huned",
""
],
[
"Hennebelle",
"Alain",
""
]
] |
new_dataset
| 0.996789 |
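The abstract above (and its comments field) mentions ARIMA-style time-series forecasting of cumulative case counts. A generic, hedged sketch is shown below; it assumes the `statsmodels` library and uses synthetic numbers rather than the Johns Hopkins data or the authors' model settings:

```python
# Hedged sketch: fit an ARIMA model to a synthetic cumulative-case series
# and forecast two weeks ahead. Not the authors' code, data, or parameters.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
daily_new = rng.poisson(lam=120, size=365)        # synthetic daily new cases
cumulative = np.cumsum(daily_new).astype(float)   # one-year cumulative counts

# ARIMA(2, 1, 2): first-order differencing handles the monotone cumulative trend.
fitted = ARIMA(cumulative, order=(2, 1, 2)).fit()
forecast = fitted.forecast(steps=14)              # next 14 days, cumulative
print(forecast[:5])
```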
2303.07605
|
Zhipeng Luo
|
Zhipeng Luo, Gongjie Zhang, Changqing Zhou, Zhonghua Wu, Qingyi Tao,
Lewei Lu, Shijian Lu
|
Modeling Continuous Motion for 3D Point Cloud Object Tracking
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of 3D single object tracking (SOT) with LiDAR point clouds is
crucial for various applications, such as autonomous driving and robotics.
However, existing approaches have primarily relied on appearance matching or
motion modeling within only two successive frames, thereby overlooking the
long-range continuous motion property of objects in 3D space. To address this
issue, this paper presents a novel approach that views each tracklet as a
continuous stream: at each timestamp, only the current frame is fed into the
network to interact with multi-frame historical features stored in a memory
bank, enabling efficient exploitation of sequential information. To achieve
effective cross-frame message passing, a hybrid attention mechanism is designed
to account for both long-range relation modeling and local geometric feature
extraction. Furthermore, to enhance the utilization of multi-frame features for
robust tracking, a contrastive sequence enhancement strategy is designed, which
uses ground truth tracklets to augment training sequences and promote
discrimination against false positives in a contrastive manner. Extensive
experiments demonstrate that the proposed method outperforms the
state-of-the-art method by significant margins (approximately 8%, 6%, and 12%
improvements in the success performance on KITTI, nuScenes, and Waymo,
respectively).
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 02:58:27 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Luo",
"Zhipeng",
""
],
[
"Zhang",
"Gongjie",
""
],
[
"Zhou",
"Changqing",
""
],
[
"Wu",
"Zhonghua",
""
],
[
"Tao",
"Qingyi",
""
],
[
"Lu",
"Lewei",
""
],
[
"Lu",
"Shijian",
""
]
] |
new_dataset
| 0.996582 |
2303.07617
|
Huanqing Wang
|
Huanqing Wang, Kaixiang Zhang, Keyi Zhu, Ziyou Song, Zhaojian Li
|
ABatRe-Sim: A Comprehensive Framework for Automated Battery Recycling
Simulation
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
With the rapid surge in the number of on-road Electric Vehicles (EVs), the
amount of spent lithium-ion (Li-ion) batteries is also expected to explosively
grow. The spent battery packs contain valuable metal and materials that should
be recovered, recycled, and reused. However, less than 5% of the Li-ion
batteries are currently recycled, due to a multitude of challenges in
technology, logistics and regulation. Existing battery recycling is performed
manually, which can pose a series of risks to the human operator as a
consequence of remaining high voltage and chemical hazards. Therefore, there is
a critical need to develop an automated battery recycling system. In this
paper, we present ABatRe-sim, an open-source robotic battery recycling
simulator, to facilitate the research and development in efficient and
effective battery recycling automation. Specifically, we develop a detailed CAD
model of the battery pack (with screws, wires, and battery modules), which is
imported into Gazebo to enable robot-object interaction in the robot operating
system (ROS) environment. It also allows the simulation of battery packs of
various aging conditions. Furthermore, perception, planning, and control
algorithms are developed to establish the benchmark to demonstrate the
interface and realize the basic functionalities for further user customization.
Discussions on the utilization and future extensions of the simulator are also
presented.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 03:55:58 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Wang",
"Huanqing",
""
],
[
"Zhang",
"Kaixiang",
""
],
[
"Zhu",
"Keyi",
""
],
[
"Song",
"Ziyou",
""
],
[
"Li",
"Zhaojian",
""
]
] |
new_dataset
| 0.98774 |
2303.07625
|
Heng Fan
|
Xinran Liu, Xiaoqiong Liu, Ziruo Yi, Xin Zhou, Thanh Le, Libo Zhang,
Yan Huang, Qing Yang, Heng Fan
|
PlanarTrack: A Large-scale Challenging Benchmark for Planar Object
Tracking
|
Tech. Report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Planar object tracking is a critical computer vision problem and has drawn
increasing interest owing to its key roles in robotics, augmented reality, etc.
Despite rapid progress, its further development, especially in the deep
learning era, is largely hindered due to the lack of large-scale challenging
benchmarks. Addressing this, we introduce PlanarTrack, a large-scale
challenging planar tracking benchmark. Specifically, PlanarTrack consists of
1,000 videos with more than 490K images. All these videos are collected in
complex unconstrained scenarios from the wild, which makes PlanarTrack,
compared with existing benchmarks, more challenging but realistic for
real-world applications. To ensure the high-quality annotation, each frame in
PlanarTrack is manually labeled using four corners with multiple-round careful
inspection and refinement. To our best knowledge, PlanarTrack, to date, is the
largest and most challenging dataset dedicated to planar object tracking. In
order to analyze the proposed PlanarTrack, we evaluate 10 planar trackers and
conduct comprehensive comparisons and in-depth analysis. Our results, not
surprisingly, demonstrate that current top-performing planar trackers
degenerate significantly on the challenging PlanarTrack and more efforts are
needed to improve planar tracking in the future. In addition, we further derive
a variant named PlanarTrack$_{\mathrm{BB}}$ for generic object tracking from
PlanarTrack. Our evaluation of 10 excellent generic trackers on
PlanarTrack$_{\mathrm{BB}}$ manifests that, surprisingly,
PlanarTrack$_{\mathrm{BB}}$ is even more challenging than several popular
generic tracking benchmarks and more attention should be paid to handle such
planar objects, though they are rigid. All benchmarks and evaluations will be
released at the project webpage.
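To make the four-corner annotation concrete, the sketch below shows how four labeled corners determine a planar object's pose through a homography, using standard OpenCV calls. The coordinates are made up, and this is a generic illustration rather than the benchmark's annotation or evaluation tooling.

```python
# Generic sketch: recover the homography implied by a four-corner annotation.
import cv2
import numpy as np

# Template corners (unit square) and the four labeled corners in one frame
# (hypothetical example coordinates).
template = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])
annotated = np.float32([[120, 80], [340, 95], [355, 300], [110, 290]])

H = cv2.getPerspectiveTransform(template, annotated)

# Map any template point into the frame, e.g. the planar object's center.
center = cv2.perspectiveTransform(np.float32([[[0.5, 0.5]]]), H)
print("object center in frame:", center.ravel())
```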
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 04:48:18 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Liu",
"Xinran",
""
],
[
"Liu",
"Xiaoqiong",
""
],
[
"Yi",
"Ziruo",
""
],
[
"Zhou",
"Xin",
""
],
[
"Le",
"Thanh",
""
],
[
"Zhang",
"Libo",
""
],
[
"Huang",
"Yan",
""
],
[
"Yang",
"Qing",
""
],
[
"Fan",
"Heng",
""
]
] |
new_dataset
| 0.999833 |
2303.07648
|
Jiyong Moon
|
Jiyong Moon and Seongsik Park
|
SimFLE: Simple Facial Landmark Encoding for Self-Supervised Facial
Expression Recognition in the Wild
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the key issues in facial expression recognition in the wild (FER-W) is
that curating large-scale labeled facial images is challenging due to the
inherent complexity and ambiguity of facial images. Therefore, in this paper,
we propose a self-supervised simple facial landmark encoding (SimFLE) method
that can learn effective encoding of facial landmarks, which are important
features for improving the performance of FER-W, without expensive labels.
Specifically, we introduce novel FaceMAE module for this purpose. FaceMAE
reconstructs masked facial images with elaborately designed semantic masking.
Unlike previous random masking, semantic masking is conducted based on channel
information processed in the backbone, so rich semantics of channels can be
explored. Additionally, the semantic masking process is fully trainable,
enabling FaceMAE to guide the backbone to learn spatial details and contextual
properties of fine-grained facial landmarks. Experimental results on several
FER-W benchmarks prove that the proposed SimFLE is superior in facial landmark
localization and achieves noticeably improved performance compared to the
supervised baseline and other self-supervised methods.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 06:30:55 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Moon",
"Jiyong",
""
],
[
"Park",
"Seongsik",
""
]
] |
new_dataset
| 0.995814 |
2303.07650
|
Wei-Qiang Zhang
|
Xuchu Chen, Yu Pu, Jinpeng Li, Wei-Qiang Zhang
|
Cross-lingual Alzheimer's Disease detection based on paralinguistic and
pre-trained features
|
accepted by ICASSP 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present our submission to the ICASSP-SPGC-2023 ADReSS-M Challenge Task,
which aims to investigate which acoustic features can be generalized and
transferred across languages for Alzheimer's Disease (AD) prediction. The
challenge consists of two tasks: one is to classify the speech of AD patients
and healthy individuals, and the other is to infer Mini Mental State
Examination (MMSE) score based on speech only. The difficulty mainly lies in
the dataset mismatch: the training set is in English while the test set is in
Greek. We extract paralinguistic features
using openSmile toolkit and acoustic features using XLSR-53. In addition, we
extract linguistic features after transcribing the speech into text. These
features are used as indicators for AD detection in our method. Our method
achieves an accuracy of 69.6% on the classification task and a root mean
squared error (RMSE) of 4.788 on the regression task. The results show that our
proposed method is expected to achieve automatic multilingual Alzheimer's
Disease detection through spontaneous speech.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 06:34:18 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Chen",
"Xuchu",
""
],
[
"Pu",
"Yu",
""
],
[
"Li",
"Jinpeng",
""
],
[
"Zhang",
"Wei-Qiang",
""
]
] |
new_dataset
| 0.996639 |
2303.07657
|
Xiaolin Wen
|
Xiaolin Wen, Kim Siang Yeo, Yong Wang, Ling Cheng, Feida Zhu and Min
Zhu
|
Code Will Tell: Visual Identification of Ponzi Schemes on Ethereum
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Ethereum has become a popular blockchain with smart contracts for investors
nowadays. Due to the decentralization and anonymity of Ethereum, Ponzi schemes
have been easily deployed and caused significant losses to investors. However,
there are still no explainable and effective methods to help investors easily
identify Ponzi schemes and validate whether a smart contract is actually a
Ponzi scheme. To fill the research gap, we propose PonziLens, a novel
visualization approach to help investors achieve early identification of Ponzi
schemes by investigating the operation codes of smart contracts. Specifically,
we conduct symbolic execution of opcode and extract the control flow for
investing and rewarding with critical opcode instructions. Then, an intuitive
directed-graph based visualization is proposed to display the investing and
rewarding flows and the crucial execution paths, enabling easy identification
of Ponzi schemes on Ethereum. Two usage scenarios involving both Ponzi and
non-Ponzi schemes demonstrate the effectiveness of PonziLens.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 06:58:39 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Wen",
"Xiaolin",
""
],
[
"Yeo",
"Kim Siang",
""
],
[
"Wang",
"Yong",
""
],
[
"Cheng",
"Ling",
""
],
[
"Zhu",
"Feida",
""
],
[
"Zhu",
"Min",
""
]
] |
new_dataset
| 0.997193 |
2303.07668
|
Tong Hua
|
Tong Hua, Tao Li and Ling Pei
|
PIEKF-VIWO: Visual-Inertial-Wheel Odometry using Partial Invariant
Extended Kalman Filter
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Invariant Extended Kalman Filter (IEKF) has been successfully applied in
Visual-Inertial Odometry (VIO) as an advanced variant of the Kalman filter,
showing great potential in sensor fusion. In this paper, we propose the partial
IEKF (PIEKF), which incorporates only the rotation-velocity state into the Lie
group structure, and apply it to Visual-Inertial-Wheel Odometry (VIWO) to
improve positioning accuracy and consistency. Specifically, we derive the
rotation-velocity measurement model, which combines wheel measurements with
kinematic constraints. The model circumvents the wheel odometer's 3D
integration and covariance propagation, which is essential for filter
consistency. A plane constraint is also introduced to enhance the position
accuracy. A dynamic outlier detection method is adopted, leveraging the
velocity state output. Through the simulation and real-world test, we validate
the effectiveness of our approach, which outperforms the standard Multi-State
Constraint Kalman Filter (MSCKF) based VIWO in consistency and accuracy.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 07:17:08 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Hua",
"Tong",
""
],
[
"Li",
"Tao",
""
],
[
"Pei",
"Ling",
""
]
] |
new_dataset
| 0.99793 |
2303.07669
|
Kaidi Cao
|
Kaidi Cao, Jiaxuan You, Jiaju Liu, Jure Leskovec
|
AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph
Neural Networks
|
ICLR 2023
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
AutoML has demonstrated remarkable success in finding an effective neural
architecture for a given machine learning task defined by a specific dataset
and an evaluation metric. However, most present AutoML techniques consider each
task independently from scratch, which requires exploring many architectures,
leading to high computational cost. Here we propose AutoTransfer, an AutoML
solution that improves search efficiency by transferring the prior
architectural design knowledge to the novel task of interest. Our key
innovation includes a task-model bank that captures the model performance over
a diverse set of GNN architectures and tasks, and a computationally efficient
task embedding that can accurately measure the similarity among different
tasks. Based on the task-model bank and the task embeddings, we estimate the
design priors of desirable models of the novel task, by aggregating a
similarity-weighted sum of the top-K design distributions on tasks that are
similar to the task of interest. The computed design priors can be used with
any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the
graph machine learning domain. Experiments demonstrate that (i) our proposed
task embedding can be computed efficiently, and that tasks with similar
embeddings have similar best-performing architectures; (ii) AutoTransfer
significantly improves search efficiency with the transferred design priors,
reducing the number of explored architectures by an order of magnitude.
Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN
training information of 120,000 task-model combinations to facilitate and
inspire future research.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 07:23:16 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Cao",
"Kaidi",
""
],
[
"You",
"Jiaxuan",
""
],
[
"Liu",
"Jiaju",
""
],
[
"Leskovec",
"Jure",
""
]
] |
new_dataset
| 0.989134 |
2303.07682
|
Haobin Tang
|
Haobin Tang, Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao
|
QI-TTS: Questioning Intonation Control for Emotional Speech Synthesis
|
Accepted by ICASSP 2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent expressive text to speech (TTS) models focus on synthesizing emotional
speech, but some fine-grained styles such as intonation are neglected. In this
paper, we propose QI-TTS which aims to better transfer and control intonation
to further deliver the speaker's questioning intention while transferring
emotion from reference speech. We propose a multi-style extractor to extract
style embedding from two different levels. While the sentence level represents
emotion, the final syllable level represents intonation. For fine-grained
intonation control, we use relative attributes to represent intonation
intensity at the syllable level. Experiments have validated the effectiveness of
QI-TTS for improving intonation expressiveness in emotional speech synthesis.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 07:53:19 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Tang",
"Haobin",
""
],
[
"Zhang",
"Xulong",
""
],
[
"Wang",
"Jianzong",
""
],
[
"Cheng",
"Ning",
""
],
[
"Xiao",
"Jing",
""
]
] |
new_dataset
| 0.955896 |
2303.07697
|
Gyeongsu Chae
|
Geumbyeol Hwang, Sunwon Hong, Seunghyun Lee, Sungwoo Park, Gyeongsu
Chae
|
DisCoHead: Audio-and-Video-Driven Talking Head Generation by
Disentangled Control of Head Pose and Facial Expressions
|
Accepted to ICASSP 2023
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For realistic talking head generation, creating natural head motion while
maintaining accurate lip synchronization is essential. To fulfill this
challenging task, we propose DisCoHead, a novel method to disentangle and
control head pose and facial expressions without supervision. DisCoHead uses a
single geometric transformation as a bottleneck to isolate and extract head
motion from a head-driving video. Either an affine or a thin-plate spline
transformation can be used and both work well as geometric bottlenecks. We
enhance the efficiency of DisCoHead by integrating a dense motion estimator and
the encoder of a generator which are originally separate modules. Taking a step
further, we also propose a neural mix approach where dense motion is estimated
and applied implicitly by the encoder. After applying the disentangled head
motion to a source identity, DisCoHead controls the mouth region according to
speech audio, and it blinks eyes and moves eyebrows following a separate
driving video of the eye region, via the weight modulation of convolutional
neural networks. The experiments using multiple datasets show that DisCoHead
successfully generates realistic audio-and-video-driven talking heads and
outperforms state-of-the-art methods. Project page:
https://deepbrainai-research.github.io/discohead/
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 08:22:18 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Hwang",
"Geumbyeol",
""
],
[
"Hong",
"Sunwon",
""
],
[
"Lee",
"Seunghyun",
""
],
[
"Park",
"Sungwoo",
""
],
[
"Chae",
"Gyeongsu",
""
]
] |
new_dataset
| 0.998635 |
2303.07716
|
Yijin Li
|
Yijin Li, Zhaoyang Huang, Shuo Chen, Xiaoyu Shi, Hongsheng Li, Hujun
Bao, Zhaopeng Cui, Guofeng Zhang
|
BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow
Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras provide high temporal precision, low data rates, and high
dynamic range visual perception, which are well-suited for optical flow
estimation. While data-driven optical flow estimation has obtained great
success in RGB cameras, its generalization performance is seriously hindered in
event cameras mainly due to the limited and biased training data. In this
paper, we present a novel simulator, BlinkSim, for the fast generation of
large-scale data for event-based optical flow. BlinkSim consists of a
configurable rendering engine and a flexible engine for event data simulation.
By leveraging the wealth of current 3D assets, the rendering engine enables us
to automatically build up thousands of scenes with different objects, textures,
and motion patterns and render very high-frequency images for realistic event
data simulation. Based on BlinkSim, we construct a large training dataset and
evaluation benchmark BlinkFlow that contains sufficient, diversiform, and
challenging event data with optical flow ground truth. Experiments show that
BlinkFlow improves the generalization performance of state-of-the-art methods
by more than 40% on average and up to 90%. Moreover, we further propose an
Event optical Flow transFormer (E-FlowFormer) architecture. Powered by our
BlinkFlow, E-FlowFormer outperforms the SOTA methods by up to 91% on MVSEC
dataset and 14% on DSEC dataset and presents the best generalization
performance.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 09:03:54 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Li",
"Yijin",
""
],
[
"Huang",
"Zhaoyang",
""
],
[
"Chen",
"Shuo",
""
],
[
"Shi",
"Xiaoyu",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Bao",
"Hujun",
""
],
[
"Cui",
"Zhaopeng",
""
],
[
"Zhang",
"Guofeng",
""
]
] |
new_dataset
| 0.998819 |
2303.07742
|
Silvan Mertes
|
Alexander Heimerl, Pooja Prajod, Silvan Mertes, Tobias Baur, Matthias
Kraus, Ailin Liu, Helen Risack, Nicolas Rohleder, Elisabeth Andr\'e, Linda
Becker
|
ForDigitStress: A multi-modal stress dataset employing a digital job
interview scenario
| null | null | null | null |
cs.LG cs.HC eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a multi-modal stress dataset that uses digital job interviews to
induce stress. The dataset provides multi-modal data of 40 participants
including audio, video (motion capturing, facial recognition, eye tracking) as
well as physiological information (photoplethysmography, electrodermal
activity). In addition to that, the dataset contains time-continuous
annotations for stress and occurred emotions (e.g. shame, anger, anxiety,
surprise). In order to establish a baseline, five different machine learning
classifiers (Support Vector Machine, K-Nearest Neighbors, Random Forest,
Long-Short-Term Memory Network) have been trained and evaluated on the proposed
dataset for a binary stress classification task. The best-performing classifier
achieved an accuracy of 88.3% and an F1-score of 87.5%.
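For readers who want a starting point, here is a generic scikit-learn sketch of the kind of baseline classifiers listed above (SVM, k-NN, Random Forest) for a binary stress task. The feature files, split, and hyperparameters are assumptions and not the dataset's evaluation protocol.

```python
# Generic baseline sketch for binary stress classification (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score

# X: per-window features, y: binary stress labels -- hypothetical precomputed files.
X, y = np.load("features.npy"), np.load("labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

for name, clf in [
    ("SVM", SVC()),
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    pipe = make_pipeline(StandardScaler(), clf)
    pipe.fit(X_tr, y_tr)
    pred = pipe.predict(X_te)
    print(name, accuracy_score(y_te, pred), f1_score(y_te, pred))
```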
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 09:40:37 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Heimerl",
"Alexander",
""
],
[
"Prajod",
"Pooja",
""
],
[
"Mertes",
"Silvan",
""
],
[
"Baur",
"Tobias",
""
],
[
"Kraus",
"Matthias",
""
],
[
"Liu",
"Ailin",
""
],
[
"Risack",
"Helen",
""
],
[
"Rohleder",
"Nicolas",
""
],
[
"André",
"Elisabeth",
""
],
[
"Becker",
"Linda",
""
]
] |
new_dataset
| 0.999785 |
2303.07751
|
Oscar de Groot
|
O. de Groot, L. Ferranti, D. Gavrila, J. Alonso-Mora
|
Globally Guided Trajectory Planning in Dynamic Environments
|
7 pages, 6 figures, accepted to IEEE International Conference on
Robotics and Automation (ICRA) 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Navigating mobile robots through environments shared with humans is
challenging. From the perspective of the robot, humans are dynamic obstacles
that must be avoided. These obstacles make the collision-free space nonconvex,
which leads to two distinct passing behaviors per obstacle (passing left or
right). For local planners, such as receding-horizon trajectory optimization,
each behavior presents a local optimum in which the planner can get stuck. This
may result in slow or unsafe motion even when a better plan exists. In this
work, we identify trajectories for multiple locally optimal driving behaviors,
by considering their topology. This identification is made consistent over
successive iterations by propagating the topology information. The most
suitable high-level trajectory guides a local optimization-based planner,
resulting in fast and safe motion plans. We validate the proposed planner on a
mobile robot in simulation and real-world experiments.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 09:54:10 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"de Groot",
"O.",
""
],
[
"Ferranti",
"L.",
""
],
[
"Gavrila",
"D.",
""
],
[
"Alonso-Mora",
"J.",
""
]
] |
new_dataset
| 0.98294 |
2303.07758
|
Moritz Neun
|
Moritz Neun, Christian Eichenberger, Henry Martin, Markus Spanring,
Rahul Siripurapu, Daniel Springer, Leyan Deng, Chenwang Wu, Defu Lian, Min
Zhou, Martin Lumiste, Andrei Ilie, Xinhua Wu, Cheng Lyu, Qing-Long Lu, Vishal
Mahajan, Yichao Lu, Jiezhang Li, Junjun Li, Yue-Jiao Gong, Florian
Gr\"otschla, Jo\"el Mathys, Ye Wei, He Haitao, Hui Fang, Kevin Malm, Fei
Tang, Michael Kopp, David Kreil, Sepp Hochreiter
|
Traffic4cast at NeurIPS 2022 -- Predict Dynamics along Graph Edges from
Sparse Node Data: Whole City Traffic and ETA from Stationary Vehicle
Detectors
|
Pre-print under review, submitted to Proceedings of Machine Learning
Research
| null | null | null |
cs.LG cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The global trends of urbanization and increased personal mobility force us to
rethink the way we live and use urban space. The Traffic4cast competition
series tackles this problem in a data-driven way, advancing the latest methods
in machine learning for modeling complex spatial systems over time. In this
edition, our dynamic road graph data combine information from road maps,
$10^{12}$ probe data points, and stationary vehicle detectors in three cities
over the span of two years. While stationary vehicle detectors are the most
accurate way to capture traffic volume, they are only available in a few
locations. Traffic4cast 2022 explores models that have the ability to
generalize loosely related temporal vertex data on just a few nodes to predict
dynamic future traffic states on the edges of the entire road graph. In the
core challenge, participants are invited to predict the likelihoods of three
congestion classes derived from the speed levels in the GPS data for the entire
road graph in three cities 15 min into the future. We only provide vehicle
count data from spatially sparse stationary vehicle detectors in these three
cities as model input for this task. The data are aggregated in 15 min time
bins for one hour prior to the prediction time. For the extended challenge,
participants are tasked to predict the average travel times on super-segments
15 min into the future - super-segments are longer sequences of road segments
in the graph. The competition results provide an important advance in the
prediction of complex city-wide traffic states just from publicly available
sparse vehicle data and without the need for large amounts of real-time
floating vehicle data.
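As a toy illustration of deriving congestion classes from speed levels, the snippet below bins per-edge speeds into three classes with pandas. The thresholds and column names are invented for the example and are not the competition's official definition.

```python
# Toy sketch: map speed levels to three congestion classes (assumed thresholds).
import numpy as np
import pandas as pd

probes = pd.DataFrame({
    "edge_id": [1, 1, 2, 2, 3],
    "speed_kph": [12.0, 55.0, 38.0, 8.0, 70.0],
})

bins = [0, 20, 40, np.inf]            # hypothetical class boundaries in km/h
labels = ["red", "yellow", "green"]   # congested / slowed / free-flowing
probes["congestion_class"] = pd.cut(probes["speed_kph"], bins=bins, labels=labels)
print(probes)
```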
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 10:03:37 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Neun",
"Moritz",
""
],
[
"Eichenberger",
"Christian",
""
],
[
"Martin",
"Henry",
""
],
[
"Spanring",
"Markus",
""
],
[
"Siripurapu",
"Rahul",
""
],
[
"Springer",
"Daniel",
""
],
[
"Deng",
"Leyan",
""
],
[
"Wu",
"Chenwang",
""
],
[
"Lian",
"Defu",
""
],
[
"Zhou",
"Min",
""
],
[
"Lumiste",
"Martin",
""
],
[
"Ilie",
"Andrei",
""
],
[
"Wu",
"Xinhua",
""
],
[
"Lyu",
"Cheng",
""
],
[
"Lu",
"Qing-Long",
""
],
[
"Mahajan",
"Vishal",
""
],
[
"Lu",
"Yichao",
""
],
[
"Li",
"Jiezhang",
""
],
[
"Li",
"Junjun",
""
],
[
"Gong",
"Yue-Jiao",
""
],
[
"Grötschla",
"Florian",
""
],
[
"Mathys",
"Joël",
""
],
[
"Wei",
"Ye",
""
],
[
"Haitao",
"He",
""
],
[
"Fang",
"Hui",
""
],
[
"Malm",
"Kevin",
""
],
[
"Tang",
"Fei",
""
],
[
"Kopp",
"Michael",
""
],
[
"Kreil",
"David",
""
],
[
"Hochreiter",
"Sepp",
""
]
] |
new_dataset
| 0.998387 |
2303.07790
|
{\O}yvind Meinich-Bache PhD
|
{\O}yvind Meinich-Bache, Kjersti Engan, Ivar Austvoll, Trygve
Eftest{\o}l, Helge Myklebust, Ladislaus Blacy Yarrot, Hussein Kidanto and
Hege Ersdal
|
Object Detection During Newborn Resuscitation Activities
|
8 pages
|
IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 3,
pp. 796-803, March 2020
|
10.1109/JBHI.2019.2924808.
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Birth asphyxia is a major newborn mortality problem in low-resource
countries. International guideline provides treatment recommendations; however,
the importance and effect of the different treatments are not fully explored.
The available data is collected in Tanzania, during newborn resuscitation, for
analysis of the resuscitation activities and the response of the newborn. An
important step in the analysis is to create activity timelines of the episodes,
where activities include ventilation, suction, stimulation etc. Methods: The
available recordings are noisy real-world videos with large variations. We
propose a two-step process in order to detect activities possibly overlapping
in time. The first step is to detect and track the relevant objects, like
bag-mask resuscitator, heart rate sensors etc., and the second step is to use
this information to recognize the resuscitation activities. The topic of this
paper is the first step, and the object detection and tracking are based on
convolutional neural networks followed by post processing. Results: The
performance of the object detection during activities was 96.97 %
(ventilations), 100 % (attaching/removing heart rate sensor) and 75 % (suction)
on a test set of 20 videos. The system also estimates the number of health care
providers present with a performance of 71.16 %. Conclusion: The proposed
object detection and tracking system provides promising results in noisy
newborn resuscitation videos. Significance: This is the first step in a
thorough analysis of newborn resuscitation episodes, which could provide
important insight about the importance and effect of different newborn
resuscitation activities.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 11:04:50 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Meinich-Bache",
"Øyvind",
""
],
[
"Engan",
"Kjersti",
""
],
[
"Austvoll",
"Ivar",
""
],
[
"Eftestøl",
"Trygve",
""
],
[
"Myklebust",
"Helge",
""
],
[
"Yarrot",
"Ladislaus Blacy",
""
],
[
"Kidanto",
"Hussein",
""
],
[
"Ersdal",
"Hege",
""
]
] |
new_dataset
| 0.997342 |
2303.07798
|
Karmesh Yadav
|
Karmesh Yadav, Arjun Majumdar, Ram Ramrakhya, Naoki Yokoyama, Alexei
Baevski, Zsolt Kira, Oleksandr Maksymets, Dhruv Batra
|
OVRL-V2: A simple state-of-art baseline for ImageNav and ObjectNav
|
15 pages, 7 figures, 9 tables
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present a single neural network architecture composed of task-agnostic
components (ViTs, convolutions, and LSTMs) that achieves state-of-the-art results
on both the ImageNav ("go to location in <this picture>") and ObjectNav ("find
a chair") tasks without any task-specific modules like object detection,
segmentation, mapping, or planning modules. Such general-purpose methods offer
advantages of simplicity in design, positive scaling with available compute,
and versatile applicability to multiple tasks. Our work builds upon the recent
success of self-supervised learning (SSL) for pre-training vision transformers
(ViT). However, while the training recipes for convolutional networks are
mature and robust, the recipes for ViTs are contingent and brittle, and in the
case of ViTs for visual navigation, yet to be fully discovered. Specifically,
we find that vanilla ViTs do not outperform ResNets on visual navigation. We
propose the use of a compression layer operating over ViT patch representations
to preserve spatial information along with policy training improvements. These
improvements allow us to demonstrate positive scaling laws for the first time
in visual navigation tasks. Consequently, our model advances state-of-the-art
performance on ImageNav from 54.2% to 82.0% success and performs competitively
against the concurrent state-of-the-art on ObjectNav with a success rate of 64.0% vs.
65.0%. Overall, this work does not present a fundamentally new approach, but
rather recommendations for training a general-purpose architecture that
achieves state-of-the-art performance today and could serve as a strong baseline
for future methods.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 11:15:37 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Yadav",
"Karmesh",
""
],
[
"Majumdar",
"Arjun",
""
],
[
"Ramrakhya",
"Ram",
""
],
[
"Yokoyama",
"Naoki",
""
],
[
"Baevski",
"Alexei",
""
],
[
"Kira",
"Zsolt",
""
],
[
"Maksymets",
"Oleksandr",
""
],
[
"Batra",
"Dhruv",
""
]
] |
new_dataset
| 0.951436 |
2303.07862
|
Rebeca Motta
|
Rebeca C. Motta, K\'athia M. de Oliveira and Guilherme H. Travassos
|
An Evidence-based Roadmap for IoT Software Systems Engineering
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Context: The Internet of Things (IoT) has brought expectations for software
inclusion in everyday objects. However, it has challenges and requires
multidisciplinary technical knowledge involving different areas that should be
combined to enable IoT software systems engineering. Goal: To present an
evidence-based roadmap for IoT development to support developers in specifying,
designing, and implementing IoT systems. Method: An iterative approach based on
experimental studies to acquire evidence to define the IoT Roadmap. Next, the
Systems Engineering Body of Knowledge life cycle was used to organize the
roadmap and set temporal dimensions for IoT software systems engineering.
Results: The studies revealed seven IoT Facets influencing IoT development. The
IoT Roadmap comprises 117 items organized into 29 categories representing
different concerns for each Facet. In addition, an experimental study was
conducted observing a real case of a healthcare IoT project, indicating the
roadmap applicability. Conclusions: The IoT Roadmap can be a feasible
instrument to assist IoT software systems engineering because it can (a)
support researchers and practitioners in understanding and characterizing the
IoT and (b) provide a checklist to identify the applicable recommendations for
engineering IoT software systems.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 12:52:36 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Motta",
"Rebeca C.",
""
],
[
"de Oliveira",
"Káthia M.",
""
],
[
"Travassos",
"Guilherme H.",
""
]
] |
new_dataset
| 0.969158 |
2303.07902
|
Xuenan Xu
|
Xuenan Xu, Zhiling Zhang, Zelin Zhou, Pingyue Zhang, Zeyu Xie, Mengyue
Wu, Kenny Q. Zhu
|
BLAT: Bootstrapping Language-Audio Pre-training based on AudioSet
Tag-guided Synthetic Data
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compared with ample visual-text pre-training research, few works explore
audio-text pre-training, mostly due to the lack of sufficient parallel
audio-text data. Most existing methods incorporate the visual modality as a
pivot for audio-text pre-training, which inevitably induces data noise. In this
paper, we propose BLAT: Bootstrapping Language-Audio pre-training based on
Tag-guided synthetic data. We utilize audio captioning to generate text
directly from audio, without the aid of the visual modality so that potential
noise from modality mismatch is eliminated. Furthermore, we propose caption
generation under the guidance of AudioSet tags, leading to more accurate
captions. With the above two improvements, we curate high-quality, large-scale
parallel audio-text data, based on which we perform audio-text pre-training.
Evaluation on a series of downstream tasks indicates that BLAT achieves SOTA
zero-shot classification performance on most datasets and significant
performance improvement when fine-tuned on downstream tasks, suggesting the
effectiveness of our synthetic data.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 13:42:26 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Xu",
"Xuenan",
""
],
[
"Zhang",
"Zhiling",
""
],
[
"Zhou",
"Zelin",
""
],
[
"Zhang",
"Pingyue",
""
],
[
"Xie",
"Zeyu",
""
],
[
"Wu",
"Mengyue",
""
],
[
"Zhu",
"Kenny Q.",
""
]
] |
new_dataset
| 0.998978 |
2303.07997
|
Jialiang Zhao
|
Jialiang Zhao, Maria Bauza, Edward H. Adelson
|
FingerSLAM: Closed-loop Unknown Object Localization and Reconstruction
from Visuo-tactile Feedback
|
Submitted and accepted to 2023 IEEE International Conference on
Robotics and Automation (ICRA 2023)
| null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we address the problem of using visuo-tactile feedback for
6-DoF localization and 3D reconstruction of unknown in-hand objects. We propose
FingerSLAM, a closed-loop factor graph-based pose estimator that combines local
tactile sensing at finger-tip and global vision sensing from a wrist-mount
camera. FingerSLAM is constructed with two constituent pose estimators: a
multi-pass refined tactile-based pose estimator that captures movements from
detailed local textures, and a single-pass vision-based pose estimator that
predicts from a global view of the object. We also design a loop closure
mechanism that actively matches current vision and tactile images to previously
stored key-frames to reduce accumulated error. FingerSLAM incorporates the two
sensing modalities of tactile and vision, as well as the loop closure mechanism
with a factor graph-based optimization framework. Such a framework produces an
optimized pose estimation solution that is more accurate than the standalone
estimators. The estimated poses are then used to reconstruct the shape of the
unknown object incrementally by stitching the local point clouds recovered from
tactile images. We train our system on real-world data collected with 20
objects. We demonstrate reliable visuo-tactile pose estimation and shape
reconstruction through quantitative and qualitative real-world evaluations on 6
objects that are unseen during training.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 15:48:47 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Zhao",
"Jialiang",
""
],
[
"Bauza",
"Maria",
""
],
[
"Adelson",
"Edward H.",
""
]
] |
new_dataset
| 0.999488 |
2303.08014
|
Zhenguang Cai
|
Zhenguang G. Cai, David A. Haslett, Xufeng Duan, Shuqi Wang, Martin J.
Pickering
|
Does ChatGPT resemble humans in language use?
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) and LLM-driven chatbots such as ChatGPT have
shown remarkable capacities in comprehending and producing language. However,
their internal workings remain a black box in cognitive terms, and it is
unclear whether LLMs and chatbots can develop humanlike characteristics in
language use. Cognitive scientists have devised many experiments that probe,
and have made great progress in explaining, how people process language. We
subjected ChatGPT to 12 of these experiments, pre-registered and with 1,000
runs per experiment. In 10 of them, ChatGPT replicated the human pattern of
language use. It associated unfamiliar words with different meanings depending
on their forms, continued to access recently encountered meanings of ambiguous
words, reused recent sentence structures, reinterpreted implausible sentences
that were likely to have been corrupted by noise, glossed over errors, drew
reasonable inferences, associated causality with different discourse entities
according to verb semantics, and accessed different meanings and retrieved
different words depending on the identity of its interlocutor. However, unlike
humans, it did not prefer using shorter words to convey less informative
content and it did not use context to disambiguate syntactic ambiguities. We
discuss how these convergences and divergences may occur in the transformer
architecture. Overall, these experiments demonstrate that LLM-driven chatbots
like ChatGPT are capable of mimicking human language processing to a great
extent, and that they have the potential to provide insights into how people
learn and use language.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 10:47:59 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Cai",
"Zhenguang G.",
""
],
[
"Haslett",
"David A.",
""
],
[
"Duan",
"Xufeng",
""
],
[
"Wang",
"Shuqi",
""
],
[
"Pickering",
"Martin J.",
""
]
] |
new_dataset
| 0.997493 |
2303.08067
|
Sergi Abadal
|
Robert Guirado, Abbas Rahimi, Geethan Karunaratne, Eduard Alarc\'on,
Abu Sebastian, Sergi Abadal
|
WHYPE: A Scale-Out Architecture with Wireless Over-the-Air Majority for
Scalable In-memory Hyperdimensional Computing
|
Accepted at IEEE Journal on Emerging and Selected Topics in Circuits
and Systems (JETCAS). arXiv admin note: text overlap with arXiv:2205.10889
| null |
10.1109/JETCAS.2023.3243064
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Hyperdimensional computing (HDC) is an emerging computing paradigm that
represents, manipulates, and communicates data using long random vectors known
as hypervectors. Among different hardware platforms capable of executing HDC
algorithms, in-memory computing (IMC) has shown promise as it is very efficient
in performing matrix-vector multiplications, which are common in the HDC
algebra. Although HDC architectures based on IMC already exist, how to scale
them remains a key challenge due to collective communication patterns that
these architectures require and that traditional chip-scale networks were not
designed for. To cope with this difficulty, we propose a scale-out HDC
architecture called WHYPE, which uses wireless in-package communication
technology to interconnect a large number of physically distributed IMC cores
that either encode hypervectors or perform multiple similarity searches in
parallel. In this context, the key enabler of WHYPE is the opportunistic use of
the wireless network as a medium for over-the-air computation. WHYPE implements
an optimized source coding that allows receivers to calculate the bit-wise
majority of multiple hypervectors (a useful operation in HDC) being transmitted
concurrently over the wireless channel. By doing so, we achieve a joint
broadcast distribution and computation with a performance and efficiency
unattainable with wired interconnects, which in turn enables massive
parallelization of the architecture. Through evaluations at the on-chip network
and complete architecture levels, we demonstrate that WHYPE can bundle and
distribute hypervectors faster and more efficiently than a hypothetical wired
implementation, and that it scales well to tens of receivers. We show that the
average error rate of the majority computation is low, such that it has
negligible impact on the accuracy of HDC classification tasks.
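The bit-wise majority mentioned above is a standard hyperdimensional-computing bundling operation; the NumPy sketch below shows the plain digital computation (WHYPE performs it over the air, which is not modeled here). The dimensionality, counts, and random prototypes are illustrative.

```python
# Bit-wise majority bundling of binary hypervectors, followed by similarity search.
import numpy as np

rng = np.random.default_rng(0)
D, K = 10_000, 7   # hypervector dimensionality; number bundled (odd avoids ties)

hypervectors = rng.integers(0, 2, size=(K, D), dtype=np.uint8)

# Majority vote per dimension: 1 where more than half of the K bits are 1.
bundled = (hypervectors.sum(axis=0) > K // 2).astype(np.uint8)

# Similarity search against stored class prototypes (normalized Hamming similarity).
prototypes = rng.integers(0, 2, size=(10, D), dtype=np.uint8)
similarity = 1.0 - np.mean(prototypes != bundled, axis=1)
print("best matching prototype:", similarity.argmax())
```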
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 22:41:27 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Guirado",
"Robert",
""
],
[
"Rahimi",
"Abbas",
""
],
[
"Karunaratne",
"Geethan",
""
],
[
"Alarcón",
"Eduard",
""
],
[
"Sebastian",
"Abu",
""
],
[
"Abadal",
"Sergi",
""
]
] |
new_dataset
| 0.966791 |
2303.08129
|
Anthony Chen
|
Anthony Chen, Kevin Zhang, Renrui Zhang, Zihan Wang, Yuheng Lu,
Yandong Guo, Shanghang Zhang
|
PiMAE: Point Cloud and Image Interactive Masked Autoencoders for 3D
Object Detection
|
Accepted by CVPR2023. Code is available at
https://github.com/BLVLab/PiMAE
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Masked Autoencoders learn strong visual representations and achieve
state-of-the-art results in several independent modalities, yet very few works
have addressed their capabilities in multi-modality settings. In this work, we
focus on point cloud and RGB image data, two modalities that are often
presented together in the real world, and explore their meaningful
interactions. To improve upon the cross-modal synergy in existing works, we
propose PiMAE, a self-supervised pre-training framework that promotes 3D and 2D
interaction through three aspects. Specifically, we first notice the importance
of masking strategies between the two sources and utilize a projection module
to complementarily align the mask and visible tokens of the two modalities.
Then, we utilize a well-crafted two-branch MAE pipeline with a novel shared
decoder to promote cross-modality interaction in the mask tokens. Finally, we
design a unique cross-modal reconstruction module to enhance representation
learning for both modalities. Through extensive experiments performed on
large-scale RGB-D scene understanding benchmarks (SUN RGB-D and ScannetV2), we
discover it is nontrivial to interactively learn point-image features, where we
greatly improve multiple 3D detectors, 2D detectors, and few-shot classifiers
by 2.9%, 6.7%, and 2.4%, respectively. Code is available at
https://github.com/BLVLab/PiMAE.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 17:58:03 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Chen",
"Anthony",
""
],
[
"Zhang",
"Kevin",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Wang",
"Zihan",
""
],
[
"Lu",
"Yuheng",
""
],
[
"Guo",
"Yandong",
""
],
[
"Zhang",
"Shanghang",
""
]
] |
new_dataset
| 0.999085 |
2303.08137
|
Naoto Inoue
|
Naoto Inoue, Kotaro Kikuchi, Edgar Simo-Serra, Mayu Otani, Kota
Yamaguchi
|
LayoutDM: Discrete Diffusion Model for Controllable Layout Generation
|
To be published in CVPR2023, project page:
https://cyberagentailab.github.io/layout-dm/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Controllable layout generation aims at synthesizing plausible arrangement of
element bounding boxes with optional constraints, such as type or position of a
specific element. In this work, we try to solve a broad range of layout
generation tasks in a single model that is based on discrete state-space
diffusion models. Our model, named LayoutDM, naturally handles the structured
layout data in the discrete representation and learns to progressively infer a
noiseless layout from the initial input, where we model the layout corruption
process by modality-wise discrete diffusion. For conditional generation, we
propose to inject layout constraints in the form of masking or logit adjustment
during inference. We show in the experiments that our LayoutDM successfully
generates high-quality layouts and outperforms both task-specific and
task-agnostic baselines on several layout tasks.
|
[
{
"version": "v1",
"created": "Tue, 14 Mar 2023 17:59:47 GMT"
}
] | 2023-03-15T00:00:00 |
[
[
"Inoue",
"Naoto",
""
],
[
"Kikuchi",
"Kotaro",
""
],
[
"Simo-Serra",
"Edgar",
""
],
[
"Otani",
"Mayu",
""
],
[
"Yamaguchi",
"Kota",
""
]
] |
new_dataset
| 0.992548 |
1308.5046
|
Jes\'us Gir\'aldez-Cru
|
C. Ans\'otegui (1), M. L. Bonet (2), J. Gir\'aldez-Cru (3) and J. Levy
(3) ((1) DIEI, Univ. de Lleida, (2) LSI, UPC, (3) IIIA-CSIC)
|
The Fractal Dimension of SAT Formulas
|
20 pages, 11 Postscript figures
|
Automated Reasoning, LNCS 8562, pp 107-121, Springer (2014)
|
10.1007/978-3-319-08587-6_8
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern SAT solvers have experienced a remarkable progress on solving
industrial instances. Most of the techniques have been developed after an
intensive experimental testing process. Recently, there have been some attempts
to analyze the structure of these formulas in terms of complex networks, with
the long-term aim of explaining the success of these SAT solving techniques,
and possibly improving them.
We study the fractal dimension of SAT formulas, and show that most industrial
families of formulas are self-similar, with a small fractal dimension. We also
show that this dimension is not affected by the addition of learnt clauses. We
explore how the dimension of a formula, together with other graph properties
can be used to characterize SAT instances. Finally, we give empirical evidence
that these graph properties can be used in state-of-the-art portfolios.
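For intuition, the sketch below estimates a graph's fractal dimension with a greedy box-covering procedure, the standard technique behind such measurements. Turning a SAT formula into a graph (e.g., a variable-incidence graph) is assumed to have happened already, and the random graph used here is only a stand-in; the exact covering convention in the paper may differ.

```python
# Greedy box-covering estimate of a graph's fractal dimension (generic sketch).
import random
import networkx as nx
import numpy as np

def greedy_box_cover(G, radius):
    """Count the boxes (balls of the given hop radius) needed to cover all nodes."""
    uncovered = set(G.nodes())
    boxes = 0
    while uncovered:
        seed = random.choice(list(uncovered))
        ball = nx.single_source_shortest_path_length(G, seed, cutoff=radius)
        uncovered -= set(ball)   # everything within `radius` hops joins this box
        boxes += 1
    return boxes

G = nx.barabasi_albert_graph(2000, 2, seed=0)   # placeholder graph, not a SAT instance
radii = [1, 2, 3, 4, 5, 6]
counts = [greedy_box_cover(G, r) for r in radii]

# The fractal dimension is approximated by the negative slope of log N_B vs log l_B.
slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
print("estimated fractal dimension:", -slope)
```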
|
[
{
"version": "v1",
"created": "Fri, 23 Aug 2013 04:30:37 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Ansótegui",
"C.",
"",
"DIEI, Univ. de Lleida"
],
[
"Bonet",
"M. L.",
"",
"LSI, UPC"
],
[
"Giráldez-Cru",
"J.",
"",
"IIIA-CSIC"
],
[
"Levy",
"J.",
"",
"IIIA-CSIC"
]
] |
new_dataset
| 0.99703 |
2007.08738
|
Manor Mendel
|
Sariel Har-Peled, Manor Mendel, D\'aniel Ol\'ah
|
Reliable Spanners for Metric Spaces
|
29 pages, Full version after review
|
ACM Trans. Algo. 19(1) 1549-6325, 2023
|
10.1145/3563356
| null |
cs.CG cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A spanner is reliable if it can withstand large, catastrophic failures in the
network. More precisely, any failure of some nodes can only cause small
damage in the remaining graph in terms of the dilation, that is, the spanner
property is maintained for almost all nodes in the residual graph.
Constructions of reliable spanners of near linear size are known in the
low-dimensional Euclidean settings. Here, we present new constructions of
reliable spanners for planar graphs, trees and (general) metric spaces.
|
[
{
"version": "v1",
"created": "Fri, 17 Jul 2020 03:29:20 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Mar 2021 20:37:19 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Jan 2022 19:03:45 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Sep 2022 08:18:12 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Har-Peled",
"Sariel",
""
],
[
"Mendel",
"Manor",
""
],
[
"Oláh",
"Dániel",
""
]
] |
new_dataset
| 0.991473 |
2103.06696
|
Maria Saumell
|
Ankush Acharyya, Maarten L\"offler, Gert G.T. Meijer, Maria Saumell,
Rodrigo I. Silveira, Frank Staals
|
Terrain prickliness: theoretical grounds for high complexity viewsheds
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An important task in terrain analysis is computing \emph{viewsheds}. A
viewshed is the union of all the parts of the terrain that are visible from a
given viewpoint or set of viewpoints. The complexity of a viewshed can vary
significantly depending on the terrain topography and the viewpoint position.
In this work we study a new topographic attribute, the \emph{prickliness}, that
measures the number of local maxima in a terrain from all possible angles of
view. We show that the prickliness effectively captures the potential of 2.5D
TIN terrains to have high complexity viewsheds. We present optimal (for 1.5D
terrains) and near-optimal (for 2.5D terrains) algorithms to compute it for TIN
terrains, and efficient approximate algorithms for raster DEMs. We validate the
usefulness of the prickliness attribute with experiments in a large set of real
terrains.
|
[
{
"version": "v1",
"created": "Thu, 11 Mar 2021 14:35:10 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Mar 2023 21:24:13 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Acharyya",
"Ankush",
""
],
[
"Löffler",
"Maarten",
""
],
[
"Meijer",
"Gert G. T.",
""
],
[
"Saumell",
"Maria",
""
],
[
"Silveira",
"Rodrigo I.",
""
],
[
"Staals",
"Frank",
""
]
] |
new_dataset
| 0.988076 |
2111.01906
|
Di Fu
|
Di Fu, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, Erik
Strahl, Xun Liu, Stefan Wermter
|
A trained humanoid robot can perform human-like crossmodal social
attention and conflict resolution
|
accepted for publication in the International Journal of Social
Robotics
| null | null | null |
cs.RO cs.AI cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
To enhance human-robot social interaction, it is essential for robots to
process multiple social cues in a complex real-world environment. However,
incongruency of input information across modalities is inevitable and could be
challenging for robots to process. To tackle this challenge, our study adopted
the neurorobotic paradigm of crossmodal conflict resolution to make a robot
express human-like social attention. A behavioural experiment was conducted on
37 participants for the human study. We designed a round-table meeting scenario
with three animated avatars to improve ecological validity. Each avatar wore a
medical mask to obscure the facial cues of the nose, mouth, and jaw. The
central avatar shifted its eye gaze while the peripheral avatars generated
sound. Gaze direction and sound locations were either spatially congruent or
incongruent. We observed that the central avatar's dynamic gaze could trigger
crossmodal social attention responses. In particular, human performances are
better under the congruent audio-visual condition than the incongruent
condition. Our saliency prediction model was trained to detect social cues,
predict audio-visual saliency, and attend selectively for the robot study.
After mounting the trained model on the iCub, the robot was exposed to
laboratory conditions similar to the human experiment. While the human
performances were overall superior, our trained model demonstrated that it
could replicate attention responses similar to humans.
|
[
{
"version": "v1",
"created": "Tue, 2 Nov 2021 21:49:52 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2022 09:20:00 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Feb 2023 09:24:08 GMT"
},
{
"version": "v4",
"created": "Tue, 28 Feb 2023 05:40:18 GMT"
},
{
"version": "v5",
"created": "Mon, 13 Mar 2023 00:07:10 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Fu",
"Di",
""
],
[
"Abawi",
"Fares",
""
],
[
"Carneiro",
"Hugo",
""
],
[
"Kerzel",
"Matthias",
""
],
[
"Chen",
"Ziwei",
""
],
[
"Strahl",
"Erik",
""
],
[
"Liu",
"Xun",
""
],
[
"Wermter",
"Stefan",
""
]
] |
new_dataset
| 0.989696 |
2203.03411
|
Eduardo Castell\'o Ferrer
|
Eduardo Castell\'o Ferrer, Ivan Berman, Aleksandr Kapitonov, Vadim
Manaenko, Makar Chernyaev, Pavel Tarasov
|
Gaka-chu: a self-employed autonomous robot artist
|
Accepted for publication in 2023 IEEE International Conference on
Robotics and Automation (ICRA)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The physical autonomy of robots is well understood both theoretically and
practically. By contrast, there is almost no research exploring their potential
economic autonomy. In this paper, we present the first economically autonomous
robot -- a robot able to produce marketable goods while having full control
over the use of its generated income. Gaka-chu ("painter" in Japanese) is a
6-axis robot arm that creates paintings of Japanese characters from an
autoselected keyword. By using a blockchain-based smart contract, Gaka-chu can
autonomously list a painting it made for sale in an online auction. In this
transaction, the robot interacts with the human bidders as a peer, not as a
tool. Using the blockchain-based smart contract, Gaka-chu can then use its
income from selling paintings to replenish its resources by autonomously
ordering materials from an online art shop. We built the Gaka-chu prototype
with an Ethereum-based smart contract and ran a 6-month long experiment, during
which the robot created and sold four paintings, simultaneously using its
income to purchase supplies and repay initial investors. In this work, we
present the results of the experiments conducted and discuss the implications
of economically autonomous robots.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 14:02:37 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Oct 2022 11:20:27 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Mar 2023 13:28:55 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Ferrer",
"Eduardo Castelló",
""
],
[
"Berman",
"Ivan",
""
],
[
"Kapitonov",
"Aleksandr",
""
],
[
"Manaenko",
"Vadim",
""
],
[
"Chernyaev",
"Makar",
""
],
[
"Tarasov",
"Pavel",
""
]
] |
new_dataset
| 0.999209 |
2203.05194
|
Shuxiao Chen
|
Shuxiao Chen, Bike Zhang, Mark W. Mueller, Akshara Rai and Koushil
Sreenath
|
Learning Torque Control for Quadrupedal Locomotion
| null | null | null | null |
cs.RO cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) has become a promising approach to developing
controllers for quadrupedal robots. Conventionally, an RL design for locomotion
follows a position-based paradigm, wherein an RL policy outputs target joint
positions at a low frequency that are then tracked by a high-frequency
proportional-derivative (PD) controller to produce joint torques. In contrast,
for the model-based control of quadrupedal locomotion, there has been a
paradigm shift from position-based control to torque-based control. In light of
the recent advances in model-based control, we explore an alternative to the
position-based RL paradigm, by introducing a torque-based RL framework, where
an RL policy directly predicts joint torques at a high frequency, thus
circumventing the use of a PD controller. The proposed learning torque control
framework is validated with extensive experiments, in which a quadruped is
capable of traversing various terrain and resisting external disturbances while
following user-specified commands. Furthermore, compared to learning position
control, learning torque control demonstrates the potential to achieve a higher
reward and is more robust to significant external disturbances. To our
knowledge, this is the first sim-to-real attempt for end-to-end learning torque
control of quadrupedal locomotion.
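The contrast between the two paradigms can be summarized in a few lines: in the position-based pipeline, a high-rate PD loop converts policy-provided joint targets into torques, whereas in the torque-based pipeline the policy emits torques directly. The gains and function signatures below are illustrative assumptions, not the paper's controller.

```python
# Minimal sketch of position-based vs torque-based action interfaces.
KP, KD = 40.0, 0.5                      # assumed PD gains

def pd_torque(q_des, q, qd):
    """High-frequency PD tracking of target joint positions."""
    return KP * (q_des - q) - KD * qd

def position_paradigm(policy, obs, q, qd):
    q_des = policy(obs)                 # low-frequency action: target joint angles
    return pd_torque(q_des, q, qd)

def torque_paradigm(policy, obs, q, qd):
    return policy(obs)                  # high-frequency action: joint torques directly
```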
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 07:09:05 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 03:15:48 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Chen",
"Shuxiao",
""
],
[
"Zhang",
"Bike",
""
],
[
"Mueller",
"Mark W.",
""
],
[
"Rai",
"Akshara",
""
],
[
"Sreenath",
"Koushil",
""
]
] |
new_dataset
| 0.999143 |
2205.00258
|
Chengyu Wang
|
Chengyu Wang, Minghui Qiu, Chen Shi, Taolin Zhang, Tingting Liu, Lei
Li, Jianing Wang, Ming Wang, Jun Huang, Wei Lin
|
EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language
Processing
|
8 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The success of Pre-Trained Models (PTMs) has reshaped the development of
Natural Language Processing (NLP). Yet, it is not easy for industrial
practitioners to obtain high-performing models and deploy them online. To
bridge this gap, EasyNLP is designed to make it easy to build NLP applications,
which supports a comprehensive suite of NLP algorithms. It further features
knowledge-enhanced pre-training, knowledge distillation and few-shot learning
functionalities for large-scale PTMs, and provides a unified framework of model
training, inference and deployment for real-world applications. Currently,
EasyNLP has powered over ten business units within Alibaba Group and is
seamlessly integrated to the Platform of AI (PAI) products on Alibaba Cloud.
The source code of our EasyNLP toolkit is released at GitHub
(https://github.com/alibaba/EasyNLP).
|
[
{
"version": "v1",
"created": "Sat, 30 Apr 2022 13:03:53 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 12:40:23 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Wang",
"Chengyu",
""
],
[
"Qiu",
"Minghui",
""
],
[
"Shi",
"Chen",
""
],
[
"Zhang",
"Taolin",
""
],
[
"Liu",
"Tingting",
""
],
[
"Li",
"Lei",
""
],
[
"Wang",
"Jianing",
""
],
[
"Wang",
"Ming",
""
],
[
"Huang",
"Jun",
""
],
[
"Lin",
"Wei",
""
]
] |
new_dataset
| 0.98664 |
2207.02303
|
Steven Jecmen
|
Steven Jecmen, Minji Yoon, Vincent Conitzer, Nihar B. Shah, Fei Fang
|
A Dataset on Malicious Paper Bidding in Peer Review
| null | null | null | null |
cs.CR cs.AI cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
In conference peer review, reviewers are often asked to provide "bids" on
each submitted paper that express their interest in reviewing that paper. A
paper assignment algorithm then uses these bids (along with other data) to
compute a high-quality assignment of reviewers to papers. However, this process
has been exploited by malicious reviewers who strategically bid in order to
unethically manipulate the paper assignment, crucially undermining the peer
review process. For example, these reviewers may aim to get assigned to a
friend's paper as part of a quid-pro-quo deal. A critical impediment towards
creating and evaluating methods to mitigate this issue is the lack of any
publicly-available data on malicious paper bidding. In this work, we collect
and publicly release a novel dataset to fill this gap, collected from a mock
conference activity where participants were instructed to bid either honestly
or maliciously. We further provide a descriptive analysis of the bidding
behavior, including our categorization of different strategies employed by
participants. Finally, we evaluate the ability of each strategy to manipulate
the assignment, and also evaluate the performance of some simple algorithms
meant to detect malicious bidding. The performance of these detection
algorithms can be taken as a baseline for future research on detecting
malicious bidding.
|
[
{
"version": "v1",
"created": "Fri, 24 Jun 2022 20:23:33 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 20:38:08 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Jecmen",
"Steven",
""
],
[
"Yoon",
"Minji",
""
],
[
"Conitzer",
"Vincent",
""
],
[
"Shah",
"Nihar B.",
""
],
[
"Fang",
"Fei",
""
]
] |
new_dataset
| 0.978657 |
2207.05621
|
Sixiang Chen
|
Sixiang Chen, Tian Ye, Yun Liu, Taodong Liao, Jingxia Jiang, Erkang
Chen, Peng Chen
|
MSP-Former: Multi-Scale Projection Transformer for Single Image
Desnowing
|
Accepted to ICASSP'2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Snow removal is challenging because of the complex degradations involved. Targeted
treatment of multi-scale snow degradations is therefore critical for a network
to learn effective snow removal. To handle diverse scenes, we propose a
multi-scale projection transformer (MSP-Former), which understands and covers a
variety of snow degradation features in a multi-path manner and integrates
comprehensive scene context information for clean reconstruction via a
self-attention operation. For the local details of various snow degradations, a
local capture module is introduced in parallel to assist in rebuilding a clean
image. This design achieves state-of-the-art (SOTA) performance on three
desnowing benchmark datasets while keeping the parameter count and computational
complexity low, providing a guarantee of practicality.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 15:44:07 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2022 04:04:13 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Mar 2023 15:47:53 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Chen",
"Sixiang",
""
],
[
"Ye",
"Tian",
""
],
[
"Liu",
"Yun",
""
],
[
"Liao",
"Taodong",
""
],
[
"Jiang",
"Jingxia",
""
],
[
"Chen",
"Erkang",
""
],
[
"Chen",
"Peng",
""
]
] |
new_dataset
| 0.976471 |
2209.01851
|
Dibyayan Chakraborty
|
Dibyayan Chakraborty, Kshitij Gajjar, Irena Rusu
|
Recognizing Geometric Intersection Graphs Stabbed by a Line
|
18 pages, 11 Figures
| null | null | null |
cs.DM cs.CC cs.CG cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we determine the computational complexity of recognizing two
graph classes, \emph{grounded L}-graphs and \emph{stabbable grid intersection}
graphs. An L-shape is made by joining the bottom end-point of a vertical
($\vert$) segment to the left end-point of a horizontal ($-$) segment. The top
end-point of the vertical segment is known as the {\em anchor} of the L-shape.
Grounded L-graphs are the intersection graphs of L-shapes such that all the
L-shapes' anchors lie on the same horizontal line. We show that recognizing
grounded L-graphs is NP-complete. This answers an open question asked by
Jel{\'\i}nek \& T{\"o}pfer (Electron. J. Comb., 2019).
Grid intersection graphs are the intersection graphs of axis-parallel line
segments in which two vertical (similarly, two horizontal) segments cannot
intersect. We say that a (not necessarily axis-parallel) straight line $\ell$
stabs a segment $s$, if $s$ intersects $\ell$. A graph $G$ is a stabbable grid
intersection graph ($StabGIG$) if there is a grid intersection representation
of $G$ in which the same line stabs all its segments. We show that recognizing
$StabGIG$ graphs is $NP$-complete, even on a restricted class of graphs. This
answers an open question asked by Chaplick et al. (Order, 2018).
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 09:17:31 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 07:11:51 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Chakraborty",
"Dibyayan",
""
],
[
"Gajjar",
"Kshitij",
""
],
[
"Rusu",
"Irena",
""
]
] |
new_dataset
| 0.995629 |
2210.01343
|
Brian DuSell
|
Brian DuSell, David Chiang
|
The Surprising Computational Power of Nondeterministic Stack RNNs
|
21 pages, 8 figures. Published at ICLR 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional recurrent neural networks (RNNs) have a fixed, finite number of
memory cells. In theory (assuming bounded range and precision), this limits
their formal language recognition power to regular languages, and in practice,
RNNs have been shown to be unable to learn many context-free languages (CFLs).
In order to expand the class of languages RNNs recognize, prior work has
augmented RNNs with a nondeterministic stack data structure, putting them on
par with pushdown automata and increasing their language recognition power to
CFLs. Nondeterminism is needed for recognizing all CFLs (not just deterministic
CFLs), but in this paper, we show that nondeterminism and the neural controller
interact to produce two more unexpected abilities. First, the nondeterministic
stack RNN can recognize not only CFLs, but also many non-context-free
languages. Second, it can recognize languages with much larger alphabet sizes
than one might expect given the size of its stack alphabet. Finally, to
increase the information capacity in the stack and allow it to solve more
complicated tasks with large alphabet sizes, we propose a new version of the
nondeterministic stack that simulates stacks of vectors rather than discrete
symbols. We demonstrate perplexity improvements with this new model on the Penn
Treebank language modeling benchmark.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 03:18:19 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 00:19:43 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Mar 2023 00:11:03 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"DuSell",
"Brian",
""
],
[
"Chiang",
"David",
""
]
] |
new_dataset
| 0.970606 |
2210.06006
|
Ruihao Wang
|
Ruihao Wang, Jian Qin, Kaiying Li, Yaochen Li, Dong Cao, Jintao Xu
|
BEV-LaneDet: a Simple and Effective 3D Lane Detection Baseline
|
Accepted by CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D lane detection, which plays a crucial role in vehicle routing, has recently
been a rapidly developing topic in autonomous driving. Previous works struggle
with practicality due to their complicated spatial transformations and
inflexible representations of 3D lanes. Faced with these issues, our work
proposes an efficient and robust monocular 3D lane detection method called BEV-LaneDet
with three main contributions. First, we introduce the Virtual Camera that
unifies the in/extrinsic parameters of cameras mounted on different vehicles to
guarantee the consistency of the spatial relationship among cameras. It can
effectively promote the learning procedure due to the unified visual space. We
secondly propose a simple but efficient 3D lane representation called
Key-Points Representation. This module is more suitable to represent the
complicated and diverse 3D lane structures. At last, we present a light-weight
and chip-friendly spatial transformation module named Spatial Transformation
Pyramid to transform multiscale front-view features into BEV features.
Experimental results demonstrate that our work outperforms the state-of-the-art
approaches in terms of F-Score, being 10.6% higher on the OpenLane dataset and
5.9% higher on the Apollo 3D synthetic dataset, with a speed of 185 FPS. The
source code will be released at https://github.com/gigo-team/bev_lane_det.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 08:22:21 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Oct 2022 09:35:47 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Mar 2023 10:25:08 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Wang",
"Ruihao",
""
],
[
"Qin",
"Jian",
""
],
[
"Li",
"Kaiying",
""
],
[
"Li",
"Yaochen",
""
],
[
"Cao",
"Dong",
""
],
[
"Xu",
"Jintao",
""
]
] |
new_dataset
| 0.998628 |
2210.07182
|
Makoto Takamoto
|
Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay,
Francesco Alesiani, Dirk Pfl\"uger, Mathias Niepert
|
PDEBENCH: An Extensive Benchmark for Scientific Machine Learning
|
16 pages (main body) + 34 pages (supplemental material), accepted for
publication in NeurIPS 2022 Track Datasets and Benchmarks
| null | null | null |
cs.LG cs.CV physics.flu-dyn physics.geo-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning-based modeling of physical systems has experienced increased
interest in recent years. Despite some impressive progress, there is still a
lack of benchmarks for Scientific ML that are easy to use but still challenging
and representative of a wide range of problems. We introduce PDEBench, a
benchmark suite of time-dependent simulation tasks based on Partial
Differential Equations (PDEs). PDEBench comprises both code and data to
benchmark the performance of novel machine learning models against both
classical numerical simulations and machine learning baselines. Our proposed
set of benchmark problems contributes the following unique features: (1) A much
wider range of PDEs compared to existing benchmarks, ranging from relatively
common examples to more realistic and difficult problems; (2) much larger
ready-to-use datasets compared to prior work, comprising multiple simulation
runs across a larger number of initial and boundary conditions and PDE
parameters; (3) more extensible source codes with user-friendly APIs for data
generation and baseline results with popular machine learning models (FNO,
U-Net, PINN, Gradient-Based Inverse Method). PDEBench allows researchers to
extend the benchmark freely for their own purposes using a standardized API and
to compare the performance of new models to existing baseline methods. We also
propose new evaluation metrics with the aim to provide a more holistic
understanding of learning methods in the context of Scientific ML. With those
metrics we identify tasks which are challenging for recent ML methods and
propose these tasks as future challenges for the community. The code is
available at https://github.com/pdebench/PDEBench.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 17:03:36 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Oct 2022 08:35:06 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Dec 2022 16:13:17 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Feb 2023 12:45:47 GMT"
},
{
"version": "v5",
"created": "Fri, 3 Mar 2023 16:09:32 GMT"
},
{
"version": "v6",
"created": "Mon, 13 Mar 2023 13:27:02 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Takamoto",
"Makoto",
""
],
[
"Praditia",
"Timothy",
""
],
[
"Leiteritz",
"Raphael",
""
],
[
"MacKinlay",
"Dan",
""
],
[
"Alesiani",
"Francesco",
""
],
[
"Pflüger",
"Dirk",
""
],
[
"Niepert",
"Mathias",
""
]
] |
new_dataset
| 0.998974 |
2210.11006
|
Qin Liu
|
Qin Liu, Zhenlin Xu, Gedas Bertasius, Marc Niethammer
|
SimpleClick: Interactive Image Segmentation with Simple Vision
Transformers
|
Tech report. Update 03/11/2023: Add results on a tiny model and
append supplementary materials
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Click-based interactive image segmentation aims at extracting objects with a
limited number of user clicks. A hierarchical backbone is the de-facto architecture for
current methods. Recently, the plain, non-hierarchical Vision Transformer (ViT)
has emerged as a competitive backbone for dense prediction tasks. This design
allows the original ViT to be a foundation model that can be finetuned for
downstream tasks without redesigning a hierarchical backbone for pretraining.
Although this design is simple and has been proven effective, it has not yet
been explored for interactive image segmentation. To fill this gap, we propose
SimpleClick, the first interactive segmentation method that leverages a plain
backbone. Based on the plain backbone, we introduce a symmetric patch embedding
layer that encodes clicks into the backbone with minor modifications to the
backbone itself. With the plain backbone pretrained as a masked autoencoder
(MAE), SimpleClick achieves state-of-the-art performance. Remarkably, our
method achieves 4.15 NoC@90 on SBD, improving 21.8% over the previous best
result. Extensive evaluation on medical images demonstrates the
generalizability of our method. We further develop an extremely tiny ViT
backbone for SimpleClick and provide a detailed computational analysis,
highlighting its suitability as a practical annotation tool.
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 04:20:48 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 19:08:59 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Mar 2023 19:36:34 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Liu",
"Qin",
""
],
[
"Xu",
"Zhenlin",
""
],
[
"Bertasius",
"Gedas",
""
],
[
"Niethammer",
"Marc",
""
]
] |
new_dataset
| 0.998944 |
2210.11940
|
Edward Vendrow
|
Edward Vendrow, Duy Tho Le, Jianfei Cai and Hamid Rezatofighi
|
JRDB-Pose: A Large-scale Dataset for Multi-Person Pose Estimation and
Tracking
|
13 pages, 11 figures
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous robotic systems operating in human environments must understand
their surroundings to make accurate and safe decisions. In crowded human scenes
with close-up human-robot interaction and robot navigation, a deep
understanding requires reasoning about human motion and body dynamics over time
with human body pose estimation and tracking. However, existing datasets either
do not provide pose annotations or include scene types unrelated to robotic
applications. Many datasets also lack the diversity of poses and occlusions
found in crowded human scenes. To address this limitation we introduce
JRDB-Pose, a large-scale dataset and benchmark for multi-person pose estimation
and tracking using videos captured from a social navigation robot. The dataset
contains challenging scenes with crowded indoor and outdoor locations and a
diverse range of scales and occlusion types. JRDB-Pose provides human pose
annotations with per-keypoint occlusion labels and track IDs consistent across
the scene. A public evaluation server is made available for fair evaluation on
a held-out test set. JRDB-Pose is available at https://jrdb.erc.monash.edu/ .
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 07:14:37 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Mar 2023 00:07:12 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Vendrow",
"Edward",
""
],
[
"Le",
"Duy Tho",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Rezatofighi",
"Hamid",
""
]
] |
new_dataset
| 0.999885 |
2210.14458
|
Zahra Esmaeilbeig
|
Zahra Esmaeilbeig, Arian Eamaz, Kumar Vijay Mishra, and Mojtaba
Soltanalian
|
Joint Waveform and Passive Beamformer Design in Multi-IRS-Aided Radar
| null | null | null | null |
cs.IT eess.SP math.IT math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Intelligent reflecting surface (IRS) technology has recently attracted
significant interest in non-line-of-sight radar remote sensing. Prior works
have largely focused on designing single IRS beamformers for this problem. For
the first time in the literature, this paper considers multi-IRS-aided
multiple-input multiple-output (MIMO) radar and jointly designs the transmit
unimodular waveforms and optimal IRS beamformers. To this end, we derive the
Cramer-Rao lower bound (CRLB) of target direction-of-arrival (DoA) as a
performance metric. Unimodular transmit sequences are the preferred waveforms
from a hardware perspective. We show that, through suitable transformations,
the joint design problem can be reformulated as two unimodular quadratic
programs (UQP). To deal with the NP-hard nature of both UQPs, we propose
unimodular waveform and beamforming design for multi-IRS radar (UBeR) algorithm
that takes advantage of the low-cost power method-like iterations. Numerical
experiments illustrate that the MIMO waveforms and phase shifts obtained from
our UBeR algorithm are effective in improving the CRLB of DoA estimation.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 04:10:47 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Oct 2022 20:02:53 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Mar 2023 05:11:29 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Esmaeilbeig",
"Zahra",
""
],
[
"Eamaz",
"Arian",
""
],
[
"Mishra",
"Kumar Vijay",
""
],
[
"Soltanalian",
"Mojtaba",
""
]
] |
new_dataset
| 0.989788 |
2210.16943
|
Xu Cao
|
Xu Cao, Wenqian Ye, Elena Sizikova, Xue Bai, Megan Coffee, Hongwu
Zeng, Jianguo Cao
|
ViTASD: Robust Vision Transformer Baselines for Autism Spectrum Disorder
Facial Diagnosis
|
5 pages, 3 figures, Accepted by the ICASSP 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autism spectrum disorder (ASD) is a lifelong neurodevelopmental disorder with
very high prevalence around the world. Research progress in the field of ASD
facial analysis in pediatric patients has been hindered due to a lack of
well-established baselines. In this paper, we propose the use of the Vision
Transformer (ViT) for the computational analysis of pediatric ASD. The
presented model, known as ViTASD, distills knowledge from large facial
expression datasets and offers model structure transferability. Specifically,
ViTASD employs a vanilla ViT to extract features from patients' face images and
adopts a lightweight decoder with a Gaussian Process layer to enhance the
robustness for ASD analysis. Extensive experiments conducted on standard ASD
facial analysis benchmarks show that our method outperforms all of the
representative approaches in ASD facial analysis, while the ViTASD-L achieves a
new state-of-the-art. Our code and pretrained models are available at
https://github.com/IrohXu/ViTASD.
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 20:38:56 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Mar 2023 05:22:12 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Cao",
"Xu",
""
],
[
"Ye",
"Wenqian",
""
],
[
"Sizikova",
"Elena",
""
],
[
"Bai",
"Xue",
""
],
[
"Coffee",
"Megan",
""
],
[
"Zeng",
"Hongwu",
""
],
[
"Cao",
"Jianguo",
""
]
] |
new_dataset
| 0.986211 |
2211.05375
|
Ahad Rauf
|
Ahad M. Rauf, Jack S. Bernardo, and Sean Follmer
|
Electroadhesive Auxetics as Programmable Layer Jamming Skins for
Formable Crust Shape Displays
|
Accepted to IEEE International Conference on Robotics and Automation
(ICRA 2023)
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Shape displays are a class of haptic devices that enable whole-hand haptic
exploration of 3D surfaces. However, their scalability is limited by the
mechanical complexity and high cost of traditional actuator arrays. In this
paper, we propose using electroadhesive auxetic skins as a strain-limiting
layer to create programmable shape change in a continuous ("formable crust")
shape display. Auxetic skins are manufactured as flexible printed circuit
boards with dielectric-laminated electrodes on each auxetic unit cell (AUC),
using monolithic fabrication to lower cost and assembly time. By layering
multiple sheets and applying a voltage between electrodes on subsequent layers,
electroadhesion locks individual AUCs, achieving a maximum in-plane stiffness
variation of 7.6x with a power consumption of 50 uW/AUC. We first characterize
an individual AUC and compare results to a kinematic model. We then validate
the ability of a 5x5 AUC array to actively modify its own axial and transverse
stiffness. Finally, we demonstrate this array in a continuous shape display as
a strain-limiting skin to programmatically modulate the shape output of an
inflatable LDPE pouch. Integrating electroadhesion with auxetics enables new
capabilities for scalable, low-profile, and low-power control of flexible
robotic systems.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 06:57:29 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Mar 2023 05:26:20 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Rauf",
"Ahad M.",
""
],
[
"Bernardo",
"Jack S.",
""
],
[
"Follmer",
"Sean",
""
]
] |
new_dataset
| 0.995107 |
2211.07122
|
Chanda Grover Kamra
|
Chanda Grover, Indra Deep Mastan, Debayan Gupta
|
ContextCLIP: Contextual Alignment of Image-Text pairs on CLIP visual
representations
|
11 Pages, 7 Figures, 2 Tables, ICVGIP
|
ICVGIP, 2022
|
10.1145/3571600.3571653
| null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
State-of-the-art empirical work has shown that visual representations learned
by deep neural networks are robust in nature and capable of performing
classification tasks on diverse datasets. For example, CLIP demonstrated
zero-shot transfer performance on multiple datasets for classification tasks in
a joint embedding space of image and text pairs. However, it showed negative
transfer performance on standard datasets, e.g., BirdsNAP, RESISC45, and MNIST.
In this paper, we propose ContextCLIP, a contextual and contrastive learning
framework for the contextual alignment of image-text pairs by learning robust
visual representations on Conceptual Captions dataset. Our framework was
observed to improve the image-text alignment by aligning text and image
representations contextually in the joint embedding space. ContextCLIP showed
good qualitative performance for text-to-image retrieval tasks and enhanced
classification accuracy. We evaluated our model quantitatively with zero-shot
transfer and fine-tuning experiments on CIFAR-10, CIFAR-100, Birdsnap,
RESISC45, and MNIST datasets for the classification task.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 05:17:51 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Grover",
"Chanda",
""
],
[
"Mastan",
"Indra Deep",
""
],
[
"Gupta",
"Debayan",
""
]
] |
new_dataset
| 0.999752 |
2211.07436
|
Daniel S. Katz
|
William F. Godoy, Ritu Arora, Keith Beattie, David E. Bernholdt, Sarah
E. Bratt, Daniel S. Katz, Ignacio Laguna, Amiya K. Maji, Addi Malviya Thakur,
Rafael M. Mudafort, Nitin Sukhija, Damian Rouson, Cindy Rubio-Gonz\'alez,
Karan Vahi
|
Giving RSEs a Larger Stage through the Better Scientific Software
Fellowship
|
submitted to Computing in Science & Engineering (CiSE), Special Issue
on the Future of Research Software Engineers in the US
| null |
10.1109/MCSE.2023.3253847
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The Better Scientific Software Fellowship (BSSwF) was launched in 2018 to
foster and promote practices, processes, and tools to improve developer
productivity and software sustainability of scientific codes. BSSwF's vision is
to grow the community with practitioners, leaders, mentors, and consultants to
increase the visibility of scientific software production and sustainability.
Over the last five years, many fellowship recipients and honorable mentions
have identified as research software engineers (RSEs). This paper provides case
studies from several of the program's participants to illustrate some of the
diverse ways BSSwF has benefited both the RSE and scientific communities. In an
environment where the contributions of RSEs are too often undervalued, we
believe that programs such as BSSwF can be a valuable means to recognize and
encourage community members to step outside of their regular commitments and
expand on their work, collaborations and ideas for a larger audience.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 15:11:47 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 03:42:52 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Godoy",
"William F.",
""
],
[
"Arora",
"Ritu",
""
],
[
"Beattie",
"Keith",
""
],
[
"Bernholdt",
"David E.",
""
],
[
"Bratt",
"Sarah E.",
""
],
[
"Katz",
"Daniel S.",
""
],
[
"Laguna",
"Ignacio",
""
],
[
"Maji",
"Amiya K.",
""
],
[
"Thakur",
"Addi Malviya",
""
],
[
"Mudafort",
"Rafael M.",
""
],
[
"Sukhija",
"Nitin",
""
],
[
"Rouson",
"Damian",
""
],
[
"Rubio-González",
"Cindy",
""
],
[
"Vahi",
"Karan",
""
]
] |
new_dataset
| 0.989802 |
2211.08293
|
Dario Barberis
|
Dario Barberis (1), Igor Aleksandrov (2), Evgeny Alexandrov (2),
Zbigniew Baranowski (3), Luca Canali (3), Elizaveta Cherepanova (4), Gancho
Dimitrov (3), Andrea Favareto (1), Alvaro Fernandez Casani (5), Elizabeth J.
Gallas (6), Carlos Garcia Montoro (5), Santiago Gonzalez de la Hoz (5),
Julius Hrivnac (7), Alexander Iakovlev (2), Andrei Kazymov (2), Mikhail
Mineev (2), Fedor Prokoshin (2), Grigori Rybkin (7), Jose Salt (5), Javier
Sanchez (5), Roman Sorokoletov (8), Rainer Toebbicke (3), Petya Vasileva (3),
Miguel Villaplana Perez (5), Ruijun Yuan (7) ( (1) University and INFN
Genova, Genoa (Italy), (2) Joint Institute for Nuclear Research, Dubna
(Russia), (3) CERN, Geneva (Switzerland), (4) NIKHEF, Amsterdam
(Netherlands), (5) IFIC, Valencia (Spain), (6) University of Oxford, Oxford
(United Kingdom), (7) IJCLab, Orsay (France), (8) University of Texas,
Arlington (USA))
|
The ATLAS EventIndex: a BigData catalogue for all ATLAS experiment
events
|
21 pages
| null |
10.1007/s41781-023-00096-8
| null |
cs.DC hep-ex
|
http://creativecommons.org/licenses/by/4.0/
|
The ATLAS EventIndex system comprises the catalogue of all events collected,
processed or generated by the ATLAS experiment at the CERN LHC accelerator, and
all associated software tools to collect, store and query this information.
ATLAS records several billion particle interactions every year of operation,
processes them for analysis and generates even larger simulated data samples; a
global catalogue is needed to keep track of the location of each event record
and be able to search and retrieve specific events for in-depth investigations.
Each EventIndex record includes summary information on the event itself and the
pointers to the files containing the full event. Most components of the
EventIndex system are implemented using BigData open-source tools. This paper
describes the architectural choices and their evolution in time, as well as the
past, current and foreseen future implementations of all EventIndex components.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 16:45:49 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Mar 2023 16:37:34 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Barberis",
"Dario",
""
],
[
"Aleksandrov",
"Igor",
""
],
[
"Alexandrov",
"Evgeny",
""
],
[
"Baranowski",
"Zbigniew",
""
],
[
"Canali",
"Luca",
""
],
[
"Cherepanova",
"Elizaveta",
""
],
[
"Dimitrov",
"Gancho",
""
],
[
"Favareto",
"Andrea",
""
],
[
"Casani",
"Alvaro Fernandez",
""
],
[
"Gallas",
"Elizabeth J.",
""
],
[
"Montoro",
"Carlos Garcia",
""
],
[
"de la Hoz",
"Santiago Gonzalez",
""
],
[
"Hrivnac",
"Julius",
""
],
[
"Iakovlev",
"Alexander",
""
],
[
"Kazymov",
"Andrei",
""
],
[
"Mineev",
"Mikhail",
""
],
[
"Prokoshin",
"Fedor",
""
],
[
"Rybkin",
"Grigori",
""
],
[
"Salt",
"Jose",
""
],
[
"Sanchez",
"Javier",
""
],
[
"Sorokoletov",
"Roman",
""
],
[
"Toebbicke",
"Rainer",
""
],
[
"Vasileva",
"Petya",
""
],
[
"Perez",
"Miguel Villaplana",
""
],
[
"Yuan",
"Ruijun",
""
]
] |
new_dataset
| 0.998617 |
2211.12194
|
Xiaodong Cun
|
Wenxuan Zhang, Xiaodong Cun, Xuan Wang, Yong Zhang, Xi Shen, Yu Guo,
Ying Shan, Fei Wang
|
SadTalker: Learning Realistic 3D Motion Coefficients for Stylized
Audio-Driven Single Image Talking Face Animation
|
Accepted by CVPR 2023, Project page: https://sadtalker.github.io,
Code: https://github.com/Winfredy/SadTalker
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Generating talking head videos from a face image and a piece of speech audio
still presents many challenges, i.e., unnatural head movement, distorted
expression, and identity modification. We argue that these issues arise mainly
from learning from coupled 2D motion fields. On the other hand, explicitly
using 3D information also suffers from problems of stiff expression and
incoherent video. We present SadTalker, which generates 3D motion coefficients
(head pose, expression) of the 3DMM from audio and implicitly modulates a novel
3D-aware face render for talking head generation. To learn the realistic motion
coefficients, we explicitly model the connections between audio and different
types of motion coefficients individually. Precisely, we present ExpNet to
learn the accurate facial expression from audio by distilling both coefficients
and 3D-rendered faces. As for the head pose, we design PoseVAE via a
conditional VAE to synthesize head motion in different styles. Finally, the
generated 3D motion coefficients are mapped to the unsupervised 3D keypoints
space of the proposed face render to synthesize the final video. We conducted
extensive experiments to demonstrate the superiority of our method in terms of
motion and video quality.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 11:35:07 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 08:40:32 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Zhang",
"Wenxuan",
""
],
[
"Cun",
"Xiaodong",
""
],
[
"Wang",
"Xuan",
""
],
[
"Zhang",
"Yong",
""
],
[
"Shen",
"Xi",
""
],
[
"Guo",
"Yu",
""
],
[
"Shan",
"Ying",
""
],
[
"Wang",
"Fei",
""
]
] |
new_dataset
| 0.992783 |
2211.15775
|
Tai Nguyen
|
Tai D. Nguyen, Shengbang Fang, Matthew C. Stamm
|
VideoFACT: Detecting Video Forgeries Using Attention, Scene Context, and
Forensic Traces
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Fake videos represent an important misinformation threat. While existing
forensic networks have demonstrated strong performance on image forgeries,
recent results reported on the Adobe VideoSham dataset show that these networks
fail to identify fake content in videos. In this paper, we show that this is
due to video coding, which introduces local variation into forensic traces. In
response, we propose VideoFACT - a new network that is able to detect and
localize a wide variety of video forgeries and manipulations. To overcome
challenges that existing networks face when analyzing videos, our network
utilizes both forensic embeddings to capture traces left by manipulation,
context embeddings to control for variation in forensic traces introduced by
video coding, and a deep self-attention mechanism to estimate the quality and
relative importance of local forensic embeddings. We create several new video
forgery datasets and use these, along with publicly available data, to
experimentally evaluate our network's performance. These results show that our
proposed network is able to identify a diverse set of video forgeries,
including those not encountered during training. Furthermore, we show that our
network can be fine-tuned to achieve even stronger performance on challenging
AI-based manipulations.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 21:03:54 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 22:33:08 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Nguyen",
"Tai D.",
""
],
[
"Fang",
"Shengbang",
""
],
[
"Stamm",
"Matthew C.",
""
]
] |
new_dataset
| 0.999305 |
2211.15975
|
Xinyu Cai
|
Xinyu Cai, Wentao Jiang, Runsheng Xu, Wenquan Zhao, Jiaqi Ma, Si Liu,
Yikang Li
|
Analyzing Infrastructure LiDAR Placement with Realistic LiDAR Simulation
Library
|
7 pages, 6 figures, accepted to the IEEE International Conference on
Robotics and Automation (ICRA'23)
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Vehicle-to-Everything (V2X) cooperative perception has attracted
increasing attention. Infrastructure sensors play a critical role in this
research field; however, how to find the optimal placement of infrastructure
sensors is rarely studied. In this paper, we investigate the problem of
infrastructure sensor placement and propose a pipeline that can efficiently and
effectively find optimal installation positions for infrastructure sensors in a
realistic simulated environment. To better simulate and evaluate LiDAR
placement, we establish a Realistic LiDAR Simulation library that can simulate
the unique characteristics of different popular LiDARs and produce
high-fidelity LiDAR point clouds in the CARLA simulator. Through simulating
point cloud data in different LiDAR placements, we can evaluate the perception
accuracy of these placements using multiple detection models. Then, we analyze
the correlation between the point cloud distribution and perception accuracy by
calculating the density and uniformity of regions of interest. Experiments show
that when using the same number and type of LiDAR, the placement scheme
optimized by our proposed method improves the average precision by 15%,
compared with the conventional placement scheme in the standard lane scene. We
also analyze the correlation between perception performance in the region of
interest and LiDAR point cloud distribution and validate that density and
uniformity can be indicators of performance. Both the RLS Library and related
code will be released at
https://github.com/PJLab-ADG/LiDARSimLib-and-Placement-Evaluation.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 07:18:32 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 07:35:19 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Mar 2023 11:17:18 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Cai",
"Xinyu",
""
],
[
"Jiang",
"Wentao",
""
],
[
"Xu",
"Runsheng",
""
],
[
"Zhao",
"Wenquan",
""
],
[
"Ma",
"Jiaqi",
""
],
[
"Liu",
"Si",
""
],
[
"Li",
"Yikang",
""
]
] |
new_dataset
| 0.996997 |
2212.00935
|
Shiqiang Du
|
Baokai Liu, Fengjie He, Shiqiang Du, Kaiwu Zhang, Jianhua Wang
|
Dunhuang murals contour generation network based on convolution and
self-attention fusion
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Dunhuang murals are a collection of Chinese style and national style, forming
a self-contained Chinese-style Buddhist art. It has very high historical and
cultural value and research significance. Among them, the lines of Dunhuang
murals are highly general and expressive, reflecting the characters'
distinctive personalities and complex inner emotions. Therefore, the outline
drawing of murals is of great significance to the research of Dunhuang Culture.
The contour generation of Dunhuang murals belongs to image edge detection, an
important branch of computer vision that aims to extract salient contour
information from images. Convolution-based deep learning networks have achieved
good results in image edge extraction by exploring the contextual and semantic
features of images. However, as the receptive field grows, some local detail
information is lost, which makes it impossible for them to generate reasonable
outline drawings of murals. In this
paper, we propose a novel edge detector based on self-attention combined with
convolution to generate line drawings of Dunhuang murals. Compared with
existing edge detection methods, firstly, a new residual self-attention and
convolution mixed module (Ramix) is proposed to fuse local and global features
in feature maps. Secondly, a novel densely connected backbone extraction
network is designed to efficiently propagate rich edge feature information from
shallow layers into deep layers. Compared with existing methods, it is shown on
different public datasets that our method is able to generate sharper and
richer edge maps. In addition, testing on the Dunhuang mural dataset shows that
our method can achieve very competitive performance.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 02:47:30 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 11:24:49 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Liu",
"Baokai",
""
],
[
"He",
"Fengjie",
""
],
[
"Du",
"Shiqiang",
""
],
[
"Zhang",
"Kaiwu",
""
],
[
"Wang",
"Jianhua",
""
]
] |
new_dataset
| 0.975293 |
2212.07738
|
Jobie Budd
|
Jobie Budd, Kieran Baker, Emma Karoune, Harry Coppock, Selina Patel,
Ana Tendero Ca\~nadas, Alexander Titcomb, Richard Payne, David Hurley,
Sabrina Egglestone, Lorraine Butler, Jonathon Mellor, George Nicholson, Ivan
Kiskin, Vasiliki Koutra, Radka Jersakova, Rachel A. McKendry, Peter Diggle,
Sylvia Richardson, Bj\"orn W. Schuller, Steven Gilmour, Davide Pigoli,
Stephen Roberts, Josef Packham, Tracey Thornley, Chris Holmes
|
A large-scale and PCR-referenced vocal audio dataset for COVID-19
|
37 pages, 4 figures
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The UK COVID-19 Vocal Audio Dataset is designed for the training and
evaluation of machine learning models that classify SARS-CoV-2 infection status
or associated respiratory symptoms using vocal audio. The UK Health Security
Agency recruited voluntary participants through the national Test and Trace
programme and the REACT-1 survey in England from March 2021 to March 2022,
during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and
some Omicron variant sublineages. Audio recordings of volitional coughs,
exhalations, and speech were collected in the 'Speak up to help beat
coronavirus' digital survey alongside demographic, self-reported symptom and
respiratory condition data, and linked to SARS-CoV-2 test results. The UK
COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2
PCR-referenced audio recordings to date. PCR results were linked to 70,794 of
72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms
were reported by 45.62% of participants. This dataset has additional potential
uses for bioacoustics research, with 11.30% of participants reporting asthma, and
27.20% with linked influenza PCR test results.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 11:40:40 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Feb 2023 16:05:52 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Mar 2023 16:49:24 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Budd",
"Jobie",
""
],
[
"Baker",
"Kieran",
""
],
[
"Karoune",
"Emma",
""
],
[
"Coppock",
"Harry",
""
],
[
"Patel",
"Selina",
""
],
[
"Cañadas",
"Ana Tendero",
""
],
[
"Titcomb",
"Alexander",
""
],
[
"Payne",
"Richard",
""
],
[
"Hurley",
"David",
""
],
[
"Egglestone",
"Sabrina",
""
],
[
"Butler",
"Lorraine",
""
],
[
"Mellor",
"Jonathon",
""
],
[
"Nicholson",
"George",
""
],
[
"Kiskin",
"Ivan",
""
],
[
"Koutra",
"Vasiliki",
""
],
[
"Jersakova",
"Radka",
""
],
[
"McKendry",
"Rachel A.",
""
],
[
"Diggle",
"Peter",
""
],
[
"Richardson",
"Sylvia",
""
],
[
"Schuller",
"Björn W.",
""
],
[
"Gilmour",
"Steven",
""
],
[
"Pigoli",
"Davide",
""
],
[
"Roberts",
"Stephen",
""
],
[
"Packham",
"Josef",
""
],
[
"Thornley",
"Tracey",
""
],
[
"Holmes",
"Chris",
""
]
] |
new_dataset
| 0.999797 |
2212.11768
|
Beatrice Li
|
Beatrice Li, Arash Tavakoli, Arsalan Heydarian
|
Occupant Privacy Perception, Awareness, and Preferences in Smart Office
Environments
| null | null |
10.1038/s41598-023-30788-5
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building management systems tout numerous benefits, such as energy efficiency
and occupant comfort, but rely on vast amounts of data from various sensors.
Advancements in machine learning algorithms make it possible to extract
personal information about occupants and their activities beyond the intended
design of a non-intrusive sensor. However, occupants are not informed of data
collection and possess different privacy preferences and thresholds for privacy
loss. While privacy perceptions and preferences are most understood in smart
homes, limited studies have evaluated these factors in smart office buildings,
where there are more users and different privacy risks. To better understand
occupants' perceptions and privacy preferences, we conducted twenty-four
semi-structured interviews between April 2022 and May 2022 on occupants of a
smart office building. We found that data modality features and personal
features contribute to people's privacy preferences. The features of the
collected modality define data modality features -- spatial, security, and
temporal context. In contrast, personal features consist of one's awareness of
data modality features and data inferences, definitions of privacy and
security, and the available rewards and utility. Our proposed model of people's
privacy preferences in smart office buildings helps design more effective
measures to improve people's privacy.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 15:05:17 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Li",
"Beatrice",
""
],
[
"Tavakoli",
"Arash",
""
],
[
"Heydarian",
"Arsalan",
""
]
] |
new_dataset
| 0.999432 |
2212.13857
|
Robert Hallyburton
|
R. Spencer Hallyburton, Shucheng Zhang, Miroslav Pajic
|
AVstack: An Open-Source, Reconfigurable Platform for Autonomous Vehicle
Development
| null | null | null | null |
cs.RO cs.SE cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Pioneers of autonomous vehicles (AVs) promised to revolutionize the driving
experience and driving safety. However, milestones in AVs have materialized
slower than forecast. Two culprits are (1) the lack of verifiability of
proposed state-of-the-art AV components, and (2) stagnation of pursuing
next-level evaluations, e.g., vehicle-to-infrastructure (V2I) and multi-agent
collaboration. In part, progress has been hampered by: the large volume of
software in AVs, the multiple disparate conventions, the difficulty of testing
across datasets and simulators, and the inflexibility of state-of-the-art AV
components. To address these challenges, we present AVstack, an open-source,
reconfigurable software platform for AV design, implementation, test, and
analysis. AVstack solves the validation problem by enabling first-of-a-kind
trade studies on datasets and physics-based simulators. AVstack solves the
stagnation problem as a reconfigurable AV platform built on dozens of
open-source AV components in a high-level programming language. We demonstrate
the power of AVstack through longitudinal testing across multiple benchmark
datasets and V2I-collaboration case studies that explore trade-offs of
designing multi-sensor, multi-agent algorithms.
|
[
{
"version": "v1",
"created": "Wed, 28 Dec 2022 15:12:33 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 21:25:48 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Hallyburton",
"R. Spencer",
""
],
[
"Zhang",
"Shucheng",
""
],
[
"Pajic",
"Miroslav",
""
]
] |
new_dataset
| 0.957464 |
2301.01123
|
Shuhao Shi
|
Shuhao Shi, Kai Qiao, Jian Chen, Shuai Yang, Jie Yang, Baojie Song,
Linyuan Wang, Bin Yan
|
MGTAB: A Multi-Relational Graph-Based Twitter Account Detection
Benchmark
|
14 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of social media user stance detection and bot detection
methods relies heavily on large-scale and high-quality benchmarks. However, in
addition to low annotation quality, existing benchmarks generally have
incomplete user relationships, suppressing graph-based account detection
research. To address these issues, we propose a Multi-Relational Graph-Based
Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based
benchmark for account detection. To our knowledge, MGTAB was built based on the
largest original data in the field, with over 1.55 million users and 130
million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of
relationships, ensuring high-quality annotation and diversified relations. In
MGTAB, we extracted the 20 user property features with the greatest information
gain and user tweet features as the user features. In addition, we performed a
thorough evaluation of MGTAB and other public datasets. Our experiments found
that graph-based approaches are generally more effective than feature-based
approaches and perform better when introducing multiple relations. By analyzing
experiment results, we identify effective approaches for account detection and
provide potential future research directions in this field. Our benchmark and
standardized evaluation procedures are freely available at:
https://github.com/GraphDetec/MGTAB.
|
[
{
"version": "v1",
"created": "Tue, 3 Jan 2023 14:43:40 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 08:59:01 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Shi",
"Shuhao",
""
],
[
"Qiao",
"Kai",
""
],
[
"Chen",
"Jian",
""
],
[
"Yang",
"Shuai",
""
],
[
"Yang",
"Jie",
""
],
[
"Song",
"Baojie",
""
],
[
"Wang",
"Linyuan",
""
],
[
"Yan",
"Bin",
""
]
] |
new_dataset
| 0.999556 |
2301.02886
|
Han Han
|
Han Han, Vincent Lostanlen, Mathieu Lagrange
|
Perceptual-Neural-Physical Sound Matching
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Sound matching algorithms seek to approximate a target waveform by parametric
audio synthesis. Deep neural networks have achieved promising results in
matching sustained harmonic tones. However, the task is more challenging when
targets are nonstationary and inharmonic, e.g., percussion. We attribute this
problem to the inadequacy of the loss function. On one hand, mean square error in
the parametric domain, known as "P-loss", is simple and fast but fails to
accommodate the differing perceptual significance of each parameter. On the
other hand, mean square error in the spectrotemporal domain, known as "spectral
loss", is perceptually motivated and serves in differentiable digital signal
processing (DDSP). Yet, spectral loss is a poor predictor of pitch intervals
and its gradient may be computationally expensive; hence a slow convergence.
Against this conundrum, we present Perceptual-Neural-Physical loss (PNP). PNP
is the optimal quadratic approximation of spectral loss while being as fast as
P-loss during training. We instantiate PNP with physical modeling synthesis as
decoder and joint time-frequency scattering transform (JTFS) as spectral
representation. We demonstrate its potential on matching synthetic drum sounds
in comparison with other loss functions.
|
[
{
"version": "v1",
"created": "Sat, 7 Jan 2023 16:17:48 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 17:16:37 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Han",
"Han",
""
],
[
"Lostanlen",
"Vincent",
""
],
[
"Lagrange",
"Mathieu",
""
]
] |
new_dataset
| 0.996292 |
2302.10420
|
Chengxi Han
|
Chengxi Han, Chen Wu, Bo Du
|
HCGMNET: A Hierarchical Change Guiding Map Network For Change Detection
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Very-high-resolution (VHR) remote sensing (RS) image change detection (CD)
has been a challenging task because of its very rich spatial information and
the sample imbalance problem. In this paper, we propose a hierarchical change
guiding map network (HCGMNet) for change detection. The model uses hierarchical
convolution operations to extract multi-scale features, continuously merges
multi-scale features layer by layer to improve the expression of global and
local information, and gradually refines edge features and overall performance
through a change guide module (CGM), a self-attention mechanism with a change
guiding map. Extensive experiments on two CD
datasets show that the proposed HCGMNet architecture achieves better CD
performance than existing state-of-the-art (SOTA) CD methods.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 03:16:22 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 11:00:33 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Han",
"Chengxi",
""
],
[
"Wu",
"Chen",
""
],
[
"Du",
"Bo",
""
]
] |
new_dataset
| 0.992862 |
2302.14096
|
Jamie McGowan
|
Jamie McGowan, Elizabeth Guest, Ziyang Yan, Cong Zheng, Neha Patel,
Mason Cusack, Charlie Donaldson, Sofie de Cnudde, Gabriel Facini and Fabon
Dzogang
|
A Dataset for Learning Graph Representations to Predict Customer Returns
in Fashion Retail
|
The ASOS GraphReturns dataset can be found at https://osf.io/c793h/.
Accepted at FashionXRecSys 2022 workshop. Published Version
|
Lecture Notes in Electrical Engineering, vol 981. Springer, Cham.
(2023)
|
10.1007/978-3-031-22192-7_6
| null |
cs.LG cs.DB cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel dataset collected by ASOS (a major online fashion
retailer) to address the challenge of predicting customer returns in a fashion
retail ecosystem. With the release of this substantial dataset we hope to
motivate further collaboration between research communities and the fashion
industry. We first explore the structure of this dataset with a focus on the
application of Graph Representation Learning in order to exploit the natural
data structure and provide statistical insights into particular features within
the data. In addition to this, we show examples of a return prediction
classification task with a selection of baseline models (i.e. with no
intermediate representation learning step) and a graph representation based
model. We show that in a downstream return prediction classification task, an
F1-score of 0.792 can be found using a Graph Neural Network (GNN), improving
upon other models discussed in this work. Alongside this increased F1-score, we
also present a lower cross-entropy loss by recasting the data into a graph
structure, indicating more robust predictions from a GNN based solution. These
results provide evidence that GNNs could provide more impactful and usable
classifications than other baseline models on the presented dataset and with
this motivation, we hope to encourage further research into graph-based
approaches using the ASOS GraphReturns dataset.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 19:14:37 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Mar 2023 15:44:41 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"McGowan",
"Jamie",
""
],
[
"Guest",
"Elizabeth",
""
],
[
"Yan",
"Ziyang",
""
],
[
"Zheng",
"Cong",
""
],
[
"Patel",
"Neha",
""
],
[
"Cusack",
"Mason",
""
],
[
"Donaldson",
"Charlie",
""
],
[
"de Cnudde",
"Sofie",
""
],
[
"Facini",
"Gabriel",
""
],
[
"Dzogang",
"Fabon",
""
]
] |
new_dataset
| 0.954814 |
2302.14415
|
ZongTan Li
|
ZongTan Li
|
Mesh-SORT: Simple and effective location-wise tracker with lost
management strategies
|
14 pages 18 figs
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Object Tracking (MOT) has gained extensive attention in recent years
due to its potential applications in traffic and pedestrian detection. We note
that tracking by detection may suffer from errors generated by noisy detectors,
such as an imprecise bounding box before occlusions, and we observe that in
most tracking scenarios objects tend to move and get lost within specific
locations. To counter this, we present a novel tracker that copes with poor
detectors and occlusions. First, we propose a location-wise sub-region
recognition method that divides the frame into equal parts, which we call a
mesh. We then propose corresponding location-wise lost management strategies
and different matching strategies. Ablation studies demonstrate the
effectiveness of the resulting Mesh-SORT, which reduces fragmentation by 3%,
reduces ID switches by 7.2%, and improves MOTA by 0.4% compared to the baseline
on the MOT17 dataset. Finally, we analyze its limitations in specific scenes
and discuss directions for future work.
|
[
{
"version": "v1",
"created": "Tue, 28 Feb 2023 08:47:53 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Mar 2023 09:07:01 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Mar 2023 13:07:15 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Li",
"ZongTan",
""
]
] |
new_dataset
| 0.959009 |
2303.01508
|
Shijun Wang
|
Shijun Wang, J\'on Gu{\dh}nason, Damian Borth
|
Fine-grained Emotional Control of Text-To-Speech: Learning To Rank
Inter- And Intra-Class Emotion Intensities
|
Accepted by ICASSP2023
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
State-of-the-art Text-To-Speech (TTS) models are capable of producing
high-quality speech. The generated speech, however, is usually neutral in
emotional expression, whereas very often one would want fine-grained emotional
control of words or phonemes. Although still challenging, the first TTS models
have been recently proposed that are able to control voice by manually
assigning emotion intensity. Unfortunately, due to the neglect of intra-class
distance, the intensity differences are often unrecognizable. In this paper, we
propose a fine-grained controllable emotional TTS, that considers both inter-
and intra-class distances and be able to synthesize speech with recognizable
intensity difference. Our subjective and objective experiments demonstrate that
our model outperforms two state-of-the-art controllable TTS models in terms of
controllability, emotion expressiveness, and naturalness.
|
[
{
"version": "v1",
"created": "Thu, 2 Mar 2023 09:09:03 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Mar 2023 13:07:06 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Wang",
"Shijun",
""
],
[
"Guðnason",
"Jón",
""
],
[
"Borth",
"Damian",
""
]
] |
new_dataset
| 0.987947 |
2303.05491
|
Andrea Lattuada
|
Andrea Lattuada (VMware Research), Travis Hance (Carnegie Mellon
University), Chanhee Cho (Carnegie Mellon University), Matthias Brun (ETH
Zurich), Isitha Subasinghe (UNSW Sydney), Yi Zhou (Carnegie Mellon
University), Jon Howell (VMware Research), Bryan Parno (Carnegie Mellon
University), Chris Hawblitzel (Microsoft Research)
|
Verus: Verifying Rust Programs using Linear Ghost Types (extended
version)
| null | null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Rust programming language provides a powerful type system that checks
linearity and borrowing, allowing code to safely manipulate memory without
garbage collection and making Rust ideal for developing low-level,
high-assurance systems. For such systems, formal verification can be useful to
prove functional correctness properties beyond type safety. This paper presents
Verus, an SMT-based tool for formally verifying Rust programs. With Verus,
programmers express proofs and specifications using the Rust language, allowing
proofs to take advantage of Rust's linear types and borrow checking. We show
how this allows proofs to manipulate linearly typed permissions that let Rust
code safely manipulate memory, pointers, and concurrent resources. Verus
organizes proofs and specifications using a novel mode system that
distinguishes specifications, which are not checked for linearity and
borrowing, from executable code and proofs, which are checked for linearity and
borrowing. We formalize Verus' linearity, borrowing, and modes in a small
lambda calculus, for which we prove type safety and termination of
specifications and proofs. We demonstrate Verus on a series of examples,
including pointer-manipulating code (an xor-based doubly linked list), code
with interior mutability, and concurrent code.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 18:44:45 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Mar 2023 00:58:20 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Lattuada",
"Andrea",
"",
"VMware Research"
],
[
"Hance",
"Travis",
"",
"Carnegie Mellon\n University"
],
[
"Cho",
"Chanhee",
"",
"Carnegie Mellon University"
],
[
"Brun",
"Matthias",
"",
"ETH\n Zurich"
],
[
"Subasinghe",
"Isitha",
"",
"UNSW Sydney"
],
[
"Zhou",
"Yi",
"",
"Carnegie Mellon\n University"
],
[
"Howell",
"Jon",
"",
"VMware Research"
],
[
"Parno",
"Bryan",
"",
"Carnegie Mellon\n University"
],
[
"Hawblitzel",
"Chris",
"",
"Microsoft Research"
]
] |
new_dataset
| 0.998361 |
2303.06153
|
Yiwei Yang
|
Yiwei Yang, Pooneh Safayenikoo, Jiacheng Ma, Tanvir Ahmed Khan, Andrew
Quinn
|
CXLMemSim: A pure software simulated CXL.mem for performance
characterization
| null | null | null | null |
cs.PF cs.AR cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emerging CXL.mem standard provides a new type of byte-addressable remote
memory with a variety of memory types and hierarchies. With CXL.mem, multiple
layers of memory -- e.g., local DRAM and CXL-attached remote memory at
different locations -- are exposed to operating systems and user applications,
bringing new challenges and research opportunities. Unfortunately, since
CXL.mem devices are not commercially available, it is difficult for researchers
to conduct systems research that uses CXL.mem. In this paper, we present our
ongoing work, CXLMemSim, a fast and lightweight CXL.mem simulator for
performance characterization. CXLMemSim uses a performance model driven by
performance monitoring events, which are supported by most commodity
processors. Specifically, CXLMemSim attaches to an existing, unmodified
program, and divides the execution of the program into multiple epochs; once an
epoch finishes, CXLMemSim collects performance monitoring events and calculates
the simulated execution time of the epoch based on these events. Through this
method, CXLMemSim avoids the performance overhead of a full-system simulator
(e.g., Gem5) and allows the memory hierarchy and latency to be easily adjusted,
enabling research such as memory scheduling for complex applications. Our
preliminary evaluation shows that CXLMemSim slows down the execution of the
attached program by 4.41x on average for real-world applications.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 04:37:07 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Yang",
"Yiwei",
""
],
[
"Safayenikoo",
"Pooneh",
""
],
[
"Ma",
"Jiacheng",
""
],
[
"Khan",
"Tanvir Ahmed",
""
],
[
"Quinn",
"Andrew",
""
]
] |
new_dataset
| 0.996526 |
2303.06172
|
Ramviyas Parasuraman
|
Nazish Tahir and Ramviyas Parasuraman
|
Mobile Robot Control and Autonomy Through Collaborative Simulation Twin
|
Accepted to the IEEE PERCOM 2023 Workshop on Pervasive Digital Twins
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When a mobile robot lacks high onboard computing or networking capabilities,
it can rely on remote computing architecture for its control and autonomy. This
paper introduces a novel collaborative Simulation Twin (ST) strategy for
control and autonomy on resource-constrained robots. The practical
implementation of such a strategy entails a mobile robot system divided into a
cyber (simulated) and physical (real) space separated over a communication
channel where the physical robot resides on the site of operation guided by a
simulated autonomous agent from a remote location maintained over a network.
Building on top of the digital twin concept, our collaborative twin is capable
of autonomous navigation through an advanced SLAM-based path planning
algorithm, while the physical robot is capable of tracking the Simulation Twin's
velocity and communicating feedback generated through interaction with its
environment. We put the proposed prioritized path planning application to the
test in a collaborative teleoperation system of a physical robot guided by the
ST's autonomous navigation. We examine the performance of a physical robot led by
autonomous navigation from the Collaborative Twin and assisted by a predicted
force received from the physical robot. The experimental findings indicate the
practicality of the proposed simulation-physical twinning approach and provide
computational and network performance improvements compared to typical remote
computing (or offloading), and digital twin approaches.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 19:15:51 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Tahir",
"Nazish",
""
],
[
"Parasuraman",
"Ramviyas",
""
]
] |
new_dataset
| 0.996473 |
2303.06202
|
Vinit Katariya
|
Vinit Katariya, Ghazal Alinezhad Noghre, Armin Danesh Pazho, Hamed
Tabkhi
|
A POV-based Highway Vehicle Trajectory Dataset and Prediction
Architecture
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vehicle Trajectory datasets that provide multiple point-of-views (POVs) can
be valuable for various traffic safety and management applications. Despite the
abundance of trajectory datasets, few offer a comprehensive and diverse range
of driving scenes, capturing multiple viewpoints of various highway layouts,
merging lanes, and configurations. This limits their ability to capture the
nuanced interactions between drivers, vehicles, and the roadway infrastructure.
We introduce the \emph{Carolinas Highway Dataset (CHD\footnote{\emph{CHD}
available at: \url{https://github.com/TeCSAR-UNCC/Carolinas\_Dataset}})}, a
vehicle trajectory, detection, and tracking dataset. \emph{CHD} is a collection
of 1.6 million frames captured in highway-based videos from eye-level and
high-angle POVs at eight locations across the Carolinas with 338,000 vehicle
trajectories. The locations, timing of recordings, and camera angles were
carefully selected to capture various road geometries, traffic patterns,
lighting conditions, and driving behaviors.
We also present \emph{PishguVe}\footnote{\emph{PishguVe} code available at:
\url{https://github.com/TeCSAR-UNCC/PishguVe}}, a novel vehicle trajectory
prediction architecture that uses attention-based graph isomorphism and
convolutional neural networks. The results demonstrate that \emph{PishguVe}
outperforms existing algorithms to become the new state-of-the-art (SotA) in
bird's-eye, eye-level, and high-angle POV trajectory datasets. Specifically, it
achieves a 12.50\% and 10.20\% improvement in ADE and FDE, respectively, over
the current SotA on NGSIM dataset. Compared to best-performing models on CHD,
\emph{PishguVe} achieves lower ADE and FDE on eye-level data by 14.58\% and
27.38\%, respectively, and improves ADE and FDE on high-angle data by 8.3\% and
6.9\%, respectively.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 20:38:40 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Katariya",
"Vinit",
""
],
[
"Noghre",
"Ghazal Alinezhad",
""
],
[
"Pazho",
"Armin Danesh",
""
],
[
"Tabkhi",
"Hamed",
""
]
] |
new_dataset
| 0.999821 |
2303.06213
|
Yumeng Song
|
Yumeng Song, Yu Gu, Tianyi Li, Jianzhong Qi, Zhenghao Liu, Christian
S. Jensen and Ge Yu
|
CHGNN: A Semi-Supervised Contrastive Hypergraph Learning Network
|
14 pages, 11 figures
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hypergraphs can model higher-order relationships among data objects that are
found in applications such as social networks and bioinformatics. However,
recent studies on hypergraph learning that extend graph convolutional networks
to hypergraphs cannot learn effectively from features of unlabeled data. To
such learning, we propose a contrastive hypergraph neural network, CHGNN, that
exploits self-supervised contrastive learning techniques to learn from labeled
and unlabeled data. First, CHGNN includes an adaptive hypergraph view generator
that adopts an auto-augmentation strategy and learns a perturbed probability
distribution of minimal sufficient views. Second, CHGNN encompasses an improved
hypergraph encoder that considers hyperedge homogeneity to fuse information
effectively. Third, CHGNN is equipped with a joint loss function that combines
a similarity loss for the view generator, a node classification loss, and a
hyperedge homogeneity loss to inject supervision signals. It also includes
basic and cross-validation contrastive losses, associated with an enhanced
contrastive loss training process. Experimental results on nine real datasets
offer insight into the effectiveness of CHGNN, showing that it consistently
outperforms 13 competitors in terms of classification accuracy.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 21:28:10 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Song",
"Yumeng",
""
],
[
"Gu",
"Yu",
""
],
[
"Li",
"Tianyi",
""
],
[
"Qi",
"Jianzhong",
""
],
[
"Liu",
"Zhenghao",
""
],
[
"Jensen",
"Christian S.",
""
],
[
"Yu",
"Ge",
""
]
] |
new_dataset
| 0.990521 |
2303.06226
|
Przemys{\l}aw Spurek
|
Wojciech Zaj\k{a}c, Jacek Tabor, Maciej Zi\k{e}ba, Przemys{\l}aw
Spurek
|
NeRFlame: FLAME-based conditioning of NeRF for 3D face rendering
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional 3D face models are based on mesh representations with texture.
One of the most important models is FLAME (Faces Learned with an Articulated
Model and Expressions), which produces meshes of human faces that are fully
controllable. Unfortunately, such models have problems with capturing geometric
and appearance details. In contrast to mesh representation, the neural radiance
field (NeRF) produces extremely sharp renders. But implicit methods are hard to
animate and do not generalize well to unseen expressions. It is not trivial to
effectively control NeRF models to obtain face manipulation. The present paper
proposes a novel approach, named NeRFlame, which combines the strengths of both
NeRF and FLAME methods. Our method enables high-quality rendering capabilities
of NeRF while also offering complete control over the visual appearance,
similar to FLAME. Unlike conventional NeRF-based architectures that utilize
neural networks to model RGB colors and volume density, NeRFlame employs FLAME
mesh as an explicit density volume. As a result, color values are non-zero only
in the proximity of the FLAME mesh. This FLAME backbone is then integrated into
the NeRF architecture to predict RGB colors, allowing NeRFlame to explicitly
model volume density and implicitly model RGB colors.
|
[
{
"version": "v1",
"created": "Fri, 10 Mar 2023 22:21:30 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Zając",
"Wojciech",
""
],
[
"Tabor",
"Jacek",
""
],
[
"Zięba",
"Maciej",
""
],
[
"Spurek",
"Przemysław",
""
]
] |
new_dataset
| 0.997764 |
2303.06266
|
Jyotish Robin
|
Jyotish Robin, Elza Erkip
|
Non-Coherent Active Device Identification for Massive Random Access
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive Machine-Type Communications (mMTC) is a key service category in the
current generation of wireless networks featuring an extremely high density of
energy and resource-limited devices with sparse and sporadic activity patterns.
In order to enable random access in such mMTC networks, the base station needs to
identify the active devices while operating within stringent access delay
constraints. In this paper, an energy efficient active device identification
protocol is proposed in which active devices transmit On-Off Keying (OOK)
modulated preambles jointly and the base station employs non-coherent energy
detection avoiding channel estimation overheads. The minimum number of
channel-uses required by the active user identification protocol is
characterized in the asymptotic regime of total number of devices $\ell$ when
the number of active devices $k$ scales as $k=\Theta(1)$ along with an
achievability scheme relying on the equivalence of activity detection to a
group testing problem. Several practical schemes based on Belief Propagation
(BP) and Combinatorial Orthogonal Matching Pursuit (COMP) are also proposed.
Simulation results show that BP strategies outperform COMP significantly and
can operate close to the theoretical achievability bounds. In a
partial-recovery setting where few misdetections are allowed, BP continues to
perform well.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 01:08:55 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Robin",
"Jyotish",
""
],
[
"Erkip",
"Elza",
""
]
] |
new_dataset
| 0.995984 |
2303.06286
|
Zhou Yang
|
Ratnadira Widyasari, Zhou Yang, Ferdian Thung, Sheng Qin Sim, Fiona
Wee, Camellia Lok, Jack Phan, Haodi Qi, Constance Tan, Qijin Tay, David Lo
|
NICHE: A Curated Dataset of Engineered Machine Learning Projects in
Python
|
Accepted by MSR 2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning (ML) has gained much attention and been incorporated into
our daily lives. While there are numerous publicly available ML projects on
open source platforms such as GitHub, there have been limited attempts in
filtering those projects to curate ML projects of high quality. The limited
availability of such a high-quality dataset poses an obstacle in understanding
ML projects. To help clear this obstacle, we present NICHE, a manually labelled
dataset consisting of 572 ML projects. Based on evidence of good software
engineering practices, we label 441 of these projects as engineered and 131 as
non-engineered. This dataset can help researchers understand the practices that
are followed in high-quality ML projects. It can also be used as a benchmark
for classifiers designed to identify engineered ML projects.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 02:45:55 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Widyasari",
"Ratnadira",
""
],
[
"Yang",
"Zhou",
""
],
[
"Thung",
"Ferdian",
""
],
[
"Sim",
"Sheng Qin",
""
],
[
"Wee",
"Fiona",
""
],
[
"Lok",
"Camellia",
""
],
[
"Phan",
"Jack",
""
],
[
"Qi",
"Haodi",
""
],
[
"Tan",
"Constance",
""
],
[
"Tay",
"Qijin",
""
],
[
"Lo",
"David",
""
]
] |
new_dataset
| 0.999812 |
2303.06298
|
Seungho Choe
|
Samir Mitha, Seungho Choe, Pejman Jahbedar Maralani, Alan R. Moody,
and April Khademi
|
MLP-SRGAN: A Single-Dimension Super Resolution GAN using MLP-Mixer
|
14 pages, 10 figures
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a novel architecture called MLP-SRGAN, which is a single-dimension
Super Resolution Generative Adversarial Network (SRGAN) that utilizes
Multi-Layer Perceptron Mixers (MLP-Mixers) along with convolutional layers to
upsample in the slice direction. MLP-SRGAN is trained and validated using high
resolution (HR) FLAIR MRI from the MSSEG2 challenge dataset. The method was
applied to three multicentre FLAIR datasets (CAIN, ADNI, CCNA) of images with
low spatial resolution in the slice dimension to examine performance on
held-out (unseen) clinical data. Upsampled results are compared to several
state-of-the-art SR networks. For images with high resolution (HR) ground
truths, peak-signal-to-noise-ratio (PSNR) and structural similarity index
(SSIM) are used to measure upsampling performance. Several new structural,
no-reference image quality metrics were proposed to quantify sharpness (edge
strength), noise (entropy), and blurriness (low frequency information) in the
absence of ground truths. Results show that MLP-SRGAN produces sharper edges and
less blurring, preserves more texture and fine anatomical detail, and has fewer
parameters, faster training/evaluation time, and a smaller model size than
existing methods. Code for MLP-SRGAN training and inference, data generators,
models and no-reference image quality metrics will be available at
https://github.com/IAMLAB-Ryerson/MLP-SRGAN.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 04:05:57 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Mitha",
"Samir",
""
],
[
"Choe",
"Seungho",
""
],
[
"Maralani",
"Pejman Jahbedar",
""
],
[
"Moody",
"Alan R.",
""
],
[
"Khademi",
"April",
""
]
] |
new_dataset
| 0.999413 |
2303.06306
|
Jagbeer Singh Prof.
|
Jagbeer Singh, Utkarsh Rastogi, Yash Goel, Brijesh Gupta, Utkarsh
|
Blockchain-based decentralized voting system security Perspective: Safe
and secure for digital voting system
| null | null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This research study focuses primarily on Block-Chain-based voting systems,
which facilitate participation in and administration of voting for voters,
candidates, and officials. Because we used Block-Chain in the backend, which
enables everyone to trace vote fraud, our system is incredibly safe. In this
paper's approach, a unique identifier such as the Aadhar Card number is used, or
an OTP is generated, after which the user can utilise the voting system to cast
his/her vote. The proposal for Bit-coin, a virtual currency system in which a
central authority decides on producing money, transferring ownership, and
validating transactions, introduced the peer-to-peer network. In a Block-Chain
system, the ledger is duplicated across several identical databases, each hosted
and updated by a different process, and all other nodes are updated concurrently
when changes are made to one node. When a transaction occurs, the records of the
values and assets are permanently exchanged; only the user and the system need
to be verified, and no other authentication is required. Any transaction carried
out on a Block-Chain-based system would be settled in a matter of seconds while
still being safe, verifiable, and transparent. Although Block-Chain technology
is the foundation for Bitcoin and other digital currencies, it may also be
applied widely to greatly reduce difficulties in many other sectors. Voting is a
sector battling a lack of security, centralized authority, management issues,
and many more, despite the fact that transactions are kept in a distributed and
safe fashion.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 04:52:46 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Singh",
"Jagbeer",
""
],
[
"Rastogi",
"Utkarsh",
""
],
[
"Goel",
"Yash",
""
],
[
"Gupta",
"Brijesh",
""
],
[
"Utkarsh",
"",
""
]
] |
new_dataset
| 0.960978 |
2303.06307
|
Jiayi Zhao
|
Jiayi Zhao, Denizalp Goktas, Amy Greenwald
|
Fisher Markets with Social Influence
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Fisher market is an economic model of buyer and seller interactions in
which each buyer's utility depends only on the bundle of goods she obtains.
Many people's interests, however, are affected by their social interactions
with others. In this paper, we introduce a generalization of Fisher markets,
namely influence Fisher markets, which captures the impact of social influence
on buyers' utilities. We show that competitive equilibria in influence Fisher
markets correspond to generalized Nash equilibria in an associated pseudo-game,
which implies the existence of competitive equilibria in all influence Fisher
markets with continuous and concave utility functions. We then construct a
monotone pseudo-game, whose variational equilibria and their duals together
characterize competitive equilibria in influence Fisher markets with
continuous, jointly concave, and homogeneous utility functions. This
observation implies that competitive equilibria in these markets can be
computed in polynomial time under standard smoothness assumptions on the
utility functions. The dual of this second pseudo-game enables us to interpret
the competitive equilibria of influence CCH Fisher markets as the solutions to
a system of simultaneous Stackelberg games. Finally, we derive a novel
first-order method that solves this Stackelberg system in polynomial time,
prove that it is equivalent to computing competitive equilibrium prices via
t\^{a}tonnement, and run experiments that confirm our theoretical results.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 04:55:18 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Zhao",
"Jiayi",
""
],
[
"Goktas",
"Denizalp",
""
],
[
"Greenwald",
"Amy",
""
]
] |
new_dataset
| 0.977369 |
2303.06309
|
Jagbeer Singh Prof.
|
Jagbeer Singh, Yash Goel, Shubhi Jain, Shiva Yadav
|
Virtual Mouse And Assistant: A Technological Revolution Of Artificial
Intelligence
| null | null | null | null |
cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The purpose of this paper is to enhance the performance of the virtual
assistant. So, what exactly is a virtual assistant? A virtual assistant, also
known as an AI assistant or digital assistant, is application software that
understands natural language voice commands and can perform tasks on your
behalf. What does a virtual assistant do? Virtual assistants can
complete practically any specific smartphone or PC activity that you can
complete on your own, and the list is continually expanding. Virtual assistants
typically do an impressive variety of tasks, including scheduling meetings,
delivering messages, and monitoring the weather. Previous virtual assistants,
like Google Assistant and Cortana, had limits in that they could only perform
searches and were not entirely automated. For instance, these engines cannot
fast-forward or rewind a song to maintain playback control; they can only
search for songs and play them. Currently, we are working on a project where we
are automating
Google, YouTube, and many other new things to improve the functionality of this
project. Now, in order to simplify the process, we've added a virtual mouse
that can only be used for cursor control and clicking. It receives input from
the camera, and our index finger acts as the mouse tip, our middle finger as
the right click, and so forth.
|
[
{
"version": "v1",
"created": "Sat, 11 Mar 2023 05:00:06 GMT"
}
] | 2023-03-14T00:00:00 |
[
[
"Singh",
"Jagbeer",
""
],
[
"Goel",
"Yash",
""
],
[
"Jain",
"Shubhi",
""
],
[
"Yadav",
"Shiva",
""
]
] |
new_dataset
| 0.981399 |