Column types: id (string, 9-10 chars), submitter (string, 2-52 chars, nullable), authors (string, 4-6.51k chars), title (string, 4-246 chars), comments (string, 1-523 chars, nullable), journal-ref (string, 4-345 chars, nullable), doi (string, 11-120 chars, nullable), report-no (string, 2-243 chars, nullable), categories (string, 5-98 chars), license (string, 9 classes), abstract (string, 33-3.33k chars), versions (list), update_date (timestamp[s]), authors_parsed (list), prediction (string, 1 class), probability (float64, 0.95-1).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
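For orientation, here is a minimal sketch of how records with this schema could be read and filtered, assuming they are exported as one JSON object per line; the file name "records.jsonl", the probability threshold, and both helper functions are illustrative and not part of the dataset.

```python
# Minimal sketch (not part of the dataset): reading records that follow the schema above.
# Assumes one JSON object per line; "records.jsonl" and the helpers are hypothetical.
import json

def keep(record: dict, min_prob: float = 0.95) -> bool:
    """Keep rows labeled "new_dataset" whose probability meets the threshold."""
    return (record.get("prediction") == "new_dataset"
            and float(record.get("probability", 0.0)) >= min_prob)

def format_authors(record: dict) -> str:
    """Turn authors_parsed entries like ["Xu", "Yan", ""] into "Yan Xu"."""
    names = []
    for last, first, *suffix in record.get("authors_parsed", []):
        parts = [first, last] + [s for s in suffix if s]
        names.append(" ".join(p for p in parts if p))
    return ", ".join(names)

if __name__ == "__main__":
    with open("records.jsonl") as fh:   # hypothetical export of the rows shown below
        for line in fh:
            rec = json.loads(line)
            if keep(rec):
                print(rec["id"], "|", rec["title"], "|", format_authors(rec))
```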
2010.09343
|
Yan Xu
|
Yan Xu, Zhaoyang Huang, Kwan-Yee Lin, Xinge Zhu, Jianping Shi, Hujun
Bao, Guofeng Zhang, Hongsheng Li
|
SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural
Networks
|
Accepted to CoRL 2020
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent learning-based LiDAR odometry methods have demonstrated their
competitiveness. However, most methods still face two substantial challenges:
1) the 2D projection representation of LiDAR data cannot effectively encode 3D
structures from the point clouds; 2) the need for a large amount of labeled
data for training limits the application scope of these methods. In this paper,
we propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to
tackle these two difficulties. Specifically, we propose a 3D convolution
network to process the raw LiDAR data directly, which extracts features that
better encode the 3D geometric patterns. To suit our network to self-supervised
learning, we design several novel loss functions that utilize the inherent
properties of LiDAR point clouds. Moreover, an uncertainty-aware mechanism is
incorporated in the loss functions to alleviate the interference of moving
objects/noise. We evaluate our method's performance on two large-scale
datasets, i.e., KITTI and Apollo-SouthBay. Our method outperforms
state-of-the-art unsupervised methods by 27%/32% in terms of
translational/rotational errors on the KITTI dataset and also performs well on
the Apollo-SouthBay dataset. By including more unlabelled training data, our
method can further improve its performance to be comparable to that of
supervised methods.
|
[
{
"version": "v1",
"created": "Mon, 19 Oct 2020 09:23:39 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Feb 2022 17:19:46 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Feb 2022 04:59:30 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Xu",
"Yan",
""
],
[
"Huang",
"Zhaoyang",
""
],
[
"Lin",
"Kwan-Yee",
""
],
[
"Zhu",
"Xinge",
""
],
[
"Shi",
"Jianping",
""
],
[
"Bao",
"Hujun",
""
],
[
"Zhang",
"Guofeng",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.979325 |
2012.11779
|
Philip Edwards PhD
|
P.J. "Eddie'' Edwards, Dimitris Psychogyios, Stefanie Speidel, Lena
Maier-Hein and Danail Stoyanov
|
SERV-CT: A disparity dataset from CT for validation of endoscopic 3D
reconstruction
|
Submitted to Medical Image Analysis. 14 Figures, 17 pages
| null |
10.1016/j.media.2021.102302
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In computer vision, reference datasets have been highly successful in
promoting algorithmic development in stereo reconstruction. Surgical scenes
give rise to specific problems, including the lack of clear corner features,
highly specular surfaces and the presence of blood and smoke. Publicly
available datasets have been produced using CT and either phantom images or
biological tissue samples covering a relatively small region of the endoscope
field-of-view. We present a stereo-endoscopic reconstruction validation dataset
based on CT (SERV-CT). Two {\it ex vivo} small porcine full torso cadavers were
placed within the view of the endoscope with both the endoscope and target
anatomy visible in the CT scan. Orientation of the endoscope was manually
aligned to the stereoscopic view. Reference disparities and occlusions were
calculated for 8 stereo pairs from each sample. For the second sample an RGB
surface was acquired to aid alignment of smooth, featureless surfaces. Repeated
manual alignments showed an RMS disparity accuracy of ~2 pixels and a depth
accuracy of ~2mm. The reference dataset includes endoscope image pairs with
corresponding calibration, disparities, depths and occlusions covering the
majority of the endoscopic image and a range of tissue types. Smooth specular
surfaces and images with significant variation of depth are included. We
assessed the performance of various stereo algorithms from online available
repositories. There is a significant variation between algorithms, highlighting
some of the challenges of surgical endoscopic images. The SERV-CT dataset
provides an easy-to-use stereoscopic validation for surgical applications with
smooth reference disparities and depths with coverage over the majority of the
endoscopic images. This complements existing resources well and, we hope, will
aid the development of surgical endoscopic anatomical reconstruction
algorithms.
|
[
{
"version": "v1",
"created": "Tue, 22 Dec 2020 01:28:30 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Edwards",
"P. J. \"Eddie''",
""
],
[
"Psychogyios",
"Dimitris",
""
],
[
"Speidel",
"Stefanie",
""
],
[
"Maier-Hein",
"Lena",
""
],
[
"Stoyanov",
"Danail",
""
]
] |
new_dataset
| 0.999788 |
2105.02359
|
Borja Bovcon
|
Borja Bovcon, Jon Muhovi\v{c}, Du\v{s}ko Vranac, Dean Mozeti\v{c},
Janez Per\v{s}, Matej Kristan
|
MODS -- A USV-oriented object detection and obstacle segmentation
benchmark
|
16 pages, 15 figures. The dataset, as well as the proposed evaluation
protocols, are published on our website: https://www.vicos.si/resources/
| null |
10.1109/TITS.2021.3124192
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Small-sized unmanned surface vehicles (USV) are coastal water devices with a
broad range of applications such as environmental control and surveillance. A
crucial capability for autonomous operation is obstacle detection for timely
reaction and collision avoidance, which has been recently explored in the
context of camera-based visual scene interpretation. Owing to curated datasets,
substantial advances in scene interpretation have been made in a related field
of unmanned ground vehicles. However, the current maritime datasets do not
adequately capture the complexity of real-world USV scenes and the evaluation
protocols are not standardised, which makes cross-paper comparison of different
methods difficult and hinders progress. To address these issues, we
introduce a new obstacle detection benchmark MODS, which considers two major
perception tasks: maritime object detection and the more general maritime
obstacle segmentation. We present a new diverse maritime evaluation dataset
containing approximately 81k stereo images synchronized with an on-board IMU,
with over 60k objects annotated. We propose a new obstacle segmentation
performance evaluation protocol that reflects the detection accuracy in a way
meaningful for practical USV navigation. Nineteen recent state-of-the-art
object detection and obstacle segmentation methods are evaluated using the
proposed protocol, creating a benchmark to facilitate development of the field.
The proposed dataset, as well as evaluation routines, are made publicly
available at vicos.si/resources.
|
[
{
"version": "v1",
"created": "Wed, 5 May 2021 22:40:27 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Feb 2022 15:00:09 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Bovcon",
"Borja",
""
],
[
"Muhovič",
"Jon",
""
],
[
"Vranac",
"Duško",
""
],
[
"Mozetič",
"Dean",
""
],
[
"Perš",
"Janez",
""
],
[
"Kristan",
"Matej",
""
]
] |
new_dataset
| 0.999546 |
2112.06761
|
Christine Eilers
|
John Zielke, Christine Eilers, Benjamin Busam, Wolfgang Weber, Nassir
Navab and Thomas Wendler
|
RSV: Robotic Sonography for Thyroid Volumetry
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
|
IEEE Robotics and Automation Letters, vol. 7, no. 2, pp.
3342-3348, April 2022
|
10.1109/LRA.2022.3146542
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In nuclear medicine, radioiodine therapy is prescribed to treat diseases like
hyperthyroidism. The calculation of the prescribed dose depends, amongst other
factors, on the thyroid volume. This is currently estimated using conventional
2D ultrasound imaging. However, this modality is inherently user-dependent,
resulting in high variability in volume estimations. To increase
reproducibility and consistency, we uniquely combine a neural network-based
segmentation with an automatic robotic ultrasound scanning for thyroid
volumetry. The robotic acquisition is achieved by using a 6 DOF robotic arm
with an attached ultrasound probe. Its movement is based on an online
segmentation of each thyroid lobe and the appearance of the US image. During
post-processing, the US images are segmented to obtain a volume estimation. In
an ablation study, we demonstrated that, in terms of volumetric accuracy, the
motion-guidance algorithms for the robot arm movement are superior to a naive
linear motion executed by the robot. In a user study on a
phantom, we compared conventional 2D ultrasound measurements with our robotic
system. The mean volume measurement error of ultrasound expert users could be
significantly decreased from 20.85+/-16.10% to only 8.23+/-3.10% compared to
the ground truth. This tendency was even more pronounced for non-expert users,
where the mean error improvement with the robotic system was measured to be as
high as 85%, which clearly shows the advantages of the robotic support.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 16:13:49 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Zielke",
"John",
""
],
[
"Eilers",
"Christine",
""
],
[
"Busam",
"Benjamin",
""
],
[
"Weber",
"Wolfgang",
""
],
[
"Navab",
"Nassir",
""
],
[
"Wendler",
"Thomas",
""
]
] |
new_dataset
| 0.999412 |
2202.03457
|
Nisansa de Silva
|
Nisansa de Silva
|
Selecting Seed Words for Wordle using Character Statistics
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wordle, a word-guessing game, rose to global popularity in January 2022. The
goal of the game is to guess a five-letter English word within six
tries. Each try provides the player with hints by means of colour changing
tiles which inform whether or not a given character is part of the solution as
well as, in cases where it is part of the solution, whether or not it is in the
correct placement. Numerous attempts have been made to find the best starting
word and best strategy to solve the daily Wordle. This study uses character
statistics of five-letter words to determine the best three starting words.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 19:01:19 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Feb 2022 03:40:40 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"de Silva",
"Nisansa",
""
]
] |
new_dataset
| 0.980519 |
2202.03884
|
Petra Poklukar
|
Ciwan Ceylan, Petra Poklukar, Hanna Hultin, Alexander Kravchenko,
Anastasia Varava, Danica Kragic
|
GraphDCA -- a Framework for Node Distribution Comparison in Real and
Synthetic Graphs
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We argue that when comparing two graphs, the distribution of node structural
features is more informative than global graph statistics which are often used
in practice, especially to evaluate graph generative models. Thus, we present
GraphDCA - a framework for evaluating similarity between graphs based on the
alignment of their respective node representation sets. The sets are compared
using a recently proposed method for comparing representation spaces, called
Delaunay Component Analysis (DCA), which we extend to graph data. To evaluate
our framework, we generate a benchmark dataset of graphs exhibiting different
structural patterns and show, using three node structure feature extractors,
that GraphDCA recognizes graphs with both similar and dissimilar local
structure. We then apply our framework to evaluate three publicly available
real-world graph datasets and demonstrate, using gradual edge perturbations,
that GraphDCA satisfactorily captures gradually decreasing similarity, unlike
global statistics. Finally, we use GraphDCA to evaluate two state-of-the-art
graph generative models, NetGAN and CELL, and conclude that further
improvements are needed for these models to adequately reproduce local
structural features.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 14:19:19 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Feb 2022 07:50:07 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Ceylan",
"Ciwan",
""
],
[
"Poklukar",
"Petra",
""
],
[
"Hultin",
"Hanna",
""
],
[
"Kravchenko",
"Alexander",
""
],
[
"Varava",
"Anastasia",
""
],
[
"Kragic",
"Danica",
""
]
] |
new_dataset
| 0.992955 |
2202.04112
|
Yue Song
|
Yue Song, Hao Tang, Nicu Sebe, Wei Wang
|
Disentangle Saliency Detection into Cascaded Detail Modeling and Body
Filling
|
Accepted by TOMM; the first two authors contribute equally to this
work
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Salient object detection has been long studied to identify the most visually
attractive objects in images/videos. Recently, a growing number of approaches
have been proposed, all of which rely on contour/edge information to improve
detection performance. The edge labels are either put into the loss directly or
used as extra supervision. The edge and body can also be learned separately and
then fused afterward. Both methods either lead to high prediction errors near
the edge or cannot be trained in an end-to-end manner. Another problem is that
existing methods may fail to detect objects of various sizes due to the lack of
efficient and effective feature fusion mechanisms. In this work, we propose to
decompose the saliency detection task into two cascaded sub-tasks, \emph{i.e.},
detail modeling and body filling. Specifically, the detail modeling focuses on
capturing the object edges by supervision of explicitly decomposed detail label
that consists of the pixels that are nested on the edge and near the edge. Then
the body filling learns the body part which will be filled into the detail map
to generate a more accurate saliency map. To effectively fuse the features and
handle objects at different scales, we have also proposed two novel multi-scale
detail attention and body attention blocks for precise detail and body
modeling. Experimental results show that our method achieves state-of-the-art
performance on six public datasets.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 19:33:02 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Song",
"Yue",
""
],
[
"Tang",
"Hao",
""
],
[
"Sebe",
"Nicu",
""
],
[
"Wang",
"Wei",
""
]
] |
new_dataset
| 0.983227 |
2202.04121
|
Muhammad Asam
|
Muhammad Asam, Saddam Hussain Khan, Tauseef Jamal, Asifullah Khan
|
IoT Malware Detection Architecture using a Novel Channel Boosted and
Squeezed CNN
| null | null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interaction between devices, people, and the Internet has given birth to a
new digital communication model, the Internet of Things (IoT). The seamless
network of these smart devices is the core of this IoT model. However, on the
other hand, integrating smart devices to constitute a network introduces many
security challenges. These connected devices have created a security blind
spot, where cybercriminals can easily launch an attack to compromise the
devices using malware proliferation techniques. Therefore, malware detection is
considered a lifeline for the survival of IoT devices against cyberattacks.
This study proposes a novel IoT Malware Detection Architecture (iMDA) using
squeezing and boosting dilated convolutional neural network (CNN). The proposed
architecture exploits the concepts of edge and smoothing, multi-path dilated
convolutional operations, channel squeezing, and boosting in CNN. Edge and
smoothing operations are employed with split-transform-merge (STM) blocks to
extract local structure and minor contrast variation in the malware images. STM
blocks performed multi-path dilated convolutional operations, which helped
recognize the global structure of malware patterns. Additionally, channel
squeezing and merging helped to get the prominent reduced and diverse feature
maps, respectively. Channel squeezing and boosting are applied with the help of
STM block at the initial, middle and final levels to capture the texture
variation along with the depth for the sake of malware pattern hunting. The
proposed architecture has shown substantial performance compared with the
customized CNN models. The proposed iMDA has achieved Accuracy: 97.93%,
F1-Score: 0.9394, Precision: 0.9864, MCC: 0.8796, Recall: 0.8873, AUC-PR:
0.9689 and AUC-ROC: 0.9938.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 19:55:35 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Asam",
"Muhammad",
""
],
[
"Khan",
"Saddam Hussain",
""
],
[
"Jamal",
"Tauseef",
""
],
[
"Khan",
"Asifullah",
""
]
] |
new_dataset
| 0.997934 |
2202.04192
|
Zhe Hou
|
Wilayat Khan, Zhe Hou, David Sanan, Jamel Nebhen, Yang Liu, Alwen Tiu
|
An Executable Formal Model of the VHDL in Isabelle/HOL
| null | null | null | null |
cs.CL cs.FL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
In the hardware design process, hardware components are usually described in
a hardware description language. Most of the hardware description languages,
such as Verilog and VHDL, do not have a mathematical foundation and hence are not
fit for formal reasoning about the design. To enable formal reasoning in one of
the most commonly used description languages, VHDL, we define a formal model of
the VHDL language in Isabelle/HOL. Our model targets the functional part of
VHDL designs used in industry, specifically the design of the LEON3 processor's
integer unit. We cover a wide range of features in the VHDL language that are
usually not modelled in the literature and define a novel operational semantics
for it. Furthermore, our model can be exported to OCaml code for execution,
turning the formal model into a VHDL simulator. We have tested our simulator
against simple designs used in the literature, as well as the div32 module in
the LEON3 design. The Isabelle/HOL code is publicly available:
https://zhehou.github.io/apps/VHDLModel.zip
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 23:10:25 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Khan",
"Wilayat",
""
],
[
"Hou",
"Zhe",
""
],
[
"Sanan",
"David",
""
],
[
"Nebhen",
"Jamel",
""
],
[
"Liu",
"Yang",
""
],
[
"Tiu",
"Alwen",
""
]
] |
new_dataset
| 0.998766 |
2202.04231
|
Stephanie Aelmore
|
Stephanie Aelmore, Richard C. Ordonez, Shibin Parameswaran, Justin
Mauger
|
Real-Time Event-Based Tracking and Detection for Maritime Environments
|
6 pages, 7 figures. Accepted by IEEE AIPR 2021 (Oral)
| null | null | null |
cs.CV cs.NE eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras are ideal for object tracking applications due to their ability
to capture fast-moving objects while mitigating latency and data redundancy.
Existing event-based clustering and feature tracking approaches for
surveillance and object detection work well in the majority of cases, but fall
short in a maritime environment. Our application of maritime vessel detection
and tracking requires a process that can identify features and output a
confidence score representing the likelihood that the feature was produced by a
vessel, which may trigger a subsequent alert or activate a classification
system. However, the maritime environment presents unique challenges such as
the tendency of waves to produce the majority of events, demanding the majority
of computational processing and producing false positive detections. By
filtering redundant events and analyzing the movement of each event cluster, we
can identify and track vessels while ignoring shorter-lived and erratic
features such as those produced by waves.
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 02:30:27 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Aelmore",
"Stephanie",
""
],
[
"Ordonez",
"Richard C.",
""
],
[
"Parameswaran",
"Shibin",
""
],
[
"Mauger",
"Justin",
""
]
] |
new_dataset
| 0.994726 |
2202.04473
|
Yonatan Vaizman
|
Yonatan Vaizman, Hongcheng Wang
|
MapiFi: Using Wi-Fi Signals to Map Home Devices
|
6 pages, 4 figures, published in SCTE Technical Journal, patent
pending at US Patent and Trademark Office
|
SCTE Technical Journal, vol 1, no 3, pp 106-118, 2021
| null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Imagine a map of your home with all of your connected devices (computers,
TVs, voice control devices, printers, security cameras, etc.), in their
location. You could then easily group devices into user-profiles, monitor Wi-Fi
quality and activity in different areas of your home, and even locate a lost
tablet in your home. MapiFi is a method to generate that map of the devices in
a home. The first part of MapiFi involves the user (either a technician or the
resident) walking around the home with a mobile device that listens to Wi-Fi
radio channels. The mobile device detects Wi-Fi packets that come from all of
the home's devices that connect to the gateway and measures their signal
strengths (ignoring the content of the packets). The second part is an
algorithm that uses all the signal-strength measurements to estimate the
locations of all the devices in the home. Then, MapiFi visualizes the home's
space as a coordinate system with devices marked as points in this space. A
patent has been filed based on this technology. This paper was published in
SCTE Technical Journal (see published paper at
https://wagtail-prod-storage.s3.amazonaws.com/documents/SCTE_Technical_Journal_V1N3.pdf).
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 14:05:15 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Vaizman",
"Yonatan",
""
],
[
"Wang",
"Hongcheng",
""
]
] |
new_dataset
| 0.999752 |
2202.04631
|
Pol Van Aubel
|
Pol Van Aubel (1) and Erik Poll (1) ((1) Digital Security group,
Institute for Computing and Information Sciences, Radboud University)
|
Security of EV-Charging Protocols
|
19 pages, 1 figure
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of electric vehicle charging involves a complex combination of
actors, devices, networks, and protocols. These protocols are being developed
without a clear focus on security. In this paper, we give an overview of the
main roles and protocols in use in the Netherlands. We describe a clear
attacker model and security requirements, show that in light of this many of
the protocols have security issues, and provide suggestions on how to address
these issues. The most important conclusion is the need for end-to-end security
for data in transit and long-term authenticity for data at rest. In addition,
we highlight the need for improved authentication of the EV driver, e.g. by
using banking cards. For the communication links we advise mandatory use of
TLS, standardization of TLS options and configurations, and improved
authentication using TLS client certificates.
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 18:49:11 GMT"
}
] | 2022-02-10T00:00:00 |
[
[
"Van Aubel",
"Pol",
""
],
[
"Poll",
"Erik",
""
]
] |
new_dataset
| 0.995079 |
2010.01387
|
Balaji Arun
|
Balaji Arun and Binoy Ravindran
|
DuoBFT: Resilience vs. Performance Trade-off in Byzantine Fault
Tolerance
|
15 pages
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents DuoBFT, a Byzantine fault-tolerant protocol that uses
trusted components to provide commit decisions in the Hybrid fault model in
addition to commit decisions in the BFT model. By doing so, it enables the
clients to choose the response fault model for its commands. Internally, DuoBFT
commits each client command under both the hybrid and Byzantine models, but
since hybrid commits take fewer communication steps and use smaller quorums
than BFT commits, clients can benefit from the low-latency commits in the
hybrid model.
DuoBFT uses a common view-change protocol to handle both fault models.
To achieve this, we enable a notion called Flexible Quorums in the hybrid fault
model by revisiting the quorum intersection requirements in hybrid protocols.
The flexible quorum technique enables having a hybrid view change quorum that
is of the same size as a BFT view-change quorum. This paves a path for
efficiently combining both the fault models within a single unified protocol.
Our evaluation on a wide-area deployment reveals that DuoBFT can provide hybrid
commits with 30% lower latency than existing protocols without sacrificing
throughput. In absolute terms, DuoBFT provides sub-200-millisecond latency in a
geographically replicated deployment.
|
[
{
"version": "v1",
"created": "Sat, 3 Oct 2020 16:46:43 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Feb 2022 14:02:29 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Arun",
"Balaji",
""
],
[
"Ravindran",
"Binoy",
""
]
] |
new_dataset
| 0.996515 |
2102.00053
|
Aleksander Czechowski
|
Aleksander Czechowski and Georgios Piliouras
|
Poincar\'{e}-Bendixson Limit Sets in Multi-Agent Learning
| null | null | null | null |
cs.GT cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key challenge of evolutionary game theory and multi-agent learning is to
characterize the limit behavior of game dynamics. Whereas convergence is often
a property of learning algorithms in games satisfying a particular reward
structure (e.g., zero-sum games), even basic learning models, such as the
replicator dynamics, are not guaranteed to converge for general payoffs. Worse
yet, chaotic behavior is possible even in rather simple games, such as variants
of the Rock-Paper-Scissors game. Although chaotic behavior in learning dynamics
can be precluded by the celebrated Poincar\'e-Bendixson theorem, it is only
applicable to low-dimensional settings. Are there other characteristics of a
game that can force regularity in the limit sets of learning? We show that
behavior consistent with the Poincar\'e-Bendixson theorem (limit cycles, but no
chaotic attractor) can follow purely from the topological structure of the
interaction graph, even for high-dimensional settings with an arbitrary number
of players and arbitrary payoff matrices. We prove our result for a wide class
of follow-the-regularized leader (FoReL) dynamics, which generalize replicator
dynamics, for binary games characterized by interaction graphs where the payoffs
of each player are only affected by one other player (i.e., interaction graphs
of indegree one). Since chaos occurs already in games with only two players and
three strategies, this class of non-chaotic games may be considered maximal.
Moreover, we provide simple conditions under which such behavior translates
into efficiency guarantees, implying that FoReL learning achieves time-averaged
sum of payoffs at least as good as that of a Nash equilibrium, thereby
connecting the topology of the dynamics to social-welfare analysis.
|
[
{
"version": "v1",
"created": "Fri, 29 Jan 2021 20:32:25 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Feb 2022 17:50:48 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Czechowski",
"Aleksander",
""
],
[
"Piliouras",
"Georgios",
""
]
] |
new_dataset
| 0.971974 |
2103.05056
|
Daniele Cattaneo
|
Daniele Cattaneo, Matteo Vaghi, Abhinav Valada
|
LCDNet: Deep Loop Closure Detection and Point Cloud Registration for
LiDAR SLAM
|
Accepted to IEEE Transactions on Robotics (T-RO), 2022
| null | null | null |
cs.RO cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Loop closure detection is an essential component of Simultaneous Localization
and Mapping (SLAM) systems, which reduces the drift accumulated over time. Over
the years, several deep learning approaches have been proposed to address this
task; however, their performance has been subpar compared to handcrafted
techniques, especially while dealing with reverse loops. In this paper, we
introduce the novel LCDNet that effectively detects loop closures in LiDAR
point clouds by simultaneously identifying previously visited places and
estimating the 6-DoF relative transformation between the current scan and the
map. LCDNet is composed of a shared encoder, a place recognition head that
extracts global descriptors, and a relative pose head that estimates the
transformation between two point clouds. We introduce a novel relative pose
head based on the unbalanced optimal transport theory that we implement in a
differentiable manner to allow for end-to-end training. Extensive evaluations
of LCDNet on multiple real-world autonomous driving datasets show that our
approach outperforms state-of-the-art loop closure detection and point cloud
registration techniques by a large margin, especially while dealing with
reverse loops. Moreover, we integrate our proposed loop closure detection
approach into a LiDAR SLAM library to provide a complete mapping system and
demonstrate the generalization ability using a different sensor setup in an
unseen city.
|
[
{
"version": "v1",
"created": "Mon, 8 Mar 2021 20:19:37 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Jun 2021 16:27:51 GMT"
},
{
"version": "v3",
"created": "Fri, 3 Dec 2021 12:36:34 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Feb 2022 11:16:42 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Cattaneo",
"Daniele",
""
],
[
"Vaghi",
"Matteo",
""
],
[
"Valada",
"Abhinav",
""
]
] |
new_dataset
| 0.984252 |
2104.09161
|
Meng Hua
|
Meng Hua, Qingqing Wu, Luxi Yang, Robert Schober, H. Vincent Poor
|
A Novel Wireless Communication Paradigm for Intelligent Reflecting
Surface Based Symbiotic Radio Systems
|
This manuscript has been submitted to IEEE journal for possible
publication
| null |
10.1109/TSP.2021.3135603
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper investigates a novel intelligent reflecting surface (IRS)-based
symbiotic radio (SR) system architecture consisting of a transmitter, an IRS,
and an information receiver (IR). The primary transmitter communicates with the
IR and at the same time assists the IRS in forwarding information to the IR.
Based on the IRS's symbol period, we distinguish two scenarios, namely,
commensal SR (CSR) and parasitic SR (PSR), where two different techniques for
decoding the IRS signals at the IR are employed. We formulate bit error rate
(BER) minimization problems for both scenarios by jointly optimizing the active
beamformer at the base station and the phase shifts at the IRS, subject to a
minimum primary rate requirement. Specifically, for the CSR scenario, a
penalty-based algorithm is proposed to obtain a high-quality solution, where
semi-closed-form solutions for the active beamformer and the IRS phase shifts
are derived based on Lagrange duality and Majorization-Minimization methods,
respectively. For the PSR scenario, we apply a bisection search-based method,
successive convex approximation, and difference of convex programming to
develop a computationally efficient algorithm, which converges to a locally
optimal solution. Simulation results demonstrate the effectiveness of the
proposed algorithms and show that the proposed SR techniques are able to
achieve a lower BER than benchmark schemes.
|
[
{
"version": "v1",
"created": "Mon, 19 Apr 2021 09:39:52 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Hua",
"Meng",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Yang",
"Luxi",
""
],
[
"Schober",
"Robert",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.990623 |
2107.09896
|
Milad Tatar Mamaghani
|
Milad Tatar Mamaghani, Yi Hong
|
Terahertz Meets Untrusted UAV-Relaying: Minimum Secrecy Energy
Efficiency Maximization via Trajectory and Communication Co-design
|
16 pages, 10 figures, Accepted by (to appear in) the IEEE
Transactions on Vehicular Technology
| null |
10.1109/TVT.2022.3150011
| null |
cs.IT cs.CE cs.NI eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicles (UAVs) and Terahertz (THz) technology are envisioned
to play paramount roles in next-generation wireless communications. In this
paper, we present a novel secure UAV-assisted mobile relaying system operating
at THz bands for data acquisition from multiple ground user equipments (UEs)
towards a destination. We assume that the UAV-mounted relay may act, besides
providing relaying services, as a potential eavesdropper called the untrusted
UAV-relay (UUR). To safeguard end-to-end communications, we present a secure
two-phase transmission strategy with cooperative jamming. Then, we devise an
optimization framework in terms of a new measure $-$ secrecy energy efficiency
(SEE), defined as the ratio of achievable average secrecy rate to average
system power consumption, which enables us to obtain the best possible security
level while taking UUR's inherent flight power limitation into account. For the
sake of quality of service fairness amongst all the UEs, we aim to maximize the
minimum SEE (MSEE) performance via the joint design of key system parameters,
including UUR's trajectory and velocity, communication scheduling, and network
power allocation. Since the formulated problem is a mixed-integer nonconvex
optimization and computationally intractable, we decouple it into four
subproblems and propose alternative algorithms to solve it efficiently via
greedy/sequential block successive convex approximation and non-linear
fractional programming techniques. Numerical results demonstrate significant
MSEE performance improvement of our designs compared to other known benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 21 Jul 2021 06:25:31 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Sep 2021 01:59:42 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jan 2022 11:33:41 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Feb 2022 02:46:58 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Mamaghani",
"Milad Tatar",
""
],
[
"Hong",
"Yi",
""
]
] |
new_dataset
| 0.998159 |
2109.03702
|
Ziyue Zhang
|
Ziyue Zhang, Shuai Jiang, Congzhentao Huang, Richard YiDa Xu
|
Unsupervised clothing change adaptive person ReID
|
9 pages
| null |
10.1109/LSP.2021.3134195
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clothing changes and lack of data labels are both crucial challenges in
person ReID. For the former challenge, people may occur multiple times at
different locations wearing different clothing. However, most of the current
person ReID research works focus on the benchmarks in which a person's clothing
is kept the same all the time. For the latter challenge, some researchers try to
make the model transfer information from a labeled source dataset to an
unlabeled dataset, whereas purely unsupervised training is less common. In this
paper, we aim to solve both problems at the same time. We design a novel
unsupervised model, Sync-Person-Cloud ReID, to solve the unsupervised clothing
change person ReID problem. We develop a purely unsupervised clothing change
person ReID pipeline with a person sync augmentation operation and a
same-person feature restriction. The person sync augmentation supplies
additional same-person resources, which can be used as partly supervised input
through the same-person feature restriction. Extensive experiments on clothing
change ReID datasets show that our method outperforms existing approaches.
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 15:08:10 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Sep 2021 14:42:00 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Zhang",
"Ziyue",
""
],
[
"Jiang",
"Shuai",
""
],
[
"Huang",
"Congzhentao",
""
],
[
"Xu",
"Richard YiDa",
""
]
] |
new_dataset
| 0.956882 |
2110.13136
|
Dan Hendrycks
|
Dan Hendrycks, Mantas Mazeika, Andy Zou, Sahil Patel, Christine Zhu,
Jesus Navarro, Dawn Song, Bo Li, Jacob Steinhardt
|
What Would Jiminy Cricket Do? Towards Agents That Behave Morally
|
NeurIPS 2021. Environments available here
https://github.com/hendrycks/jiminy-cricket
| null | null | null |
cs.LG cs.AI cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When making everyday decisions, people are guided by their conscience, an
internal sense of right and wrong. By contrast, artificial agents are currently
not endowed with a moral sense. As a consequence, they may learn to behave
immorally when trained on environments that ignore moral concerns, such as
violent video games. With the advent of generally capable agents that pretrain
on many environments, it will become necessary to mitigate inherited biases
from environments that teach immoral behavior. To facilitate the development of
agents that avoid causing wanton harm, we introduce Jiminy Cricket, an
environment suite of 25 text-based adventure games with thousands of diverse,
morally salient scenarios. By annotating every possible game state, the Jiminy
Cricket environments robustly evaluate whether agents can act morally while
maximizing reward. Using models with commonsense moral knowledge, we create an
elementary artificial conscience that assesses and guides agents. In extensive
experiments, we find that the artificial conscience approach can steer agents
towards moral behavior without sacrificing performance.
|
[
{
"version": "v1",
"created": "Mon, 25 Oct 2021 17:59:31 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Feb 2022 01:59:37 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Hendrycks",
"Dan",
""
],
[
"Mazeika",
"Mantas",
""
],
[
"Zou",
"Andy",
""
],
[
"Patel",
"Sahil",
""
],
[
"Zhu",
"Christine",
""
],
[
"Navarro",
"Jesus",
""
],
[
"Song",
"Dawn",
""
],
[
"Li",
"Bo",
""
],
[
"Steinhardt",
"Jacob",
""
]
] |
new_dataset
| 0.999432 |
2202.03482
|
Frederik Pahde
|
Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek,
Sebastian Lapuschkin
|
PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust
Model Debugging
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art machine learning models are commonly (pre-)trained on large
benchmark datasets. These often contain biases, artifacts, or errors that have
remained unnoticed in the data collection process and therefore fail in
representing the real world truthfully. This can cause models trained on these
datasets to learn undesired behavior based upon spurious correlations, e.g.,
the existence of a copyright tag in an image. Concept Activation Vectors (CAV)
have been proposed as a tool to model known concepts in latent space and have
been used for concept sensitivity testing and model correction. Specifically,
class artifact compensation (ClArC) corrects models using CAVs to represent
data artifacts in feature space linearly. Modeling CAVs with filters of linear
models, however, lets the noise portion of the data exert a significant
influence, since recent work has shown that linear model filters are unsuitable
for finding the signal direction in the input; this can be avoided by instead
using patterns. In this paper we propose Pattern Concept Activation Vectors (PCAV)
for noise-robust concept representations in latent space. We demonstrate that
pattern-based artifact modeling has beneficial effects on the application of
CAVs as a means to remove influence of confounding features from models via the
ClArC framework.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 19:40:20 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Pahde",
"Frederik",
""
],
[
"Weber",
"Leander",
""
],
[
"Anders",
"Christopher J.",
""
],
[
"Samek",
"Wojciech",
""
],
[
"Lapuschkin",
"Sebastian",
""
]
] |
new_dataset
| 0.965047 |
2202.03501
|
Zhou Huang
|
Zhou Huang, Tian-Zhu Xiang, Huai-Xin Chen, Hang Dai
|
Scribble-based Boundary-aware Network for Weakly Supervised Salient
Object Detection in Remote Sensing Images
|
33 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing CNN-based salient object detection (SOD) heavily depends on
large-scale pixel-level annotations, which are labor-intensive, time-consuming,
and expensive. By contrast, the sparse annotations become appealing to the
salient object detection community. However, few efforts are devoted to
learning salient object detection from sparse annotations, especially in the
remote sensing field. In addition, the sparse annotation usually contains
scanty information, which makes it challenging to train a well-performing
model, resulting in its performance largely lagging behind the fully-supervised
models. Although some SOD methods adopt some prior cues to improve the
detection performance, they usually lack targeted discrimination of object
boundaries and thus provide saliency maps with poor boundary localization. To
this end, in this paper, we propose a novel weakly-supervised salient object
detection framework to predict the saliency of remote sensing images from
sparse scribble annotations. To implement it, we first construct the
scribble-based remote sensing saliency dataset by relabelling an existing
large-scale SOD dataset with scribbles, namely the S-EOR dataset. After that, we
present a novel scribble-based boundary-aware network (SBA-Net) for remote
sensing salient object detection. Specifically, we design a boundary-aware
module (BAM) to explore the object boundary semantics, which is explicitly
supervised by the high-confidence object boundary (pseudo) labels generated by
the boundary label generation (BLG) module, forcing the model to learn features
that highlight the object structure and thus boosting the boundary localization
of objects. Then, the boundary semantics are integrated with high-level
features to guide the salient object detection under the supervision of
scribble labels.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 20:32:21 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Huang",
"Zhou",
""
],
[
"Xiang",
"Tian-Zhu",
""
],
[
"Chen",
"Huai-Xin",
""
],
[
"Dai",
"Hang",
""
]
] |
new_dataset
| 0.992527 |
2202.03610
|
Nariman Torkzaban
|
Nariman Torkzaban, Mohammad A. (Amir) Khojastepour, and John S. Baras
|
Codebook Design for Composite Beamforming in Next-generation mmWave
Systems
|
Accepted at IEEE WCNC 2022
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In pursuance of the unused spectrum in higher frequencies, millimeter wave
(mmWave) bands have a pivotal role. However, the high path-loss and poor
scattering associated with mmWave communications highlight the necessity of
employing effective beamforming techniques. In order to efficiently search for
the beam to serve a user and to jointly serve multiple users it is often
required to use a composite beam which consists of multiple disjoint lobes. A
composite beam covers multiple desired angular coverage intervals (ACIs) and
ideally has maximum and uniform gain (smoothness) within each desired ACI,
negligible gain (leakage) outside the desired ACIs, and sharp edges. We propose
an algorithm for designing such an ideal composite codebook by providing an
analytical closed-form solution with low computational complexity. There is a
fundamental trade-off between the gain, leakage and smoothness of the beams.
Our design allows to achieve different values in such trade-off based on
changing the design parameters. We highlight the shortcomings of the uniform
linear arrays (ULAs) in building arbitrary composite beams. Consequently, we
use a recently introduced twin-ULA (TULA) antenna structure to effectively
resolve these inefficiencies. Numerical results are used to validate the
theoretical findings.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 02:49:51 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Torkzaban",
"Nariman",
"",
"Amir"
],
[
"A.",
"Mohamamd",
"",
"Amir"
],
[
"Khojastepour",
"",
""
],
[
"Baras",
"John S.",
""
]
] |
new_dataset
| 0.996682 |
2202.03677
|
Jiwei Nie
|
Nie Jiwei and Feng Joe-Mei and Xue Dingyu and Pan Feng and Liu Wei and
Hu Jun and Cheng Shuai
|
A Novel Image Descriptor with Aggregated Semantic Skeleton
Representation for Long-term Visual Place Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a Simultaneous Localization and Mapping (SLAM) system, a loop-closure can
eliminate accumulated errors, which is accomplished by Visual Place Recognition
(VPR), a task that retrieves the current scene from a set of pre-stored
sequential images through matching specific scene-descriptors. In urban scenes,
the appearance variation caused by seasons and illumination has brought great
challenges to the robustness of scene descriptors. Semantic segmentation images
can not only deliver the shape information of objects but also their categories
and spatial relations that will not be affected by the appearance variation of
the scene. Inspired by the Vector of Locally Aggregated Descriptors (VLAD), in
this paper, we propose a novel image descriptor with aggregated semantic
skeleton representation (SSR), dubbed SSR-VLAD, for the VPR under drastic
appearance-variation of environments. The SSR-VLAD of one image aggregates the
semantic skeleton features of each category and encodes the spatial-temporal
distribution information of the image semantic information. We conduct a series
of experiments on three public datasets of challenging urban scenes. Compared
with four state-of-the-art VPR methods- CoHOG, NetVLAD, LOST-X, and
Region-VLAD, VPR by matching SSR-VLAD outperforms those methods and maintains
competitive real-time performance at the same time.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 06:49:38 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Jiwei",
"Nie",
""
],
[
"Joe-Mei",
"Feng",
""
],
[
"Dingyu",
"Xue",
""
],
[
"Feng",
"Pan",
""
],
[
"Wei",
"Liu",
""
],
[
"Jun",
"Hu",
""
],
[
"Shuai",
"Cheng",
""
]
] |
new_dataset
| 0.99197 |
2202.03687
|
Cyril Onwubiko PhD
|
Cyril Onwubiko
|
CyberOps: Situational Awareness in Cybersecurity Operations
|
26 pages, 3 figures. arXiv admin note: text overlap with
arXiv:2202.02537
|
Intl. Journal on Cyber Situational Awareness, Vol. 5, No. 1, 2020
|
10.22619/IJCSA.2020.100134
| null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Cybersecurity operations, CyberOps, is the use and application of
cybersecurity capabilities to a domain, department, organisation or nation. It
is fundamentally about protecting digital investments, contributing to national
economic wellbeing by providing a safe, secure and conducive environment to
conduct business, and protecting critical national infrastructures and
citizens' welfare. In this paper, we investigate operational factors that
influence situational awareness of CyberOps, specifically, the features that
deal with understanding and comprehension of operational and human factors
aspects and that help with insights on human operator decision making such as
cognition, teamwork, knowledge, skills and abilities. The operational factors
discussed in this paper range from tools, techniques, integration, architecture
to automation, cognition, people, policy, process and procedures.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 07:14:56 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Onwubiko",
"Cyril",
""
]
] |
new_dataset
| 0.998939 |
2202.03784
|
Gaetan Bahl
|
Gaetan Bahl, Lionel Daniel, Florent Lafarge
|
SCR: Smooth Contour Regression with Geometric Priors
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
While object detection methods traditionally make use of pixel-level masks or
bounding boxes, alternative representations such as polygons or active contours
have recently emerged. Among them, methods based on the regression of Fourier
or Chebyshev coefficients have shown high potential on freeform objects. By
defining object shapes as polar functions, they are however limited to
star-shaped domains. We address this issue with SCR: a method that captures
resolution-free object contours as complex periodic functions. The method
offers a good compromise between accuracy and compactness thanks to the design
of efficient geometric shape priors. We benchmark SCR on the popular COCO 2017
instance segmentation dataset, and show its competitiveness against existing
algorithms in the field. In addition, we design a compact version of our
network, which we benchmark on embedded hardware with a wide range of power
targets, achieving up to real-time performance.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 11:07:51 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Bahl",
"Gaetan",
""
],
[
"Daniel",
"Lionel",
""
],
[
"Lafarge",
"Florent",
""
]
] |
new_dataset
| 0.980352 |
2202.03785
|
Simone Mentasti
|
Pragyan Dahal, Simone Mentasti, Stefano Arrigoni, Francesco Braghin,
Matteo Matteucci, Federico Cheli
|
Extended Object Tracking in Curvilinear Road Coordinates for Autonomous
Driving
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the literature, Extended Object Tracking (EOT) algorithms developed for
autonomous driving predominantly provide obstacle state estimation in
Cartesian coordinates in the Vehicle Reference Frame. However, in many
scenarios, state representation in road-aligned curvilinear coordinates is
preferred when implementing autonomous driving subsystems like cruise control,
lane-keeping assist, platooning, etc. This paper proposes a Gaussian Mixture
Probability Hypothesis Density~(GM-PHD) filter with an Unscented Kalman
Filter~(UKF) estimator that provides obstacle state estimates in curvilinear
road coordinates. We employ a hybrid sensor fusion architecture between Lidar
and Radar sensors to obtain rich measurement point representations for EOT. The
measurement model for the UKF estimator is developed with the integration of
coordinate conversion from curvilinear road coordinates to Cartesian
coordinates by using a cubic Hermite spline road model. The proposed algorithm is
validated through Matlab Driving Scenario Designer simulation and experimental
data collected at Monza Eni Circuit.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 11:09:14 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Dahal",
"Pragyan",
""
],
[
"Mentasti",
"Simone",
""
],
[
"Arrigoni",
"Stefano",
""
],
[
"Braghin",
"Francesco",
""
],
[
"Matteucci",
"Matteo",
""
],
[
"Cheli",
"Federico",
""
]
] |
new_dataset
| 0.984761 |
2202.03807
|
Maximilian Geisslinger
|
Alexander Wischnewski, Maximilian Geisslinger, Johannes Betz, Tobias
Betz, Felix Fent, Alexander Heilmeier, Leonhard Hermansdorfer, Thomas
Herrmann, Sebastian Huch, Phillip Karle, Felix Nobis, Levent \"Ogretmen,
Matthias Rowold, Florian Sauerbeck, Tim Stahl, Rainer Trauth, Markus
Lienkamp, Boris Lohmann
|
Indy Autonomous Challenge -- Autonomous Race Cars at the Handling Limits
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motorsport has always been an enabler for technological advancement, and the
same applies to the autonomous driving industry. The team TUM Autonomous
Motorsports will participate in the Indy Autonomous Challenge in October 2021
to benchmark its self-driving software stack by racing one out of ten
autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway. The first
part of this paper explains the reasons for entering an autonomous vehicle race
from an academic perspective: It allows focusing on several edge cases
encountered by autonomous vehicles, such as challenging evasion maneuvers and
unstructured scenarios. At the same time, it is inherently safe due to the
motorsport-related track safety precautions. It is therefore an ideal testing
ground for the development of autonomous driving algorithms capable of
mastering the most challenging and rare situations. In addition, we provide
insight into our software development workflow and present our
Hardware-in-the-Loop simulation setup. It is capable of running simulations of
up to eight autonomous vehicles in real time. The second part of the paper
gives a high-level overview of the software architecture and covers our
development priorities in building a high-performance autonomous racing
software: maximum sensor detection range, reliable handling of multi-vehicle
situations, as well as reliable motion control under uncertainty.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 11:55:05 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Wischnewski",
"Alexander",
""
],
[
"Geisslinger",
"Maximilian",
""
],
[
"Betz",
"Johannes",
""
],
[
"Betz",
"Tobias",
""
],
[
"Fent",
"Felix",
""
],
[
"Heilmeier",
"Alexander",
""
],
[
"Hermansdorfer",
"Leonhard",
""
],
[
"Herrmann",
"Thomas",
""
],
[
"Huch",
"Sebastian",
""
],
[
"Karle",
"Phillip",
""
],
[
"Nobis",
"Felix",
""
],
[
"Ögretmen",
"Levent",
""
],
[
"Rowold",
"Matthias",
""
],
[
"Sauerbeck",
"Florian",
""
],
[
"Stahl",
"Tim",
""
],
[
"Trauth",
"Rainer",
""
],
[
"Lienkamp",
"Markus",
""
],
[
"Lohmann",
"Boris",
""
]
] |
new_dataset
| 0.996805 |
2202.03822
|
PoYung Chou
|
Po-Yung Chou, Cheng-Hung Lin, Wen-Chung Kao
|
A Novel Plug-in Module for Fine-Grained Visual Classification
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Visual classification can be divided into coarse-grained and fine-grained
classification. Coarse-grained classification represents categories with a
large degree of dissimilarity, such as the classification of cats and dogs,
while fine-grained classification represents categories with a large
degree of similarity, such as cat species, bird species, and the makes or
models of vehicles. Unlike coarse-grained visual classification, fine-grained
visual classification often requires professional experts to label data, which
makes data more expensive. To meet this challenge, many approaches propose to
automatically find the most discriminative regions and use local features to
provide more precise features. These approaches only require image-level
annotations, thereby reducing the cost of annotation. However, most of these
methods require two- or multi-stage architectures and cannot be trained
end-to-end. Therefore, we propose a novel plug-in module that can be integrated
into many common backbones, including CNN-based or Transformer-based networks, to
provide strongly discriminative regions. The plugin module can output
pixel-level feature maps and fuse filtered features to enhance fine-grained
visual classification. Experimental results show that the proposed plugin
module outperforms state-of-the-art approaches and significantly improves the
accuracy to 92.77\% and 92.83\% on CUB200-2011 and NABirds, respectively. We
have released our source code on GitHub:
https://github.com/chou141253/FGVC-PIM.git.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 12:35:58 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Chou",
"Po-Yung",
""
],
[
"Lin",
"Cheng-Hung",
""
],
[
"Kao",
"Wen-Chung",
""
]
] |
new_dataset
| 0.959163 |
2202.03846
|
Markus Nemitz
|
Lauryn Whiteside, Savita V. Kendre, Tian Y. Fan, Jovanna A. Tracz, Gus
T. Teran, Thomas C. Underwood, Mohammed E. Sayed, Haihui J. Jiang, Adam A.
Stokes, Daniel J. Preston, George M. Whitesides, and Markus P. Nemitz
|
The Soft Compiler: A Web-Based Tool for the Design of Modular Pneumatic
Circuits for Soft Robots
|
Accepted manuscript (journal): Robotics and Automation Letter, 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Developing soft circuits from individual soft logic gates poses a unique
challenge: with increasing numbers of logic gates, the design and
implementation of circuits leads to inefficiencies due to mathematically
unoptimized circuits and wiring mistakes during assembly. It is therefore
practically important to introduce design tools that support the development of
soft circuits. We developed a web-based graphical user interface, the Soft
Compiler, that accepts a user-defined robot behavior as a truth table to
generate a mathematically optimized circuit diagram that guides the assembly of
a soft fluidic circuit. We describe the design and experimental verification of
three soft circuits of increasing complexity, using the Soft Compiler as a
design tool and a novel pneumatic glove as an input interface. In one example,
we reduce the size of a soft circuit from the original 11 logic gates to 4
logic gates while maintaining circuit functionality. The Soft Compiler is a
web-based design tool for fluidic, soft circuits and is published under the
open-source MIT License.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 13:15:44 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Whiteside",
"Lauryn",
""
],
[
"Kendre",
"Savita V.",
""
],
[
"Fan",
"Tian Y.",
""
],
[
"Tracz",
"Jovanna A.",
""
],
[
"Teran",
"Gus T.",
""
],
[
"Underwood",
"Thomas C.",
""
],
[
"Sayed",
"Mohammed E.",
""
],
[
"Jiang",
"Haihui J.",
""
],
[
"Stokes",
"Adam A.",
""
],
[
"Preston",
"Daniel J.",
""
],
[
"Whitesides",
"George M.",
""
],
[
"Nemitz",
"Markus P.",
""
]
] |
new_dataset
| 0.999025 |
2202.03905
|
Markus Nemitz
|
Jovanna A. Tracz, Lukas Wille, Dylan Pathiraja, Savita V. Kendre, Ron
Pfisterer, Ethan Turett, Gus T. Teran, Christoffer K. Abrahamsson, Samuel E.
Root, Won-Kyu Lee, Daniel J. Preston, Haihui Joy Jiang, George M. Whitesides,
and Markus P. Nemitz
|
Tube-Balloon Logic for the Exploration of Fluidic Control Elements
|
Accepted manuscript (journal): Robotics and Automation Letter, 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The control of pneumatically driven soft robots typically requires
electronics. Microcontrollers are connected to power electronics that switch
valves and pumps on and off. As a recent alternative, fluidic control methods
have been introduced, in which soft digital logic gates permit multiple
actuation states to be achieved in soft systems. Such systems have demonstrated
autonomous behaviors without the use of electronics. However, fluidic
controllers have required complex fabrication processes. To democratize the
exploration of fluidic controllers, we developed tube-balloon logic circuitry,
which consists of logic gates made from straws and balloons. Each tube-balloon
logic device takes a novice five minutes to fabricate and costs $0.45.
Tube-balloon logic devices can also operate at pressures of up to 200 kPa and
oscillate at frequencies of up to 15 Hz. We configure the tube-balloon logic
device as NOT-, NAND-, and NOR-gates and assemble them into a three-ring
oscillator to demonstrate a vibrating sieve that separates sugar from rice.
Because tube-balloon logic devices are low-cost, easy to fabricate, and their
operating principle is simple, they are well suited for exploring fundamental
concepts of fluidic control schemes while encouraging design inquiry for
pneumatically driven soft robots.
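To illustrate why a ring of three inverting gates oscillates (a minimal logical sketch, not the paper's physical balloon model), the snippet below updates three NOT gates with a one-step delay; the resulting periodic signal is the kind of oscillation that can drive the vibrating sieve.

```python
# Minimal sketch: a three-gate ring oscillator with unit gate delay.
def simulate_ring_oscillator(steps: int = 12):
    state = [0, 0, 1]                  # outputs of the three NOT gates (0 = low, 1 = high)
    history = [tuple(state)]
    for _ in range(steps):
        # Each gate inverts the previous gate's output from the last time step.
        state = [1 - state[(i - 1) % 3] for i in range(3)]
        history.append(tuple(state))
    return history

for t, s in enumerate(simulate_ring_oscillator()):
    print(t, s)                        # the pattern repeats with period 6: it never settles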
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 14:51:03 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Tracz",
"Jovanna A.",
""
],
[
"Wille",
"Lukas",
""
],
[
"Pathiraja",
"Dylan",
""
],
[
"Kendre",
"Savita V.",
""
],
[
"Pfisterer",
"Ron",
""
],
[
"Turett",
"Ethan",
""
],
[
"Teran",
"Gus T.",
""
],
[
"Abrahamsson",
"Christoffer K.",
""
],
[
"Root",
"Samuel E.",
""
],
[
"Lee",
"Won-Kyu",
""
],
[
"Preston",
"Daniel J.",
""
],
[
"Jiang",
"Haihui Joy",
""
],
[
"Whitesides",
"George M.",
""
],
[
"Nemitz",
"Markus P.",
""
]
] |
new_dataset
| 0.999123 |
2202.03947
|
Robert Penicka
|
Robert Penicka and Davide Scaramuzza
|
Minimum-Time Quadrotor Waypoint Flight in Cluttered Environments
|
Accepted in IEEE Robotics and Automation Letters
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We tackle the problem of planning a minimum-time trajectory for a quadrotor
over a sequence of specified waypoints in the presence of obstacles while
exploiting the full quadrotor dynamics. This problem is crucial for autonomous
search and rescue and drone racing scenarios but was, so far, unaddressed by
the robotics community \emph{in its entirety} due to the challenges of
minimizing time in the presence of the non-convex constraints posed by
collision avoidance. Early works relied on simplified dynamics or polynomial
trajectory representations that did not exploit the full actuator potential of
a quadrotor and, thus, did not aim at minimizing time. We address this
challenging problem by using a hierarchical, sampling-based method with an
incrementally more complex quadrotor model. Our method first finds paths in
different topologies to guide subsequent trajectory search for a kinodynamic
point-mass model. Then, it uses an asymptotically-optimal, kinodynamic
sampling-based method based on a full quadrotor model on top of the point-mass
solution to find a feasible trajectory with a time-optimal objective. The
proposed method is shown to outperform all related baselines in cluttered
environments and is further validated in real-world flights at over 60 km/h in
one of the world's largest motion capture systems. We release the code open
source.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 15:54:19 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Penicka",
"Robert",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.99061 |
2202.03950
|
Yuan Li
|
Yuan Li, Wende Tan, Zhizheng Lv, Songtao Yang, Mathias Payer, Ying
Liu, Chao Zhang
|
PACSan: Enforcing Memory Safety Based on ARM PA
|
11 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memory safety is a key security property that stops memory corruption
vulnerabilities. Existing sanitizers enforce checks and catch such bugs during
development and testing. However, they either provide partial memory safety or
have overwhelmingly high performance overheads. Our novel sanitizer PACSan
enforces spatial and temporal memory safety with no false positives at low
performance overheads. PACSan removes the majority of the overheads involved in
pointer tracking by sealing metadata in pointers through ARM PA (Pointer
Authentication), and performing the memory safety checks when pointers are
dereferenced. We have developed a prototype of PACSan and systematically
evaluated its security and performance on the Magma, Juliet, Nginx, and SPEC
CPU2017 test suites, respectively. In our evaluation, PACSan shows no false
positives together with negligible false negatives, while introducing stronger
security guarantees and lower performance overheads than state-of-the-art
sanitizers, including HWASan, ASan, SoftBound+CETS, Memcheck, LowFat, and
PTAuth. Specifically, PACSan has 0.84x runtime overhead and 1.92x memory
overhead on average. Compared to the widely deployed ASan, PACSan has no false
positives and much fewer false negatives, and reduces runtime overheads by 7.172%
and memory overheads by 89.063%.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 16:00:40 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Li",
"Yuan",
""
],
[
"Tan",
"Wende",
""
],
[
"Lv",
"Zhizheng",
""
],
[
"Yang",
"Songtao",
""
],
[
"Payer",
"Mathias",
""
],
[
"Liu",
"Ying",
""
],
[
"Zhang",
"Chao",
""
]
] |
new_dataset
| 0.994961 |
2202.03954
|
Jiashi Gao
|
Jiashi Gao, Xinming Shi, James J.Q. Yu
|
Social-DualCVAE: Multimodal Trajectory Forecasting Based on Social
Interactions Pattern Aware and Dual Conditional Variational Auto-Encoder
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pedestrian trajectory forecasting is a fundamental task in multiple utility
areas, such as self-driving, autonomous robots, and surveillance systems. The
future trajectory forecasting is multi-modal, influenced by physical
interaction with scene contexts and intricate social interactions among
pedestrians. Most existing literature learns representations of social
interactions with deep learning networks, while explicit interaction patterns
are not utilized. Different interaction patterns, such as following or
collision avoidance, generate different trends of the next movement; thus,
awareness of social interaction patterns is important for trajectory
forecasting. Moreover, social interaction patterns are privacy-sensitive or
lack labels. To jointly address the above issues, we present a social-dual
conditional variational auto-encoder (Social-DualCVAE) for multi-modal
trajectory forecasting, which is based on a generative model conditioned not
only on the past trajectories but also the unsupervised classification of
interaction patterns. After generating the category distribution of the
unlabeled social interaction patterns, DualCVAE, conditioned on the past
trajectories and the social interaction pattern, is proposed for multi-modal
trajectory prediction by estimating latent variables. A variational bound is
derived as the minimization objective during training. The proposed model is
evaluated on widely used trajectory benchmarks and outperforms the prior
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 16:04:47 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Gao",
"Jiashi",
""
],
[
"Shi",
"Xinming",
""
],
[
"Yu",
"James J. Q.",
""
]
] |
new_dataset
| 0.968612 |
2202.03977
|
Jens Zumbr\"agel
|
Marcus Greferath and Jens Zumbr\"agel
|
List Decoding of Quaternary Codes in the Lee Metric
|
5 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a list decoding algorithm for quaternary negacyclic codes in the
Lee metric. To achieve this result, we use a Sudan-Guruswami type list decoding
algorithm for Reed-Solomon codes over certain ring alphabets. Our decoding
strategy for negacyclic codes over the ring $\mathbb Z_4$ combines the list
decoding algorithm by Wu with the Gr\"obner basis approach for solving a key
equation due to Byrne and Fitzpatrick.
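For readers unfamiliar with the metric in question, the following small sketch (a standard definition, not code from the paper) computes the Lee weight and Lee distance over $\mathbb Z_4$ that the decoder works with.

```python
# Lee metric over Z_q (here q = 4): weight of a symbol x is min(x, q - x),
# and the distance between words is the sum of symbol-wise Lee weights of their difference.
def lee_weight(x: int, q: int = 4) -> int:
    x %= q
    return min(x, q - x)

def lee_distance(u, v, q: int = 4) -> int:
    return sum(lee_weight(a - b, q) for a, b in zip(u, v))

print(lee_distance([0, 1, 2, 3], [3, 1, 0, 1]))  # 1 + 0 + 2 + 2 = 5
```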
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 16:34:08 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Greferath",
"Marcus",
""
],
[
"Zumbrägel",
"Jens",
""
]
] |
new_dataset
| 0.995482 |
2202.04015
|
Pranay Gupta
|
Pranay Gupta and Manish Gupta
|
NEWSKVQA: Knowledge-Aware News Video Question Answering
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Answering questions in the context of videos can be helpful in video
indexing, video retrieval systems, video summarization, learning management
systems and surveillance video analysis. Although there exists a large body of
work on visual question answering, work on video question answering (1) is
limited to domains like movies, TV shows, gameplay, or human activity, and (2)
is mostly based on common sense reasoning. In this paper, we explore a new
frontier in video question answering: answering knowledge-based questions in
the context of news videos. To this end, we curate a new dataset of 12K news
videos spanning 156 hours, with 1M multiple-choice question-answer pairs
covering 8263 unique entities. We make the dataset publicly available. Using
this dataset, we propose a novel approach, NEWSKVQA (Knowledge-Aware News Video
Question Answering) which performs multi-modal inferencing over textual
multiple-choice questions, videos, their transcripts and knowledge base, and
presents a strong baseline.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 17:31:31 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Gupta",
"Pranay",
""
],
[
"Gupta",
"Manish",
""
]
] |
new_dataset
| 0.99126 |
2202.04049
|
Shyam Narayanan
|
Shyam Narayanan, Erika Covi, Viktor Havel, Charlotte Frenkel, Suzanne
Lancaster, Quang Duong, Stefan Slesazeck, Thomas Mikolajick, Melika Payvand,
Giacomo Indiveri
|
A 120dB Programmable-Range On-Chip Pulse Generator for Characterizing
Ferroelectric Devices
| null | null | null | null |
cs.ET cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Novel non-volatile memory devices based on ferroelectric thin films represent
a promising emerging technology that is ideally suited for neuromorphic
applications. The physical switching mechanism in such films is the nucleation
and growth of ferroelectric domains. Since this has a strong dependence on both
pulse width and voltage amplitude, it is important to use precise pulsing
schemes for a thorough characterization of their behaviour. In this work, we
present an on-chip 120 dB programmable-range pulse generator that can generate
pulse widths ranging from 10 ns to 10 ms ($\pm$2.5%), which eliminates the RLC
bottleneck in the device characterisation setup. We describe the pulse
generator design and show how the pulse width can be tuned with high accuracy,
using Digital to Analog converters. Finally, we present experimental results
measured from the circuit, fabricated using a standard 180nm CMOS technology.
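One plausible reading of the quoted 120 dB programmable range (our own back-of-the-envelope check, not a statement taken from the paper) is the ratio between the longest and shortest pulse widths expressed on the conventional $20\log_{10}$ scale:

```latex
\[
  20 \log_{10}\!\left(\frac{10\,\mathrm{ms}}{10\,\mathrm{ns}}\right)
  = 20 \log_{10}\!\left(10^{6}\right) = 120~\mathrm{dB}.
\]
```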
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 18:30:56 GMT"
}
] | 2022-02-09T00:00:00 |
[
[
"Narayanan",
"Shyam",
""
],
[
"Covi",
"Erika",
""
],
[
"Havel",
"Viktor",
""
],
[
"Frenkel",
"Charlotte",
""
],
[
"Lancaster",
"Suzanne",
""
],
[
"Duong",
"Quang",
""
],
[
"Slesazeck",
"Stefan",
""
],
[
"Mikolajick",
"Thomas",
""
],
[
"Payvand",
"Melika",
""
],
[
"Indiveri",
"Giacomo",
""
]
] |
new_dataset
| 0.999573 |
2006.07302
|
Tuukka Korhonen
|
Tuukka Korhonen
|
SMS in PACE 2020
|
3 pages, 3 appendix pages, 0 figures. Submitted as a solver
description of a solver in Parameterized Algorithms and Computational
Experiments Challenge 2020
| null |
10.4230/LIPIcs.IPEC.2020.30
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe SMS, our submission to the exact treedepth track of PACE 2020.
SMS computes the treedepth of a graph by branching on the small minimal
separators of the graph.
|
[
{
"version": "v1",
"created": "Fri, 12 Jun 2020 16:34:24 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Korhonen",
"Tuukka",
""
]
] |
new_dataset
| 0.998363 |
2008.12052
|
Junjie Huang
|
Zhibo Zou, Junjie Huang, Ping Luo
|
Compensation Tracker: Reprocessing Lost Object for Multi-Object Tracking
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The tracking-by-detection paradigm is one of the most popular object tracking
approaches. However, it depends heavily on the performance of the detector: when
the detector misses detections, the tracking result is directly affected. In
this paper, we analyze the phenomenon of lost tracking objects in a real-time
tracking model on the MOT2020 dataset. Based on simple and traditional methods,
we propose a compensation tracker to further alleviate the lost tracking problem
caused by missed detections. It consists of a motion compensation module and an
object selection module. The proposed method can not only re-track missing
objects from the pool of lost objects, but also requires no additional networks,
thereby maintaining the speed-accuracy trade-off of the real-time model. Our
method only needs to be embedded into the tracker to work, without re-training
the network. Experiments show that the compensation tracker effectively improves
the performance of the model and reduces identity switches. With limited cost,
the compensation tracker enhances the baseline tracking performance by a large
margin and reaches 66% MOTA and 67% IDF1 on the MOT2020 dataset.
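To make the idea of re-tracking a lost object by motion compensation concrete, here is a simplified sketch (our own illustration, not the authors' exact module): a constant-velocity prediction of the lost box is compared against unmatched detections by IoU.

```python
# Hedged sketch of motion compensation for a lost track; all numbers are made up.
import numpy as np

def predict_lost_box(last_box, velocity, frames_lost):
    # last_box: (x1, y1, x2, y2); velocity: per-frame displacement of the box
    return last_box + velocity * frames_lost

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

last_box = np.array([100., 100., 140., 180.])
velocity = np.array([2., 0., 2., 0.])            # box drifting right by 2 px per frame
candidate = np.array([105., 101., 146., 181.])   # unmatched detection 3 frames later
pred = predict_lost_box(last_box, velocity, frames_lost=3)
print(pred, iou(pred, candidate))                # re-track if the IoU exceeds a threshold
```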
|
[
{
"version": "v1",
"created": "Thu, 27 Aug 2020 10:59:54 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Aug 2020 04:48:44 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Jan 2021 13:29:01 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Feb 2022 13:48:43 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Zou",
"Zhibo",
""
],
[
"Huang",
"Junjie",
""
],
[
"Luo",
"Ping",
""
]
] |
new_dataset
| 0.968101 |
2101.11490
|
Ioannis Papoutsidakis
|
Ioannis Papoutsidakis, Robert J. Piechocki, and Angela Doufexi
|
Non-Asymptotic Converse Bounds Via Auxiliary Channels
|
Extended version of a manuscript submitted to IEEE ISIT 2022
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new derivation method of converse bounds on the
non-asymptotic achievable rate of discrete weakly symmetric memoryless
channels. It is based on the finite blocklength statistics of the channel,
where with the use of an auxiliary channel the converse bound is produced. This
method is general and initially is presented for an arbitrary weakly symmetric
channel. Afterwards, the main result is specialized for the $q$-ary erasure
channel (QEC), binary symmetric channel (BSC), and QEC with stop feedback.
Numerical evaluations show identical or comparable bounds to the
state-of-the-art in the cases of QEC and BSC, and a tighter bound for the QEC
with stop feedback.
|
[
{
"version": "v1",
"created": "Wed, 27 Jan 2021 15:40:10 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jan 2021 11:04:16 GMT"
},
{
"version": "v3",
"created": "Sun, 16 May 2021 17:47:29 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Feb 2022 17:48:44 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Papoutsidakis",
"Ioannis",
""
],
[
"Piechocki",
"Robert J.",
""
],
[
"Doufexi",
"Angela",
""
]
] |
new_dataset
| 0.958211 |
2102.05872
|
Yuki Okamoto
|
Yuki Okamoto, Keisuke Imoto, Shinnosuke Takamichi, Ryosuke Yamanishi,
Takahiro Fukumori, Yoichi Yamashita
|
Onoma-to-wave: Environmental sound synthesis from onomatopoeic words
|
Accepted to APSIPA Transactions on Signal and Information Processing
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a framework for environmental sound synthesis from
onomatopoeic words. As one way of expressing an environmental sound, we can use
an onomatopoeic word, which is a character sequence for phonetically imitating
a sound. An onomatopoeic word is effective for describing diverse sound
features. Therefore, using onomatopoeic words for environmental sound synthesis
will enable us to generate diverse environmental sounds. To generate diverse
sounds, we propose a method based on a sequence-to-sequence framework for
synthesizing environmental sounds from onomatopoeic words. We also propose a
method of environmental sound synthesis using onomatopoeic words and sound
event labels. The use of sound event labels in addition to onomatopoeic words
enables us to capture each sound event's feature depending on the input sound
event label. Our subjective experiments show that our proposed methods achieve
higher diversity and naturalness than conventional methods using sound event
labels.
|
[
{
"version": "v1",
"created": "Thu, 11 Feb 2021 07:15:14 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Oct 2021 02:08:35 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Oct 2021 10:15:05 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Feb 2022 06:00:34 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Okamoto",
"Yuki",
""
],
[
"Imoto",
"Keisuke",
""
],
[
"Takamichi",
"Shinnosuke",
""
],
[
"Yamanishi",
"Ryosuke",
""
],
[
"Fukumori",
"Takahiro",
""
],
[
"Yamashita",
"Yoichi",
""
]
] |
new_dataset
| 0.999829 |
2103.00355
|
Weixiao Gao
|
Weixiao Gao, Liangliang Nan, Bas Boom, Hugo Ledoux
|
SUM: A Benchmark Dataset of Semantic Urban Meshes
|
27 pages, 14 figures
|
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 179,
September 2021, Pages 108-120
|
10.1016/j.isprsjprs.2021.07.008
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent developments in data acquisition technology allow us to collect 3D
textured meshes quickly. These can help us understand and analyse the urban
environment and, as a consequence, are useful for several applications such as
spatial analysis and urban planning. Semantic segmentation of textured meshes
through deep learning methods can enhance this understanding, but it requires a
lot of labelled data. The contributions of this work are threefold: (1) a new
benchmark dataset of semantic urban meshes, (2) a novel semi-automatic
annotation framework, and (3) an annotation tool for 3D meshes. In particular,
our dataset covers about 4 km$^2$ in Helsinki (Finland), with six classes, and we
estimate that we save about 600 hours of labelling work using our annotation
framework, which includes initial segmentation and interactive refinement. We
also compare the performance of several state-of-the-art 3D semantic
segmentation methods on the new benchmark dataset. Other researchers can use
our results to train their networks: the dataset is publicly available, and the
annotation tool is released as open-source.
|
[
{
"version": "v1",
"created": "Sat, 27 Feb 2021 23:26:21 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jul 2021 14:25:37 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Gao",
"Weixiao",
""
],
[
"Nan",
"Liangliang",
""
],
[
"Boom",
"Bas",
""
],
[
"Ledoux",
"Hugo",
""
]
] |
new_dataset
| 0.99972 |
2104.10402
|
Giulio Ermanno Pibiri
|
Giulio Ermanno Pibiri and Roberto Trani
|
PTHash: Revisiting FCH Minimal Perfect Hashing
|
Accepted to SIGIR 2021
|
SIGIR 2021: Proceedings of the 44th International ACM SIGIR
Conference on Research and Development in Information Retrieval. July 2021.
Pages 1339-1348
|
10.1145/3404835.3462849
| null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Given a set $S$ of $n$ distinct keys, a function $f$ that bijectively maps
the keys of $S$ into the range $\{0,\ldots,n-1\}$ is called a minimal perfect
hash function for $S$. Algorithms that find such functions when $n$ is large
and retain constant evaluation time are of practical interest; for instance,
search engines and databases typically use minimal perfect hash functions to
quickly assign identifiers to static sets of variable-length keys such as
strings. The challenge is to design an algorithm which is efficient in three
different aspects: time to find $f$ (construction time), time to evaluate $f$
on a key of $S$ (lookup time), and space of representation for $f$. Several
algorithms have been proposed to trade-off between these aspects. In 1992, Fox,
Chen, and Heath (FCH) presented an algorithm at SIGIR providing very fast
lookup evaluation. However, the approach received little attention because of
its large construction time and higher space consumption compared to other
subsequent techniques. Almost thirty years later we revisit their framework and
present an improved algorithm that scales well to large sets and reduces space
consumption altogether, without compromising the lookup time. We conduct an
extensive experimental assessment and show that the algorithm finds functions
that are competitive in space with state-of-the-art techniques and provide
$2-4\times$ better lookup time.
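As a tiny illustration of the object being constructed (concept only; PTHash's actual construction is far more involved), a minimal perfect hash function for a key set S must map its n keys bijectively onto {0, ..., n-1}:

```python
# Sketch: checking the minimal-perfect-hash property for a toy function.
def is_minimal_perfect(f, keys):
    images = {f(k) for k in keys}
    return images == set(range(len(keys)))

keys = ["dog", "cat", "owl"]
table = {"dog": 2, "cat": 0, "owl": 1}               # a toy f, e.g. produced offline
print(is_minimal_perfect(lambda k: table[k], keys))  # True
```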
|
[
{
"version": "v1",
"created": "Wed, 21 Apr 2021 08:22:07 GMT"
},
{
"version": "v2",
"created": "Fri, 28 May 2021 08:58:38 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Pibiri",
"Giulio Ermanno",
""
],
[
"Trani",
"Roberto",
""
]
] |
new_dataset
| 0.98986 |
2107.08760
|
Leon Moonen
|
Guru Prasad Bhandari, Amara Naseer and Leon Moonen (Simula Research
Laboratory, Norway)
|
CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from
Open-Source Software
|
Accepted for publication in Proceedings of the 17th International
Conference on Predictive Models and Data Analytics in Software Engineering
(PROMISE '21), August 19-20, 2021, Athens, Greece
| null |
10.1145/3475960.3475985
| null |
cs.SE cs.AI cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Data-driven research on the automated discovery and repair of security
vulnerabilities in source code requires comprehensive datasets of real-life
vulnerable code and their fixes. To assist in such research, we propose a
method to automatically collect and curate a comprehensive vulnerability
dataset from Common Vulnerabilities and Exposures (CVE) records in the public
National Vulnerability Database (NVD). We implement our approach in a fully
automated dataset collection tool and share an initial release of the resulting
vulnerability dataset named CVEfixes.
The CVEfixes collection tool automatically fetches all available CVE records
from the NVD, gathers the vulnerable code and corresponding fixes from
associated open-source repositories, and organizes the collected information in
a relational database. Moreover, the dataset is enriched with meta-data such as
programming language, and detailed code and security metrics at five levels of
abstraction. The collection can easily be repeated to keep up-to-date with
newly discovered or patched vulnerabilities. The initial release of CVEfixes
spans all published CVEs up to 9 June 2021, covering 5365 CVE records for 1754
open-source projects that were addressed in a total of 5495 vulnerability
fixing commits.
CVEfixes supports various types of data-driven software security research,
such as vulnerability prediction, vulnerability classification, vulnerability
severity prediction, analysis of vulnerability-related code changes, and
automated vulnerability repair.
|
[
{
"version": "v1",
"created": "Mon, 19 Jul 2021 11:34:09 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Bhandari",
"Guru Prasad",
"",
"Simula Research\n Laboratory, Norway"
],
[
"Naseer",
"Amara",
"",
"Simula Research\n Laboratory, Norway"
],
[
"Moonen",
"Leon",
"",
"Simula Research\n Laboratory, Norway"
]
] |
new_dataset
| 0.989378 |
2107.11636
|
Tanguy Gernot
|
Tanguy Gernot and Patrick Lacharme
|
Biometric Masterkeys
| null | null |
10.1016/j.cose.2022.102642
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biometric authentication is used to secure digital or physical access. Such
an authentication system uses a biometric database, where data are sometimes
protected by cancelable transformations. This paper introduces the notion of
biometric masterkeys. A masterkey is a feature vector such that the
corresponding template matches with a significant number of templates stored in
a cancelable biometric database. Such a masterkey is searched for directly in a
cancelable biometric database, but we also investigate another scenario in
which the masterkey is fixed before the creation of the cancelable biometric
database, providing additional access rights in the system for the masterkey's
owner. Experimental results on the fingerprint database FVC and the face image
database LFW show the effectiveness and the efficiency of such masterkeys in
both scenarios. In particular, from any given feature vector, we are able to
construct a cancelable database, for which the biometric template matches with
all the templates of the database.
|
[
{
"version": "v1",
"created": "Sat, 24 Jul 2021 15:38:44 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Gernot",
"Tanguy",
""
],
[
"Lacharme",
"Patrick",
""
]
] |
new_dataset
| 0.963812 |
2109.02705
|
Yu Li
|
Yu Li, Muhammad Monjurul Karim, Ruwen Qin
|
A Virtual Reality-based Training and Assessment System for Bridge
Inspectors with an Assistant Drone
|
23 pages, 10 figures. Accepted by IEEE Transactions on Human-Machine
Systems with minor revision on Jan 31, 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Over 600,000 bridges in the U.S. must be inspected every two years to
identify flaws, defects, or potential problems that may need follow-up
maintenance. Bridge inspection has adopted unmanned aerial vehicles (or drones)
for improving safety, efficiency, and cost-effectiveness. Although drones can
operate in an autonomous mode, keeping inspectors in the loop is critical for
complex tasks in bridge inspection. Therefore, inspectors need to develop the
skill and confidence to operate drones in their jobs. This paper presents the
design and development of a virtual reality-based training and assessment
system for inspectors assisted by a drone in bridge inspection. The system is
composed of four integrated modules: a simulated bridge inspection developed in
Unity, an interface that allows a trainee to operate the drone in simulation
using a remote controller, data monitoring and analysis to provide real-time,
in-task feedback to trainees to assist their learning, and a post-study
assessment supporting personalized training. The paper also conducts a
proof-of-concept pilot study to illustrate the functionality of this system.
The study demonstrated that TASBID, as a tool for early-stage training, can
objectively identify the training needs of individuals in detail and, further,
help them develop the skill and confidence in collaborating with a drone in
bridge inspection. The system has built a modeling and analysis platform for
exploring advanced solutions to the human-drone cooperative inspection of civil
infrastructure.
|
[
{
"version": "v1",
"created": "Mon, 6 Sep 2021 19:29:37 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Dec 2021 19:48:25 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Feb 2022 23:00:49 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Li",
"Yu",
""
],
[
"Karim",
"Muhammad Monjurul",
""
],
[
"Qin",
"Ruwen",
""
]
] |
new_dataset
| 0.996936 |
2110.11198
|
Penghang Liu
|
Penghang Liu, Naoki Masuda, Tomomi Kito, A. Erdem Sar{\i}y\"uce
|
Temporal Motifs in Patent Opposition and Collaboration Networks
| null |
Scientific Reports volume 12, Article number: 1917 (2022)
|
10.1038/s41598-022-05217-8
| null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Patents are intellectual properties that reflect innovative activities of
companies and organizations. The literature is rich with the studies that
analyze the citations among the patents and the collaboration relations among
companies that own the patents. However, the adversarial relations between the
patent owners are not as well investigated. One proxy to model such relations
is the patent opposition, which is a legal activity in which a company
challenges the validity of a patent. Characterizing the patent oppositions,
collaborations, and the interplay between them can help better understand the
companies' business strategies. Temporality matters in this context as the
order and frequency of oppositions and collaborations characterize their
interplay. In this study, we construct a two-layer temporal network to model
the patent oppositions and collaborations among the companies. We utilize
temporal motifs to analyze the oppositions and collaborations from structural
and temporal perspectives. We first characterize the frequent motifs in patent
oppositions and investigate how often the companies of different sizes attack
other companies. We show that large companies tend to engage in opposition with
multiple companies. Then we analyze the temporal interplay between
collaborations and oppositions. We find that two adversarial companies are more
likely to collaborate in the future than two collaborating companies oppose
each other in the future.
|
[
{
"version": "v1",
"created": "Thu, 21 Oct 2021 15:12:36 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 19:55:57 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Liu",
"Penghang",
""
],
[
"Masuda",
"Naoki",
""
],
[
"Kito",
"Tomomi",
""
],
[
"Sarıyüce",
"A. Erdem",
""
]
] |
new_dataset
| 0.97167 |
2111.06020
|
Zhenhua Xu
|
Zhenhua Xu, Yuxuan Liu, Lu Gan, Xiangcheng Hu, Yuxiang Sun, Ming Liu,
Lujia Wang
|
csBoundary: City-scale Road-boundary Detection in Aerial Images for
High-definition Maps
|
Accepted by IEEE Robotics and Automation Letters and IEEE
International Conference on Robotics and Automation (ICRA) 2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-Definition (HD) maps can provide precise geometric and semantic
information of static traffic environments for autonomous driving.
Road-boundary is one of the most important information contained in HD maps
since it distinguishes between road areas and off-road areas, which can guide
vehicles to drive within road areas. But it is labor-intensive to annotate road
boundaries for HD maps at the city scale. To enable automatic HD map
annotation, current work uses semantic segmentation or iterative graph growing
for road-boundary detection. However, the former could not ensure topological
correctness since it works at the pixel level, while the latter suffers from
inefficiency and drifting issues. To provide a solution to the aforementioned
problems, in this letter, we propose a novel system termed csBoundary to
automatically detect road boundaries at the city scale for HD map annotation.
Our network takes as input an aerial image patch, and directly infers the
continuous road-boundary graph (i.e., vertices and edges) from this image. To
generate the city-scale road-boundary graph, we stitch the obtained graphs from
all the image patches. Our csBoundary is evaluated and compared on a public
benchmark dataset. The results demonstrate our superiority. The accompanied
demonstration video is available at our project page
\url{https://sites.google.com/view/csboundary/}.
|
[
{
"version": "v1",
"created": "Thu, 11 Nov 2021 02:04:36 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Feb 2022 10:22:31 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Xu",
"Zhenhua",
""
],
[
"Liu",
"Yuxuan",
""
],
[
"Gan",
"Lu",
""
],
[
"Hu",
"Xiangcheng",
""
],
[
"Sun",
"Yuxiang",
""
],
[
"Liu",
"Ming",
""
],
[
"Wang",
"Lujia",
""
]
] |
new_dataset
| 0.999818 |
2112.08991
|
Yixuan Weng
|
Yixuan Weng, Fei Xia, Bin Li, Xiusheng Huang, Shizhu He
|
ADBCMM : Acronym Disambiguation by Building Counterfactuals and
Multilingual Mixing
|
SDU@AAAI-2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scientific documents often contain a large number of acronyms. Disambiguation
of these acronyms will help researchers better understand the meaning of
vocabulary in the documents. In the past, thanks to large amounts of data from
English literature, the acronym disambiguation task was mainly applied to English
texts. For other low-resource languages, however, it is difficult to obtain good
performance on this task, and it receives less attention due to the lack of large
amounts of annotated data. To address this issue, this paper proposes a new method
for acronym disambiguation, named ADBCMM, which can significantly improve
performance on low-resource languages by building counterfactuals and applying
multilingual mixing. Specifically, by balancing data bias in low-resource
languages, ADBCMM is able to improve test performance outside the training data.
In SDU@AAAI-22 - Shared Task 2: Acronym Disambiguation, the proposed method won
first place in French and Spanish. Our results can be reproduced at
https://github.com/WENGSYX/ADBCMM.
|
[
{
"version": "v1",
"created": "Wed, 8 Dec 2021 15:08:27 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Feb 2022 15:53:37 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Weng",
"Yixuan",
""
],
[
"Xia",
"Fei",
""
],
[
"Li",
"Bin",
""
],
[
"Huang",
"Xiusheng",
""
],
[
"He",
"Shizhu",
""
]
] |
new_dataset
| 0.998592 |
2112.12028
|
Harichandana B S S
|
Sumit Kumar, Harichandana B S S, and Himanshu Arora
|
VoiceMoji: A Novel On-Device Pipeline for Seamless Emoji Insertion in
Dictation
|
Accepted at IEEE INDICON 2021, 19-21 December, 2021, India
| null |
10.1109/INDICON52576.2021.9691564
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Most speech recognition systems recover only the words in speech and fail to
capture emotions. Users have to manually add emoji(s) to text to add tone and
make communication fun. Although much work has been done on adding punctuation
to transcribed speech, the area of emotion addition remains untouched. In this
paper, we propose a novel on-device pipeline to enrich the
voice input experience. It involves, given a blob of transcribed text,
intelligently processing and identifying structure where emoji insertion makes
sense. Moreover, it includes semantic text analysis to predict emoji for each
of the sub-parts for which we propose a novel architecture Attention-based Char
Aware (ACA) LSTM which handles Out-Of-Vocabulary (OOV) words as well. All these
tasks are executed completely on-device and hence can aid on-device dictation
systems. To the best of our knowledge, this is the first work that shows how to
add emoji(s) in the transcribed text. We demonstrate that our components
achieve comparable results to previous neural approaches for punctuation
addition and emoji prediction with 80% fewer parameters. Overall, our proposed
model has a very small memory footprint of a mere 4MB to suit on-device
deployment.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 16:54:57 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Kumar",
"Sumit",
""
],
[
"S",
"Harichandana B S",
""
],
[
"Arora",
"Himanshu",
""
]
] |
new_dataset
| 0.99897 |
2202.00120
|
Aleksandr Perevalov
|
Aleksandr Perevalov, Dennis Diefenbach, Ricardo Usbeck, Andreas Both
|
QALD-9-plus: A Multilingual Dataset for Question Answering over DBpedia
and Wikidata Translated by Native Speakers
| null | null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to have the same experience for different user groups (i.e.,
accessibility) is one of the most important characteristics of Web-based
systems. The same is true for Knowledge Graph Question Answering (KGQA) systems
that provide access to Semantic Web data via a natural language interface.
While following our research agenda on the multilingual aspect of accessibility
of KGQA systems, we identified several ongoing challenges. One of them is the
lack of multilingual KGQA benchmarks. In this work, we extend one of the most
popular KGQA benchmarks, QALD-9, by introducing high-quality translations of the
questions into 8 languages provided by native speakers, and by transferring the
SPARQL queries of QALD-9 from DBpedia to Wikidata, such that the usability and
relevance of the dataset are strongly increased. Five of the languages -
Armenian, Ukrainian, Lithuanian, Bashkir and Belarusian - have, to the best of
our knowledge, never been considered in the KGQA research community before. The
latter two languages are considered "endangered" by UNESCO. We call the extended
dataset QALD-9-plus and have made it available online at
https://github.com/Perevalov/qald_9_plus.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 22:19:55 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Feb 2022 14:57:26 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Perevalov",
"Aleksandr",
""
],
[
"Diefenbach",
"Dennis",
""
],
[
"Usbeck",
"Ricardo",
""
],
[
"Both",
"Andreas",
""
]
] |
new_dataset
| 0.999078 |
2202.02398
|
Paulo Pirozelli
|
Andr\'e F. A. Paschoal, Paulo Pirozelli, Valdinei Freire, Karina V.
Delgado, Sarajane M. Peres, Marcos M. Jos\'e, Fl\'avio Nakasato, Andr\'e S.
Oliveira, Anarosa A. F. Brand\~ao, Anna H. R. Costa, Fabio G. Cozman
|
Pir\'a: A Bilingual Portuguese-English Dataset for Question-Answering
about the Ocean
|
https://github.com/C4AI/Pira
|
CIKM '21: Proceedings of the 30th ACM International Conference on
Information & Knowledge Management, 2021
|
10.1145/3459637.3482012
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Current research in natural language processing is highly dependent on
carefully produced corpora. Most existing resources focus on English; some
resources focus on languages such as Chinese and French; few resources deal
with more than one language. This paper presents the Pir\'a dataset, a large
set of questions and answers about the ocean and the Brazilian coast both in
Portuguese and English. Pir\'a is, to the best of our knowledge, the first QA
dataset with supporting texts in Portuguese, and, perhaps more importantly, the
first bilingual QA dataset that includes this language. The Pir\'a dataset
consists of 2261 properly curated question/answer (QA) sets in both languages.
The QA sets were manually created based on two corpora: abstracts related to
the Brazilian coast and excerpts of United Nation reports about the ocean. The
QA sets were validated in a peer-review process with the dataset contributors.
We discuss some of the advantages as well as limitations of Pir\'a, as this new
resource can support a set of tasks in NLP such as question-answering,
information retrieval, and machine translation.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 21:29:45 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Paschoal",
"André F. A.",
""
],
[
"Pirozelli",
"Paulo",
""
],
[
"Freire",
"Valdinei",
""
],
[
"Delgado",
"Karina V.",
""
],
[
"Peres",
"Sarajane M.",
""
],
[
"José",
"Marcos M.",
""
],
[
"Nakasato",
"Flávio",
""
],
[
"Oliveira",
"André S.",
""
],
[
"Brandão",
"Anarosa A. F.",
""
],
[
"Costa",
"Anna H. R.",
""
],
[
"Cozman",
"Fabio G.",
""
]
] |
new_dataset
| 0.999832 |
2202.02418
|
Cristina Mata
|
Cristina Mata, Nick Locascio, Mohammed Azeem Sheikh, Kenny Kihara and
Dan Fischetti
|
StandardSim: A Synthetic Dataset For Retail Environments
|
ICIAP 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Autonomous checkout systems rely on visual and sensory inputs to carry out
fine-grained scene understanding in retail environments. Retail environments
present unique challenges compared to typical indoor scenes owing to the vast
number of densely packed, unique yet similar objects. The problem becomes even
more difficult when only RGB input is available, especially for data-hungry
tasks such as instance segmentation. To address the lack of datasets for
retail, we present StandardSim, a large-scale photorealistic synthetic dataset
featuring annotations for semantic segmentation, instance segmentation, depth
estimation, and object detection. Our dataset provides multiple views per
scene, enabling multi-view representation learning. Further, we introduce a
novel task central to autonomous checkout called change detection, requiring
pixel-level classification of takes, puts and shifts in objects over time. We
benchmark widely-used models for segmentation and depth estimation on our
dataset, show that our test set constitutes a difficult benchmark compared to
current smaller-scale datasets and that our training set provides models with
crucial information for autonomous checkout tasks.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 22:28:35 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Mata",
"Cristina",
""
],
[
"Locascio",
"Nick",
""
],
[
"Sheikh",
"Mohammed Azeem",
""
],
[
"Kihara",
"Kenny",
""
],
[
"Fischetti",
"Dan",
""
]
] |
new_dataset
| 0.999901 |
2202.02453
|
Bugra Turan
|
Bugra Turan, Ali Uyrus, Osman Nuri Koc, Emrah Kar, and Sinem Coleri
|
Vehicular Visible Light Communications for Automated Valet Parking
|
2 pages, 2 figures, 1 table
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Visible light communication (VLC) is a promising Optical Wireless
Communications (OWC) scheme that is demonstrated to provide secure,
line-of-sight (LoS), and short-distance vehicle-to-vehicle (V2V) and
vehicle-to-infrastructure (V2I) communications. Recently, automated driving
applications supported by V2I links have been proposed to increase the
reliability of autonomous vehicles. To this end, we propose a VLC-based V2I
scheme to increase the V2I communication redundancy of autonomous valet parking
(AVP) applications, through the jam-free and location-based characteristics of
VLC. In
this paper, we demonstrate a novel architecture to support indoor
parking-garage online-map update with vehicle on-board data transmissions and
location-based map update dissemination through bidirectional VLC
communications. The proposed system yields error-free LoS transmissions with
Direct Current Biased Optical OFDM (DCO-OFDM) up to 33 m transmitter-receiver
distance, enabling vehicle CAN Bus data, infrastructure camera video, and LIDAR
point cloud data sharing in an indoor parking garage.
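For readers unfamiliar with the modulation used, here is a hedged, textbook-style sketch of DCO-OFDM symbol generation (not the paper's exact transmitter): Hermitian-symmetric subcarrier loading makes the IFFT output real-valued, and a DC bias shifts it to a non-negative intensity signal suitable for driving an LED.

```python
# Minimal DCO-OFDM sketch; IFFT size, QPSK mapping and bias rule are illustrative.
import numpy as np

N = 64                                        # IFFT size
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(N // 2 - 1, 2))
qpsk = (1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])   # QPSK symbols

X = np.zeros(N, dtype=complex)
X[1:N // 2] = qpsk                            # data on positive subcarriers
X[N // 2 + 1:] = np.conj(qpsk[::-1])          # Hermitian symmetry; DC and Nyquist left empty
x = np.fft.ifft(X).real                       # real-valued time-domain waveform

bias = 2 * x.std()                            # DC bias (a common rule of thumb)
tx = np.clip(x + bias, 0, None)               # clip residual negatives before the LED
print(tx.min(), tx.max())
```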
|
[
{
"version": "v1",
"created": "Sat, 20 Nov 2021 13:48:11 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Turan",
"Bugra",
""
],
[
"Uyrus",
"Ali",
""
],
[
"Koc",
"Osman Nuri",
""
],
[
"Kar",
"Emrah",
""
],
[
"Coleri",
"Sinem",
""
]
] |
new_dataset
| 0.999684 |
2202.02495
|
Zhengchao Wan
|
Samantha Chen, Sunhyuk Lim, Facundo M\'emoli, Zhengchao Wan, Yusu Wang
|
Weisfeiler-Lehman meets Gromov-Wasserstein
| null | null | null | null |
cs.LG math.MG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Weisfeiler-Lehman (WL) test is a classical procedure for graph
isomorphism testing. The WL test has also been widely used both for designing
graph kernels and for analyzing graph neural networks. In this paper, we
propose the Weisfeiler-Lehman (WL) distance, a notion of distance between
labeled measure Markov chains (LMMCs), of which labeled graphs are special
cases. The WL distance is polynomial time computable and is also compatible
with the WL test in the sense that the former is positive if and only if the WL
test can distinguish the two involved graphs. The WL distance captures and
compares subtle structures of the underlying LMMCs and, as a consequence of
this, it is more discriminating than the distance between graphs used for
defining the state-of-the-art Wasserstein Weisfeiler-Lehman graph kernel.
Inspired by the structure of the WL distance we identify a neural network
architecture on LMMCs which turns out to be universal w.r.t. continuous
functions defined on the space of all LMMCs (which includes all graphs) endowed
with the WL distance. Finally, the WL distance turns out to be stable w.r.t. a
natural variant of the Gromov-Wasserstein (GW) distance for comparing metric
Markov chains that we identify. Hence, the WL distance can also be construed as
a polynomial time lower bound for the GW distance which is in general NP-hard
to compute.
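For context, the classical 1-WL test the distance is designed to be compatible with is colour refinement on a labelled graph; the following is a minimal sketch of that procedure, not of the paper's labeled-measure-Markov-chain machinery.

```python
# 1-WL colour refinement: repeatedly hash each node's colour together with the
# multiset of its neighbours' colours, then compress back to small integers.
def wl_refine(adj, labels, rounds=3):
    # adj: {node: [neighbours]}, labels: {node: initial label}
    colours = dict(labels)
    for _ in range(rounds):
        new = {}
        for v in adj:
            new[v] = (colours[v], tuple(sorted(colours[u] for u in adj[v])))
        palette = {sig: i for i, sig in enumerate(sorted(set(new.values()), key=repr))}
        colours = {v: palette[new[v]] for v in adj}
    return colours

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}     # a triangle with a pendant node
print(wl_refine(adj, {v: 0 for v in adj}))              # nodes 0 and 1 stay indistinguishable
```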
|
[
{
"version": "v1",
"created": "Sat, 5 Feb 2022 05:53:31 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Chen",
"Samantha",
""
],
[
"Lim",
"Sunhyuk",
""
],
[
"Mémoli",
"Facundo",
""
],
[
"Wan",
"Zhengchao",
""
],
[
"Wang",
"Yusu",
""
]
] |
new_dataset
| 0.988783 |
2202.02556
|
Yifu Wang
|
Yi-Fan Zuo, Jiaqi Yang, Jiaben Chen, Xia Wang, Yifu Wang, Laurent
Kneip
|
DEVO: Depth-Event Camera Visual Odometry in Challenging Conditions
|
accepted in the 2022 IEEE International Conference on Robotics and
Automation (ICRA), Philadelphia (PA), USA
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel real-time visual odometry framework for a stereo setup of
a depth and high-resolution event camera. Our framework balances accuracy and
robustness against computational efficiency towards strong performance in
challenging scenarios. We extend conventional edge-based semi-dense visual
odometry towards time-surface maps obtained from event streams. Semi-dense
depth maps are generated by warping the corresponding depth values of the
extrinsically calibrated depth camera. The tracking module updates the camera
pose through efficient, geometric semi-dense 3D-2D edge alignment. Our approach
is validated on both public and self-collected datasets captured under various
conditions. We show that the proposed method performs comparable to
state-of-the-art RGB-D camera-based alternatives in regular conditions, and
eventually outperforms in challenging conditions such as high dynamics or low
illumination.
|
[
{
"version": "v1",
"created": "Sat, 5 Feb 2022 13:46:47 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Zuo",
"Yi-Fan",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Chen",
"Jiaben",
""
],
[
"Wang",
"Xia",
""
],
[
"Wang",
"Yifu",
""
],
[
"Kneip",
"Laurent",
""
]
] |
new_dataset
| 0.968583 |
2202.02592
|
Saraju Mohanty
|
Anand K. Bapatla and Saraju P. Mohanty and Elias Kougianos
|
PharmaChain: A Blockchain to Ensure Counterfeit Free Pharmaceutical
Supply Chain
|
25 pages, 15 figures
| null | null | null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Access to essential medication is a primary right of every individual in all
developed, developing and underdeveloped countries. This can be fulfilled by
pharmaceutical supply chains (PSC) that eliminate the boundaries between
different organizations and equip them to work collectively to make medicines
reach even the remote corners of the globe. Because the many entities involved
are geographically widespread and the flows of goods and money are very complex,
a PSC is difficult to audit and issues are hard to resolve. This has given rise
to many problems, including increased threats of counterfeiting, inaccurate
information propagation throughout the network because of data fragmentation,
lack of customer confidence, and delays in distributing medication to the places
in need. Hence, there is a strong need for a robust PSC that is transparent to
all parties involved and in which the whole journey of a medicine from
manufacturer to consumer can be tracked and
traced easily. This will not only build safety for the consumers, but will also
help manufacturers to build confidence among consumers and increase sales. In
this article, a novel Distributed Ledger Technology (DLT) based transparent
supply chain architecture is proposed and a proof-of-concept is implemented.
Efficiency and scalability of the proposed architecture is evaluated and
compared with existing solutions.
|
[
{
"version": "v1",
"created": "Sat, 5 Feb 2022 16:20:11 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Bapatla",
"Anand K.",
""
],
[
"Mohanty",
"Saraju P.",
""
],
[
"Kougianos",
"Elias",
""
]
] |
new_dataset
| 0.999777 |
2202.02704
|
Mingming Fan
|
Wentao Lei, Mingming Fan, Juliann Thang
|
"I Shake The Package To Check If It's Mine": A Study of Package Fetching
Practices and Challenges of Blind and Low Vision People in China
|
In Proceedings of CHI Conference on Human Factors in Computing
Systems (CHI '22), April 29-May 5, 2022, New Orleans, LA, USA
| null |
10.1145/3491102.3502063
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With about 230 million packages delivered per day in 2020, fetching packages
has become a routine for many city dwellers in China. When fetching packages,
people usually need to go to collection sites of their apartment complexes or a
KuaiDiGui, an increasingly popular type of self-service package pickup machine.
However, little is known about whether such processes are accessible to blind and low
vision (BLV) city dwellers. We interviewed BLV people (N=20) living in a large
metropolitan area in China to understand their practices and challenges of
fetching packages. Our findings show that participants encountered difficulties
in finding the collection site and localizing and recognizing their packages.
When fetching packages from KuaiDiGuis, they had difficulty in identifying the
correct KuaiDiGui, interacting with its touch screen, navigating the complex
on-screen workflow, and opening the target compartment. We discuss design
considerations to make the package fetching process more accessible to the BLV
community.
|
[
{
"version": "v1",
"created": "Sun, 6 Feb 2022 04:25:35 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Lei",
"Wentao",
""
],
[
"Fan",
"Mingming",
""
],
[
"Thang",
"Juliann",
""
]
] |
new_dataset
| 0.998778 |
2202.02734
|
Scott McLachlan Dr
|
Scott McLachlan, Evangelia Kyrimi, Kudakwashe Dube, Norman Fenton and
Burkhard Schafer
|
The Self-Driving Car: Crossroads at the Bleeding Edge of Artificial
Intelligence and Law
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Artificial intelligence (AI) features are increasingly being embedded in cars
and are central to the operation of self-driving cars (SDC). There is little or
no effort expended towards understanding and assessing the broad legal and
regulatory impact of the decisions made by AI in cars. A comprehensive
literature review was conducted to determine the perceived barriers, benefits
and facilitating factors of SDC in order to help us understand the suitability
and limitations of existing and proposed law and regulation. We found that: (1)
existing and proposed laws are largely based on claimed benefits of SDC that are
still mostly speculative and untested; (2) while publicly presented as issues of
assigning blame and identifying who pays where the SDC is involved in an
accident, the barriers broadly intersect with almost every area of society,
laws and regulations; and (3) new law and regulation are most frequently
identified as the primary factor for enabling SDC. Research on assessing the
impact of AI in SDC needs to be broadened beyond negligence and liability to
encompass barriers, benefits and facilitating factors identified in this paper.
Results of this paper are significant in that they point to the need for deeper
comprehension of the broad impact of all existing law and regulations on the
introduction of SDC technology, with a focus on identifying only those areas
truly requiring ongoing legislative attention.
|
[
{
"version": "v1",
"created": "Sun, 6 Feb 2022 08:38:30 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"McLachlan",
"Scott",
""
],
[
"Kyrimi",
"Evangelia",
""
],
[
"Dube",
"Kudakwashe",
""
],
[
"Fenton",
"Norman",
""
],
[
"Schafer",
"Burkhard",
""
]
] |
new_dataset
| 0.972308 |
2202.02791
|
Sakif Hossain
|
Sakif Hossain, Fatema T. Johora, J\"org P. M\"uller, Sven Hartmann and
Andreas Reinhardt
|
SFMGNet: A Physics-based Neural Network To Predict Pedestrian
Trajectories
|
16 pages, 6 figures, AAAI-MAKE 2022: Machine Learning and Knowledge
Engineering for Hybrid Intelligence
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous robots and vehicles are expected to soon become an integral part
of our environment. Unsatisfactory issues regarding interaction with existing
road users, performance in mixed-traffic areas and lack of interpretable
behavior remain key obstacles. To address these, we present a physics-based
neural network, based on a hybrid approach combining a social force model
extended by group force (SFMG) with Multi-Layer Perceptron (MLP) to predict
pedestrian trajectories considering its interaction with static obstacles,
other pedestrians and pedestrian groups. We quantitatively and qualitatively
evaluate the model with respect to realistic prediction, prediction performance
and prediction "interpretability". Initial results suggest, the model even when
solely trained on a synthetic dataset, can predict realistic and interpretable
trajectories with better than state-of-the-art accuracy.
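To give a feel for the social-force side of the hybrid model, here is a simplified, Helbing-style repulsive force term; the parameters A, B, and the radius sum are illustrative values, not those of SFMGNet.

```python
# Hedged sketch of one pedestrian-pedestrian repulsion term in a social force model.
import numpy as np

def repulsive_force(p_i, p_j, A=2.0, B=0.3, radius_sum=0.6):
    # p_i, p_j: 2D positions (metres) of pedestrian i and a neighbour j
    diff = p_i - p_j
    d = np.linalg.norm(diff) + 1e-9
    n_ij = diff / d                               # unit vector pointing away from j
    return A * np.exp((radius_sum - d) / B) * n_ij

print(repulsive_force(np.array([0.0, 0.0]), np.array([0.5, 0.0])))  # pushes i away from j
```

In the hybrid architecture described above, terms of this kind (plus group forces) supply physically interpretable structure, while the MLP corrects what the analytic model misses.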
|
[
{
"version": "v1",
"created": "Sun, 6 Feb 2022 14:58:09 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Hossain",
"Sakif",
""
],
[
"Johora",
"Fatema T.",
""
],
[
"Müller",
"Jörg P.",
""
],
[
"Hartmann",
"Sven",
""
],
[
"Reinhardt",
"Andreas",
""
]
] |
new_dataset
| 0.990112 |
2202.02942
|
Adnan Darwiche
|
Adnan Darwiche
|
Tractable Boolean and Arithmetic Circuits
|
An earlier version of this article appeared in the following edited
book. Pascal Hitzler and Md Kamruzzaman Sarker, editors. Neuro-Symbolic
Artificial Intelligence: The State of the Art, volume 342 of Frontiers in
Artificial Intelligence and Applications. IOS Press, 2021
| null | null | null |
cs.AI cs.CC cs.LG cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tractable Boolean and arithmetic circuits have been studied extensively in AI
for over two decades now. These circuits were initially proposed as "compiled
objects," meant to facilitate logical and probabilistic reasoning, as they
permit various types of inference to be performed in linear-time and a
feed-forward fashion like neural networks. In more recent years, the role of
tractable circuits has significantly expanded as they became a computational
and semantical backbone for some approaches that aim to integrate knowledge,
reasoning and learning. In this article, we review the foundations of tractable
circuits and some associated milestones, while focusing on their core
properties and techniques that make them particularly useful for the broad aims
of neuro-symbolic AI.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 05:01:38 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Darwiche",
"Adnan",
""
]
] |
new_dataset
| 0.99826 |
2202.02964
|
Xun Jiao
|
Dongning Ma, Sizhe Zhang, Xun Jiao
|
HDCoin: A Proof-of-Useful-Work Based Blockchain for Hyperdimensional
Computing
| null | null | null | null |
cs.CR cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Various blockchain systems and schemes have been proposed since Bitcoin was
first introduced by Nakamoto Satoshi as a distributed ledger. However,
blockchains usually face criticism, particularly over environmental concerns, as
their ``proof-of-work''-based mining process consumes a considerable amount of
energy that hardly makes any useful contribution to the real world.
Therefore, the concept of ``proof-of-useful-work'' (PoUW) is proposed to
connect blockchain with practical application domain problems so the
computation power consumed in the mining process can be spent on useful
activities, such as solving optimization problems or training machine learning
models. This paper introduces HDCoin, a blockchain-based framework for an
emerging machine learning scheme: the brain-inspired hyperdimensional computing
(HDC). We formulate the model development of HDC as a problem that can be used
in blockchain mining. Specifically, we define the PoUW under the HDC scenario
and develop the entire mining process of HDCoin. During mining, miners are
competing to obtain the highest test accuracy on a given dataset. The winner's
model is recorded in the blockchain and made available to the public
as a trustworthy HDC model. In addition, we also quantitatively examine the
performance of mining under different HDC configurations to illustrate the
adaptive mining difficulty.
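To illustrate the kind of model miners would be competing to train, here is a bare-bones hyperdimensional classifier following the standard HDC recipe (random bipolar hypervectors, binding by elementwise product, bundling by summation, and nearest-prototype inference); it is a generic sketch, not the encoder defined by HDCoin.

```python
# Minimal HDC sketch; dimensionality, feature/level codebooks and data are made up.
import numpy as np

D = 10_000
rng = np.random.default_rng(1)
feature_hvs = rng.choice([-1, 1], size=(4, D))      # one hypervector per feature id
level_hvs = rng.choice([-1, 1], size=(2, D))        # quantised feature values 0/1

def encode(sample):
    bound = [feature_hvs[i] * level_hvs[v] for i, v in enumerate(sample)]  # binding
    return np.sign(np.sum(bound, axis=0))                                  # bundling

train = {0: [[0, 0, 1, 0], [0, 1, 1, 0]], 1: [[1, 0, 0, 1], [1, 1, 0, 1]]}
prototypes = {c: np.sign(sum(encode(s) for s in xs)) for c, xs in train.items()}

def classify(sample):
    q = encode(sample)
    return max(prototypes, key=lambda c: np.dot(q, prototypes[c]))

print(classify([0, 0, 1, 0]), classify([1, 1, 0, 1]))   # expected: 0 1
```

Under a proof-of-useful-work scheme, the accuracy such a model reaches on a held-out set is what miners would report and the network would verify.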
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 06:21:29 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Ma",
"Dongning",
""
],
[
"Zhang",
"Sizhe",
""
],
[
"Jiao",
"Xun",
""
]
] |
new_dataset
| 0.999436 |
2202.02974
|
Yingchen Tian
|
Yingchen Tian, Yuxia Zhang, Klaas-Jan Stol, Lin Jiang, and Hui Liu
|
What Makes a Good Commit Message?
| null | null |
10.1145/3510003.3510205
| null |
cs.SE cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key issue in collaborative software development is communication among
developers. One modality of communication is a commit message, in which
developers describe the changes they make in a repository. As such, commit
messages serve as an "audit trail" by which developers can understand how the
source code of a project has changed, and why. Hence, the quality of commit
messages affects the effectiveness of communication among developers. Commit
messages are often of poor quality as developers lack time and motivation to
craft a good message. Several automatic approaches have been proposed to
generate commit messages. However, these are based on uncurated datasets
including considerable proportions of poorly phrased commit messages. In this
multi-method study, we first define what constitutes a "good" commit message,
and then establish what proportion of commit messages lack information using a
sample of almost 1,600 messages from five highly active open source projects.
We find that an average of circa 44% of messages could be improved, suggesting
the use of uncurated datasets may be a major threat when commit message
generators are trained with such data. We also observe that prior work has not
considered semantics of commit messages, and there is surprisingly little
guidance available for writing good commit messages. To that end, we develop a
taxonomy based on recurring patterns in commit messages' expressions. Finally,
we investigate whether "good" commit messages can be automatically identified;
such automation could prompt developers to write better commit messages.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 06:48:30 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Tian",
"Yingchen",
""
],
[
"Zhang",
"Yuxia",
""
],
[
"Stol",
"Klaas-Jan",
""
],
[
"Jiang",
"Lin",
""
],
[
"Liu",
"Hui",
""
]
] |
new_dataset
| 0.987473 |
2202.03032
|
Federico Brunero
|
Federico Brunero, Petros Elia
|
Coded Caching Does Not Generally Benefit From Selfish Caching
|
6 pages. Submitted to 2022 IEEE International Symposium on
Information Theory (ISIT). arXiv admin note: substantial text overlap with
arXiv:2109.04807
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In typical coded caching scenarios, the content of a central library is
assumed to be of interest to all receiving users. However, in a realistic
scenario the users may have diverging interests which may intersect to various
degrees. What happens, for example, if each file is of potential interest to,
say, $40\,\%$ of the users and each user has potential interest in $40\,\%$ of
the library? What if then each user caches selfishly only from content of
potential interest? In this work, we formulate the symmetric selfish coded
caching problem, where each user naturally makes requests from a subset of the
library, which defines its own file demand set (FDS), and caches selfishly only
contents from its own FDS. For the scenario where the different FDSs
symmetrically overlap to some extent, we propose a novel information-theoretic
converse that reveals, for such general setting of symmetric FDS structures,
that selfish coded caching yields a load performance which is strictly worse
than that in standard coded caching.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 09:48:29 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Brunero",
"Federico",
""
],
[
"Elia",
"Petros",
""
]
] |
new_dataset
| 0.974372 |
2202.03061
|
Petr Golovach
|
Fedor V. Fomin, Petr A. Golovach, Danil Sagunov, Kirill Simonov
|
Longest Cycle above Erd\H{o}s-Gallai Bound
| null | null | null | null |
cs.DS cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
In 1959, Erd\H{o}s and Gallai proved that every graph G with average vertex
degree ad(G)\geq 2 contains a cycle of length at least ad(G). We provide an
algorithm that, for k\geq 0, decides in time 2^{O(k)} n^{O(1)} whether a
2-connected n-vertex graph G contains a cycle of length at least ad(G)+k. This
resolves an open problem explicitly mentioned in several papers. The main
ingredients of our algorithm are new graph-theoretical results that are
interesting in their own right.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 10:52:36 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Fomin",
"Fedor V.",
""
],
[
"Golovach",
"Petr A.",
""
],
[
"Sagunov",
"Danil",
""
],
[
"Simonov",
"Kirill",
""
]
] |
new_dataset
| 0.98337 |
2202.03183
|
Meixin Zhu
|
Meixin Zhu, Simon S. Du, Xuesong Wang, Hao (Frank) Yang, Ziyuan Pu,
Yinhai Wang
|
TransFollower: Long-Sequence Car-Following Trajectory Prediction through
Transformer
| null | null | null | null |
cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Car-following refers to a control process in which the following vehicle (FV)
tries to keep a safe distance between itself and the lead vehicle (LV) by
adjusting its acceleration in response to the actions of the vehicle ahead. The
corresponding car-following models, which describe how one vehicle follows
another vehicle in the traffic flow, form the cornerstone for microscopic
traffic simulation and intelligent vehicle development. One major motivation of
car-following models is to replicate human drivers' longitudinal driving
trajectories. To model the long-term dependency of future actions on historical
driving situations, we developed a long-sequence car-following trajectory
prediction model based on the attention-based Transformer model. The model
follows a general format of encoder-decoder architecture. The encoder takes
historical speed and spacing data as inputs and forms a mixed representation of
historical driving context using multi-head self-attention. The decoder takes
the future LV speed profile as input and outputs the predicted future FV speed
profile in a generative way (instead of an auto-regressive way, avoiding
compounding errors). Through cross-attention between encoder and decoder, the
decoder learns to build a connection between historical driving and future LV
speed, based on which a prediction of future FV speed can be obtained. We train
and test our model with 112,597 real-world car-following events extracted from
the Shanghai Naturalistic Driving Study (SH-NDS). Results show that the model
outperforms the traditional intelligent driver model (IDM), a fully connected
neural network model, and a long short-term memory (LSTM) based model in terms
of long-sequence trajectory prediction accuracy. We also visualized the
self-attention and cross-attention heatmaps to explain how the model derives
its predictions.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 07:59:22 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Zhu",
"Meixin",
"",
"Frank"
],
[
"Du",
"Simon S.",
"",
"Frank"
],
[
"Wang",
"Xuesong",
"",
"Frank"
],
[
"Hao",
"",
"",
"Frank"
],
[
"Yang",
"",
""
],
[
"Pu",
"Ziyuan",
""
],
[
"Wang",
"Yinhai",
""
]
] |
new_dataset
| 0.990195 |
2202.03189
|
Satoshi Sunada
|
Sho Shimadera, Kei Kitagawa, Koyo Sagehashi, Tomoaki Niiyama, and
Satoshi Sunada
|
Optical skin: Sensor-integration-free multimodal flexible sensing
|
13 pages, 11 figures
| null | null | null |
cs.CV cs.HC cs.LG physics.app-ph physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The biological skin enables animals to sense various stimuli. Extensive
efforts have been made recently to develop smart skin-like sensors to extend
the capabilities of biological skins; however, simultaneous sensing of several
types of stimuli in a large area remains challenging because this requires
large-scale sensor integration with numerous wire connections. We propose a
simple, highly sensitive, and multimodal sensing approach, which does not
require integrating multiple sensors. The proposed approach is based on an
optical interference technique, which can encode the information of various
stimuli as a spatial pattern. In contrast to the existing approach, the
proposed approach, combined with a deep neural network, enables us to freely
select the sensing mode according to our purpose. As a key example, we
demonstrate simultaneous sensing of three different physical quantities,
contact force, contact location, and temperature, using a single soft material
without requiring complex integration. Another unique property of the proposed
approach is spatially continuous sensing with an ultrahigh resolution of a few
tens of micrometers, which enables identifying the shape of the object in contact.
Furthermore, we present a haptic soft device for a human-machine interface. The
proposed approach encourages the development of high-performance optical skins.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 14:58:27 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Shimadera",
"Sho",
""
],
[
"Kitagawa",
"Kei",
""
],
[
"Sagehashi",
"Koyo",
""
],
[
"Niiyama",
"Tomoaki",
""
],
[
"Sunada",
"Satoshi",
""
]
] |
new_dataset
| 0.993667 |
2202.03243
|
Markus Nemitz
|
Tyler C. Looney, Nathan M. Savard, Gus T. Teran, Archie G. Milligan,
Ryley I. Wheelock, Michael Scalise, Daniel P. Perno, Gregory C. Lewin, Carlo
Pinciroli, Cagdas D. Onal, and Markus P. Nemitz
|
Air-Releasable Soft Robots for Explosive Ordnance Disposal
|
Accepted manuscript: IEEE Soft Robotics Conference, Edinburgh, 2022
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The demining of landmines using drones is challenging; air-releasable
payloads are typically non-intelligent (e.g., water balloons or explosives) and
deploying them even at low altitudes (~6 meters) is inherently inaccurate due to
complex deployment trajectories and constrained visual awareness by the drone
pilot. Soft robotics offers a unique approach for aerial demining, namely due
to the robust, low-cost, and lightweight designs of soft robots. Instead of
non-intelligent payloads, here, we propose the use of air-releasable soft
robots for demining. We developed a full system consisting of an unmanned
aerial vehicle retrofitted to a soft robot carrier including a custom-made
deployment mechanism, and an air-releasable, lightweight (296 g), untethered
soft hybrid robot with integrated electronics that incorporates a new type of
vacuum-based flasher roller actuator system. We demonstrate a deployment cycle
in which the drone drops the soft robotic hybrid from an altitude of 4.5
meters, after which the robot approaches a dummy landmine. By deploying soft
robots at points of interest, we can transition soft robotic technologies from
the laboratory to real-world environments.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 14:47:31 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Looney",
"Tyler C.",
""
],
[
"Savard",
"Nathan M.",
""
],
[
"Teran",
"Gus T.",
""
],
[
"Milligan",
"Archie G.",
""
],
[
"Wheelock",
"Ryley I.",
""
],
[
"Scalise",
"Michael",
""
],
[
"Perno",
"Daniel P.",
""
],
[
"Lewin",
"Gregory C.",
""
],
[
"Pinciroli",
"Carlo",
""
],
[
"Onal",
"Cagdas D.",
""
],
[
"Nemitz",
"Markus P.",
""
]
] |
new_dataset
| 0.997675 |
2202.03283
|
Mo YuJian
|
Xin Chao, Zhenjie Hou, Yujian Mo
|
CZU-MHAD: A multimodal dataset for human action recognition utilizing a
depth camera and 10 wearable inertial sensors
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human action recognition has been widely used in many fields of life, and
many human action datasets have been published. However, most
existing multi-modal databases have shortcomings in the layout and number of
sensors and therefore cannot fully represent action features. To address these
problems, this paper proposes a freely available dataset, named CZU-MHAD
(Changzhou University: a comprehensive multi-modal human action dataset). It
consists of 22 actions captured in three temporally synchronized modalities:
depth videos and skeleton positions from a Kinect v2 camera, and
inertial signals from 10 wearable sensors. Compared with single-modal sensors,
multi-modal sensors collect data of different modalities and can therefore
describe actions more accurately. Moreover, CZU-MHAD obtains the
3-axis acceleration and 3-axis angular velocity of 10 main motion joints by
binding inertial sensors to them, and these data were captured at the same
time. Experimental results are provided to show that this dataset can be used
to study structural relationships between different parts of the human body
when performing actions and fusion approaches that involve multi-modal sensor
data.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 15:17:08 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Chao",
"Xin",
""
],
[
"Hou",
"Zhenjie",
""
],
[
"Mo",
"Yujian",
""
]
] |
new_dataset
| 0.999808 |
2202.03314
|
Riku Murai
|
Riku Murai, Joseph Ortiz, Sajad Saeedi, Paul H.J. Kelly, and Andrew J.
Davison
|
A Robot Web for Distributed Many-Device Localisation
|
18 pages, 7 figures
| null | null | null |
cs.RO cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that a distributed network of robots or other devices which make
measurements of each other can collaborate to globally localise via efficient
ad-hoc peer to peer communication. Our Robot Web solution is based on Gaussian
Belief Propagation on the fundamental non-linear factor graph describing the
probabilistic structure of all of the observations robots make internally or of
each other, and is flexible for any type of robot, motion or sensor. We define
a simple and efficient communication protocol which can be implemented by the
publishing and reading of web pages or other asynchronous communication
technologies. We show in simulations with up to 1000 robots interacting in
arbitrary patterns that our solution converges to global accuracy matching a
centralised non-linear factor graph solver while operating with
high distributed efficiency of computation and communication. Via the use of
robust factors in GBP, our method is tolerant to a high percentage of faults in
sensor measurements or dropped communication packets.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 16:00:25 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Murai",
"Riku",
""
],
[
"Ortiz",
"Joseph",
""
],
[
"Saeedi",
"Sajad",
""
],
[
"Kelly",
"Paul H. J.",
""
],
[
"Davison",
"Andrew J.",
""
]
] |
new_dataset
| 0.997215 |
2202.03371
|
Martin Muller
|
Martin M\"uller, Florian Laurent
|
Cedille: A large autoregressive French language model
|
8 pages, 1 figure, 7 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Scaling up the size and training of autoregressive language models has
enabled novel ways of solving Natural Language Processing tasks using zero-shot
and few-shot learning. While extreme-scale language models such as GPT-3 offer
multilingual capabilities, zero-shot learning for languages other than English
remains largely unexplored. Here, we introduce Cedille, a large open source
auto-regressive language model, specifically trained for the French language.
Our results show that Cedille outperforms existing French language models and
is competitive with GPT-3 on a range of French zero-shot benchmarks.
Furthermore, we provide an in-depth comparison of the toxicity exhibited by
these models, showing that Cedille marks an improvement in language model
safety thanks to dataset filtering.
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 17:40:43 GMT"
}
] | 2022-02-08T00:00:00 |
[
[
"Müller",
"Martin",
""
],
[
"Laurent",
"Florian",
""
]
] |
new_dataset
| 0.998994 |
1804.02801
|
Yixin Cao
|
Wenjun Li, Junjie Ye, Yixin Cao
|
A $5k$-vertex Kernel for $P_2$-packing
| null | null |
10.1016/j.tcs.2022.01.032
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The $P_2$-packing problem asks whether a graph contains $k$
vertex-disjoint paths each of length two. We continue the study of its
kernelization algorithms, and develop a $5k$-vertex kernel.
|
[
{
"version": "v1",
"created": "Mon, 9 Apr 2018 03:10:18 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 07:20:39 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Li",
"Wenjun",
""
],
[
"Ye",
"Junjie",
""
],
[
"Cao",
"Yixin",
""
]
] |
new_dataset
| 0.99843 |
1805.01825
|
Markus Schr\"oder
|
Markus Schr\"oder and J\"orn Hees and Ansgar Bernardi and Daniel Ewert
and Peter Klotz and Steffen Stadtm\"uller
|
Simplified SPARQL REST API - CRUD on JSON Object Graphs via URI Paths
|
5 pages, 2 figures, ESWC 2018 demo paper
|
The Semantic Web: ESWC 2018 Satellite Events - ESWC 2018 Satellite
Events, Heraklion, Crete, Greece, June 3-7, 2018, Revised Selected Papers
|
10.1007/978-3-319-98192-5_8
| null |
cs.DB cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Within the Semantic Web community, SPARQL is one of the predominant languages
to query and update RDF knowledge. However, the complexity of SPARQL, the
underlying graph structure and various encodings are common sources of
confusion for Semantic Web novices.
In this paper we present a general purpose approach to convert any given
SPARQL endpoint into a simple to use REST API. To lower the initial hurdle, we
represent the underlying graph as an interlinked view of nested JSON objects
that can be traversed by the API path.
|
[
{
"version": "v1",
"created": "Thu, 3 May 2018 09:57:13 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Schröder",
"Markus",
""
],
[
"Hees",
"Jörn",
""
],
[
"Bernardi",
"Ansgar",
""
],
[
"Ewert",
"Daniel",
""
],
[
"Klotz",
"Peter",
""
],
[
"Stadtmüller",
"Steffen",
""
]
] |
new_dataset
| 0.998334 |
2008.02653
|
Levente Juhasz
|
Peter Mooney, A. Yair Grinberger, Marco Minghini, Serena Coetzee,
Levente Juhasz, Godwin Yeboah
|
OpenStreetMap data use cases during the early months of the COVID-19
pandemic
|
15 pages, 6 figures. Submitted to the UN GGIM
(http://unggim.academicnetwork.org/) edited book titled COVID - 19 :
Geospatial Information and Community Resilience. The volume is edited by
Prof. Abbas Rajabifard from the University of Melbourne
| null |
10.1201/9781003181590-15
| null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Created by volunteers since 2004, OpenStreetMap (OSM) is a global geographic
database available under an open access license and currently used by a
multitude of actors worldwide. This chapter describes the role played by OSM
during the early months (from January to July 2020) of the ongoing COVID-19
pandemic, which - in contrast to past disasters and epidemics - is a global
event impacting both developed and developing countries. A large number of
COVID-19-related OSM use cases were collected and grouped into a number of
research frameworks which are analyzed separately: dashboards and services
simply using OSM as a basemap, applications using raw OSM data, initiatives to
collect new OSM data, imports of authoritative data into OSM, and traditional
academic research on OSM in the COVID-19 response. The wealth of examples
provided in the chapter, including an analysis of OSM tile usage in two
countries (Italy and China) deeply affected in the earliest months of 2020,
prove that OSM has been and still is heavily used to address the COVID-19
crisis, although with types and mechanisms that are often different depending
on the affected area or country and the related communities.
|
[
{
"version": "v1",
"created": "Thu, 6 Aug 2020 13:43:31 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Oct 2020 13:21:03 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Feb 2022 17:08:44 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Mooney",
"Peter",
""
],
[
"Grinberger",
"A. Yair",
""
],
[
"Minghini",
"Marco",
""
],
[
"Coetzee",
"Serena",
""
],
[
"Juhasz",
"Levente",
""
],
[
"Yeboah",
"Godwin",
""
]
] |
new_dataset
| 0.999675 |
2009.14043
|
Henri Lotze
|
Hans-Joachim Boeckenhauer, Elisabet Burjons, Fabian Frei, Juraj
Hromkovic, Henri Lotze, Peter Rossmanith
|
Online Simple Knapsack with Reservation Costs
|
Third version, closed remaining gaps
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In the online simple knapsack problem items are presented in an iterative
fashion and an algorithm has to decide for each item whether to reject or
permanently include it into the knapsack without any knowledge about the rest
of the instance. The goal is to pack the knapsack as full as possible. In this
work, we introduce the option of reserving items for the cost of a fixed
fraction $\alpha$ of their size. An algorithm may pay this fraction in order to
postpone its decision on whether to include or reject these items until after
the last item of the instance was presented.
While the classical online simple knapsack problem does not admit any
constantly bounded competitive ratio in the deterministic setting, we find that
adding the possibility of reservation makes the problem constantly competitive.
We give tight bounds for the whole range of $\alpha$ from $0$ to $1$.
|
[
{
"version": "v1",
"created": "Tue, 29 Sep 2020 14:27:39 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jan 2021 16:48:58 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Feb 2022 10:48:17 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Boeckenhauer",
"Hans-Joachim",
""
],
[
"Burjons",
"Elisabet",
""
],
[
"Frei",
"Fabian",
""
],
[
"Hromkovic",
"Juraj",
""
],
[
"Lotze",
"Henri",
""
],
[
"Rossmanith",
"Peter",
""
]
] |
new_dataset
| 0.999673 |
2010.06917
|
Mirco Theile
|
Mirco Theile, Harald Bayerlein, Richard Nai, David Gesbert, Marco
Caccamo
|
UAV Path Planning using Global and Local Map Information with Deep
Reinforcement Learning
|
ICAR 2021, code available at https://github.com/theilem/uavSim
| null |
10.1109/ICAR53236.2021.9659413
| null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Path planning methods for autonomous unmanned aerial vehicles (UAVs) are
typically designed for one specific type of mission. This work presents a
method for autonomous UAV path planning based on deep reinforcement learning
(DRL) that can be applied to a wide range of mission scenarios. Specifically,
we compare coverage path planning (CPP), where the UAV's goal is to survey an
area of interest, to data harvesting (DH), where the UAV collects data from
distributed Internet of Things (IoT) sensor devices. By exploiting structured
map information of the environment, we train double deep Q-networks (DDQNs)
with identical architectures on both distinctly different mission scenarios to
make movement decisions that balance the respective mission goal with
navigation constraints. By introducing a novel approach exploiting a compressed
global map of the environment combined with a cropped but uncompressed local
map showing the vicinity of the UAV agent, we demonstrate that the proposed
method can efficiently scale to large environments. We also extend previous
results for generalizing control policies that require no retraining when
scenario parameters change and offer a detailed analysis of crucial map
processing parameters' effects on path planning performance.
|
[
{
"version": "v1",
"created": "Wed, 14 Oct 2020 09:59:10 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Nov 2020 09:34:52 GMT"
},
{
"version": "v3",
"created": "Wed, 10 Mar 2021 09:32:33 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Oct 2021 09:19:03 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Theile",
"Mirco",
""
],
[
"Bayerlein",
"Harald",
""
],
[
"Nai",
"Richard",
""
],
[
"Gesbert",
"David",
""
],
[
"Caccamo",
"Marco",
""
]
] |
new_dataset
| 0.980487 |
2101.09162
|
Elias Iosif
|
Elias Iosif and Klitos Christodoulou and Andreas Vlachos
|
A Robust Blockchain Readiness Index Model
|
The final authenticated version is available online at
https://doi.org/10.1007/978-3-030-95947-0_7
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the blockchain ecosystem gets more mature, many businesses, investors, and
entrepreneurs are seeking opportunities on working with blockchain systems and
cryptocurrencies. A critical challenge for these actors is to identify the most
suitable environment to start or evolve their businesses. In general, the
question is to identify which countries are offering the most suitable
conditions to host their blockchain-based activities and implement their
innovative projects. The Blockchain Readiness Index (BRI) provides a numerical
metric (referred to as the blockchain readiness score) in measuring the
maturity/readiness levels of a country in adopting blockchain and
cryptocurrencies. In doing so, BRI leverages on techniques from information
retrieval to algorithmically derive an index ranking for a set of countries.
The index considers a range of indicators organized under five pillars:
Government Regulation, Research, Technology, Industry, and User Engagement. In
this paper, we further extend BRI with the capability of deriving the index -
at the country level - even in the presence of missing information for the
indicators. In doing so, we propose two weighting schemes, namely linear
and sigmoid weighting, for refining the initial estimates of the indicator
values. A classification framework was employed to evaluate the effectiveness
of the developed techniques, which yielded significant classification
accuracy.
|
[
{
"version": "v1",
"created": "Wed, 20 Jan 2021 16:14:33 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 11:45:05 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Iosif",
"Elias",
""
],
[
"Christodoulou",
"Klitos",
""
],
[
"Vlachos",
"Andreas",
""
]
] |
new_dataset
| 0.998586 |
2106.11810
|
Holger Caesar
|
Holger Caesar, Juraj Kabzan, Kok Seang Tan, Whye Kit Fong, Eric Wolff,
Alex Lang, Luke Fletcher, Oscar Beijbom, Sammy Omari
|
NuPlan: A closed-loop ML-based planning benchmark for autonomous
vehicles
|
Minor updates to Related Work
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose the world's first closed-loop ML-based planning
benchmark for autonomous driving. While there is a growing body of ML-based
motion planners, the lack of established datasets and metrics has limited the
progress in this area. Existing benchmarks for autonomous vehicle motion
prediction have focused on short-term motion forecasting, rather than long-term
planning. This has led previous works to use open-loop evaluation with L2-based
metrics, which are not suitable for fairly evaluating long-term planning. Our
benchmark overcomes these limitations by introducing a large-scale driving
dataset, lightweight closed-loop simulator, and motion-planning-specific
metrics. We provide a high-quality dataset with 1500h of human driving data
from 4 cities across the US and Asia with widely varying traffic patterns
(Boston, Pittsburgh, Las Vegas and Singapore). We will provide a closed-loop
simulation framework with reactive agents and provide a large set of both
general and scenario-specific planning metrics. We plan to release the dataset
at NeurIPS 2021 and organize benchmark challenges starting in early 2022.
|
[
{
"version": "v1",
"created": "Tue, 22 Jun 2021 14:24:55 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jul 2021 06:35:54 GMT"
},
{
"version": "v3",
"created": "Sun, 19 Dec 2021 20:20:07 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Feb 2022 02:50:02 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Caesar",
"Holger",
""
],
[
"Kabzan",
"Juraj",
""
],
[
"Tan",
"Kok Seang",
""
],
[
"Fong",
"Whye Kit",
""
],
[
"Wolff",
"Eric",
""
],
[
"Lang",
"Alex",
""
],
[
"Fletcher",
"Luke",
""
],
[
"Beijbom",
"Oscar",
""
],
[
"Omari",
"Sammy",
""
]
] |
new_dataset
| 0.999719 |
2108.13233
|
Johannes C. Paetzold
|
Johannes C. Paetzold, Julian McGinnis, Suprosanna Shit, Ivan Ezhov,
Paul B\"uschl, Chinmay Prabhakar, Mihail I. Todorov, Anjany Sekuboyina,
Georgios Kaissis, Ali Ert\"urk, Stephan G\"unnemann, Bjoern H. Menze
|
Whole Brain Vessel Graphs: A Dataset and Benchmark for Graph Learning
and Neuroscience (VesselGraph)
|
Thirty-fifth Conference on Neural Information Processing Systems
Datasets and Benchmarks Track
|
https://neurips.cc/virtual/2021/poster/29873
|
10.5281/zenodo.5301621
| null |
cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Biological neural networks define the brain function and intelligence of
humans and other mammals, and form ultra-large, spatial, structured graphs.
Their neuronal organization is closely interconnected with the spatial
organization of the brain's microvasculature, which supplies oxygen to the
neurons and builds a complementary spatial graph. This vasculature (or the
vessel structure) plays an important role in neuroscience; for example, the
organization of (and changes to) vessel structure can represent early signs of
various pathologies, e.g. Alzheimer's disease or stroke. Recently, advances in
tissue clearing have enabled whole brain imaging and segmentation of the
entirety of the mouse brain's vasculature. Building on these advances in
imaging, we are presenting an extendable dataset of whole-brain vessel graphs
based on specific imaging protocols. Specifically, we extract vascular graphs
using a refined graph extraction scheme leveraging the volume rendering engine
Voreen and provide them in an accessible and adaptable form through the OGB and
PyTorch Geometric dataloaders. Moreover, we benchmark numerous state-of-the-art
graph learning algorithms on the biologically relevant tasks of vessel
prediction and vessel classification using the introduced vessel graph dataset.
Our work paves a path towards advancing graph learning research into the field
of neuroscience. Complementarily, the presented dataset raises challenging
graph learning research questions for the machine learning community, in terms
of incorporating biological priors into learning algorithms, or in scaling
these algorithms to handle sparse, spatial graphs with millions of nodes and
edges. All datasets and code are available for download at
https://github.com/jocpae/VesselGraph .
|
[
{
"version": "v1",
"created": "Mon, 30 Aug 2021 13:40:48 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 15:18:42 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Paetzold",
"Johannes C.",
""
],
[
"McGinnis",
"Julian",
""
],
[
"Shit",
"Suprosanna",
""
],
[
"Ezhov",
"Ivan",
""
],
[
"Büschl",
"Paul",
""
],
[
"Prabhakar",
"Chinmay",
""
],
[
"Todorov",
"Mihail I.",
""
],
[
"Sekuboyina",
"Anjany",
""
],
[
"Kaissis",
"Georgios",
""
],
[
"Ertürk",
"Ali",
""
],
[
"Günnemann",
"Stephan",
""
],
[
"Menze",
"Bjoern H.",
""
]
] |
new_dataset
| 0.999819 |
2110.00736
|
Nathan Kau
|
Nathan Kau
|
Stanford Pupper: A Low-Cost Agile Quadruped Robot for Benchmarking and
Education
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Stanford Pupper, an easily-replicated open source quadruped robot
designed specifically as a benchmark platform for legged robotics research. The
robot features torque-controllable brushless motors with high specific power
that enable testing of impedance and torque-based machine learning and
optimization control approaches. Pupper can be built from the ground up in
under 8 hours for a total cost under $2000, with all components either easily
purchased or 3D printed. To rigorously compare control approaches, we introduce
two benchmarks, Sprint and Scramble, with a leaderboard maintained by Stanford
Student Robotics. These benchmarks test high-speed dynamic locomotion
capability, and robustness to unstructured terrain. We provide a reference
controller with dynamic, omnidirectional gaits that serves as a baseline for
comparison. Reproducibility is demonstrated across multiple institutions with
robots made independently. All material is available at
https://stanfordstudentrobotics.org/quadruped-benchmark.
|
[
{
"version": "v1",
"created": "Sat, 2 Oct 2021 06:35:38 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 23:36:53 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Kau",
"Nathan",
""
]
] |
new_dataset
| 0.999395 |
2201.07438
|
Zhiba Su
|
Dabiao Ma, Yitong Zhang, Meng Li, Feng Ye
|
MHTTS: Fast multi-head text-to-speech for spontaneous speech with
imperfect transcription
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural network based end-to-end Text-to-Speech (TTS) has greatly improved the
quality of synthesized speech. However, how to efficiently use massive
spontaneous speech without transcription remains an open problem. In this paper,
we propose MHTTS, a fast multi-speaker TTS system that is robust to
transcription errors and speaking style speech data. Specifically, we introduce
a multi-head model and transfer text information from high-quality corpus with
manual transcription to spontaneous speech with imperfectly recognized
transcription by jointly training them. MHTTS has three advantages: 1) Our
system synthesizes better quality multi-speaker voice with faster inference
speed. 2) Our system is capable of transferring correct text information to
data with imperfect transcription, simulated using corruption, or provided by
an Automatic Speech Recogniser (ASR). 3) Our system can utilize massive real
spontaneous speech with imperfect transcription and synthesize expressive
voice.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 06:39:00 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 08:30:54 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Ma",
"Dabiao",
""
],
[
"Zhang",
"Yitong",
""
],
[
"Li",
"Meng",
""
],
[
"Ye",
"Feng",
""
]
] |
new_dataset
| 0.999602 |
2201.11925
|
Alejandro Ortiz-Bernardin
|
Sergio Salinas, Nancy Hitschfeld-Kahler, Alejandro Ortiz-Bernardin,
Hang Si
|
POLYLLA: Polygonal meshing algorithm based on terminal-edge regions
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an algorithm to generate a new kind of polygonal mesh
obtained from triangulations. Each polygon is built from a terminal-edge region
surrounded by edges that are not the longest-edge of either of the two triangles
that share them. The algorithm is termed Polylla and is divided into three
phases. The first phase consists of labeling each edge of the input
triangulation according to its size; the second phase builds polygons (simple
or not) from terminal-edge regions using the label system; and the third phase
transforms each non-simple polygon into simple ones. The final mesh contains
polygons with convex and non-convex shapes. Since Voronoi-based meshes are
currently the most used polygonal meshes, we compare some geometric properties
of our meshes against constrained Voronoi meshes. Several experiments were run
to compare the shape and size of polygons, the number of final mesh points and
polygons. For the same input, Polylla meshes contain less polygons than Voronoi
meshes, and the algorithm is simpler and faster than the algorithm to generate
constrained Voronoi meshes. Finally, we have validated Polylla meshes by
solving the Laplace equation on an L-shaped domain using the Virtual Element
Method (VEM). We show that the numerical performance of the VEM using Polylla
meshes and Voronoi meshes is similar.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 04:19:55 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 00:26:46 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Salinas",
"Sergio",
""
],
[
"Hitschfeld-Kahler",
"Nancy",
""
],
[
"Ortiz-Bernardin",
"Alejandro",
""
],
[
"Si",
"Hang",
""
]
] |
new_dataset
| 0.999055 |
2201.11990
|
Mostofa Patwary
|
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley,
Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George
Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi,
Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston,
Saurabh Tiwary, and Bryan Catanzaro
|
Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A
Large-Scale Generative Language Model
|
Shaden Smith and Mostofa Patwary contributed equally
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pretrained general-purpose language models can achieve state-of-the-art
accuracies in various natural language processing domains by adapting to
downstream tasks via zero-shot, few-shot and fine-tuning techniques. Because of
their success, the size of these models has increased rapidly, requiring
high-performance hardware, software, and algorithmic techniques to enable
training such large models. As the result of a joint effort between Microsoft
and NVIDIA, we present details on the training of the largest monolithic
transformer based language model, Megatron-Turing NLG 530B (MT-NLG), with 530
billion parameters. In this paper, we first focus on the infrastructure as well
as the 3D parallelism methodology used to train this model using DeepSpeed and
Megatron. Next, we detail the training process, the design of our training
corpus, and our data curation techniques, which we believe is a key ingredient
to the success of the model. Finally, we discuss various evaluation results, as
well as other interesting observations and new properties exhibited by MT-NLG.
We demonstrate that MT-NLG achieves superior zero-, one-, and few-shot learning
accuracies on several NLP benchmarks and establishes new state-of-the-art
results. We believe that our contributions will help further the development of
large-scale training infrastructures, large-scale language models, and natural
language generation.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 08:59:57 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jan 2022 05:25:13 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Feb 2022 18:02:23 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Smith",
"Shaden",
""
],
[
"Patwary",
"Mostofa",
""
],
[
"Norick",
"Brandon",
""
],
[
"LeGresley",
"Patrick",
""
],
[
"Rajbhandari",
"Samyam",
""
],
[
"Casper",
"Jared",
""
],
[
"Liu",
"Zhun",
""
],
[
"Prabhumoye",
"Shrimai",
""
],
[
"Zerveas",
"George",
""
],
[
"Korthikanti",
"Vijay",
""
],
[
"Zhang",
"Elton",
""
],
[
"Child",
"Rewon",
""
],
[
"Aminabadi",
"Reza Yazdani",
""
],
[
"Bernauer",
"Julie",
""
],
[
"Song",
"Xia",
""
],
[
"Shoeybi",
"Mohammad",
""
],
[
"He",
"Yuxiong",
""
],
[
"Houston",
"Michael",
""
],
[
"Tiwary",
"Saurabh",
""
],
[
"Catanzaro",
"Bryan",
""
]
] |
new_dataset
| 0.997258 |
2202.01725
|
Felix Hensel
|
Thibault de Surrel, Felix Hensel, Mathieu Carri\`ere, Th\'eo Lacombe,
Yuichi Ike, Hiroaki Kurihara, Marc Glisse, Fr\'ed\'eric Chazal
|
RipsNet: a general architecture for fast and robust estimation of the
persistent homology of point clouds
|
23 pages, 4 figures
| null | null | null |
cs.CG cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of topological descriptors in modern machine learning applications,
such as Persistence Diagrams (PDs) arising from Topological Data Analysis
(TDA), has shown great potential in various domains. However, their practical
use in applications is often hindered by two major limitations: the
computational complexity required to compute such descriptors exactly, and
their sensitivity to even low-level proportions of outliers. In this work, we
propose to bypass these two burdens in a data-driven setting by entrusting the
estimation of (vectorization of) PDs built on top of point clouds to a neural
network architecture that we call RipsNet. Once trained on a given data set,
RipsNet can estimate topological descriptors on test data very efficiently with
generalization capacity. Furthermore, we prove that RipsNet is robust to input
perturbations in terms of the 1-Wasserstein distance, a major improvement over
the standard computation of PDs, which only enjoys Hausdorff stability, allowing
RipsNet to substantially outperform exactly-computed PDs in noisy settings. We
showcase the use of RipsNet on both synthetic and real-world data. Our
open-source implementation is publicly available at
https://github.com/hensel-f/ripsnet and will be included in the Gudhi library.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 17:40:04 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 11:23:37 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"de Surrel",
"Thibault",
""
],
[
"Hensel",
"Felix",
""
],
[
"Carrière",
"Mathieu",
""
],
[
"Lacombe",
"Théo",
""
],
[
"Ike",
"Yuichi",
""
],
[
"Kurihara",
"Hiroaki",
""
],
[
"Glisse",
"Marc",
""
],
[
"Chazal",
"Frédéric",
""
]
] |
new_dataset
| 0.98978 |
2202.01821
|
Frederik Warburg
|
Andrea Vallone, Frederik Warburg, Hans Hansen, S{\o}ren Hauberg and
Javier Civera
|
Danish Airs and Grounds: A Dataset for Aerial-to-Street-Level Place
Recognition and Localization
|
Submitted to RA-L (IROS)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Place recognition and visual localization are particularly challenging in
wide baseline configurations. In this paper, we contribute the
\emph{Danish Airs and Grounds} (DAG) dataset, a large collection of
street-level and aerial images targeting such cases. Its main challenge lies in
the extreme viewing-angle difference between query and reference images with
consequent changes in illumination and perspective. The dataset is larger and
more diverse than current publicly available data, including more than 50 km of
road in urban, suburban and rural areas. All images are associated with
accurate 6-DoF metadata that allows the benchmarking of visual localization
methods.
We also propose a map-to-image re-localization pipeline, that first estimates
a dense 3D reconstruction from the aerial images and then matches query
street-level images to street-level renderings of the 3D model. The dataset can
be downloaded at: https://frederikwarburg.github.io/DAG
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 19:58:09 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Vallone",
"Andrea",
""
],
[
"Warburg",
"Frederik",
""
],
[
"Hansen",
"Hans",
""
],
[
"Hauberg",
"Søren",
""
],
[
"Civera",
"Javier",
""
]
] |
new_dataset
| 0.999841 |
2202.01871
|
Saeed-Ul Hassan
|
Sami Ul-Haq, Saeed-Ul Hassan
|
A Bibliometric Perspective of Social Science Scientific Communities of
Pakistan and India
|
35 page, 8 Tables, 10 Figures
| null | null | null |
cs.DL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In this study, we use research publication data from the field of social
science to identify collaboration networks among social science research
communities of India and Pakistan. We used the Scopus database to extract
information on social science journals for both countries, India and Pakistan.
Studying this data is significant as both countries share common social issues
and many common social values. Keyword analysis was performed to identify common
research areas in both communities, such as poverty, education, and gender
issues. Despite these common social issues, collaboration among the
social science research communities of the two countries is not strong.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 22:06:52 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Ul-Haq",
"Sami",
""
],
[
"Hassan",
"Saeed-Ul",
""
]
] |
new_dataset
| 0.99768 |
2202.01914
|
Raihan Seraj
|
Raihan Seraj, Jivitesh Sharma, Ole-Christoffer Granmo
|
Tsetlin Machine for Solving Contextual Bandit Problems
| null | null | null | null |
cs.LG cs.AI cs.NE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper introduces an interpretable contextual bandit algorithm using
Tsetlin Machines, which solves complex pattern recognition tasks using
propositional logic. The proposed bandit learning algorithm relies on
straightforward bit manipulation, thus simplifying computation and
interpretation. We then present a mechanism for performing Thompson sampling
with Tsetlin Machine, given its non-parametric nature. Our empirical analysis
shows that Tsetlin Machine as a base contextual bandit learner outperforms
other popular base learners on eight out of nine datasets. We further analyze
the interpretability of our learner, investigating how arms are selected based
on propositional expressions that model the context.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 00:36:20 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Seraj",
"Raihan",
""
],
[
"Sharma",
"Jivitesh",
""
],
[
"Granmo",
"Ole-Christoffer",
""
]
] |
new_dataset
| 0.999359 |
2202.01934
|
Luyang Liu
|
Luyang Liu, David Racz, Kara Vaillancourt, Julie Michelman, Matt
Barnes, Stefan Mellem, Paul Eastham, Bradley Green, Charles Armstrong, Rishi
Bal, Shawn O'Banion, Feng Guo
|
Smartphone-based Hard-braking Event Detection at Scale for Road Safety
Services
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Road crashes are the sixth leading cause of lost disability-adjusted
life-years (DALYs) worldwide. One major challenge in traffic safety research is
the sparsity of crashes, which makes it difficult to achieve a fine-grain
understanding of crash causations and predict future crash risk in a timely
manner. Hard-braking events have been widely used as a safety surrogate due to
their relatively high prevalence and ease of detection with embedded vehicle
sensors. As an alternative to using sensors fixed in vehicles, this paper
presents a scalable approach for detecting hard-braking events using the
kinematics data collected from smartphone sensors. We train a Transformer-based
machine learning model for hard-braking event detection using concurrent sensor
readings from smartphones and vehicle sensors from drivers who connect their
phone to the vehicle while navigating in Google Maps. The detection model shows
superior performance with a $0.83$ Area under the Precision-Recall Curve
(PR-AUC), which is $3.8\times$ better than a GPS speed-based heuristic model,
and $166.6\times$ better than an accelerometer-based heuristic model. The
detected hard-braking events are strongly correlated with crashes from publicly
available datasets, supporting their use as a safety surrogate. In addition, we
conduct model fairness and selection bias evaluation to ensure that the safety
benefits are equally shared. The developed methodology can benefit many safety
applications such as identifying safety hot spots at road network level,
evaluating the safety of new user interfaces, as well as using routing to
improve traffic safety.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 01:30:32 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Liu",
"Luyang",
""
],
[
"Racz",
"David",
""
],
[
"Vaillancourt",
"Kara",
""
],
[
"Michelman",
"Julie",
""
],
[
"Barnes",
"Matt",
""
],
[
"Mellem",
"Stefan",
""
],
[
"Eastham",
"Paul",
""
],
[
"Green",
"Bradley",
""
],
[
"Armstrong",
"Charles",
""
],
[
"Bal",
"Rishi",
""
],
[
"O'Banion",
"Shawn",
""
],
[
"Guo",
"Feng",
""
]
] |
new_dataset
| 0.998827 |
2202.01997
|
Karen Leung Ms
|
Karen Leung, Marco Pavone
|
Semi-Supervised Trajectory-Feedback Controller Synthesis for Signal
Temporal Logic Specifications
|
Accepted to American Controls Conference 2022
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are spatio-temporal rules that dictate how robots should operate in
complex environments, e.g., road rules govern how (self-driving) vehicles
should behave on the road. However, seamlessly incorporating such rules into a
robot control policy remains challenging especially for real-time applications.
In this work, given a desired spatio-temporal specification expressed in the
Signal Temporal Logic (STL) language, we propose a semi-supervised controller
synthesis technique that is attuned to human-like behaviors while satisfying
desired STL specifications. Offline, we synthesize a trajectory-feedback neural
network controller via an adversarial training scheme that summarizes past
spatio-temporal behaviors when computing controls, and then online, we perform
gradient steps to improve specification satisfaction. Central to the offline
phase is an imitation-based regularization component that fosters better policy
exploration and helps induce naturalistic human behaviors. Our experiments
demonstrate that having imitation-based regularization leads to higher
qualitative and quantitative performance compared to optimizing an STL
objective only as done in prior work. We demonstrate the efficacy of our
approach with an illustrative case study and show that our proposed controller
outperforms a state-of-the-art shooting method in both performance and
computation time.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 06:56:15 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Leung",
"Karen",
""
],
[
"Pavone",
"Marco",
""
]
] |
new_dataset
| 0.997803 |
2202.02069
|
David Mestel
|
David Mestel
|
Beware of Greeks bearing entanglement? Quantum covert channels,
information flow and non-local games
|
35th IEEE Symposium on Computer Security Foundations (CSF 2022)
| null | null | null |
cs.CR quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Can quantum entanglement increase the capacity of (classical) covert
channels? To one familiar with Holevo's Theorem it is tempting to think that
the answer is obviously no. However, in this work we show: quantum entanglement
can in fact increase the capacity of a classical covert channel, in the
presence of an active adversary; on the other hand, a zero-capacity channel is
not improved by entanglement, so entanglement cannot create `purely quantum'
covert channels; the problem of determining the capacity of a given channel in
the presence of entanglement is undecidable; but there is an algorithm to bound
the entangled capacity of a channel from above, adapted from the semi-definite
hierarchy from the theory of non-local games, whose close connection to channel
capacity is at the core of all of our results.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 10:49:20 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Mestel",
"David",
""
]
] |
new_dataset
| 0.993426 |
2202.02071
|
Henrique Moniz
|
Afonso Oliveira, Henrique Moniz, Rodrigo Rodrigues
|
Alea-BFT: Practical Asynchronous Byzantine Fault Tolerance
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional Byzantine Fault Tolerance (BFT) state machine replication
protocols assume a partial synchrony model, leading to a design where a leader
replica drives the protocol and is replaced after a timeout. Recently, we
witnessed a surge of asynchronous BFT protocols that use randomization to
remove the assumptions of bounds on message delivery times, making them more
resilient to adverse network conditions. However, these protocols still fall
short of being practical across a broad range of scenarios due to their cubic
communication costs, use of expensive primitives, and overall protocol
complexity. In this paper, we present Alea-BFT, the first asynchronous BFT
protocol to achieve quadratic communication complexity, allowing it to scale to
large networks. Alea-BFT brings the key design insight from classical protocols
of concentrating part of the work on a single designated replica, and
incorporates this principle in a two stage pipelined design, with an efficient
broadcast led by the designated replica followed by an inexpensive binary
agreement. We evaluated our prototype implementation across 10 sites in 4
continents, and our results show significant scalability gains from the
proposed design.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 10:53:37 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Oliveira",
"Afonso",
""
],
[
"Moniz",
"Henrique",
""
],
[
"Rodrigues",
"Rodrigo",
""
]
] |
new_dataset
| 0.999208 |
2202.02104
|
Sebastian K\"ohler
|
Sebastian K\"ohler, Richard Baker, Martin Strohmeier, Ivan Martinovic
|
Brokenwire : Wireless Disruption of CCS Electric Vehicle Charging
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel attack against the Combined Charging System, one of the
most widely used DC rapid charging systems for electric vehicles (EVs). Our
attack, Brokenwire, interrupts necessary control communication between the
vehicle and charger, causing charging sessions to abort. The attack can be
conducted wirelessly from a distance, allowing individual vehicles or entire
fleets to be disrupted stealthily and simultaneously. In addition, it can be
mounted with off-the-shelf radio hardware and minimal technical knowledge. The
exploited behavior is a required part of the HomePlug Green PHY, DIN 70121 &
ISO 15118 standards, and all known implementations exhibit it.
We first study the attack in a controlled testbed and then demonstrate it
against seven vehicles and 18 chargers in real deployments. We find the attack
to be successful in the real world, at ranges up to 47 m, for a power budget of
less than 1 W. We further show that the attack can work between the floors of a
building (e.g., multi-story parking), through perimeter fences, and from
'drive-by' attacks. We present a heuristic model to estimate the number of
vehicles that can be attacked simultaneously for a given output power.
Brokenwire has immediate implications for many of the around 12 million
battery EVs on the roads worldwide - and profound effects on the new wave of
electrification for vehicle fleets, both for private enterprise and crucial
public services. As such, we conducted a disclosure to the industry and
discussed a range of mitigation techniques that could be deployed to limit the
impact.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 12:38:35 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Köhler",
"Sebastian",
""
],
[
"Baker",
"Richard",
""
],
[
"Strohmeier",
"Martin",
""
],
[
"Martinovic",
"Ivan",
""
]
] |
new_dataset
| 0.997834 |
2202.02259
|
Fuqun Huang
|
Fuqun Huang, Henrique Madeira
|
Targeted Code Inspection based on Human Errors
|
Fast Abstract, The 32nd International Symposium on Software
Reliability Engineering (ISSRE 2021), Oct.25-28, 2021
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As a direct cause of software defects, human error is the key to
understanding and identifying defects. We propose a new code inspection method:
targeted code inspection based on human error mechanisms of software engineers.
Based on common error mechanisms of human cognition, the method targets
error-prone code with high efficiency and minimal effort. The proposed method
is supported by preliminary evidence in a pilot study.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 10:14:16 GMT"
}
] | 2022-02-07T00:00:00 |
[
[
"Huang",
"Fuqun",
""
],
[
"Madeira",
"Henrique",
""
]
] |
new_dataset
| 0.998836 |
2003.05841
|
Christophe Chareton
|
Christophe Chareton, S\'ebastien Bardin, Fran\c{c}ois Bobot, Valentin
Perrelle, Benoit Valiron
|
A Deductive Verification Framework for Circuit-building Quantum Programs
| null | null |
10.1007/978-3-030-72019-3_6
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While recent progress in quantum hardware opens the door for significant
speedup in certain key areas, quantum algorithms are still hard to implement
right, and the validation of such quantum programs is a challenge. Early
attempts either suffer from the lack of automation or parametrized reasoning,
or target high-level abstract algorithm description languages far from the
current de facto consensus of circuit-building quantum programming languages.
As a consequence, no significant quantum algorithm implementation has been
currently verified in a scale-invariant manner. We propose Qbricks, the first
formal verification environment for circuit-building quantum programs,
featuring clear separation between code and proof, parametric specifications
and proofs, high degree of proof automation and allowing to encode quantum
programs in a natural way, i.e. close to textbook style. Qbricks builds on best
practices of formal verification for the classical case and tailors them to the
quantum case: we bring a new domain-specific circuit-building language for
quantum programs, namely Qbricks-DSL, together with a new logical specification
language Qbricks-Spec and a dedicated Hoare-style deductive verification rule
named Hybrid Quantum Hoare Logic. Especially, we introduce and intensively
build upon HOPS, a higher-order extension of the recent path-sum symbolic
representation, used for both specification and automation. To illustrate the
opportunity of Qbricks, we implement the first verified parametric
implementations of several famous and non-trivial quantum algorithms, including
the quantum part of Shor integer factoring (Order Finding - Shor-OF), quantum
phase estimation (QPE) - a basic building block of many quantum algorithms, and
Grover search. These breakthroughs were amply facilitated by the specification
and automated deduction principles introduced within Qbricks.
|
[
{
"version": "v1",
"created": "Thu, 12 Mar 2020 15:21:11 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jul 2020 13:20:35 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Chareton",
"Christophe",
""
],
[
"Bardin",
"Sébastien",
""
],
[
"Bobot",
"François",
""
],
[
"Perrelle",
"Valentin",
""
],
[
"Valiron",
"Benoit",
""
]
] |
new_dataset
| 0.999246 |
2007.05094
|
Teseo Schneider
|
Deshana Desai, Etai Shuchatowitz, Zhongshi Jiang, Teseo Schneider, and
Daniele Panozzo
|
ACORNS: An Easy-To-Use Code Generator for Gradients and Hessians
| null |
SoftwareX, Volume 17, 2022
|
10.1016/j.softx.2021.100901
| null |
cs.MS cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The computation of first- and second-order derivatives is a staple in many
computing applications, ranging from machine learning to scientific computing.
We propose an algorithm to automatically differentiate algorithms written in a
subset of C99 code and its efficient implementation as a Python script. We
demonstrate that our algorithm enables automatic, reliable, and efficient
differentiation of common algorithms used in physical simulation and geometry
processing.
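  As a rough intuition for what "automatically computing a first derivative"
means, a minimal dual-number sketch is shown below; it is illustrative only
and is not the ACORNS interface, which instead generates derivative code from
C99 sources. All names here are hypothetical.

```python
# Minimal forward-mode AD sketch using dual numbers (illustrative only;
# NOT the ACORNS interface, which emits C code from C99 sources).
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value      # f(x)
        self.deriv = deriv      # f'(x)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

def f(x):
    return x * x + x            # f(x) = x^2 + x, so f'(x) = 2x + 1

x = Dual(3.0, 1.0)              # seed dx/dx = 1
y = f(x)
print(y.value, y.deriv)         # prints: 12.0 7.0
```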
|
[
{
"version": "v1",
"created": "Thu, 9 Jul 2020 22:11:48 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Desai",
"Deshana",
""
],
[
"Shuchatowitz",
"Etai",
""
],
[
"Jiang",
"Zhongshi",
""
],
[
"Schneider",
"Teseo",
""
],
[
"Panozzo",
"Daniele",
""
]
] |
new_dataset
| 0.997077 |
2009.09205
|
Liming Zhai
|
Liming Zhai, Felix Juefei-Xu, Qing Guo, Xiaofei Xie, Lei Ma, Wei Feng,
Shengchao Qin, Yang Liu
|
Adversarial Rain Attack and Defensive Deraining for DNN Perception
| null | null | null | null |
cs.CV cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Rain often poses inevitable threats to deep neural network (DNN) based
perception systems, and a comprehensive investigation of the potential risks
rain poses to DNNs is of great importance. However, it is rather difficult to
collect or synthesize rainy images that can represent all rain situations that
could possibly occur in the real world. To this end, in this paper, we start
from a new perspective and propose to combine two totally different studies,
i.e., rainy image synthesis and adversarial attack. We first present an
adversarial rain attack, with which we can simulate various rain situations
with the guidance of deployed DNNs and reveal the potential threat factors that
rain can bring. In particular, we design a factor-aware rain generation method
that synthesizes rain streaks according to the camera exposure process and
models learnable rain factors for adversarial attack. With this generator,
we perform the adversarial rain attack against image classification and
object detection. To defend DNNs from the negative rain effect, we also
present a defensive deraining strategy, for which we design an adversarial rain
augmentation that uses mixed adversarial rain layers to enhance deraining
models for downstream DNN perception. Our large-scale evaluation on various
datasets demonstrates that our synthesized rainy images with realistic
appearances not only exhibit strong adversarial capability against DNNs, but
also boost the deraining models for defensive purposes, building the foundation
for further rain-robust perception studies.
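  A minimal sketch of the general idea of a learnable, gradient-driven rain
perturbation is shown below; it assumes PyTorch, and both `render_rain` and
its single intensity parameter are hypothetical simplifications rather than
the factor-aware generator described above.

```python
import torch

def render_rain(intensity, shape):
    # Toy streak renderer: a fixed random streak mask scaled by a learnable
    # intensity. A real generator would also model streak angle, length,
    # and the camera exposure process.
    gen = torch.Generator().manual_seed(0)
    mask = (torch.rand(shape, generator=gen) > 0.98).float()
    return intensity * mask

def rain_attack(model, image, label, steps=10, lr=0.05):
    # image: (N, C, H, W) tensor in [0, 1]; label: (N,) class indices.
    intensity = torch.tensor(0.2, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        rainy = (image + render_rain(intensity, image.shape)).clamp(0, 1)
        loss = loss_fn(model(rainy), label)
        grad, = torch.autograd.grad(loss, [intensity])
        with torch.no_grad():
            intensity += lr * grad.sign()  # gradient ascent on the loss
    return (image + render_rain(intensity, image.shape)).clamp(0, 1).detach()
```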
|
[
{
"version": "v1",
"created": "Sat, 19 Sep 2020 10:12:08 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 06:32:48 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Zhai",
"Liming",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Guo",
"Qing",
""
],
[
"Xie",
"Xiaofei",
""
],
[
"Ma",
"Lei",
""
],
[
"Feng",
"Wei",
""
],
[
"Qin",
"Shengchao",
""
],
[
"Liu",
"Yang",
""
]
] |
new_dataset
| 0.998928 |
2011.13396
|
Sahil Verma
|
Sahil Verma and Subhajit Roy
|
Debug-Localize-Repair: A Symbiotic Construction for Heap Manipulations
|
Accepted at Formal Methods in System Design
| null | null | null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Wolverine2, an integrated Debug-Localize-Repair environment for
heap-manipulating programs. Wolverine2 provides an interactive debugging
environment: while concretely executing a program on an interactive shell
supporting common debugging facilities, Wolverine2 displays the abstract
program states (as box-and-arrow diagrams) as a visual aid to the programmer,
packages a novel, proof-directed repair algorithm to quickly synthesize repair
patches, and includes a new bug localization algorithm to reduce the search
space of repairs. Wolverine2 supports "hot-patching" of the generated patches
to provide a seamless debugging environment, and also facilitates new
debug-localize-repair possibilities: \textit{specification refinement} and
\textit{checkpoint-based hopping}. We evaluate Wolverine2 on 6400 buggy
programs (generated using automated fault injection) on a variety of
data structures, including singly, doubly, and circular linked lists, AVL
trees, Red-Black trees, Splay trees, and Binary Search trees; Wolverine2 could
repair all the buggy instances within realistic programmer wait-time (less
than 5 seconds in most cases). Wolverine2 could also repair more than 80\% of
the 247 (buggy) student submissions where a reasonable attempt was made.
|
[
{
"version": "v1",
"created": "Thu, 26 Nov 2020 17:23:39 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 22:02:10 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Verma",
"Sahil",
""
],
[
"Roy",
"Subhajit",
""
]
] |
new_dataset
| 0.998958 |
2012.01847
|
Fabio Zanasi
|
Filippo Bonchi, Fabio Gadducci, Aleks Kissinger, Pawel Sobocinski, and
Fabio Zanasi
|
String Diagram Rewrite Theory I: Rewriting with Frobenius Structure
| null | null | null | null |
cs.LO math.CT
|
http://creativecommons.org/licenses/by/4.0/
|
String diagrams are a powerful and intuitive graphical syntax that originated
in the study of symmetric monoidal categories. In the last few years, they
have found application in the modelling of various computational structures,
in fields as diverse as Computer Science, Physics, Control Theory, Linguistics,
and Biology.
  In many such proposals, the transformations of the described systems are
modelled as rewrite rules of diagrams. These developments demand a mathematical
foundation for string diagram rewriting: whereas rewrite theory for terms is
well understood, the two-dimensional nature of string diagrams poses additional
challenges.
  This work systematises and expands a series of recent conference papers
laying down such a foundation. As a first step, we focus on rewrite
systems for string diagrammatic theories that feature a Frobenius algebra.
This situation appears ubiquitously in various approaches: for instance, in the
algebraic semantics of linear dynamical systems, Frobenius structures model the
wiring of circuits; in categorical quantum mechanics, they model interacting
quantum observables.
  Our work introduces a combinatorial interpretation of string diagram
rewriting modulo Frobenius structures, in terms of double-pushout hypergraph
rewriting. Furthermore, we prove this interpretation to be sound and complete.
In the last part, we also show that the approach can be generalised to
rewriting modulo multiple Frobenius structures. As a proof of concept, we show
how to derive from these results a termination strategy for Interacting
Bialgebras, an important rewrite theory in the study of quantum circuits and
signal flow graphs.
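  For readers unfamiliar with the structure involved: a Frobenius algebra
consists of a monoid (mu, eta) and a comonoid (delta, epsilon) on the same
object, interacting via the Frobenius law, written here in ordinary
categorical notation rather than as string diagrams:

```latex
% The Frobenius law relating multiplication \mu and comultiplication \delta.
\[
  (\mu \otimes \mathrm{id}) \circ (\mathrm{id} \otimes \delta)
  \;=\; \delta \circ \mu \;=\;
  (\mathrm{id} \otimes \mu) \circ (\delta \otimes \mathrm{id})
\]
```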
|
[
{
"version": "v1",
"created": "Thu, 3 Dec 2020 11:46:06 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 18:25:37 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Bonchi",
"Filippo",
""
],
[
"Gadducci",
"Fabio",
""
],
[
"Kissinger",
"Aleks",
""
],
[
"Sobocinski",
"Pawel",
""
],
[
"Zanasi",
"Fabio",
""
]
] |
new_dataset
| 0.987678 |
2103.07620
|
Zeyu Jiao
|
Zeyu Jiao, Huan Lei, Hengshan Zong, Yingjie Cai, Zhenyu Zhong
|
Potential Escalator-related Injury Identification and Prevention Based
on Multi-module Integrated System for Public Health
|
Please excuse us for taking some of your time. We have not yet studied this
work completely, and some significant new results have been discovered. After
careful consideration, we are going to rearrange this manuscript and try to
give a more precise model. We have therefore decided, with great regret, to
withdraw this manuscript
| null |
10.1007/s00138-022-01273-2
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Escalator-related injuries threaten public health with the widespread use of
escalators. Existing studies tend to focus on after-the-fact statistics,
reflecting on defects in original design and use to reduce the impact of
escalator-related injuries, but little attention has been paid to ongoing and
impending injuries. In this study, a multi-module escalator safety monitoring
system based on computer vision is designed and proposed to simultaneously
monitor and address three major injury triggers: losing balance, not holding
on to handrails, and carrying large items. The escalator identification module
is used to determine the escalator region, namely the region of interest. The
passenger monitoring module estimates the passengers' poses to recognize
unsafe behaviors on the escalator. The dangerous object detection module
detects large items that may enter the escalator and raises alarms. The
processing results of the above three modules are summarized in the safety
assessment module as the basis for the system's intelligent decisions. The
experimental results demonstrate that the proposed system has good performance
and great application potential.
|
[
{
"version": "v1",
"created": "Sat, 13 Mar 2021 05:26:18 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Mar 2021 03:39:49 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Jiao",
"Zeyu",
""
],
[
"Lei",
"Huan",
""
],
[
"Zong",
"Hengshan",
""
],
[
"Cai",
"Yingjie",
""
],
[
"Zhong",
"Zhenyu",
""
]
] |
new_dataset
| 0.998578 |
2107.13055
|
Gedaliah Knizhnik
|
Gedaliah Knizhnik, Mark Yim
|
Thrust Direction Control of an Underactuated Oscillating Swimming Robot
|
6 pages. Published in and presented at the 2021 IEEE/RSJ International
Conference on Intelligent Robots and Systems
| null |
10.1109/IROS51168.2021.9636778
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Modboat is an autonomous surface robot that turns the oscillation of a
single motor into a controlled paddling motion through passive flippers.
Inertial control methods developed in prior work can successfully drive the
Modboat along trajectories and enable docking to neighboring modules, but have
a non-constant cycle time and cannot react to dynamic environments. In this
work we present a thrust direction control method for the Modboat that
significantly improves the time-response of the system and increases the
accuracy with which it can be controlled. We experimentally demonstrate that
this method can be used to perform more compact maneuvers than prior methods or
comparable robots can. We also present an extension to the controller that
solves the reaction wheel problem of unbounded actuator velocity, and show that
it further improves performance.
|
[
{
"version": "v1",
"created": "Tue, 27 Jul 2021 19:42:34 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 17:15:38 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Knizhnik",
"Gedaliah",
""
],
[
"Yim",
"Mark",
""
]
] |
new_dataset
| 0.997987 |
2108.10869
|
Zachary Teed
|
Zachary Teed and Jia Deng
|
DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce DROID-SLAM, a new deep learning-based SLAM system. DROID-SLAM
consists of recurrent iterative updates of camera pose and pixelwise depth
through a Dense Bundle Adjustment layer. DROID-SLAM is accurate, achieving
large improvements over prior work, and robust, suffering substantially
fewer catastrophic failures. Despite training on monocular video, it can
leverage stereo or RGB-D video to achieve improved performance at test time.
Our open-source code is available at https://github.com/princeton-vl/DROID-SLAM.
|
[
{
"version": "v1",
"created": "Tue, 24 Aug 2021 17:50:10 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 19:28:32 GMT"
}
] | 2022-02-04T00:00:00 |
[
[
"Teed",
"Zachary",
""
],
[
"Deng",
"Jia",
""
]
] |
new_dataset
| 0.966898 |