id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
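The header above describes the record schema. As a rough sketch (the pipe-delimited export, the `load_metadata` helper name, and the two skipped header lines are assumptions about how this dump would be stored, not part of the source), such a table could be loaded with pandas:

```python
import pandas as pd

# Column names taken from the table header above.
COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "abstract",
    "versions", "update_date", "authors_parsed", "prediction", "probability",
]

def load_metadata(path_or_buffer):
    """Load a pipe-delimited export of the metadata table.

    Skips the header and separator rows; keeps `id` as a string
    (arXiv IDs like "cs/0006036" are not numeric).
    """
    df = pd.read_csv(
        path_or_buffer, sep="|", names=COLUMNS, skiprows=2,
        dtype={"id": str},
    )
    # `probability` is a float64 column per the schema; coerce stray strings.
    df["probability"] = pd.to_numeric(df["probability"], errors="coerce")
    return df
```

This is only a sketch under the stated assumptions; a real export would also need the multi-line abstracts and JSON fields quoted or escaped.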
2202.12769
|
Francesco Tudisco
|
Francesco Tudisco and Desmond J. Higham
|
Core-periphery detection in hypergraphs
| null | null | null | null |
cs.SI cs.NA math.NA physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Core-periphery detection is a key task in exploratory network analysis where
one aims to find a core, a set of nodes well-connected internally and with the
periphery, and a periphery, a set of nodes connected only (or mostly) with the
core. In this work we propose a core-periphery model for higher-order
networks modeled as hypergraphs, together with a method for computing a
core-score vector that quantifies how close each node is to the core. In
particular, we show that this method solves the corresponding non-convex
core-periphery optimization problem globally to an arbitrary precision. This
method turns out to coincide with the computation of the Perron eigenvector of
a nonlinear hypergraph operator, suitably defined in terms of the incidence
matrix of the hypergraph, generalizing recently proposed centrality models for
hypergraphs. We perform several experiments on synthetic and real-world
hypergraphs showing that the proposed method outperforms alternative
core-periphery detection algorithms, in particular those obtained by
transferring established graph methods to the hypergraph setting via clique
expansion.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 15:40:45 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Tudisco",
"Francesco",
""
],
[
"Higham",
"Desmond J.",
""
]
] |
new_dataset
| 0.99882 |
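The `versions` and `authors_parsed` fields in the record above are JSON-style nested lists. A minimal, hypothetical helper (the `format_authors` name is an assumption) for turning `authors_parsed` back into display names might look like:

```python
import json

def format_authors(authors_parsed_json):
    """Convert a JSON string of [[last, first, suffix], ...] triples
    into a comma-separated 'First Last' author string."""
    parsed = json.loads(authors_parsed_json)
    names = []
    for last, first, suffix in parsed:
        # Skip empty parts (the suffix slot is often "").
        name = " ".join(part for part in (first, last, suffix) if part)
        names.append(name)
    return ", ".join(names)
```

For the record above, the two parsed triples would render as "Francesco Tudisco, Desmond J. Higham".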
2202.12864
|
Mahsa Eftekhari
|
David Doty and Mahsa Eftekhari
|
Dynamic size counting in population protocols
| null | null | null | null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The population protocol model describes a network of anonymous agents that
interact asynchronously in pairs chosen at random. Each agent starts in the
same initial state $s$. We introduce the *dynamic size counting* problem:
approximately counting the number of agents in the presence of an adversary who
at any time can remove any number of agents or add any number of new agents in
state $s$. A valid solution requires that after each addition/removal event,
resulting in population size $n$, with high probability each agent "quickly"
computes the same constant-factor estimate of the value $\log_2 n$ (how quickly
is called the *convergence* time), which remains the output of every agent for
as long as possible (the *holding* time). Since the adversary can remove
agents, the holding time is necessarily finite: even after the adversary stops
altering the population, it is impossible to *stabilize* to an output that
never again changes.
We first show that a protocol solves the dynamic size counting problem if and
only if it solves the *loosely-stabilizing counting* problem: that of
estimating $\log n$ in a *fixed-size* population, but where the adversary can
initialize each agent in an arbitrary state, with the same convergence time and
holding time. We then show a protocol solving the loosely-stabilizing counting
problem with the following guarantees: if the population size is $n$, $M$ is
the largest initial estimate of $\log n$, and $s$ is the maximum integer
initially stored in any field of the agents' memory, we have expected
convergence time $O(\log n + \log M)$, expected polynomial holding time, and
expected memory usage of $O(\log^2 (s) + (\log \log n)^2)$ bits. Interpreted as
a dynamic size counting protocol, when changing from population size $n_{prev}$
to $n_{next}$, the convergence time is $O(\log n_{next} + \log \log n_{prev})$.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 18:18:02 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Doty",
"David",
""
],
[
"Eftekhari",
"Mahsa",
""
]
] |
new_dataset
| 0.951658 |
2202.12884
|
Benedict Wilkins
|
Benedict Wilkins, Kostas Stathis
|
Learning to Identify Perceptual Bugs in 3D Video Games
| null | null | null | null |
cs.SE cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automated Bug Detection (ABD) in video games is composed of two distinct but
complementary problems: automated game exploration and bug identification.
Automated game exploration has received much recent attention, spurred on by
developments in fields such as reinforcement learning. The complementary
problem of identifying the bugs present in a player's experience has for the
most part relied on the manual specification of rules. Although it is widely
recognised that many bugs of interest cannot be identified with such methods,
little progress has been made in this direction. In this work we show that it
is possible to identify a range of perceptual bugs using learning-based methods
by making use of only the rendered game screen as seen by the player. To
support our work, we have developed World of Bugs (WOB), an open platform for
testing ABD methods in 3D game environments.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 18:50:11 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Wilkins",
"Benedict",
""
],
[
"Stathis",
"Kostas",
""
]
] |
new_dataset
| 0.997618 |
cs/0006036
|
Andreas Stolcke
|
E. Shriberg and A. Stolcke and D. Hakkani-Tur and G. Tur
|
Prosody-Based Automatic Segmentation of Speech into Sentences and Topics
|
30 pages, 9 figures. To appear in Speech Communication 32(1-2),
Special Issue on Accessing Information in Spoken Audio, September 2000
|
Speech Communication 32(1-2), 127-154, September 2000
|
10.1016/S0167-6393(00)00028-5
| null |
cs.CL
| null |
A crucial step in processing speech audio data for information extraction,
topic detection, or browsing/playback is to segment the input into sentence and
topic units. Speech segmentation is challenging, since the cues typically
present for segmenting text (headers, paragraphs, punctuation) are absent in
spoken language. We investigate the use of prosody (information gleaned from
the timing and melody of speech) for these tasks. Using decision tree and
hidden Markov modeling techniques, we combine prosodic cues with word-based
approaches, and evaluate performance on two speech corpora, Broadcast News and
Switchboard. Results show that the prosodic model alone performs on par with,
or better than, word-based statistical language models -- for both true and
automatically recognized words in news speech. The prosodic model achieves
comparable performance with significantly less training data, and requires no
hand-labeling of prosodic events. Across tasks and corpora, we obtain a
significant improvement over word-only models using a probabilistic combination
of prosodic and lexical information. Inspection reveals that the prosodic
models capture language-independent boundary indicators described in the
literature. Finally, cue usage is task and corpus dependent. For example, pause
and pitch features are highly informative for segmenting news speech, whereas
pause, duration and word-based cues dominate for natural conversation.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2000 04:39:57 GMT"
}
] | 2022-02-28T00:00:00 |
[
[
"Shriberg",
"E.",
""
],
[
"Stolcke",
"A.",
""
],
[
"Hakkani-Tur",
"D.",
""
],
[
"Tur",
"G.",
""
]
] |
new_dataset
| 0.993985 |
2012.02218
|
Md Saif Hassan Onim
|
Md. Saif Hassan Onim, Muhaiminul Islam Akash, Mahmudul Haque, Raiyan
Ibne Hafiz
|
Traffic Surveillance using Vehicle License Plate Detection and
Recognition in Bangladesh
| null | null |
10.1109/ICECE51571.2020.9393109
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Computer vision coupled with Deep Learning (DL) techniques offers substantial
prospects in the field of traffic control, monitoring, and law
enforcement activities. This paper presents a YOLOv4 object detection model in
which the Convolutional Neural Network (CNN) is trained and tuned for detecting
the license plates of vehicles in Bangladesh, with characters recognized from
the detected plates using Tesseract. Here we also present a
Graphical User Interface (GUI) based on Tkinter, a Python package. The license
plate detection model achieves a mean average precision (mAP) of 90.50%
and runs on a single Tesla T4 GPU at an average of 14 frames per second
(fps) on real-time video footage.
|
[
{
"version": "v1",
"created": "Thu, 3 Dec 2020 19:16:49 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Onim",
"Md. Saif Hassan",
""
],
[
"Akash",
"Muhaiminul Islam",
""
],
[
"Haque",
"Mahmudul",
""
],
[
"Hafiz",
"Raiyan Ibne",
""
]
] |
new_dataset
| 0.980713 |
2012.03597
|
Guangshuai Gao
|
Guangshuai Gao, Qingjie Liu, Zhenghui Hu, Lu Li, Qi Wen, Yunhong Wang
|
PSGCNet: A Pyramidal Scale and Global Context Guided Network for Dense
Object Counting in Remote Sensing Images
|
Accepted by TGRS
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Object counting, which aims to count the accurate number of object instances
in images, has been attracting more and more attention. However, challenges
such as large scale variation, complex background interference, and non-uniform
density distribution greatly limit the counting accuracy, particularly
in remote sensing imagery. To mitigate the above issues, this paper proposes a
novel framework for dense object counting in remote sensing images, which
incorporates a pyramidal scale module (PSM) and a global context module (GCM),
dubbed PSGCNet, where PSM is used to adaptively capture multi-scale information
and GCM guides the model to select suitable scales generated by PSM.
Moreover, a reliable supervision manner improved from Bayesian and Counting
loss (BCL) is utilized to learn the density probability and then compute the
count expectation at each annotation. It can relieve non-uniform density
distribution to a certain extent. Extensive experiments on four remote sensing
counting datasets demonstrate the effectiveness of the proposed method and its
superiority over state-of-the-art methods. Additionally, experiments
extended on four commonly used crowd counting datasets further validate the
generalization ability of the model. Code is available at
https://github.com/gaoguangshuai/PSGCNet.
|
[
{
"version": "v1",
"created": "Mon, 7 Dec 2020 11:35:56 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Sep 2021 06:17:02 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Feb 2022 13:20:17 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Gao",
"Guangshuai",
""
],
[
"Liu",
"Qingjie",
""
],
[
"Hu",
"Zhenghui",
""
],
[
"Li",
"Lu",
""
],
[
"Wen",
"Qi",
""
],
[
"Wang",
"Yunhong",
""
]
] |
new_dataset
| 0.999634 |
2102.06186
|
Viacheslav Borovitskiy
|
Fedor Pavutnitskiy, Sergei O. Ivanov, Evgeny Abramov, Viacheslav
Borovitskiy, Artem Klochkov, Viktor Vialov, Anatolii Zaikovskii, Aleksandr
Petiushko
|
Quadric Hypersurface Intersection for Manifold Learning in Feature Space
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The knowledge that data lies close to a particular submanifold of the ambient
Euclidean space may be useful in a number of ways. For instance, one may want
to automatically mark any point far away from the submanifold as an outlier or
to use the geometry to come up with a better distance metric. Manifold learning
problems are often posed in a very high dimension, e.g. for spaces of images or
spaces of words. Today, with deep representation learning on the rise in areas
such as computer vision and natural language processing, many problems of this
kind may be transformed into problems of moderately high dimension, typically
of the order of hundreds. Motivated by this, we propose a manifold learning
technique suitable for moderately high dimension and large datasets. The
manifold is learned from the training data in the form of an intersection of
quadric hypersurfaces -- simple but expressive objects. At test time, this
manifold can be used to introduce a computationally efficient outlier score for
arbitrary new data points and to improve a given similarity metric by
incorporating the learned geometric structure into it.
|
[
{
"version": "v1",
"created": "Thu, 11 Feb 2021 18:52:08 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Feb 2022 11:36:21 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Pavutnitskiy",
"Fedor",
""
],
[
"Ivanov",
"Sergei O.",
""
],
[
"Abramov",
"Evgeny",
""
],
[
"Borovitskiy",
"Viacheslav",
""
],
[
"Klochkov",
"Artem",
""
],
[
"Vialov",
"Viktor",
""
],
[
"Zaikovskii",
"Anatolii",
""
],
[
"Petiushko",
"Aleksandr",
""
]
] |
new_dataset
| 0.957303 |
2105.14428
|
Liqi Yang
|
Liqi Yang, Linhan Luo, Lifeng Xin, Xiaofeng Zhang, Xinni Zhang
|
DAGNN: Demand-aware Graph Neural Networks for Session-based
Recommendation
|
There were errors in the experimental analysis
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Session-based recommendations have been widely adopted for various online
video and E-commerce websites. Most existing approaches are intuitively
proposed to discover underlying interests or preferences from the anonymous
session data. This ignores the fact that these sequential behaviors
usually reflect a session user's potential demand, i.e., a semantic-level
factor, and therefore estimating underlying demands from a session is challenging.
To address the aforementioned issue, this paper proposes a demand-aware graph
neural network (DAGNN). Particularly, a demand modeling component is designed
to first extract the session demand, and the underlying multiple demands of each
session are estimated using the global demand matrix. Then, the demand-aware
graph neural network is designed to extract the session demand graph and learn
the demand-aware item embeddings for the later recommendations. A mutual
information loss is further designed to enhance the quality of the learnt
embeddings. Extensive experiments on several real-world datasets show that
the proposed model achieves state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Sun, 30 May 2021 04:55:04 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Feb 2022 18:03:18 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Yang",
"Liqi",
""
],
[
"Luo",
"Linhan",
""
],
[
"Xin",
"Lifeng",
""
],
[
"Zhang",
"Xiaofeng",
""
],
[
"Zhang",
"Xinni",
""
]
] |
new_dataset
| 0.962664 |
2106.07856
|
Akarsh Prabhakara
|
Akarsh Prabhakara, Diana Zhang, Chao Li, Sirajum Munir, Aswin
Sankanaryanan, Anthony Rowe, Swarun Kumar
|
A Hybrid mmWave and Camera System for Long-Range Depth Imaging
| null | null | null | null |
cs.CV cs.NI cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
mmWave radars offer excellent depth resolution even at very long ranges owing
to their high bandwidth. But their angular resolution is at least an
order-of-magnitude worse than camera and lidar systems. Hence, mmWave radar is
not a capable 3-D imaging solution in isolation. We propose Metamoran, a system
that combines the complementary strengths of radar and camera to obtain
accurate, high resolution depth images over long ranges even in high clutter
environments, all from a single fixed vantage point. Metamoran enables rich
long-range depth imaging with applications in security and surveillance,
roadside safety infrastructure and wide-area mapping. Our approach leverages
the high angular resolution from cameras using computer vision techniques,
including image segmentation and monocular depth estimation, to obtain object
shape. Our core contribution is a method to convert this object shape into an
RF I/Q equivalent, which we use in a novel radar processing pipeline to help
declutter the scene and capture extremely weak reflections from objects at long
distances. We perform a detailed evaluation of Metamoran's depth imaging
capabilities in 400 diverse scenes. Our evaluation shows that Metamoran
estimates the depth of static objects up to 90 m and moving objects up to 305 m,
with a median error of 28 cm, an improvement of 13$\times$ compared to a
naive radar+camera baseline and 23$\times$ compared to monocular depth
estimation.
|
[
{
"version": "v1",
"created": "Tue, 15 Jun 2021 03:19:35 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Feb 2022 16:17:41 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Feb 2022 16:08:52 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Prabhakara",
"Akarsh",
""
],
[
"Zhang",
"Diana",
""
],
[
"Li",
"Chao",
""
],
[
"Munir",
"Sirajum",
""
],
[
"Sankanaryanan",
"Aswin",
""
],
[
"Rowe",
"Anthony",
""
],
[
"Kumar",
"Swarun",
""
]
] |
new_dataset
| 0.994564 |
2109.12979
|
Jean-Emmanuel Deschaud
|
Pierre Dellenbach, Jean-Emmanuel Deschaud, Bastien Jacquet,
Fran\c{c}ois Goulette
|
CT-ICP: Real-time Elastic LiDAR Odometry with Loop Closure
|
7 pages
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-beam LiDAR sensors are increasingly used in robotics, particularly with
autonomous cars for localization and perception tasks, both relying on the
ability to build a precise map of the environment. For this, we propose a new
real-time LiDAR-only odometry method called CT-ICP (for Continuous-Time ICP),
completed into a full SLAM with a novel loop detection procedure. The core of
this method is the combination of continuity in the scan
matching and discontinuity between scans. It allows both the elastic
distortion of the scan during the registration for increased precision, and the
increased robustness to high frequency motions from the discontinuity.
We build a complete SLAM on top of this odometry, using a fast pure LiDAR
loop detection based on elevation image 2D matching, providing a pose graph
with loop constraints. To show the robustness of the method, we tested it on
seven datasets: KITTI, KITTI-raw, KITTI-360, KITTI-CARLA, ParisLuco, Newer
College, and NCLT in driving and high-frequency motion scenarios. Both the
CT-ICP odometry and the loop detection are made available online. CT-ICP is
currently first, among those giving access to a public code, on the KITTI
odometry leaderboard, with an average Relative Translation Error (RTE) of 0.59%
and an average time per scan of 60ms on a CPU with a single thread.
|
[
{
"version": "v1",
"created": "Mon, 27 Sep 2021 12:08:26 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Feb 2022 18:50:34 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Dellenbach",
"Pierre",
""
],
[
"Deschaud",
"Jean-Emmanuel",
""
],
[
"Jacquet",
"Bastien",
""
],
[
"Goulette",
"François",
""
]
] |
new_dataset
| 0.994555 |
2112.09045
|
Paul Bergmann
|
Paul Bergmann, Xin Jin, David Sattlegger, Carsten Steger
|
The MVTec 3D-AD Dataset for Unsupervised 3D Anomaly Detection and
Localization
|
Accepted for presentation at VISAPP 2022
| null |
10.5220/0010865000003124
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the first comprehensive 3D dataset for the task of unsupervised
anomaly detection and localization. It is inspired by real-world visual
inspection scenarios in which a model has to detect various types of defects on
manufactured products, even if it is trained only on anomaly-free data. There
are defects that manifest themselves as anomalies in the geometric structure of
an object. These cause significant deviations in a 3D representation of the
data. We employed a high-resolution industrial 3D sensor to acquire depth scans
of 10 different object categories. For all object categories, we present a
training and validation set, each of which solely consists of scans of
anomaly-free samples. The corresponding test sets contain samples showing
various defects such as scratches, dents, holes, contaminations, or
deformations. Precise ground-truth annotations are provided for every anomalous
test sample. An initial benchmark of 3D anomaly detection methods on our
dataset indicates considerable room for improvement.
|
[
{
"version": "v1",
"created": "Thu, 16 Dec 2021 17:35:51 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Bergmann",
"Paul",
""
],
[
"Jin",
"Xin",
""
],
[
"Sattlegger",
"David",
""
],
[
"Steger",
"Carsten",
""
]
] |
new_dataset
| 0.999798 |
2202.10667
|
Xiaohan Zhang
|
Xiaohan Zhang, Yifeng Zhu, Yan Ding, Yuke Zhu, Peter Stone, Shiqi
Zhang
|
Visually Grounded Task and Motion Planning for Mobile Manipulation
|
To be published in IEEE International Conference on Robotics and
Automation (ICRA), May 23-27, 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Task and motion planning (TAMP) algorithms aim to help robots achieve
task-level goals, while maintaining motion-level feasibility. This paper
focuses on TAMP domains that involve robot behaviors that take extended periods
of time (e.g., long-distance navigation). In this paper, we develop a visual
grounding approach to help robots probabilistically evaluate action
feasibility, and introduce a TAMP algorithm, called GROP, that optimizes both
feasibility and efficiency. We have collected a dataset that includes 96,000
simulated trials of a robot conducting mobile manipulation tasks, and then used
the dataset to learn to ground symbolic spatial relationships for action
feasibility evaluation. Compared with competitive TAMP baselines, GROP
exhibited a higher task-completion rate while maintaining lower or comparable
action costs. In addition to these extensive experiments in simulation, GROP is
fully implemented and tested on a real robot system.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 04:44:54 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Feb 2022 03:12:08 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Zhang",
"Xiaohan",
""
],
[
"Zhu",
"Yifeng",
""
],
[
"Ding",
"Yan",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Stone",
"Peter",
""
],
[
"Zhang",
"Shiqi",
""
]
] |
new_dataset
| 0.994454 |
2202.11742
|
Shizhe Chen
|
Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid and
Ivan Laptev
|
Think Global, Act Local: Dual-scale Graph Transformer for
Vision-and-Language Navigation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following language instructions to navigate in unseen environments is a
challenging problem for autonomous embodied agents. The agent not only needs to
ground language in visual scenes, but also should explore the environment to
reach its target. In this work, we propose a dual-scale graph transformer
(DUET) for joint long-term action planning and fine-grained cross-modal
understanding. We build a topological map on-the-fly to enable efficient
exploration in global action space. To balance the complexity of large action
space reasoning and fine-grained language grounding, we dynamically combine a
fine-scale encoding over local observations and a coarse-scale encoding on a
global map via graph transformers. The proposed approach, DUET, significantly
outperforms state-of-the-art methods on goal-oriented vision-and-language
navigation (VLN) benchmarks REVERIE and SOON. It also improves the success rate
on the fine-grained VLN benchmark R2R.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 19:06:53 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Chen",
"Shizhe",
""
],
[
"Guhur",
"Pierre-Louis",
""
],
[
"Tapaswi",
"Makarand",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Laptev",
"Ivan",
""
]
] |
new_dataset
| 0.997419 |
2202.11811
|
Cj Barberan
|
CJ Barberan, Sina Alemohammad, Naiming Liu, Randall Balestriero,
Richard G. Baraniuk
|
NeuroView-RNN: It's About Time
|
21 pages, 13 figures, 9 tables
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recurrent Neural Networks (RNNs) are important tools for processing
sequential data such as time-series or video. Interpretability is defined as
the ability to be understood by a person and is different from explainability,
which is the ability to be explained in a mathematical formulation. A key
interpretability issue with RNNs is that it is not clear how each hidden state
per time step contributes to the decision-making process in a quantitative
manner. We propose NeuroView-RNN as a family of new RNN architectures that
explains how all the time steps are used for the decision-making process. Each
member of the family is derived from a standard RNN architecture by
concatenation of the hidden states into a global linear classifier. The global
linear classifier has all the hidden states as the input, so the weights of the
classifier have a linear mapping to the hidden states. Hence, from the weights,
NeuroView-RNN can quantify how important each time step is to a particular
decision. As a bonus, NeuroView-RNN also offers higher accuracy in many cases
compared to the RNNs and their variants. We showcase the benefits of
NeuroView-RNN by evaluating on a multitude of diverse time-series datasets.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 22:29:11 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Barberan",
"CJ",
""
],
[
"Alemohammad",
"Sina",
""
],
[
"Liu",
"Naiming",
""
],
[
"Balestriero",
"Randall",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] |
new_dataset
| 0.982279 |
2202.11813
|
Alexander Heinrich
|
Alexander Heinrich, Niklas Bittner, Matthias Hollick
|
AirGuard -- Protecting Android Users From Stalking Attacks By Apple Find
My Devices
| null | null | null | null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finder networks in general, and Apple's Find My network in particular, can
pose a grave threat to users' privacy and even health if these networks are
abused for stalking. Apple's release of the AirTag, a very affordable tracker
covered by the nearly ubiquitous Find My network, amplified this issue. While
Apple provides a stalking detection feature within its ecosystem, billions of
Android users are still left in the dark. Apple recently released the Android
app "Tracker Detect," which does not deliver a convincing feature set for
stalking protection. We reverse engineer Apple's tracking protection in iOS and
discuss its features regarding stalking detection. We design "AirGuard" and
release it as an Android app to protect against abuse by Apple tracking
devices. We compare the performance of our solution with the Apple-provided one
in iOS and study the use of AirGuard in the wild over multiple weeks using data
contributed by tens of thousands of active users.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 22:31:28 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Heinrich",
"Alexander",
""
],
[
"Bittner",
"Niklas",
""
],
[
"Hollick",
"Matthias",
""
]
] |
new_dataset
| 0.999557 |
2202.11840
|
Jiawei Wang
|
Li Li and Jiawei Wang and Haowei Quan
|
Scalpel: The Python Static Analysis Framework
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite being the most popular programming language, Python has not yet
received enough attention from the community. To the best of our knowledge,
there is no general static analysis framework proposed to facilitate the
implementation of dedicated Python static analyzers. To fill this gap, we
design and implement such a framework (named Scalpel) and make it publicly
available as an open-source project. The Scalpel framework has already
integrated a number of fundamental static analysis functions (e.g., call graph
constructions, control-flow graph constructions, alias analysis, etc.) that are
ready to be reused by developers to implement client applications focusing on
statically resolving dedicated Python problems such as detecting bugs or fixing
vulnerabilities.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 00:27:56 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Li",
"Li",
""
],
[
"Wang",
"Jiawei",
""
],
[
"Quan",
"Haowei",
""
]
] |
new_dataset
| 0.995335 |
2202.11864
|
Benjamin Nagy
|
Ben Nagy
|
Some Stylometric Remarks on Ovid's Heroides and the Epistula Sapphus
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This article aims to contribute to two well-worn areas of debate in classical
Latin philology, relating to Ovid's Heroides. The first is the question of the
authenticity (and, to a lesser extent the correct position) of the letter
placed fifteenth by almost every editor -- the so-called Epistula Sapphus
(henceforth ES). The secondary question, although perhaps now less fervently
debated, is the authenticity of the 'Double Heroides', placed by those who
accept them as letters 16-21. I employ a variety of methods drawn from the
domain of computational stylometry to consider the poetics and the
lexico-grammatical features of these elegiac poems in the broader context of a
corpus of 'shorter' (from 20 to 546 lines) elegiac works from five authors (266
poems in all) comprising more or less all of the non-fragmentary classical
corpus. Based on a variety of techniques, every measure gives clear indication
that the poetic style of the Heroides is Ovidian, but distinctive; they can be
accurately isolated from Ovid more broadly. The Single and Double Heroides
split into two clear groups, with the ES grouped consistently with the single
letters. Furthermore, by comparing the style of the letters with the 'early'
(although there are complications in this label) works of the Amores and the
late works of the Ex Ponto, the evidence supports sequential composition --
meaning that the ES is correctly placed -- and, further, supports the growing
consensus that the double letters were composed significantly later, in exile.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 02:03:51 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Nagy",
"Ben",
""
]
] |
new_dataset
| 0.99884 |
2202.11878
|
Zhize Wu
|
Zhize Wu, Huanyi Li, Xiaofeng Wang, Zijun Wu, Le Zou, Lixiang Xu, and
Ming Tan
|
New Benchmark for Household Garbage Image Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Household garbage images are usually faced with complex backgrounds, variable
illuminations, diverse angles, and changeable shapes, which pose great
difficulty for garbage image classification. Due to the ability to discover
problem-specific features, deep learning and especially convolutional neural
networks (CNNs) have been successfully and widely used for image representation
learning. However, available and stable household garbage datasets are
insufficient, which seriously limits the development of research and
application. Besides, the state of the art in the field of garbage image
classification is not entirely clear. To solve this problem, in this study, we
built a new open benchmark dataset for household garbage image classification
by simulating different lightings, backgrounds, angles, and shapes. This
dataset is named 30 Classes of Household Garbage Images (HGI-30), which
contains 18,000 images of 30 household garbage classes. The publicly available
HGI-30 dataset allows researchers to develop accurate and robust methods for
household garbage recognition. We also conducted experiments and performance
analysis of the state-of-the-art deep CNN methods on HGI-30, which serves as
baseline results on this benchmark.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 03:07:59 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Wu",
"Zhize",
""
],
[
"Li",
"Huanyi",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Wu",
"Zijun",
""
],
[
"Zou",
"Le",
""
],
[
"Xu",
"Lixiang",
""
],
[
"Tan",
"Ming",
""
]
] |
new_dataset
| 0.999683 |
2202.11931
|
Yuanfan Xu
|
Yuanfan Xu, Jincheng Yu, Jiahao Tang, Jiantao Qiu, Jian Wang, Yuan
Shen, Yu Wang, Huazhong Yang
|
Explore-Bench: Data Sets, Metrics and Evaluations for Frontier-based and
Deep-reinforcement-learning-based Autonomous Exploration
|
To be published in IEEE International Conference on Robotics and
Automation (ICRA), May 23-27, 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous exploration and mapping of unknown terrains employing single or
multiple robots is an essential task in mobile robotics and has therefore been
widely investigated. Nevertheless, given the lack of unified data sets,
metrics, and platforms to evaluate the exploration approaches, we develop an
autonomous robot exploration benchmark entitled Explore-Bench. The benchmark
involves various exploration scenarios and presents two types of quantitative
metrics to evaluate exploration efficiency and multi-robot cooperation.
Explore-Bench is extremely useful as, recently, deep reinforcement learning
(DRL) has been widely used for robot exploration tasks and achieved promising
results. However, training DRL-based approaches requires large data sets, and
additionally, current benchmarks rely on realistic simulators with a slow
simulation speed, which is not appropriate for training exploration strategies.
Hence, to support efficient DRL training and comprehensive evaluation, the
suggested Explore-Bench designs a 3-level platform with a unified data flow and
$12 \times$ speed-up that includes a grid-based simulator for fast evaluation
and efficient training, a realistic Gazebo simulator, and a remotely accessible
robot testbed for high-accuracy tests in physical environments. The
practicality of the proposed benchmark is highlighted with the application of
one DRL-based and three frontier-based exploration approaches. Furthermore, we
analyze the performance differences and provide some insights about the
selection and design of exploration methods. Our benchmark is available at
https://github.com/efc-robot/Explore-Bench.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 07:06:01 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Xu",
"Yuanfan",
""
],
[
"Yu",
"Jincheng",
""
],
[
"Tang",
"Jiahao",
""
],
[
"Qiu",
"Jiantao",
""
],
[
"Wang",
"Jian",
""
],
[
"Shen",
"Yuan",
""
],
[
"Wang",
"Yu",
""
],
[
"Yang",
"Huazhong",
""
]
] |
new_dataset
| 0.998857 |
2202.11982
|
Daniel Braun
|
Daniel Braun, Olivier Morel, Pascal Vasseur, C\'edric Demonceaux
|
N-QGN: Navigation Map from a Monocular Camera using Quadtree Generating
Networks
|
6 pages + references, accepted to ICRA 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Monocular depth estimation has been a popular area of research for several
years, especially since self-supervised networks have shown increasingly good
results in bridging the gap with supervised and stereo methods. However, these
approaches focus their interest on dense 3D reconstruction and sometimes on
tiny details that are superfluous for autonomous navigation. In this paper, we
propose to address this issue by estimating the navigation map under a quadtree
representation. The objective is to create an adaptive depth map prediction
that only extracts details that are essential for obstacle avoidance. Other
3D regions that leave large room for navigation are provided with an
approximate distance. Experiments on the KITTI dataset show that our method can
significantly reduce the amount of output information without major loss of
accuracy.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 09:39:37 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Braun",
"Daniel",
""
],
[
"Morel",
"Olivier",
""
],
[
"Vasseur",
"Pascal",
""
],
[
"Demonceaux",
"Cédric",
""
]
] |
new_dataset
| 0.990124 |
2202.12085
|
Elmira Moussavi
|
Elmira Moussavi, Dominik Sisejkovic, Fabian Brings, Daniyar Kizatov,
Animesh Singh, Xuan Thang Vu, Sven Ingebrandt, Rainer Leupers, Vivek Pachauri
and Farhad Merchant
|
pHGen: A pH-Based Key Generation Mechanism Using ISFETs
|
Accepted in HOST 2022
| null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Digital keys are a fundamental component of many hardware- and software-based
security mechanisms. However, digital keys are limited to binary values and
easily exploitable when stored in standard memories. In this paper, based on
emerging technologies, we introduce pHGen, a potential-of-hydrogen (pH)-based
key generation mechanism that leverages chemical reactions in the form of a
potential change in ion-sensitive field-effect transistors (ISFETs). The
threshold voltage of ISFETs is manipulated corresponding to a known pH buffer
solution (key) in which the transistors are immersed. To read the chemical
information effectively via ISFETs, we designed a readout circuit for stable
operation and detection of voltage thresholds. To demonstrate the applicability
of the proposed key generation, we utilize pHGen for logic locking -- a
hardware integrity protection scheme. The proposed key-generation method breaks
the limits of binary values and provides the first steps toward the utilization
of multi-valued voltage thresholds of ISFETs controlled by chemical
information. The pHGen approach is expected to be a turning point for using
more sophisticated bio-based analog keys for securing next-generation
electronics.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 13:09:52 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Moussavi",
"Elmira",
""
],
[
"Sisejkovic",
"Dominik",
""
],
[
"Brings",
"Fabian",
""
],
[
"Kizatov",
"Daniyar",
""
],
[
"Singh",
"Animesh",
""
],
[
"Vu",
"Xuan Thang",
""
],
[
"Ingebrandt",
"Sven",
""
],
[
"Leupers",
"Rainer",
""
],
[
"Pachauri",
"Vivek",
""
],
[
"Merchant",
"Farhad",
""
]
] |
new_dataset
| 0.993929 |
2202.12245
|
Marcos Faundez-Zanuy
|
Laurence Likforman-Sulem, Anna Esposito, Marcos Faundez-Zanuy, Stephan
Clemen\c{c}on, Gennaro Cordasco
|
EMOTHAW: A novel database for emotional state recognition from
handwriting
|
31 pages
|
IEEE Transactions on Human-Machine Systems, vol. 47, no. 2, pp.
273-284, April 2017
|
10.1109/THMS.2016.2635441
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The detection of negative emotions through daily activities such as
handwriting is useful for promoting well-being. The spread of human-machine
interfaces such as tablets makes the collection of handwriting samples easier.
In this context, we present a first publicly available handwriting database
which relates emotional states to handwriting, that we call EMOTHAW. This
database includes samples of 129 participants whose emotional states, namely
anxiety, depression and stress, are assessed by the Depression Anxiety Stress
Scales (DASS) questionnaire. Seven tasks are recorded through a digitizing
tablet: pentagons and house drawing, words copied in handprint, circles and
clock drawing, and one sentence copied in cursive writing. Records consist of
pen positions, on-paper and in-air, time stamps, pressure, pen azimuth and
altitude. We report our analysis on this database. From the collected data, we
first compute measurements related to timing and ductus. We compute separate
measurements according to the position of the writing device: on paper or
in-air. We analyse and classify this set of measurements (referred to as
features) using a random forest approach. The latter is a machine learning
method [2], based on an ensemble of decision trees, which includes a feature
ranking process. We use this ranking process to identify the features which
best reveal a targeted emotional state.
We then build random forest classifiers associated with each emotional state.
Our results, obtained from cross-validation experiments, show that the targeted
emotional states can be identified with accuracies ranging from 60% to 71%.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 15:15:44 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Likforman-Sulem",
"Laurence",
""
],
[
"Esposito",
"Anna",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Clemençon",
"Stephan",
""
],
[
"Cordasco",
"Gennaro",
""
]
] |
new_dataset
| 0.999587 |
2202.12250
|
Hussain Nyeem
|
Md. Saif Hassan Onim, Hussain Nyeem, Koushik Roy, Mahmudul Hasan,
Abtahi Ishmam, Md. Akiful Hoque Akif, Tareque Bashar Ovi
|
BLPnet: A new DNN model and Bengali OCR engine for Automatic License
Plate Recognition
|
Submitted to Neurocomputing
(https://www.sciencedirect.com/journal/neurocomputing/about/aims-and-scope)
| null | null | null |
cs.CV cs.AI cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
The development of the Automatic License Plate Recognition (ALPR) system has
received much attention for English license plates. However, despite Bengali
speakers being the sixth largest population in the world, no significant
progress can be tracked in Bengali-speaking countries or states for an ALPR
system addressing their more alarming traffic management with inadequate
road-safety measures. This paper reports a computationally efficient and reasonably
accurate Automatic License Plate Recognition (ALPR) system for Bengali
characters with a new end-to-end DNN model that we call Bengali License Plate
Network(BLPnet). The cascaded architecture for detecting vehicle regions prior
to vehicle license plate (VLP) in the model is proposed to eliminate false
positives resulting in higher detection accuracy of VLP. Besides, a lower set
of trainable parameters is considered for reducing the computational cost
making the system faster and more compatible for a real-time application. With
a Convolutional Neural Network (CNN)-based new Bengali OCR engine and
word-mapping process, the model is character-rotation invariant and can
readily extract, detect, and output the complete license plate number of a
vehicle. Fed with 17 frames per second (fps) of real-time video footage, the
model can detect a vehicle with a Mean Squared Error (MSE) of 0.0152 and a
mean license plate character recognition accuracy of 95%. Compared to the
other models, improvements of 5% and 20% were recorded for BLPnet over
the prominent YOLO-based ALPR model and the Tesseract model in
number-plate detection accuracy and time requirement, respectively.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 22:58:53 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Onim",
"Md. Saif Hassan",
""
],
[
"Nyeem",
"Hussain",
""
],
[
"Roy",
"Koushik",
""
],
[
"Hasan",
"Mahmudul",
""
],
[
"Ishmam",
"Abtahi",
""
],
[
"Akif",
"Md. Akiful Hoque",
""
],
[
"Ovi",
"Tareque Bashar",
""
]
] |
new_dataset
| 0.999594 |
2202.12280
|
Mahika Phutane
|
Mahika Phutane, Julie Wright, Brenda Veronica Castro, Lei Shi, Simone
R. Stern, Holly M. Lawson, Shiri Azenkot
|
Tactile Materials in Practice: Understanding the Experiences of Teachers
of the Visually Impaired
|
35 pages, 6 figures, 3 tables, to be published in TACCESS
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Teachers of the visually impaired (TVIs) regularly present tactile materials
(tactile graphics, 3D models, and real objects) to students with vision
impairments. Researchers have been increasingly interested in designing tools
to support the use of tactile materials, but we still lack an in-depth
understanding of how tactile materials are created and used in practice today.
To address this gap, we conducted interviews with 21 TVIs and a 3-week diary
study with eight of them. We found that tactile materials were regularly used
for academic as well as non-academic concepts like tactile literacy, motor
ability, and spatial awareness. Real objects and 3D models served as "stepping
stones" to tactile graphics and our participants preferred to teach with 3D
models, despite finding them difficult to create, obtain, and modify. Use of
certain materials also carried social implications; participants selected
materials that fostered student independence and allowed classroom inclusion. We
contribute design considerations, encouraging future work on tactile materials
to enable student and TVI co-creation, facilitate rapid prototyping, and
promote movement and spatial awareness. To support future research in this
area, our paper provides a fundamental understanding of current practices. We
bridge these practices to established pedagogical approaches and highlight
opportunities for growth regarding this important genre of educational
materials.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 18:21:47 GMT"
}
] | 2022-02-25T00:00:00 |
[
[
"Phutane",
"Mahika",
""
],
[
"Wright",
"Julie",
""
],
[
"Castro",
"Brenda Veronica",
""
],
[
"Shi",
"Lei",
""
],
[
"Stern",
"Simone R.",
""
],
[
"Lawson",
"Holly M.",
""
],
[
"Azenkot",
"Shiri",
""
]
] |
new_dataset
| 0.992382 |
2012.14195
|
Sebastian Enqvist
|
Sebastian Enqvist and Valentin Goranko
|
The temporal logic of coalitional goal assignments in concurrent
multi-player games
| null | null | null | null |
cs.LO cs.GT cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce and study a natural extension of the Alternating-time temporal
logic ATL, called Temporal Logic of Coalitional Goal Assignments (TLCGA). It
features just one, but quite expressive, coalitional strategic operator, viz.
the coalitional goal assignment operator, which is based on a mapping assigning
to each set of players in the game its coalitional goal, formalised by a path
formula of the language of TLCGA, i.e. a formula prefixed with a temporal
operator X,U, or G, representing a temporalised objective for the respective
coalition, describing the property of the plays on which that objective is
satisfied. We establish fixpoint characterizations of the temporal goal
assignments in a mu-calculus extension of TLCGA, discuss its expressiveness and
illustrate it with some examples, prove bisimulation invariance and
Hennessy-Milner property for it with respect to a suitably defined notion of
bisimulation, construct a sound and complete axiomatic system for TLCGA, and
obtain its decidability via the finite model property.
|
[
{
"version": "v1",
"created": "Mon, 28 Dec 2020 11:20:20 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jan 2022 09:54:18 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Feb 2022 19:05:36 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Enqvist",
"Sebastian",
""
],
[
"Goranko",
"Valentin",
""
]
] |
new_dataset
| 0.999234 |
2109.07138
|
Raghavendra Selvan
|
Raghavendra Selvan, Erik B Dam, S{\o}ren Alexander Flensborg, Jens
Petersen
|
Patch-based Medical Image Segmentation using Matrix Product State Tensor
Networks
|
Journal extension of our preliminary conference work "Segmenting
two-dimensional structures with strided tensor networks", Selvan et al. 2021,
available at arXiv:2102.06900. 24 pages, 12 figures. Accepted to be published
at the Journal of Machine Learning for Biomedical Imaging, to be updated at
https://www.melba-journal.org/papers/2022:005.html
|
Journal of Machine Learning for Biomedical Imaging. 2022:005. pp
1-24
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Tensor networks are efficient factorisations of high-dimensional tensors into
a network of lower-order tensors. They have been most commonly used to model
entanglement in quantum many-body systems and more recently are witnessing
increased applications in supervised machine learning. In this work, we
formulate image segmentation in a supervised setting with tensor networks. The
key idea is to first lift the pixels in image patches to exponentially
high-dimensional feature spaces and using a linear decision hyper-plane to
classify the input pixels into foreground and background classes. The
high-dimensional linear model itself is approximated using the matrix product
state (MPS) tensor network. The MPS is weight-shared between the
non-overlapping image patches resulting in our strided tensor network model.
The performance of the proposed model is evaluated on three 2D- and one 3D-
biomedical imaging datasets. The performance of the proposed tensor network
segmentation model is compared with relevant baseline methods. In the 2D
experiments, the tensor network model yields competitive performance compared
to the baseline methods while being more resource efficient.
|
[
{
"version": "v1",
"created": "Wed, 15 Sep 2021 07:54:05 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Feb 2022 14:01:56 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Selvan",
"Raghavendra",
""
],
[
"Dam",
"Erik B",
""
],
[
"Flensborg",
"Søren Alexander",
""
],
[
"Petersen",
"Jens",
""
]
] |
new_dataset
| 0.973489 |
2109.07831
|
Li Duan
|
Li Duan and Gerardo Aragon-Camarasa
|
GarNet: A Continuous Robot Vision Approach for Predicting Shapes and
Visually Perceived Weights of Garments
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a Garment Similarity Network (GarNet) that learns geometric and
physical similarities between known garments by continuously observing a
garment while a robot picks it up from a table. The aim is to capture and
encode geometric and physical characteristics of a garment into a manifold
where a decision can be carried out, such as predicting the garment's shape
class and its visually perceived weight. Our approach features an early stop
strategy, which means that GarNet does not need to observe a garment being
picked up from a crumpled to a hanging state to make a prediction. In our
experiments, we find that GarNet achieves prediction accuracies of 92% for
shape classification and 95.5% for predicting weights, and advances
state-of-the-art approaches by 21% for shape classification.
|
[
{
"version": "v1",
"created": "Thu, 16 Sep 2021 09:47:32 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 18:56:42 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Feb 2022 16:31:34 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Duan",
"Li",
""
],
[
"Aragon-Camarasa",
"Gerardo",
""
]
] |
new_dataset
| 0.99249 |
2110.03370
|
Binbin Zhang
|
Binbin Zhang, Hang Lv, Pengcheng Guo, Qijie Shao, Chao Yang, Lei Xie,
Xin Xu, Hui Bu, Xiaoyu Chen, Chenchen Zeng, Di Wu, Zhendong Peng
|
WenetSpeech: A 10000+ Hours Multi-domain Mandarin Corpus for Speech
Recognition
| null | null | null | null |
cs.SD cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present WenetSpeech, a multi-domain Mandarin corpus
consisting of 10000+ hours of high-quality labeled speech, 2400+ hours of
weakly labeled speech, and about 10000 hours of unlabeled speech, with 22400+ hours in
total. We collect the data from YouTube and Podcast, which covers a variety of
speaking styles, scenarios, domains, topics, and noisy conditions. An optical
character recognition (OCR) based method is introduced to generate the
audio/text segmentation candidates for the YouTube data on its corresponding
video captions, while a high-quality ASR transcription system is used to
generate audio/text pair candidates for the Podcast data. Then we propose a
novel end-to-end label error detection approach to further validate and filter
the candidates. We also provide three manually labelled high-quality test sets
along with WenetSpeech for evaluation -- Dev for cross-validation purpose in
training, Test_Net, collected from the Internet for matched testing, and
Test_Meeting, recorded from real meetings for more challenging mismatched
test. Baseline systems trained with WenetSpeech are provided for three popular
speech recognition toolkits, namely Kaldi, ESPnet, and WeNet, and recognition
results on the three test sets are also provided as benchmarks. To the best of
our knowledge, WenetSpeech is the current largest open-sourced Mandarin speech
corpus with transcriptions, which benefits research on production-level speech
recognition.
|
[
{
"version": "v1",
"created": "Thu, 7 Oct 2021 12:05:29 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Oct 2021 13:42:57 GMT"
},
{
"version": "v3",
"created": "Mon, 18 Oct 2021 09:17:11 GMT"
},
{
"version": "v4",
"created": "Wed, 29 Dec 2021 10:21:22 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Feb 2022 06:42:31 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Zhang",
"Binbin",
""
],
[
"Lv",
"Hang",
""
],
[
"Guo",
"Pengcheng",
""
],
[
"Shao",
"Qijie",
""
],
[
"Yang",
"Chao",
""
],
[
"Xie",
"Lei",
""
],
[
"Xu",
"Xin",
""
],
[
"Bu",
"Hui",
""
],
[
"Chen",
"Xiaoyu",
""
],
[
"Zeng",
"Chenchen",
""
],
[
"Wu",
"Di",
""
],
[
"Peng",
"Zhendong",
""
]
] |
new_dataset
| 0.999175 |
2202.05487
|
Johannes Zerwas
|
Johannes Zerwas, Csaba Gy\"orgyi, Andreas Blenk, Stefan Schmid, Chen
Avin
|
Kevin: de Bruijn-based topology with demand-aware links and greedy
routing
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Kevin, a novel demand-aware reconfigurable rack-to-rack datacenter
network realized with a simple and efficient control plane. In particular,
Kevin makes effective use of the network capacity by supporting integrated and
multi-hop routing as well as work-conserving scheduling. To this end, Kevin
relies on local greedy routing with small forwarding tables which require local
updates only during topological reconfigurations, making this approach ideal
for dynamic networks. Specifically, Kevin is based on a de Bruijn topology
(using a small number of optical circuit switches) in which static links are
enhanced with opportunistic links.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 07:34:48 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Feb 2022 10:38:09 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Zerwas",
"Johannes",
""
],
[
"Györgyi",
"Csaba",
""
],
[
"Blenk",
"Andreas",
""
],
[
"Schmid",
"Stefan",
""
],
[
"Avin",
"Chen",
""
]
] |
new_dataset
| 0.99124 |
2202.09955
|
Chao Lv
|
Chao Lv, Han Zhang, XinKai Du, Yunhao Zhang, Ying Huang, Wenhao Li,
Jia Han, Shanshan Gu
|
StyleBERT: Chinese pretraining by font style information
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
With the success of downstream tasks using English pre-trained language
models, pre-trained Chinese language models are also necessary to achieve
better performance on Chinese NLP tasks. Unlike English, Chinese has its own
special characteristics, such as glyph information. In this article, we propose
the Chinese pre-trained language model StyleBERT, which incorporates the
following embedding information to enhance the capability of the language
model: word, pinyin, five-stroke, and chaizi. The experiments show that the
model achieves good performance on a wide range of Chinese NLP tasks.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 02:45:12 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Feb 2022 01:30:45 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Lv",
"Chao",
""
],
[
"Zhang",
"Han",
""
],
[
"Du",
"XinKai",
""
],
[
"Zhang",
"Yunhao",
""
],
[
"Huang",
"Ying",
""
],
[
"Li",
"Wenhao",
""
],
[
"Han",
"Jia",
""
],
[
"Gu",
"Shanshan",
""
]
] |
new_dataset
| 0.990566 |
2202.11134
|
Aditya Kusupati
|
Dhruv Jain, Khoa Huynh Anh Nguyen, Steven Goodman, Rachel
Grossman-Kahn, Hung Ngo, Aditya Kusupati, Ruofei Du, Alex Olwal, Leah
Findlater, Jon E. Froehlich
|
ProtoSound: A Personalized and Scalable Sound Recognition System for
Deaf and Hard-of-Hearing Users
|
Published at the ACM CHI Conference on Human Factors in Computing
Systems (CHI) 2022
| null |
10.1145/3491102.3502020
| null |
cs.HC cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances have enabled automatic sound recognition systems for deaf and
hard of hearing (DHH) users on mobile devices. However, these tools use
pre-trained, generic sound recognition models, which do not meet the diverse
needs of DHH users. We introduce ProtoSound, an interactive system for
customizing sound recognition models by recording a few examples, thereby
enabling personalized and fine-grained categories. ProtoSound is motivated by
prior work examining sound awareness needs of DHH people and by a survey we
conducted with 472 DHH participants. To evaluate ProtoSound, we characterized
performance on two real-world sound datasets, showing significant improvement
over state-of-the-art (e.g., +9.7% accuracy on the first dataset). We then
deployed ProtoSound's end-user training and real-time recognition through a
mobile application and recruited 19 hearing participants who listened to the
real-world sounds and rated the accuracy across 56 locations (e.g., homes,
restaurants, parks). Results show that ProtoSound personalized the model
on-device in real-time and accurately learned sounds across diverse acoustic
contexts. We close by discussing open challenges in personalizable sound
recognition, including the need for better recording interfaces and algorithmic
improvements.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 19:21:13 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Jain",
"Dhruv",
""
],
[
"Nguyen",
"Khoa Huynh Anh",
""
],
[
"Goodman",
"Steven",
""
],
[
"Grossman-Kahn",
"Rachel",
""
],
[
"Ngo",
"Hung",
""
],
[
"Kusupati",
"Aditya",
""
],
[
"Du",
"Ruofei",
""
],
[
"Olwal",
"Alex",
""
],
[
"Findlater",
"Leah",
""
],
[
"Froehlich",
"Jon E.",
""
]
] |
new_dataset
| 0.992602 |
2202.11136
|
Bhawana Chhaglani
|
Bhawana Chhaglani, Camellia Zakaria, Adam Lechowicz, Prashant Shenoy,
Jeremy Gummeson
|
FlowSense: Monitoring Airflow in Building Ventilation Systems Using
Audio Sensing
|
26 pages, 12 figures, Will appear in March issue of the IMWUT 2022
journal
| null |
10.1145/3517258
| null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proper indoor ventilation through buildings' heating, ventilation, and air
conditioning (HVAC) systems has become an increasing public health concern that
significantly impacts individuals' health and safety at home, work, and school.
While much work has progressed in providing energy-efficient and user comfort
for HVAC systems through IoT devices and mobile-sensing approaches, ventilation
is an aspect that has received lesser attention despite its importance. With a
motivation to monitor airflow from building ventilation systems through
commodity sensing devices, we present FlowSense, a machine learning-based
algorithm to predict airflow rate from sensed audio data in indoor spaces. Our
ML technique can predict the state of an air vent (whether it is on or off) as
well as the rate of air flowing through active vents. By exploiting a low-pass
filter to obtain low-frequency audio signals, we put together a
privacy-preserving pipeline that leverages a silence detection algorithm to
only sense for sounds of air from the HVAC air vent when no human speech is
detected. We also propose the Minimum Persistent Sensing (MPS) as a
post-processing algorithm to reduce interference from ambient noise, including
ongoing human conversation, office machines, and traffic noises. Together,
these techniques ensure user privacy and improve the robustness of FlowSense.
We validate our approach yielding over 90% accuracy in predicting vent status
and 0.96 MSE in predicting airflow rate when the device is placed within 2.25
meters away from an air vent. Additionally, we demonstrate how our approach as
a mobile audio-sensing platform is robust to smartphone models, distance, and
orientation. Finally, we evaluate FlowSense's privacy-preserving pipeline
through a user study and a Google Speech Recognition service, confirming that
the audio signals we used as input data are inaudible and not reconstructible.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 19:22:36 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Chhaglani",
"Bhawana",
""
],
[
"Zakaria",
"Camellia",
""
],
[
"Lechowicz",
"Adam",
""
],
[
"Shenoy",
"Prashant",
""
],
[
"Gummeson",
"Jeremy",
""
]
] |
new_dataset
| 0.990829 |
2202.11168
|
Nitesh Goyal
|
Nitesh Goyal, Leslie Park, Lucy Vasserman
|
"You have to prove the threat is real": Understanding the needs of
Female Journalists and Activists to Document and Report Online Harassment
|
CHI Conference on Human Factors in Computing Systems (CHI '22), April
29-May 5, 2022, New Orleans, LA, USA
| null |
10.1145/3491102.3517517
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Online harassment is a major societal challenge that impacts multiple
communities. Some members of the community, like female journalists and activists,
bear significantly higher impacts since their profession requires easy
accessibility, transparency about their identity, and involves highlighting
stories of injustice. Through a multi-phased qualitative research study
involving a focus group and interviews with 27 female journalists and
activists, we mapped the journey of a target who goes through harassment. We
introduce the PMCR framework as a way to focus on needs for Prevention,
Monitoring, Crisis and Recovery. We focused on Crisis and Recovery, and
designed a tool to satisfy a target's needs related to documenting evidence of
harassment during the crisis and creating reports that could be shared with
support networks for recovery. Finally, we discuss users' feedback to this
tool, highlighting needs for targets as they face the burden and offer
recommendations to future designers and scholars on how to develop tools that
can help targets manage their harassment.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 20:41:55 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Goyal",
"Nitesh",
""
],
[
"Park",
"Leslie",
""
],
[
"Vasserman",
"Lucy",
""
]
] |
new_dataset
| 0.994862 |
2202.11201
|
Weilin Zheng
|
Weilin Zheng, Bo Liu, Hong-Ning Dai, Zigui Jiang, Zibin Zheng,
Muhammad Imran
|
Unravelling Token Ecosystem of EOSIO Blockchain
|
15 pages, 12 figures, 6 tables
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Being the largest Initial Coin Offering project, EOSIO has attracted great
interest in cryptocurrency markets. Despite its popularity and prosperity
(e.g., 26,311,585,008 token transactions occurred from June 8, 2018 to Aug. 5,
2020), there is almost no work investigating the EOSIO token ecosystem. To fill
this gap, we are the first to conduct a systematic investigation on the EOSIO
token ecosystem by conducting a comprehensive graph analysis on the entire
on-chain EOSIO data (nearly 135 million blocks). We construct token creator
graphs, token-contract creator graphs, token holder graphs, and token transfer
graphs to characterize token creators, holders, and transfer activities.
Through graph analysis, we have obtained many insightful findings and observed
some abnormal trading patterns. Moreover, we propose a fake-token detection
algorithm to identify tokens generated by fake users or fake transactions and
analyze their corresponding manipulation behaviors. Evaluation results also
demonstrate the effectiveness of our algorithm.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 11:37:38 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Zheng",
"Weilin",
""
],
[
"Liu",
"Bo",
""
],
[
"Dai",
"Hong-Ning",
""
],
[
"Jiang",
"Zigui",
""
],
[
"Zheng",
"Zibin",
""
],
[
"Imran",
"Muhammad",
""
]
] |
new_dataset
| 0.998237 |
2202.11341
|
Marco Spanghero
|
M. Lenhart, M. Spanghero, P. Papadimitratos
|
Distributed and Mobile Message Level Relaying/Replaying of GNSS Signals
| null | null |
10.33012/2022.18227
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the introduction of Navigation Message Authentication (NMA), future
Global Navigation Satellite Systems (GNSSs) prevent spoofing by simulation,
i.e., the generation of forged satellite signals based on public information.
However, authentication does not prevent record-and-replay attacks, commonly
termed as meaconing. These attacks are less powerful in terms of adversarial
control over the victim receiver location and time, but by acting at the signal
level, they are not thwarted by NMA. This makes replaying/relaying attacks a
significant threat for GNSS. While there are numerous investigations on
meaconing, the majority does not rely on actual implementation and experimental
evaluation in real-world settings. In this work, we contribute to the
improvement of the experimental understanding of meaconing attacks. We design
and implement a system capable of real-time, distributed, and mobile meaconing,
built with off-the-shelf hardware. We extend from basic distributed attacks,
with signals from different locations relayed over the Internet and replayed
within range of the victim receiver(s): this has high bandwidth requirements
and thus depends on the quality of service of the available network to work. To
overcome this limitation, we propose to replay on message level, including the
authentication part of the payload. The resultant reduced bandwidth enables the
attacker to operate in mobile scenarios, as well as to replay signals from
multiple GNSS constellations and/or bands simultaneously. Additionally, the
attacker can delay individually selected satellite signals to potentially
influence the victim position and time solution in a more fine-grained manner.
Our versatile test-bench, enabling different types of replaying/relaying
attacks, facilitates testing realistic scenarios towards new and improved
replaying/relaying-focused countermeasures in GNSS receivers.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 07:54:46 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Lenhart",
"M.",
""
],
[
"Spanghero",
"M.",
""
],
[
"Papadimitratos",
"P.",
""
]
] |
new_dataset
| 0.978417 |
2202.11364
|
Daniil Gavrilov
|
Maksim Zubkov, Daniil Gavrilov
|
FastRPB: a Scalable Relative Positional Encoding for Long Sequence Tasks
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers achieve remarkable performance in various domains, including
NLP, CV, audio processing, and graph analysis. However, they do not scale well
to long sequence tasks because of their quadratic complexity in the input
length. Linear Transformers were proposed to address this limitation, but
these models show weaker performance on long sequence tasks than the original
architecture. In this paper, we explore Linear Transformer models, rethinking
their two core components. First, we improve the Linear Transformer with a
Shift-Invariant Kernel Function (SIKF), which achieves higher accuracy without
a loss in speed. Second, we introduce FastRPB (Fast Relative Positional Bias),
which efficiently adds positional information to self-attention using the Fast
Fourier Transform. FastRPB is independent of the self-attention mechanism and
can be combined with the original self-attention and all of its efficient
variants. FastRPB has O(N log N) computational complexity and requires O(N)
memory with respect to the input sequence length N.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 09:12:00 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Zubkov",
"Maksim",
""
],
[
"Gavrilov",
"Daniil",
""
]
] |
new_dataset
| 0.992905 |
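The FastRPB abstract above claims O(N log N) time for adding relative positional bias via the Fast Fourier Transform. As an illustrative sketch only (not the paper's code, and all names below are our own): a relative positional bias depends only on the offset i - j, so applying it to a sequence is a Toeplitz matrix-vector product, which can be computed in O(N log N) by embedding the Toeplitz matrix in a circulant matrix and using the FFT.

```python
import numpy as np

def toeplitz_matvec(c, v):
    """Multiply the Toeplitz matrix T[i, j] = c(i - j) by v in O(N log N).

    c is an array of length 2N - 1 holding the bias for offsets
    -(N-1) .. N-1, i.e. c[k] = c(k - (N - 1)).  The Toeplitz matrix is
    embedded in a 2N x 2N circulant matrix, which the FFT diagonalizes.
    """
    N = len(v)
    assert len(c) == 2 * N - 1
    M = 2 * N
    a = np.zeros(M)
    a[:N] = c[N - 1:]        # offsets 0 .. N-1 (first column of T)
    a[N + 1:] = c[:N - 1]    # offsets -(N-1) .. -1 wrap to the tail
    x = np.concatenate([v, np.zeros(N)])
    y = np.fft.ifft(np.fft.fft(a) * np.fft.fft(x)).real
    return y[:N]             # first N entries equal T @ v
```

The circulant trick works because the DFT diagonalizes every circulant matrix, so the product costs three FFTs instead of an O(N^2) dense multiply.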
2202.11374
|
Xiaoguang Zhu
|
Xiaoguang Zhu, Ye Zhu, Haoyu Wang, Honglin Wen, Yan Yan and Peilin Liu
|
Skeleton Sequence and RGB Frame Based Multi-Modality Feature Fusion
Network for Action Recognition
|
Accepted by ACM Transactions on Multimedia Computing, Communications,
and Applications (TOMM)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Action recognition has been an active topic in computer vision because of its
wide applications in vision systems. Previous approaches achieve improvements
by fusing the skeleton-sequence and RGB-video modalities. However, such methods
face a trade-off between accuracy and efficiency due to the high complexity of
the RGB video network. To solve this problem, we propose a multi-modality
feature fusion network that combines the skeleton sequence with a single RGB
frame instead of the RGB video, since the key information contained in the
combination of the skeleton sequence and an RGB frame is close to that of the
skeleton sequence and the RGB video. In this way, the complementary information
is retained while the complexity is reduced by a large margin. To better
explore the correspondence between the two modalities, a two-stage fusion
framework is introduced in the network. In the early fusion stage, we introduce
a skeleton attention module that projects the skeleton sequence onto the single
RGB frame to help the RGB frame focus on the limb movement regions. In the late
fusion stage, we propose a cross-attention module that fuses the skeleton
feature and the RGB feature by exploiting their correlation. Experiments on the
NTU RGB+D and SYSU benchmarks show that the proposed model achieves competitive
performance compared with state-of-the-art methods while reducing the
complexity of the network.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 09:29:53 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Zhu",
"Xiaoguang",
""
],
[
"Zhu",
"Ye",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Wen",
"Honglin",
""
],
[
"Yan",
"Yan",
""
],
[
"Liu",
"Peilin",
""
]
] |
new_dataset
| 0.99733 |
2202.11431
|
Tian Xuebo
|
Xuebo Tian, Junqiao Zhao, Chen Ye
|
DL-SLOT: Dynamic Lidar SLAM and Object Tracking Based On Graph
Optimization
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ego-pose estimation and dynamic object tracking are two key problems in an
autonomous driving system. Two assumptions are often made for them: the static
world assumption of simultaneous localization and mapping (SLAM) and the exact
ego-pose assumption of object tracking, respectively. However, these
assumptions are difficult to uphold in highly dynamic road scenarios, where
SLAM and object tracking become correlated and mutually beneficial. In this
paper, we propose DL-SLOT, a dynamic LiDAR SLAM and object tracking method. It
integrates the state estimates of both the ego vehicle and the static and
dynamic objects in the environment into a unified optimization framework,
realizing SLAM and object tracking (SLOT) simultaneously. First, we apply
object detection to remove all points that belong to potentially dynamic
objects. Then, LiDAR odometry is performed on the filtered point cloud. At the
same time, detected objects are associated with historical object trajectories
based on time-series information in a sliding window. The states of the static
and dynamic objects and the ego vehicle in the sliding window are integrated
into a unified local optimization framework. We perform SLAM and object
tracking simultaneously in this framework, which significantly improves the
robustness and accuracy of SLAM in highly dynamic road scenarios as well as
the accuracy of object state estimation. Experiments on public datasets show
that our method achieves better accuracy than A-LOAM.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 11:22:43 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Tian",
"Xuebo",
""
],
[
"Zhao",
"Junqiao",
""
],
[
"Ye",
"Chen",
""
]
] |
new_dataset
| 0.9976 |
2202.11454
|
Cristina Fern\'andez-C\'ordoba
|
Cristina Fern\'andez-C\'ordoba, Sachin Pathak, Ashish Kumar Upadhyay
|
On $Z_{p^r}Z_{p^r}Z_{p^s}$-Additive Cyclic Codes
| null | null | null | null |
cs.IT cs.DM math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce
$\mathbb{Z}_{p^r}\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes for
$r\leq s$. These codes can be identified as $\mathbb{Z}_{p^s}[x]$-submodules of
$\mathbb{Z}_{p^r}[x]/\langle x^{\alpha}-1\rangle \times
\mathbb{Z}_{p^r}[x]/\langle x^{\beta}-1\rangle\times
\mathbb{Z}_{p^s}[x]/\langle x^{\gamma}-1\rangle$. We determine the generator
polynomials and minimal generating sets for this family of codes. Previous
work has addressed the case $p=2$ with $r=s=1$, $r=s=2$, and $r=1,s=2$.
However, we show that the classification of these codes in those works was
incomplete, and the results in this paper complete the classification.
We also discuss the structure of separable
$\mathbb{Z}_{p^r}\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes and
determine their generator polynomials. Further, we also study the duality of
$\mathbb{Z}_{p^s}[x]$-submodules. As applications, we present some examples and
construct some optimal binary codes.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 12:09:08 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Fernández-Córdoba",
"Cristina",
""
],
[
"Pathak",
"Sachin",
""
],
[
"Upadhyay",
"Ashish Kumar",
""
]
] |
new_dataset
| 0.997407 |
2202.11457
|
Guanmin Guo
|
Guanmin Guo, Ruihu Li, Yang Liu, Hao Song
|
Duality of generalized twisted Reed-Solomon codes and Hermitian
self-dual MDS or NMDS codes
|
13 pages, 1 table
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Self-dual MDS and NMDS codes over finite fields are linear codes with
significant combinatorial and cryptographic applications. In this paper, we
first investigate the duality properties of generalized twisted Reed-Solomon
(GTRS) codes in some special cases. We then propose a new systematic approach
to constructing Hermitian self-dual (+)-GTRS codes and present necessary and
sufficient conditions for a (+)-GTRS code to be Hermitian self-dual. With this
method, several classes of Hermitian self-dual MDS and NMDS codes are
constructed.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 12:19:58 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Guo",
"Guanmin",
""
],
[
"Li",
"Ruihu",
""
],
[
"Liu",
"Yang",
""
],
[
"Song",
"Hao",
""
]
] |
new_dataset
| 0.981125 |
2202.11460
|
Pavel Hrab\'ak
|
Hana Najmanov\'a and Veronika Pe\v{s}kov\'a and Luk\'a\v{s} Kukl\'ik
and Marek Buk\'a\v{c}ek and Pavel Hrab\'ak and Daniel Va\v{s}ata
|
Evacuation trials from a double-deck electric train unit: Experimental
data and sensitivity analysis
| null |
Safety Science, Volume 146, 2022, 105523, ISSN 0925-7535
|
10.1016/j.ssci.2021.105523
| null |
cs.MA physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Passenger trains represent a challenging environment in emergencies, with
specific evacuation conditions resulting from the typical layout and interior
design inherent to public transportation vehicles. This paper describes a
dataset obtained in a full-scale controlled experiment emulating the emergency
evacuation of a double-deck electric unit railcar carried out in Prague in
2018. 15 evacuation trials involving 91 participants were conducted under
various evacuation scenarios considering different compositions of passenger
crowd, exit widths, and exit types (e.g. egress to a high platform, to an open
rail line using stairs, and a 750 mm jump without any supporting equipment).
The study's main goals were to collect experimental data on the movement
conditions in the railcar and to study the impact of various boundary
conditions on the evacuation process and total evacuation time. Movement
characteristics (exit flows, speeds) and human behaviour (pre-movement
activities, exiting behaviours) were also analysed.
The data obtained was used to validate and adjust a Pathfinder model to
capture important aspects of evacuation from the railcar. Furthermore, a series
of simulations using this model was performed to provide sensitivity analysis
of the influence of crowd composition, exit width, and exit type on total
evacuation time. As a key finding, we conclude that for a standard exit path
(platform or stairs) the width of the main exit had the greatest impact on
total evacuation time; however, crowd composition played the prevailing role
in evacuation scenarios involving a jump.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 12:25:06 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Najmanová",
"Hana",
""
],
[
"Pešková",
"Veronika",
""
],
[
"Kuklík",
"Lukáš",
""
],
[
"Bukáček",
"Marek",
""
],
[
"Hrabák",
"Pavel",
""
],
[
"Vašata",
"Daniel",
""
]
] |
new_dataset
| 0.998731 |
2202.11468
|
Garima Bhandari Ms.
|
Garima Bhandari, Pushparaj Mani Pathak and Jung-Min Yang
|
Bond Graph Modelling and Simulation of Pneumatic Soft Actuator
|
10 pages, 6 figures, Robotics & Control Lab IIT Roorkee, Mechanical
and Industrial Engineering Department
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents the design and dynamic modelling of a soft pneumatic
actuator that can be used to mimic snake- or worm-like locomotion. The bond
graph technique is used to derive the dynamics of the actuator. To validate
the accuracy of the derived dynamic model, we conduct numerical simulations
using the 20-sim software. Experimental results demonstrate that the soft
actuator achieves bi-directional bending and linear displacement, which are
essential for mimicking snake- or worm-like locomotion.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 12:39:55 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Bhandari",
"Garima",
""
],
[
"Pathak",
"Pushparaj Mani",
""
],
[
"Yang",
"Jung-Min",
""
]
] |
new_dataset
| 0.952678 |
2202.11542
|
Abhinav Valada
|
Rohit Mohan, Abhinav Valada
|
Amodal Panoptic Segmentation
| null | null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans have the remarkable ability to perceive objects as a whole, even when
parts of them are occluded. This ability of amodal perception forms the basis
of our perceptual and cognitive understanding of our world. To enable robots to
reason with this capability, we formulate and propose a novel task that we name
amodal panoptic segmentation. The goal of this task is to simultaneously
predict the pixel-wise semantic segmentation labels of the visible regions of
stuff classes and the instance segmentation labels of both the visible and
occluded regions of thing classes. To facilitate research on this new task, we
extend two established benchmark datasets with pixel-level amodal panoptic
segmentation labels that we make publicly available as KITTI-360-APS and
BDD100K-APS. We present several strong baselines, along with the amodal
panoptic quality (APQ) and amodal parsing coverage (APC) metrics to quantify
the performance in an interpretable manner. Furthermore, we propose the novel
amodal panoptic segmentation network (APSNet), as a first step towards
addressing this task by explicitly modeling the complex relationships between
occluders and occludees. Extensive experimental evaluations demonstrate that
APSNet achieves state-of-the-art performance on both benchmarks and, more
importantly, exemplifies the utility of amodal recognition. The benchmarks are
available at http://amodal-panoptic.cs.uni-freiburg.de.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 14:41:59 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Mohan",
"Rohit",
""
],
[
"Valada",
"Abhinav",
""
]
] |
new_dataset
| 0.998371 |
2202.11691
|
Jie Ding
|
Jie Ding, Shuai Ma, and Xin-Shan Zhu
|
Asymptotic Critical Transmission Radii in Wireless Networks over a
Convex Region
| null | null | null | null |
cs.IT cs.PF math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Critical transmission ranges (or radii) in wireless ad hoc and sensor
networks have been extensively investigated for various performance metrics
such as connectivity, coverage, power assignment, and energy consumption.
However, in existing works the regions over which the networks are distributed
are typically either squares or disks, which seriously limits applicability to
real-life scenarios. In this article, we consider a convex region (a
generalization of squares and disks) on which wireless nodes are uniformly
distributed. We investigate two types of critical transmission radii, defined
in terms of k-connectivity and the minimum vertex degree, respectively, and
establish their precise asymptotic distributions. Previous results obtained
for squares or disks thus become special cases of this work. More importantly,
our results reveal how the region's shape affects the critical transmission
ranges: it is the length of the boundary of the (fixed-area) region that
completely determines them. Furthermore, by isodiametric inequality, the
smallest critical transmission ranges are achieved only when the regions are
disks.
|
[
{
"version": "v1",
"created": "Fri, 21 Jan 2022 02:39:41 GMT"
}
] | 2022-02-24T00:00:00 |
[
[
"Ding",
"Jie",
""
],
[
"Ma",
"Shuai",
""
],
[
"Zhu",
"Xin-Shan",
""
]
] |
new_dataset
| 0.969464 |
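The abstract above concerns critical transmission radii for connectivity of uniformly distributed nodes. For intuition, here is a minimal simulation, an assumption-laden sketch rather than the paper's method, restricted to the unit square (the classical special case the paper generalizes): the critical radius for 1-connectivity equals the longest edge of the Euclidean minimum spanning tree, and for n uniform points it is known to concentrate near sqrt(ln n / (pi n)).

```python
import math
import random

def critical_radius(pts):
    """Smallest r making the unit-disk graph on pts connected.

    This equals the longest edge of the Euclidean minimum spanning tree,
    computed here with a simple O(n^2) Prim's algorithm.
    """
    n = len(pts)
    in_tree = [False] * n
    dist = [float("inf")] * n
    dist[0] = 0.0
    longest = 0.0
    for _ in range(n):
        # pick the nearest node not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=dist.__getitem__)
        in_tree[u] = True
        longest = max(longest, dist[u])       # MST edge that attached u
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(pts[u], pts[v])
                if d < dist[v]:
                    dist[v] = d
    return longest

random.seed(42)
n = 200
pts = [(random.random(), random.random()) for _ in range(n)]
r = critical_radius(pts)
theory = math.sqrt(math.log(n) / (math.pi * n))  # asymptotic scale, ~0.09
```

Rerunning with different seeds shows `r` fluctuating around the theoretical scale, matching the concentration the asymptotic results describe.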
1806.06726
|
Caleb Levy
|
Robert E. Tarjan, Caleb C. Levy, and Stephen Timmel
|
Zip Trees
|
V5 is the final published version. V4 appeared in the Workshop on
Algorithms and Data Structures in 2019. V1 was presented at Highlights of
Algorithms in 2018. 14 pages, 3 figures
|
ACM Transactions on Algorithms, 17(4), 34:1--12, 2021
|
10.1145/3476830
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the zip tree, a form of randomized binary search tree that
integrates previous ideas into one practical, performant, and
pleasant-to-implement package. A zip tree is a binary search tree in which each
node has a numeric rank and the tree is (max)-heap-ordered with respect to
ranks, with rank ties broken in favor of smaller keys. Zip trees are
essentially treaps (Seidel and Aragon 1996), except that ranks are drawn from a
geometric distribution instead of a uniform distribution, and we allow rank
ties. These changes enable us to use fewer random bits per node. We perform
insertions and deletions by unmerging and merging paths ("unzipping" and
"zipping") rather than by doing rotations, which avoids some pointer changes
and improves efficiency. The methods of zipping and unzipping take inspiration
from previous top-down approaches to insertion and deletion (Stephenson 1980;
Mart\'inez and Roura 1998; Sprugnoli 1980). From a theoretical standpoint, this
work provides two main results. First, zip trees require only $O(\log \log n)$
bits (with high probability) to represent the largest rank in an $n$-node
binary search tree; previous data structures require $O(\log n)$ bits for the
largest rank. Second, zip trees are naturally isomorphic to skip lists (Pugh
1990), and simplify the mapping of (Dean and Jones 2007) between skip lists and
binary search trees.
|
[
{
"version": "v1",
"created": "Mon, 18 Jun 2018 14:22:07 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Aug 2018 19:30:12 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Jul 2019 01:12:57 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Nov 2019 02:39:25 GMT"
},
{
"version": "v5",
"created": "Tue, 22 Feb 2022 02:05:52 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Tarjan",
"Robert E.",
""
],
[
"Levy",
"Caleb C.",
""
],
[
"Timmel",
"Stephen",
""
]
] |
new_dataset
| 0.986447 |
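The zip-tree abstract above states that ranks drawn from a geometric distribution need only O(log log n) bits with high probability, versus O(log n) bits for uniform treap priorities. A quick illustrative simulation (our own sketch, not the authors' code) draws ranks by fair-coin flips and counts the bits needed to store the largest rank among n nodes:

```python
import random

def zip_rank(rng):
    """Draw a zip-tree rank: flip a fair coin until heads; the rank is the
    number of tails seen, so P(rank >= k) = 2**-k (geometric, p = 1/2)."""
    rank = 0
    while rng.random() < 0.5:
        rank += 1
    return rank

rng = random.Random(1)
n = 1 << 10                       # 1024 nodes
max_rank = max(zip_rank(rng) for _ in range(n))
bits = max_rank.bit_length()      # bits needed to store the largest rank
# The maximum of n geometric ranks concentrates near log2(n) = 10, so the
# largest rank fits in O(log log n) bits (a handful here), while uniform
# treap priorities need O(log n) bits to stay distinct.
```

Increasing `n` by orders of magnitude grows `max_rank` only additively (roughly log2 n), which is exactly why the bit count scales doubly logarithmically.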
1906.11586
|
Evangello Flouty
|
Maria Grammatikopoulou, Evangello Flouty, Abdolrahim
Kadkhodamohammadi, Gwenol\'e Quellec, Andre Chow, Jean Nehme, Imanol Luengo
and Danail Stoyanov
|
CaDIS: Cataract Dataset for Image Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video feedback provides a wealth of information about surgical procedures and
is the main sensory cue for surgeons. Scene understanding is crucial to
computer assisted interventions (CAI) and to post-operative analysis of the
surgical procedure. A fundamental building block of such capabilities is the
identification and localization of surgical instruments and anatomical
structures through semantic segmentation. Deep learning has advanced semantic
segmentation techniques in recent years but is inherently reliant on the
availability of labelled datasets for model training. This paper introduces a
dataset for semantic segmentation of cataract surgery videos complementing the
publicly available CATARACTS challenge dataset. In addition, we benchmark the
performance of several state-of-the-art deep learning models for semantic
segmentation on the presented dataset. The dataset is publicly available at
https://cataracts-semantic-segmentation2020.grand-challenge.org/.
|
[
{
"version": "v1",
"created": "Thu, 27 Jun 2019 12:24:03 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jun 2019 09:11:10 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Jul 2019 08:38:25 GMT"
},
{
"version": "v4",
"created": "Fri, 19 Jul 2019 14:51:57 GMT"
},
{
"version": "v5",
"created": "Thu, 2 Apr 2020 15:55:42 GMT"
},
{
"version": "v6",
"created": "Fri, 3 Apr 2020 08:49:48 GMT"
},
{
"version": "v7",
"created": "Tue, 22 Feb 2022 15:25:41 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Grammatikopoulou",
"Maria",
""
],
[
"Flouty",
"Evangello",
""
],
[
"Kadkhodamohammadi",
"Abdolrahim",
""
],
[
"Quellec",
"Gwenolé",
""
],
[
"Chow",
"Andre",
""
],
[
"Nehme",
"Jean",
""
],
[
"Luengo",
"Imanol",
""
],
[
"Stoyanov",
"Danail",
""
]
] |
new_dataset
| 0.999594 |
1912.07109
|
Yue Jiang
|
Yue Jiang, Dantong Ji, Zhizhong Han, Matthias Zwicker
|
SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape
Optimization
|
CVPR2020 Full Paper (Oral Top 5%)
| null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose SDFDiff, a novel approach for image-based shape optimization using
differentiable rendering of 3D shapes represented by signed distance functions
(SDFs). Compared to other representations, SDFs have the advantage that they
can represent shapes with arbitrary topology, and that they guarantee
watertight surfaces. We apply our approach to the problem of multi-view 3D
reconstruction, where we achieve high reconstruction quality and can capture
complex topology of 3D objects. In addition, we employ a multi-resolution
strategy to obtain a robust optimization algorithm. We further demonstrate that
our SDF-based differentiable renderer can be integrated with deep learning
models, which opens up options for learning approaches on 3D objects without 3D
supervision. In particular, we apply our method to single-view 3D
reconstruction and achieve state-of-the-art results.
|
[
{
"version": "v1",
"created": "Sun, 15 Dec 2019 21:06:46 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 15:39:10 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Jiang",
"Yue",
""
],
[
"Ji",
"Dantong",
""
],
[
"Han",
"Zhizhong",
""
],
[
"Zwicker",
"Matthias",
""
]
] |
new_dataset
| 0.99133 |
2004.10596
|
Amit Saha
|
Arpita Sanyal (Bhaduri), Amit Saha, Debasri Saha, Banani Saha and
Amlan Chakrabarti
|
Circuit Design for Clique Problem and Its Implementation on Quantum
Computer
|
25 pages, 18 figures. arXiv admin note: text overlap with
arXiv:1805.10224 by other authors
|
IET Quantum Communication, 2021
|
10.1049/qtc2.12029
| null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Finding cliques in a graph has several applications owing to its pattern
matching ability. The $k$-clique problem, a special case of the clique problem
that determines whether an arbitrary graph contains a clique of size $k$, has
already been addressed in the quantum domain. A variant of the $k$-clique
problem that lists all cliques of size $k$ also has popular modern-day
applications. However, the implementation of this variant in a quantum setting
remains untouched. In this paper, apart from a theoretical solution of this
$k$-clique problem, practical quantum gate-based implementation has been
addressed using Grover's algorithm. This approach is further extended to design
circuit for the maximum clique problem in classical-quantum hybrid
architecture. The algorithm automatically generates the circuit for any given
undirected and unweighted graph and any given $k$, which makes our approach
generalized in nature. The proposed approach of solving $k$-clique problem has
exhibited a reduction of qubit cost and circuit depth as compared to the
state-of-the-art approach, for a small $k$ with respect to a large graph. A
framework that can map the automated generated circuit for clique problem to
quantum devices is also proposed. An analysis of the experimental results is
demonstrated using IBM's Qiskit.
|
[
{
"version": "v1",
"created": "Tue, 10 Mar 2020 04:29:35 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jan 2021 11:03:36 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Jan 2021 18:20:17 GMT"
},
{
"version": "v4",
"created": "Wed, 7 Jul 2021 18:59:30 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Sanyal",
"Arpita",
"",
"Bhaduri"
],
[
"Saha",
"Amit",
""
],
[
"Saha",
"Debasri",
""
],
[
"Saha",
"Banani",
""
],
[
"Chakrabarti",
"Amlan",
""
]
] |
new_dataset
| 0.963624 |
2006.06192
|
Kanako Esaki
|
Kanako Esaki, Tadayuki Matsumura, Kiyoto Ito and Hiroyuki Mizuno
|
Sensorimotor Visual Perception on Embodied System Using Free Energy
Principle
|
This is a pre-print of an article published in Communications in
Computer and Information Science, vol 1524. The final authenticated version
is available online at: https://doi.org/10.1007/978-3-030-93736-2_62
| null |
10.1007/978-3-030-93736-2_62
| null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an embodied system based on the free energy principle (FEP) for
sensorimotor visual perception. We evaluated it in a character-recognition task
using the MNIST dataset. Although the FEP has successfully given a
mathematical description of a rule that living things obey, claiming that a
biological system continually changes its internal models and behaviors to
minimize the error in predicting sensory input, it is not by itself enough to
model sensorimotor visual perception. Embodiment of the system is the key to
achieving
sensorimotor visual perception. The proposed embodied system is configured by a
body and memory. The body has an ocular motor system controlling the direction
of eye gaze, which means that the eye can only observe a small focused area of
the environment. The memory is not photographic, but is a generative model
implemented with a variational autoencoder that contains prior knowledge about
the environment, and that knowledge is classified. By limiting body and memory
abilities and operating according to the FEP, the embodied system repeatedly
takes action to obtain the next sensory input based on various potentials of
future sensory inputs. In the evaluation, the inference of the environment was
represented as an approximate posterior distribution of characters (0 - 9). As
the number of repetitions increased, the attention area moved continuously,
gradually reducing the uncertainty of characters. Finally, the probability of
the correct character became the highest among the characters. Changing the
initial attention position provides a different final distribution, suggesting
that the proposed system has a confirmation bias.
|
[
{
"version": "v1",
"created": "Thu, 11 Jun 2020 05:03:45 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 01:46:35 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Esaki",
"Kanako",
""
],
[
"Matsumura",
"Tadayuki",
""
],
[
"Ito",
"Kiyoto",
""
],
[
"Mizuno",
"Hiroyuki",
""
]
] |
new_dataset
| 0.998236 |
2007.11869
|
Michele Polese
|
Michele Polese, Lorenzo Bertizzolo, Leonardo Bonati, Abhimanyu Gosain,
Tommaso Melodia
|
An Experimental mmWave Channel Model for UAV-to-UAV Communications
|
7 pages, 7 figures, 3 tables. Please cite it as M. Polese, L.
Bertizzolo, L. Bonati, A. Gosain, T. Melodia, An Experimental mmWave Channel
Model for UAV-to-UAV Communications, in Proc. of ACM Workshop on
Millimeter-Wave Networks and Sensing Systems (mmNets), London, UK, Sept. 2020
|
ACM Workshop on Millimeter-Wave Networks and Sensing Systems
(mmNets 2020)
|
10.1145/3412060.3418431
| null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aerial Vehicle (UAV) networks can provide a resilient communication
infrastructure to enhance terrestrial networks in case of traffic spikes or
disaster scenarios. However, to be able to do so, they need to be based on
high-bandwidth wireless technologies for both radio access and backhaul. In
this respect, the millimeter wave (mmWave) spectrum represents an enticing
solution, since it provides large chunks of untapped spectrum that can enable
ultra-high data-rates for aerial platforms. Aerial mmWave channels, however,
experience characteristics that are significantly different from terrestrial
deployments in the same frequency bands. As of today, mmWave aerial channels
have not been extensively studied and modeled. Specifically, the combination of
UAV micro-mobility (because of imprecisions in the control loop, and external
factors including wind) and the highly directional mmWave transmissions require
ad hoc models to accurately capture the performance of UAV deployments. To fill
this gap, we propose an empirical propagation loss model for UAV-to-UAV
communications at 60 GHz, based on an extensive aerial measurement campaign
conducted with the Facebook Terragraph channel sounders. We compare it with
3GPP channel models and make the measurement dataset publicly available.
|
[
{
"version": "v1",
"created": "Thu, 23 Jul 2020 09:15:04 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Aug 2020 08:59:49 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Polese",
"Michele",
""
],
[
"Bertizzolo",
"Lorenzo",
""
],
[
"Bonati",
"Leonardo",
""
],
[
"Gosain",
"Abhimanyu",
""
],
[
"Melodia",
"Tommaso",
""
]
] |
new_dataset
| 0.990041 |
2009.10868
|
Uehwan Kim
|
Ue-Hwan Kim, Dongho Ka, Hwasoo Yeo, Jong-Hwan Kim
|
A Real-Time Predictive Pedestrian Collision Warning Service for
Cooperative Intelligent Transportation Systems Using 3D Pose Estimation
|
12 pages, 8 figures, 4 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Minimizing traffic accidents between vehicles and pedestrians is one of the
primary research goals in intelligent transportation systems. To achieve the
goal, pedestrian orientation recognition and prediction of pedestrian's
crossing or not-crossing intention play a central role. Contemporary approaches
do not guarantee satisfactory performance due to limited field-of-view, lack of
generalization, and high computational complexity. To overcome these
limitations, we propose a real-time predictive pedestrian collision warning
service (P2CWS) for two tasks: pedestrian orientation recognition (100.53 FPS)
and intention prediction (35.76 FPS). Our framework obtains satisfying
generalization over multiple sites because of the proposed site-independent
features. At the center of the feature extraction lies 3D pose estimation. The
3D pose analysis enables robust and accurate recognition of pedestrian
orientations and prediction of intentions over multiple sites. The proposed
vision framework realizes 89.3% accuracy in the behavior recognition task on
the TUD dataset without any training process and 91.28% accuracy in intention
prediction on our dataset, achieving new state-of-the-art performance. To
contribute to the corresponding research community, we make our source code
publicly available at https://github.com/Uehwan/VisionForPedestrian
|
[
{
"version": "v1",
"created": "Wed, 23 Sep 2020 00:55:12 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Feb 2022 12:19:11 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Feb 2022 10:42:07 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Feb 2022 03:40:11 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Kim",
"Ue-Hwan",
""
],
[
"Ka",
"Dongho",
""
],
[
"Yeo",
"Hwasoo",
""
],
[
"Kim",
"Jong-Hwan",
""
]
] |
new_dataset
| 0.95524 |
2012.02600
|
Bilal Farooq
|
Irum Sanaullah and Nael Alsaleh and Shadi Djavadian and Bilal Farooq
|
Spatio-Temporal Analysis of On Demand Transit: A Case Study of
Belleville, Canada
| null |
Transportation Research Part A: Planning and Policy, 2021, Volume
145, Pages 284-301
|
10.1016/j.tra.2021.01.020
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid increase in the cyber-physical nature of transportation,
availability of GPS data, mobile applications, and effective communication
technologies have led to the emergence of On-Demand Transit (ODT) systems. In
September 2018, the City of Belleville in Canada started an on-demand public
transit pilot project, where the late-night fixed-route (RT 11) was substituted
with the ODT providing a real-time ride-hailing service. We present an in-depth
analysis of the spatio-temporal demand and supply, level of service, and origin
and destination patterns of Belleville ODT users, based on data collected
from September 2018 to May 2019. The independent and combined effects of the
demographic characteristics (population density, working-age, and median
income) on the ODT trip production and attraction levels were studied using GIS
and the K-means machine learning clustering algorithm. The results indicate
that ODT trips demand is highest for 11:00 pm-11:45 pm during the weekdays and
8:00 pm-8:30 pm during the weekends. We expect this to be the result of users
returning home from work or shopping. Results showed that 39% of the trips were
found to have a waiting time of less than 15 minutes, while 28% of trips had
a waiting time of 15-30 minutes. The dissemination areas with higher population
density, lower median income, or higher working-age percentages tend to have
higher ODT trip attraction levels, except for the dissemination areas that have
highly attractive places like commercial areas.
|
[
{
"version": "v1",
"created": "Fri, 4 Dec 2020 13:56:18 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 01:48:53 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Sanaullah",
"Irum",
""
],
[
"Alsaleh",
"Nael",
""
],
[
"Djavadian",
"Shadi",
""
],
[
"Farooq",
"Bilal",
""
]
] |
new_dataset
| 0.998861 |
2102.05606
|
Leonardo Bonati
|
Leonardo Bonati, Salvatore D'Oro, Francesco Restuccia, Stefano
Basagni, Tommaso Melodia
|
SteaLTE: Private 5G Cellular Connectivity as a Service with Full-stack
Wireless Steganography
| null |
Proceedings of IEEE INFOCOM, May 2021
|
10.1109/INFOCOM42981.2021.9488889
| null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fifth-generation (5G) systems will extensively employ radio access network
(RAN) softwarization. This key innovation enables the instantiation of "virtual
cellular networks" running on different slices of the shared physical
infrastructure. In this paper, we propose the concept of Private Cellular
Connectivity as a Service (PCCaaS), where infrastructure providers deploy
covert network slices known only to a subset of users. We then present SteaLTE
as the first realization of a PCCaaS-enabling system for cellular networks. At
its core, SteaLTE utilizes wireless steganography to disguise data as noise to
adversarial receivers. Differently from previous work, however, it takes a
full-stack approach to steganography, contributing an LTE-compliant
steganographic protocol stack for PCCaaS-based communications, and packet
schedulers and operations to embed covert data streams on top of traditional
cellular traffic (primary traffic). SteaLTE balances undetectability and
performance by mimicking channel impairments so that covert data waveforms are
almost indistinguishable from noise. We evaluate the performance of SteaLTE on
an indoor LTE-compliant testbed under different traffic profiles, distance and
mobility patterns. We further test it on the outdoor PAWR POWDER platform over
long-range cellular links. Results show that in most experiments SteaLTE
imposes little loss of primary traffic throughput in presence of covert data
transmissions (< 6%), making it suitable for undetectable PCCaaS networking.
|
[
{
"version": "v1",
"created": "Wed, 10 Feb 2021 18:09:30 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Bonati",
"Leonardo",
""
],
[
"D'Oro",
"Salvatore",
""
],
[
"Restuccia",
"Francesco",
""
],
[
"Basagni",
"Stefano",
""
],
[
"Melodia",
"Tommaso",
""
]
] |
new_dataset
| 0.998677 |
2102.13477
|
Lam Duc Nguyen
|
Lam Duc Nguyen, Amari N. Lewis, Israel Leyva-Mayorga, Amelia Regan,
and Petar Popovski
|
B-ETS: A Trusted Blockchain-based Emissions Trading System for
Vehicle-to-Vehicle Networks
|
Paper got accepted in 7th International Conference on Vehicle
Technology and Intelligent Transport Systems (VEHITS) 2021
|
7th International Conference on Vehicle Technology and Intelligent
Transport Systems 2021
| null | null |
cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Urban areas are negatively impacted by Carbon Dioxide (CO2) and Nitrogen
Oxide (NOx) emissions. In order to achieve a cost-effective reduction of
greenhouse gas emissions and to combat climate change, the European Union (EU)
introduced an Emissions Trading System (ETS) where organizations can buy or
receive emission allowances as needed. The current ETS is a centralized one,
consisting of a set of complex rules. It is currently administered at the
organizational level and is used for fixed-point sources of pollution such as
factories, power plants, and refineries. However, the current ETS cannot
efficiently cope with vehicle mobility, even though vehicles are one of the
primary sources of CO2 and NOx emissions. In this study, we propose a new
distributed Blockchain-based emissions allowance trading system called B-ETS.
This system enables transparent and trustworthy data exchange as well as
trading of allowances among vehicles, relying on vehicle-to-vehicle
communication. In addition, we introduce an economic incentive-based mechanism
that appeals to individual drivers and leads them to modify their driving
behavior in order to reduce emissions. The efficiency of the proposed system is
studied through extensive simulations, showing how increased vehicle
connectivity can lead to a reduction of the emissions generated from those
vehicles. We demonstrate that our method can be used for full life-cycle
monitoring and fuel economy reporting. This leads us to conjecture that the
proposed system could lead to important behavioral changes among the drivers.
|
[
{
"version": "v1",
"created": "Thu, 18 Feb 2021 21:52:56 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Mar 2021 01:30:15 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Nguyen",
"Lam Duc",
""
],
[
"Lewis",
"Amari N.",
""
],
[
"Leyva-Mayorga",
"Israel",
""
],
[
"Regan",
"Amelia",
""
],
[
"Popovski",
"Petar",
""
]
] |
new_dataset
| 0.988633 |
2104.03634
|
Pablo Pueyo
|
Pablo Pueyo, Eduardo Montijano, Ana C. Murillo and Mac Schwager
|
CineMPC: Controlling Camera Intrinsics and Extrinsics for Autonomous
Cinematography
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CineMPC, an algorithm to autonomously control a UAV-borne video
camera in a nonlinear Model Predicted Control (MPC) loop. CineMPC controls both
the position and orientation of the camera -- the camera extrinsics -- as well
as the lens focal length, focal distance, and aperture -- the camera
intrinsics. While some existing solutions autonomously control the position and
orientation of the camera, no existing solutions also control the intrinsic
parameters, which are essential tools for rich cinematographic expression. The
intrinsic parameters control the parts of the scene that are focused or
blurred, the viewers' perception of depth in the scene, and the position of the
targets in the image. CineMPC closes the loop from camera images to UAV
trajectory and lens parameters in order to follow the desired relative
trajectory and image composition as the targets move through the scene.
Experiments using a photo-realistic environment demonstrate the capabilities
of the proposed control framework to successfully achieve a full array of
cinematographic effects not possible without full camera control.
|
[
{
"version": "v1",
"created": "Thu, 8 Apr 2021 09:36:24 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Feb 2022 10:32:46 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Feb 2022 12:09:11 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Pueyo",
"Pablo",
""
],
[
"Montijano",
"Eduardo",
""
],
[
"Murillo",
"Ana C.",
""
],
[
"Schwager",
"Mac",
""
]
] |
new_dataset
| 0.977193 |
2104.14817
|
Shunchuan Yang
|
Zekun Zhu, Aipeng Sun, Xiaochao Zhou, Shunchuan Yang, Zhizhang (David)
Chen
|
Single-Source SIE for Two-Dimensional Arbitrarily Connected Penetrable
and PEC Objects with Nonconformal Meshes
| null | null |
10.1109/TMTT.2021.3129514
| null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a simple and efficient modular single-source surface integral
equation (SS-SIE) formulation for electromagnetic analysis of arbitrarily
connected penetrable and perfectly electrical conductor (PEC) objects in
two-dimensional space. In this formulation, a modular equivalent model for each
penetrable object consisting of the composite structure is first independently
constructed through replacing it by the background medium, no matter whether it
is surrounded by the background medium, other media, or partially connected
objects, and enforcing an equivalent electric current density on the boundary
to keep the fields in the exterior region unchanged. Then, by combining all the
modular models and any possible PEC objects together, an equivalent model for
the composite structure can be derived. The troublesome junction handling
techniques are not needed and non-conformal meshes are intrinsically supported.
The proposed SS-SIE formulation is simple to implement, efficient, and
flexible, which shows significant performance improvement in terms of CPU time
compared with the original SS-SIE formulation and the
Poggio-Miller-Chang-Harrington-Wu-Tsai (PMCHWT) formulation. Several numerical
examples including the coated dielectric cuboid, the large lossy objects, the
planar layered dielectric structure, and the partially connected dielectric and
PEC structure are carried out to validate its accuracy, efficiency and
robustness.
|
[
{
"version": "v1",
"created": "Fri, 30 Apr 2021 08:03:50 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Aug 2021 02:18:41 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Zhu",
"Zekun",
"",
"David"
],
[
"Sun",
"Aipeng",
"",
"David"
],
[
"Zhou",
"Xiaochao",
"",
"David"
],
[
"Yang",
"Shunchuan",
"",
"David"
],
[
"Zhizhang",
"",
"",
"David"
],
[
"Chen",
"",
""
]
] |
new_dataset
| 0.998721 |
2107.02840
|
Ren Wang
|
Ren Wang, Tianqi Chen, Stephen Lindsly, Cooper Stansbury, Alnawaz
Rehemtulla, Indika Rajapakse, Alfred Hero
|
RAILS: A Robust Adversarial Immune-inspired Learning System
|
arXiv admin note: text overlap with arXiv:2012.10485
| null | null | null |
cs.NE cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial attacks against deep neural networks (DNNs) are continuously
evolving, requiring increasingly powerful defense strategies. We develop a
novel adversarial defense framework inspired by the adaptive immune system: the
Robust Adversarial Immune-inspired Learning System (RAILS). Initializing a
population of exemplars that is balanced across classes, RAILS starts from a
uniform label distribution that encourages diversity and uses an evolutionary
optimization process to adaptively adjust the predictive label distribution in
a manner that emulates the way the natural immune system recognizes novel
pathogens. RAILS' evolutionary optimization process explicitly captures the
tradeoff between robustness (diversity) and accuracy (specificity) of the
network, and represents a new immune-inspired perspective on adversarial
learning. The benefits of RAILS are empirically demonstrated under eight types
of adversarial attacks on a DNN adversarial image classifier for several
benchmark datasets, including MNIST, SVHN, CIFAR-10, and CIFAR-100. We find
that PGD is the most damaging attack strategy and that for this attack RAILS is
significantly more robust than other methods, achieving improvements in
adversarial robustness by $\geq 5.62\%, 12.5\%$, $10.32\%$, and $8.39\%$, on
these respective datasets, without appreciable loss of classification accuracy.
Codes for the results in this paper are available at
https://github.com/wangren09/RAILS.
|
[
{
"version": "v1",
"created": "Sun, 27 Jun 2021 17:57:45 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 19:50:19 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Wang",
"Ren",
""
],
[
"Chen",
"Tianqi",
""
],
[
"Lindsly",
"Stephen",
""
],
[
"Stansbury",
"Cooper",
""
],
[
"Rehemtulla",
"Alnawaz",
""
],
[
"Rajapakse",
"Indika",
""
],
[
"Hero",
"Alfred",
""
]
] |
new_dataset
| 0.989223 |
2109.03438
|
Jiexin Wang
|
Jiexin Wang, Adam Jatowt, Masatoshi Yoshikawa
|
ArchivalQA: A Large-scale Benchmark Dataset for Open Domain Question
Answering over Historical News Collections
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In the last few years, open-domain question answering (ODQA) has advanced
rapidly due to the development of deep learning techniques and the availability
of large-scale QA datasets. However, the current datasets are essentially
designed for synchronic document collections (e.g., Wikipedia). Temporal news
collections, such as long-term news archives spanning several decades, are
rarely used in training the models even though they are quite valuable for our
society. To foster the research in the field of ODQA on such historical
collections, we present ArchivalQA, a large question answering dataset
consisting of 532,444 question-answer pairs which is designed for temporal news
QA. We divide our dataset into four subparts based on the question difficulty
levels and the containment of temporal expressions, which we believe are useful
for training and testing ODQA systems characterized by different strengths and
abilities. The novel QA dataset-constructing framework that we introduce can be
also applied to generate non-ambiguous questions of good quality over other
types of temporal document collections.
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 05:21:51 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Sep 2021 11:52:12 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Feb 2022 09:40:19 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Feb 2022 04:51:20 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Wang",
"Jiexin",
""
],
[
"Jatowt",
"Adam",
""
],
[
"Yoshikawa",
"Masatoshi",
""
]
] |
new_dataset
| 0.999548 |
2110.04280
|
Linghao Song
|
Linghao Song, Yuze Chi, Jason Cong
|
Pyxis: An Open-Source Performance Dataset of Sparse Accelerators
|
To appear in ICASSP'22
| null | null | null |
cs.LG cs.AR cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Specialized accelerators provide gains of performance and efficiency in
specific domains of applications. Sparse data structures or/and representations
exist in a wide range of applications. However, it is challenging to design
accelerators for sparse applications because no architecture or
performance-level analytic models are able to fully capture the spectrum of the
sparse data. Accelerator researchers rely on real execution to get precise
feedback for their designs. In this work, we present PYXIS, a performance
dataset for specialized accelerators on sparse data. PYXIS collects accelerator
designs and real execution performance statistics. Currently, there are 73.8 K
instances in PYXIS. PYXIS is open-source, and we are constantly growing PYXIS
with new accelerator designs and performance statistics. PYXIS can benefit
researchers in the fields of accelerator, architecture, performance, algorithm,
and many related topics.
|
[
{
"version": "v1",
"created": "Fri, 8 Oct 2021 17:46:51 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 01:35:34 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Song",
"Linghao",
""
],
[
"Chi",
"Yuze",
""
],
[
"Cong",
"Jason",
""
]
] |
new_dataset
| 0.999703 |
2111.04460
|
Cuncheng Zhu
|
Cuncheng Zhu, Christopher T. Lee, Padmini Rangamani
|
Mem3DG: Modeling Membrane Mechanochemical Dynamics in 3D using Discrete
Differential Geometry
| null | null |
10.1016/j.bpj.2021.11.2371
| null |
cs.CE cond-mat.soft physics.bio-ph q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biomembranes adopt varying morphologies that are vital to cellular functions.
Many studies use computational modeling to understand how various
mechanochemical factors contribute to membrane shape transformations. Compared
to approximation-based methods (e.g., finite element method), the class of
discrete mesh models offers greater flexibility to simulate complex physics and
shapes in three dimensions; its formulation produces an efficient algorithm
while maintaining coordinate-free geometric descriptions. However, ambiguities
in geometric definitions in the discrete context have led to a lack of
consensus on which discrete mesh model is theoretically and numerically
optimal; a bijective relationship between the terms contributing to both the
energy and forces from the discrete and smooth geometric theories remains to be
established. We address this and present an extensible framework,
$\texttt{Mem3DG}$, for modeling 3D mechanochemical dynamics of membranes based
on Discrete Differential Geometry (DDG) on triangulated meshes. The formalism
of DDG resolves the inconsistency and provides a unifying perspective on how to
relate the smooth and discrete energy and forces. To demonstrate,
$\texttt{Mem3DG}$ is used to model a sequence of examples with increasing
mechanochemical complexity: recovering classical shape transformations such as
1) biconcave disk, dumbbell, and unduloid and 2) spherical bud on spherical,
flat-patch membrane; investigating how the coupling of membrane mechanics with
protein mobility jointly affects phase and shape transformation. As
high-resolution 3D imaging of membrane ultrastructure becomes more readily
available, we envision Mem3DG to be applied as an end-to-end tool to simulate
realistic cell geometry under user-specified mechanochemical conditions.
|
[
{
"version": "v1",
"created": "Mon, 1 Nov 2021 02:41:42 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Zhu",
"Cuncheng",
""
],
[
"Lee",
"Christopher T.",
""
],
[
"Rangamani",
"Padmini",
""
]
] |
new_dataset
| 0.997956 |
2111.05224
|
Mikhail Fomichev
|
Mikhail Fomichev, Luis F. Abanto-Leon, Max Stiegler, Alejandro Molina,
Jakob Link, Matthias Hollick
|
Next2You: Robust Copresence Detection Based on Channel State Information
|
Added correct metadata from ACM Transactions on Internet of Things.
Code and data are available at https://github.com/seemoo-lab/next2you
| null |
10.1145/3491244
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Context-based copresence detection schemes are a necessary prerequisite to
building secure and usable authentication systems in the Internet of Things
(IoT). Such schemes allow one device to verify proximity of another device
without user assistance utilizing their physical context (e.g., audio). The
state-of-the-art copresence detection schemes suffer from two major
limitations: (1) they cannot accurately detect copresence in low-entropy
context (e.g., empty room with few events occurring) and insufficiently
separated environments (e.g., adjacent rooms), (2) they require devices to have
common sensors (e.g., microphones) to capture context, making them impractical
on devices with heterogeneous sensors. We address these limitations, proposing
Next2You, a novel copresence detection scheme utilizing channel state
information (CSI). In particular, we leverage magnitude and phase values from a
range of subcarriers specifying a Wi-Fi channel to capture a robust wireless
context created when devices communicate. We implement Next2You on
off-the-shelf smartphones relying only on ubiquitous Wi-Fi chipsets and
evaluate it based on over 95 hours of CSI measurements that we collect in five
real-world scenarios. Next2You achieves error rates below 4%, maintaining
accurate copresence detection both in low-entropy context and insufficiently
separated environments. We also demonstrate the capability of Next2You to work
reliably in real-time and its robustness to various attacks.
|
[
{
"version": "v1",
"created": "Tue, 9 Nov 2021 16:05:34 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 10:23:59 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Fomichev",
"Mikhail",
""
],
[
"Abanto-Leon",
"Luis F.",
""
],
[
"Stiegler",
"Max",
""
],
[
"Molina",
"Alejandro",
""
],
[
"Link",
"Jakob",
""
],
[
"Hollick",
"Matthias",
""
]
] |
new_dataset
| 0.982249 |
2111.10342
|
Desheng Cai
|
Desheng Cai, Jun Hu, Quan Zhao, Shengsheng Qian, Quan Fang, Changsheng
Xu
|
GRecX: An Efficient and Unified Benchmark for GNN-based Recommendation
| null | null | null | null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present GRecX, an open-source TensorFlow framework for
benchmarking GNN-based recommendation models in an efficient and unified way.
GRecX consists of core libraries for building GNN-based recommendation
benchmarks, as well as the implementations of popular GNN-based recommendation
models. The core libraries provide essential components for building efficient
and unified benchmarks, including FastMetrics (efficient metrics computation
libraries), VectorSearch (efficient similarity search libraries for dense
vectors), BatchEval (efficient mini-batch evaluation libraries), and
DataManager (unified dataset management libraries). Especially, to provide a
unified benchmark for the fair comparison of different complex GNN-based
recommendation models, we design a new metric GRMF-X and integrate it into the
FastMetrics component. Based on a TensorFlow GNN library tf_geometric, GRecX
carefully implements a variety of popular GNN-based recommendation models. We
carefully implement these baseline models to reproduce the performance reported
in the literature, and our implementations are usually more efficient and
friendly. In conclusion, GRecX enables users to train and benchmark GNN-based
recommendation baselines in an efficient and unified way, and our experiments
with GRecX confirm this. The source code of GRecX is available at
https://github.com/maenzhier/GRecX.
|
[
{
"version": "v1",
"created": "Fri, 19 Nov 2021 17:45:46 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Dec 2021 14:53:06 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Feb 2022 15:50:08 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Cai",
"Desheng",
""
],
[
"Hu",
"Jun",
""
],
[
"Zhao",
"Quan",
""
],
[
"Qian",
"Shengsheng",
""
],
[
"Fang",
"Quan",
""
],
[
"Xu",
"Changsheng",
""
]
] |
new_dataset
| 0.970014 |
2201.10127
|
Sanjay Chandlekar
|
Sanjay Chandlekar, Easwar Subramanian, Sanjay Bhat, Praveen Paruchuri,
Sujit Gujar
|
Multi-unit Double Auctions: Equilibrium Analysis and Bidding Strategy
using DDPG in Smart-grids
|
Accepted for publication in the proceedings of the 21st International
Conference on Autonomous Agents and Multiagent Systems (AAMAS-22)
| null | null | null |
cs.GT econ.TH
|
http://creativecommons.org/licenses/by/4.0/
|
Periodic double auctions (PDA) have applications in many areas such as in
e-commerce, intra-day equity markets, and day-ahead energy markets in
smart-grids. While the trades accomplished using PDAs are worth trillions of
dollars, finding a reliable bidding strategy in such auctions is still a
challenge as it requires the consideration of future auctions. A participating
buyer in a PDA has to design its bidding strategy by planning for current and
future auctions. Many equilibrium-based bidding strategies proposed are complex
to use in real-time. In the current exposition, we propose a scale-based
bidding strategy for buyers participating in PDA. We first present an
equilibrium analysis for single-buyer single-seller multi-unit single-shot
k-Double auctions. Specifically, we analyze the situation when a seller and a
buyer trade two identical units of quantity in a double auction where both the
buyer and the seller deploy a simple, scale-based bidding strategy. The
equilibrium analysis becomes intractable as the number of participants
increases. To be useful in more complex settings such as wholesale markets in
smart-grids, we model equilibrium bidding strategy as a learning problem. We
develop a deep deterministic policy gradient (DDPG) based learning strategy,
DDPGBBS, for a participating agent in PDAs to suggest an action at any auction
instance. DDPGBBS, which empirically follows the obtained theoretical
equilibrium, is easily extendable when the number of buyers/sellers increases.
We take Power Trading Agent Competition's (PowerTAC) wholesale market PDA as a
testbed to evaluate our novel bidding strategy. We benchmark our DDPG based
strategy against several baselines and state-of-the-art bidding strategies of
the PowerTAC wholesale market PDA and demonstrate the efficacy of DDPGBBS
against several benchmarked strategies.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 06:50:37 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 17:58:14 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Chandlekar",
"Sanjay",
""
],
[
"Subramanian",
"Easwar",
""
],
[
"Bhat",
"Sanjay",
""
],
[
"Paruchuri",
"Praveen",
""
],
[
"Gujar",
"Sujit",
""
]
] |
new_dataset
| 0.968319 |
2201.10150
|
Hongyu Song
|
Hongyu Song, Jincheng Yu, Jiantao Qiu, Zhixiao Sun, Kuijun Lang, Qing
Luo, Yuan Shen and Yu Wang
|
Multi-UAV Coverage Planning with Limited Endurance in Disaster
Environment
| null | null | null | null |
cs.RO cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For scenes such as floods and earthquakes, the disaster area is large, and
rescue time is tight. Multi-UAV exploration is more efficient than exploration
with a single UAV. Existing UAV exploration work is modeled as a Coverage Path
Planning (CPP)
task to achieve full coverage of the area in the presence of obstacles.
However, the endurance capability of UAVs is limited, and the rescue time is
urgent. Thus, even multiple UAVs cannot achieve complete disaster area
coverage in time. Therefore, in this paper we propose a multi-Agent
Endurance-limited CPP (MAEl-CPP) problem based on a priori heatmap of the
disaster area, which requires the exploration of more valuable areas under
limited energy. Furthermore, we propose a path planning algorithm for the
MAEl-CPP problem, by ranking the possible disaster areas according to their
importance through satellite or remote aerial images and completing path
planning according to the importance level. Experimental results show that our
proposed algorithm is at least twice as effective as the existing method in
terms of search efficiency.
|
[
{
"version": "v1",
"created": "Tue, 25 Jan 2022 07:48:06 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Feb 2022 18:34:18 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Song",
"Hongyu",
""
],
[
"Yu",
"Jincheng",
""
],
[
"Qiu",
"Jiantao",
""
],
[
"Sun",
"Zhixiao",
""
],
[
"Lang",
"Kuijun",
""
],
[
"Luo",
"Qing",
""
],
[
"Shen",
"Yuan",
""
],
[
"Wang",
"Yu",
""
]
] |
new_dataset
| 0.991315 |
2202.10452
|
Viraj Kulkarni
|
Viraj Kulkarni, Sanjesh Pawale, Amit Kharat
|
A Classical-Quantum Convolutional Neural Network for Detecting Pneumonia
from Chest Radiographs
|
15 pages
| null | null | null |
cs.CV cs.LG quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
While many quantum computing techniques for machine learning have been
proposed, their performance on real-world datasets remains to be studied. In
this paper, we explore how a variational quantum circuit could be integrated
into a classical neural network for the problem of detecting pneumonia from
chest radiographs. We substitute one layer of a classical convolutional neural
network with a variational quantum circuit to create a hybrid neural network.
We train both networks on an image dataset containing chest radiographs and
benchmark their performance. To mitigate the influence of different sources of
randomness in network training, we sample the results over multiple rounds. We
show that the hybrid network outperforms the classical network on different
performance measures, and that these improvements are statistically
significant. Our work serves as an experimental demonstration of the potential
of quantum computing to significantly improve neural network performance for
real-world, non-trivial problems relevant to society and industry.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 05:13:37 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Kulkarni",
"Viraj",
""
],
[
"Pawale",
"Sanjesh",
""
],
[
"Kharat",
"Amit",
""
]
] |
new_dataset
| 0.999472 |
2202.10547
|
Priyabrata Parida
|
Priyabrata Parida and Harpreet S. Dhillon
|
Multilayer Random Sequential Adsorption
| null |
J Stat Phys 187, 1 (2022)
|
10.1007/s10955-022-02896-5
| null |
cs.IT math.IT physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a variant of the multilayer random sequential
adsorption (RSA) process that is inspired by orthogonal resource sharing in
wireless communication networks. In the one-dimensional (1D) version of this
variant, the deposition of overlapping rods is allowed only if they are
assigned two different colors, where colors are symbolic of orthogonal
resources, such as frequency bands, in communication networks. Owing to a
strong spatial coupling among the deposited rods of different colors, finding
an exact solution for the density of deposited rods of a given color as a
function of time seems intractable. Hence, we propose two useful approximations
to obtain the time-varying density of rods of a given color. The first
approximation is based on the recursive use of the known monolayer RSA result
for the indirect estimation of the density of rods for the multilayer version.
The second approximation, which is more accurate but computationally intensive,
involves accurate characterization of the time evolution of the gap density
function. This gap density function is subsequently used to estimate the
density of rods of a given color. We also consider the two-dimensional (2D)
version of this problem, where we estimate the time-varying density of
deposited circles of a given color as a function of time by extending the first
approximation approach developed for the 1D case. The accuracy of all the
results is validated through extensive Monte Carlo simulations.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 22:10:09 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Parida",
"Priyabrata",
""
],
[
"Dhillon",
"Harpreet S.",
""
]
] |
new_dataset
| 0.96324 |
2202.10594
|
Mohamed Reda Bouadjenek
|
Ngoc Dung Huynh, Mohamed Reda Bouadjenek, Imran Razzak, Kevin Lee,
Chetan Arora, Ali Hassani, Arkady Zaslavsky
|
Adversarial Attacks on Speech Recognition Systems for Mission-Critical
Applications: A Survey
| null | null | null | null |
cs.SD cs.CR cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
A Machine-Critical Application is a system that is fundamentally necessary to
the success of specific and sensitive operations such as search and recovery,
rescue, military, and emergency management actions. Recent advances in Machine
Learning, Natural Language Processing, voice recognition, and speech processing
technologies have naturally allowed the development and deployment of
speech-based conversational interfaces to interact with various
machine-critical applications. While these conversational interfaces have
allowed users to give voice commands to carry out strategic and critical
activities, their robustness to adversarial attacks remains uncertain and
unclear. Indeed, Adversarial Artificial Intelligence (AI) which refers to a set
of techniques that attempt to fool machine learning models with deceptive data,
is a growing threat in the AI and machine learning research community, in
particular for machine-critical applications. The most common goal of
adversarial attacks is to cause a malfunction in a machine learning model. An
adversarial attack might entail presenting a model with inaccurate or
fabricated samples as its training data, or introducing maliciously designed
data to deceive an already trained model. While focusing on speech recognition
for machine-critical applications, in this paper, we first review existing
speech recognition techniques, then, we investigate the effectiveness of
adversarial attacks and defenses against these systems, before outlining
research challenges, defense recommendations, and future work. This paper is
expected to serve researchers and practitioners as a reference to help them in
understanding the challenges, position themselves and, ultimately, help them to
improve existing models of speech recognition for mission-critical
applications. Keywords: Mission-Critical Applications, Adversarial AI, Speech
Recognition Systems.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 00:29:40 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Huynh",
"Ngoc Dung",
""
],
[
"Bouadjenek",
"Mohamed Reda",
""
],
[
"Razzak",
"Imran",
""
],
[
"Lee",
"Kevin",
""
],
[
"Arora",
"Chetan",
""
],
[
"Hassani",
"Ali",
""
],
[
"Zaslavsky",
"Arkady",
""
]
] |
new_dataset
| 0.976414 |
2202.10642
|
Yan Zhuang
|
Yan Zhuang, Shiying Li, Mohammad Shifat-E-Rabbi, Xuwang Yin, Abu
Hasnat Mohammad Rubaiyat, Gustavo K. Rohde
|
Local Sliced-Wasserstein Feature Sets for Illumination-invariant Face
Recognition
|
14 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new method for face recognition from digital images acquired
under varying illumination conditions. The method is based on mathematical
modeling of local gradient distributions using the Radon Cumulative
Distribution Transform (R-CDT). We demonstrate that lighting variations cause
certain types of deformations of local image gradient distributions which, when
expressed in R-CDT domain, can be modeled as a subspace. Face recognition is
then performed using a nearest subspace in R-CDT domain of local gradient
distributions. Experiment results demonstrate the proposed method outperforms
other alternatives in several face recognition tasks with challenging
illumination conditions. Python code implementing the proposed method is
available and is integrated as part of the software package PyTransKit.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 03:01:21 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Zhuang",
"Yan",
""
],
[
"Li",
"Shiying",
""
],
[
"Shifat-E-Rabbi",
"Mohammad",
""
],
[
"Yin",
"Xuwang",
""
],
[
"Rubaiyat",
"Abu Hasnat Mohammad",
""
],
[
"Rohde",
"Gustavo K.",
""
]
] |
new_dataset
| 0.958148 |
2202.10655
|
Clement Zheng
|
Clement Zheng, Zhen Zhou Yong, Hongnan Lin, HyunJoo Oh, and Ching
Chiuan Yen
|
Shape-Haptics: Planar & Passive Force Feedback Mechanisms for Physical
Interfaces
|
To appear in the conference proceedings at ACM CHI 2022
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Shape-Haptics, an approach for designers to rapidly design and
fabricate passive force feedback mechanisms for physical interfaces. Such
mechanisms are used in everyday interfaces and tools, and they are challenging
to design. Shape-Haptics abstracts and broadens the haptic expression of this
class of force feedback systems through 2D laser cut configurations that are
simple to fabricate. They leverage the properties of polyoxymethylene plastic
and comprise a compliant spring structure that engages with a sliding profile
during tangible interaction. By shaping the sliding profile, designers can
easily customize the haptic force feedback delivered by the mechanism. We
provide a computational design sandbox to facilitate designers to explore and
fabricate Shape-Haptics mechanisms. We also propose a series of applications
that demonstrate the utility of Shape-Haptics in creating and customizing
haptics for different physical interfaces.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 03:42:08 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Zheng",
"Clement",
""
],
[
"Yong",
"Zhen Zhou",
""
],
[
"Lin",
"Hongnan",
""
],
[
"Oh",
"HyunJoo",
""
],
[
"Yen",
"Ching Chiuan",
""
]
] |
new_dataset
| 0.958841 |
2202.10701
|
Suvidha Tripathi Dr
|
Suvidha Tripathi, Satish Kumar Singh and Lee Hwee Kuan
|
Bag of Visual Words (BoVW) with Deep Features -- Patch Classification
Model for Limited Dataset of Breast Tumours
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Currently, the computational complexity limits the training of high
resolution gigapixel images using Convolutional Neural Networks. Therefore,
such images are divided into patches or tiles. Since these high resolution
patches encode discriminative information, CNNs are trained
on these patches to perform patch-level predictions. However, the problem with
patch-level prediction is that pathologists generally annotate at the image level
and not at the patch level. Due to this limitation, most of the patches may not
contain enough class-relevant features. Through this work, we tried to
incorporate patch descriptive capability within the deep framework by using Bag
of Visual Words (BoVW) as a kind of regularisation to improve generalizability.
Using this hypothesis, we aim to build a patch based classifier to discriminate
between four classes of breast biopsy image patches (normal, benign, \textit{In
situ} carcinoma, invasive carcinoma). The task is to incorporate quality deep
features using CNN to describe relevant information in the images while
simultaneously discarding irrelevant information using Bag of Visual Words
(BoVW). The proposed method passes patches obtained from WSI and microscopy
images through pre-trained CNN to extract features. BoVW is used as a feature
selector to select the most discriminative features among the CNN features.
Finally, the selected feature sets are classified as one of the four classes.
The hybrid model provides flexibility in terms of choice of pre-trained models
for feature extraction. The pipeline is end-to-end since it does not require
post processing of patch predictions to select discriminative patches. We
compared our observations with state-of-the-art methods like ResNet50,
DenseNet169, and InceptionV3 on the BACH-2018 challenge dataset. Our proposed
method shows better performance than all the three methods.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 07:19:18 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Tripathi",
"Suvidha",
""
],
[
"Singh",
"Satish Kumar",
""
],
[
"Kuan",
"Lee Hwee",
""
]
] |
new_dataset
| 0.999334 |
2202.10710
|
Fan Jiang
|
Fan Jiang and Trevor Cohn
|
Incorporating Constituent Syntax for Coreference Resolution
|
9 pages, 2 figures, and 6 tables. In Proceedings of the 36th AAAI
Conference on Artificial Intelligence. AAAI 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Syntax has been shown to benefit Coreference Resolution from incorporating
long-range dependencies and structured information captured by syntax trees,
either in traditional statistical machine learning based systems or recently
proposed neural models. However, most leading systems use only dependency
trees. We argue that constituent trees also encode important information, such
as explicit span-boundary signals captured by nested multi-word phrases, extra
linguistic labels and hierarchical structures useful for detecting anaphora. In
this work, we propose a simple yet effective graph-based method to incorporate
constituent syntactic structures. Moreover, we also explore to utilise
higher-order neighbourhood information to encode rich structures in constituent
trees. A novel message propagation mechanism is therefore proposed to enable
information flow among elements in syntax trees. Experiments on the English and
Chinese portions of OntoNotes 5.0 benchmark show that our proposed model either
beats a strong baseline or achieves new state-of-the-art performance. (Code is
available at https://github.com/Fantabulous-J/Coref-Constituent-Graph)
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 07:40:42 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Jiang",
"Fan",
""
],
[
"Cohn",
"Trevor",
""
]
] |
new_dataset
| 0.998319 |
2202.10712
|
Xulong Zhang
|
Botao Zhao, Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao
|
nnSpeech: Speaker-Guided Conditional Variational Autoencoder for
Zero-shot Multi-speaker Text-to-Speech
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-speaker text-to-speech (TTS) using only a few adaptation samples is a
challenge in practical applications. To address this, we propose a zero-shot
multi-speaker TTS, named nnSpeech, that can synthesize a new speaker's voice
without fine-tuning, using only one adaptation utterance. Instead of using
a speaker representation module to extract the characteristics of new speakers,
our method is based on a speaker-guided conditional variational autoencoder and
can generate a variable Z, which contains both speaker characteristics and
content information. The latent variable Z distribution is approximated by
another variable conditioned on reference mel-spectrogram and phoneme.
Experiments on an English corpus, a Mandarin corpus, and cross-dataset settings
prove that our model can generate natural and similar speech with only one
adaptation utterance.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 07:43:30 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Zhao",
"Botao",
""
],
[
"Zhang",
"Xulong",
""
],
[
"Wang",
"Jianzong",
""
],
[
"Cheng",
"Ning",
""
],
[
"Xiao",
"Jing",
""
]
] |
new_dataset
| 0.97296 |
2202.10739
|
Michiharu Yamashita
|
Michiharu Yamashita, Jia Tracy Shen, Hamoon Ekhtiari, Thanh Tran,
Dongwon Lee
|
JAMES: Job Title Mapping with Multi-Aspect Embeddings and Reasoning
| null | null | null | null |
cs.AI cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
One of the most essential tasks needed for various downstream tasks in career
analytics (e.g., career trajectory analysis, job mobility prediction, and job
recommendation) is Job Title Mapping (JTM), where the goal is to map
user-created (noisy and non-standard) job titles to predefined and standard job
titles. However, solving JTM is domain-specific and non-trivial due to its
inherent challenges: (1) user-created job titles are messy, (2) different job
titles often overlap their job requirements, (3) job transition trajectories
are inconsistent, and (4) the number of job titles in real-world applications
is large. To address this JTM problem, in this work, we propose a novel
solution, named JAMES, that constructs three unique embeddings of a target
job title: topological, semantic, and syntactic embeddings, together with
multi-aspect co-attention. In addition, we employ logical reasoning
representations to collaboratively estimate similarities between messy job
titles and standard job titles in the reasoning space. We conduct comprehensive
experiments against ten competing models on the large-scale real-world dataset
with more than 350,000 job titles. Our results show that JAMES significantly
outperforms the best baseline by 10.06% in Precision@10 and by 17.52% in
NDCG@10, respectively.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 08:57:08 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Yamashita",
"Michiharu",
""
],
[
"Shen",
"Jia Tracy",
""
],
[
"Ekhtiari",
"Hamoon",
""
],
[
"Tran",
"Thanh",
""
],
[
"Lee",
"Dongwon",
""
]
] |
new_dataset
| 0.994273 |
2202.10744
|
Zhongxuan Xue
|
Zhongxuan Xue, Rongzhen Li, Qizhu Dai, Zhong Jiang
|
CorefDRE: Document-level Relation Extraction with coreference resolution
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Document-level relation extraction aims to extract relation facts from a
document consisting of multiple sentences, in which pronouns crossing sentence
boundaries are a ubiquitous phenomenon not found within a single sentence.
However, most previous works focus on mention coreference resolution while
excluding pronouns, and rarely pay attention to mention-pronoun coreference and capturing
the relations. To represent multi-sentence features by pronouns, we imitate the
reading process of humans by leveraging coreference information when
dynamically constructing a heterogeneous graph to enhance semantic information.
Since the pronoun is notoriously ambiguous in the graph, a mention-pronoun
coreference resolution is introduced to calculate the affinity between pronouns
and corresponding mentions, and the noise suppression mechanism is proposed to
reduce the noise caused by pronouns. Experiments on the public dataset, DocRED,
DialogRE and MPDD, show that Coref-aware Doc-level Relation Extraction based on
Graph Inference Network outperforms the state-of-the-art.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 09:03:59 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Xue",
"Zhongxuan",
""
],
[
"Li",
"Rongzhen",
""
],
[
"Dai",
"Qizhu",
""
],
[
"Jiang",
"Zhong",
""
]
] |
new_dataset
| 0.998623 |
2202.10784
|
Andrey Kuznetsov
|
Alex Shonenkov, Andrey Kuznetsov, Denis Dimitrov, Tatyana Shavrina,
Daniil Chesakov, Anastasia Maltseva, Alena Fenogenova, Igor Pavlov, Anton
Emelyanov, Sergey Markov, Daria Bakshandaeva, Vera Shybaeva, Andrey Chertok
|
RuCLIP -- new models and experiments: a technical report
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In the report we propose six new implementations of the ruCLIP model trained on
our 240M pairs. The accuracy results are compared with the original CLIP model with
Ru-En translation (OPUS-MT) on 16 datasets from different domains. Our best
implementations outperform the CLIP + OPUS-MT solution on most of the datasets in
few-shot and zero-shot tasks. In the report we briefly describe the
implementations and concentrate on the conducted experiments. Inference
execution time comparison is also presented in the report.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 10:15:13 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Shonenkov",
"Alex",
""
],
[
"Kuznetsov",
"Andrey",
""
],
[
"Dimitrov",
"Denis",
""
],
[
"Shavrina",
"Tatyana",
""
],
[
"Chesakov",
"Daniil",
""
],
[
"Maltseva",
"Anastasia",
""
],
[
"Fenogenova",
"Alena",
""
],
[
"Pavlov",
"Igor",
""
],
[
"Emelyanov",
"Anton",
""
],
[
"Markov",
"Sergey",
""
],
[
"Bakshandaeva",
"Daria",
""
],
[
"Shybaeva",
"Vera",
""
],
[
"Chertok",
"Andrey",
""
]
] |
new_dataset
| 0.998964 |
2202.10879
|
Danial Kamali
|
Danial Kamali, Behrooz Janfada, Mohammad Ebrahim Shenasa, Behrouz
Minaei-Bidgoli
|
Evaluating Persian Tokenizers
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Tokenization plays a significant role in the process of lexical analysis.
Tokens become the input for other natural language processing tasks, like
semantic parsing and language modeling. Natural Language Processing in Persian
is challenging due to Persian's exceptional cases, such as half-spaces. Thus,
it is crucial to have a precise tokenizer for Persian. This article introduces
the most widely used tokenizers for Persian, comparing
and evaluating their performance on Persian texts using a simple
algorithm with a pre-tagged Persian dependency dataset. After evaluating
tokenizers with the F1-Score, the hybrid version of the Farsi Verb and Hazm
with bounded morphemes fixing showed the best performance with an F1 score of
98.97%.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 13:27:24 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Kamali",
"Danial",
""
],
[
"Janfada",
"Behrooz",
""
],
[
"Shenasa",
"Mohammad Ebrahim",
""
],
[
"Minaei-Bidgoli",
"Behrouz",
""
]
] |
new_dataset
| 0.993703 |
2202.10897
|
Marco Spanghero
|
M.Lenhart, M. Spanghero, P. Papadimitratos
|
DEMO: Relay/Replay Attacks on GNSS signals
| null | null |
10.1145/3448300.3468256
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Global Navigation Satellite Systems (GNSS) are ubiquitously relied upon for
positioning and timing. Detection and prevention of attacks against GNSS have
been researched over the last decades, but many of these attacks and
countermeasures were evaluated based on simulation. This work contributes to
the experimental investigation of GNSS vulnerabilities, implementing a
relay/replay attack with off-the-shelf hardware. Operating at the signal level,
this attack type is not hindered by cryptographically protected transmissions,
such as Galileo's Open Signals Navigation Message Authentication (OS-NMA). The
attack we investigate involves two colluding adversaries, relaying signals over
large distances, to effectively spoof a GNSS receiver. We demonstrate the
attack using off-the-shelf hardware, we investigate the requirements for such
successful colluding attacks, and how they can be enhanced, e.g., allowing for
finer adversarial control over the victim receiver.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 13:54:22 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Lenhart",
"M.",
""
],
[
"Spanghero",
"M.",
""
],
[
"Papadimitratos",
"P.",
""
]
] |
new_dataset
| 0.995665 |
2202.10910
|
Yinfeng Yu
|
Yinfeng Yu, Wenbing Huang, Fuchun Sun, Changan Chen, Yikai Wang,
Xiaohong Liu
|
Sound Adversarial Audio-Visual Navigation
|
This work aims to do an adversarial sound intervention for robust
audio-visual navigation
| null | null | null |
cs.SD cs.CV cs.RO eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio-visual navigation task requires an agent to find a sound source in a
realistic, unmapped 3D environment by utilizing egocentric audio-visual
observations. Existing audio-visual navigation works assume a clean environment
that solely contains the target sound, which, however, would not be suitable in
most real-world applications due to the unexpected sound noise or intentional
interference. In this work, we design an acoustically complex environment in
which, besides the target sound, there exists a sound attacker playing a
zero-sum game with the agent. More specifically, the attacker can move and
change the volume and category of the sound to hinder the agent from
finding the sounding object, while the agent tries to dodge the attack and
navigate to the goal under the intervention. Under certain constraints on the
attacker, we can improve the robustness of the agent towards unexpected sound
attacks in audio-visual navigation. For better convergence, we develop a joint
training mechanism by employing the property of a centralized critic with
decentralized actors. Experiments on two real-world 3D scan datasets, Replica,
and Matterport3D, verify the effectiveness and the robustness of the agent
trained under our designed environment when transferred to the clean
environment or the one containing sound attackers with random policy. Project:
\url{https://yyf17.github.io/SAAVN}.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 14:19:42 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Yu",
"Yinfeng",
""
],
[
"Huang",
"Wenbing",
""
],
[
"Sun",
"Fuchun",
""
],
[
"Chen",
"Changan",
""
],
[
"Wang",
"Yikai",
""
],
[
"Liu",
"Xiaohong",
""
]
] |
new_dataset
| 0.999658 |
2202.10978
|
Ryo Suzuki
|
Samin Farajian, Hiroki Kaimoto, Ryo Suzuki
|
Swarm Fabrication: Reconfigurable 3D Printers and Drawing Plotters Made
of Swarm Robots
|
UIST'21 Student Innovation Contest
| null | null | null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Swarm Fabrication, a novel concept of creating on-demand,
scalable, and reconfigurable fabrication machines made of swarm robots. We
present ways to construct an element of fabrication machines, such as motors,
elevator, table, feeder, and extruder, by leveraging toio robots and 3D printed
attachments. By combining these elements, we demonstrate constructing an X-Y-Z
plotter with multiple toio robots, which can be used for drawing plotters and
3D printers. We also show the possibility to extend our idea to more
general-purpose fabrication machines, which include 3D printers, CNC machining,
foam cutters, line drawing devices, pick and place machines, 3D scanning, etc.
Through this, we draw a future vision where swarm robots can construct
scalable and reconfigurable fabrication machines on demand, which can be
deployed anywhere the user wishes. We believe this fabrication technique will
become a means of interactive and highly flexible fabrication in the future.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 15:33:18 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Farajian",
"Samin",
""
],
[
"Kaimoto",
"Hiroki",
""
],
[
"Suzuki",
"Ryo",
""
]
] |
new_dataset
| 0.999812 |
2202.10986
|
Panagiotis Kanellopoulos
|
Panagiotis Kanellopoulos, Maria Kyropoulou, Hao Zhou
|
Forgiving Debt in Financial Network Games
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A financial system is represented by a network, where nodes correspond to
banks, and directed labeled edges correspond to debt contracts between banks.
Once a payment schedule has been defined, where we assume that a bank cannot
refuse a payment towards one of its lenders if it has sufficient funds, the
liquidity of the system is defined as the sum of total payments made in the
network. Maximizing systemic liquidity is a natural objective of any financial
authority, so, we study the setting where the financial authority offers
bailout money to some bank(s) or forgives the debts of others in order to
maximize liquidity, and examine efficient ways to achieve this. We investigate
the approximation ratio provided by the greedy bailout policy compared to the
optimal one, and we study the computational hardness of finding the optimal
debt-removal and budget-constrained optimal bailout policy, respectively.
We also study financial systems from a game-theoretic standpoint. We observe
that the removal of some incoming debt might be in the best interest of a bank,
if that helps one of its borrowers remain solvent and avoid costs related to
default. Assuming that a bank's well-being (i.e., utility) is aligned with the
incoming payments they receive from the network, we define and analyze a game
among banks who want to maximize their utility by strategically giving up some
incoming payments. In addition, we extend the previous game by considering
bailout payments. After formally defining the above games, we prove results
about the existence and quality of pure Nash equilibria, as well as the
computational complexity of finding such equilibria.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 15:40:49 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Kanellopoulos",
"Panagiotis",
""
],
[
"Kyropoulou",
"Maria",
""
],
[
"Zhou",
"Hao",
""
]
] |
new_dataset
| 0.999023 |
2202.11025
|
Zhi Yan Dr.
|
Tao Yang, You Li, Cheng Zhao, Dexin Yao, Guanyin Chen, Li Sun, Tomas
Krajnik, Zhi Yan
|
3D ToF LiDAR in Mobile Robotics: A Review
|
16 pages, 10 figures, 5 tables, 4 equations
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the past ten years, the use of 3D Time-of-Flight (ToF) LiDARs in mobile
robotics has grown rapidly. Based on our accumulation of relevant research,
this article systematically reviews and analyzes the use of 3D ToF LiDARs in
research and industrial applications. The former includes object detection,
robot localization, long-term autonomy, LiDAR data processing under adverse
weather conditions, and sensor fusion. The latter encompasses service robots,
assisted and autonomous driving, and recent applications performed in response
to public health crises. We hope that our efforts can effectively provide
readers with relevant references and promote the deployment of existing mature
technologies in real-world systems.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 16:56:09 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Yang",
"Tao",
""
],
[
"Li",
"You",
""
],
[
"Zhao",
"Cheng",
""
],
[
"Yao",
"Dexin",
""
],
[
"Chen",
"Guanyin",
""
],
[
"Sun",
"Li",
""
],
[
"Krajnik",
"Tomas",
""
],
[
"Yan",
"Zhi",
""
]
] |
new_dataset
| 0.999292 |
2202.11055
|
Mihir Kulkarni
|
Paolo De Petris, Huan Nguyen, Mihir Dharmadhikari, Mihir Kulkarni,
Nikhil Khedekar, Frank Mascarich, Kostas Alexis
|
RMF-Owl: A Collision-Tolerant Flying Robot for Autonomous Subterranean
Exploration
|
8 pages, 9 figures. Submitted to the International Conference on
Unmanned Aircraft Systems, 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents the design, hardware realization, autonomous exploration
and object detection capabilities of RMF-Owl, a new collision-tolerant aerial
robot tailored for resilient autonomous subterranean exploration. The system is
custom built for underground exploration with focus on collision tolerance,
resilient autonomy with robust localization and mapping, alongside
high-performance exploration path planning in confined, obstacle-filled and
topologically complex underground environments. Moreover, RMF-Owl offers the
ability to search, detect and locate objects of interest which can be
particularly useful in search and rescue missions. A series of results from
field experiments are presented in order to demonstrate the system's ability to
autonomously explore challenging unknown underground environments.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 17:36:29 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"De Petris",
"Paolo",
""
],
[
"Nguyen",
"Huan",
""
],
[
"Dharmadhikari",
"Mihir",
""
],
[
"Kulkarni",
"Mihir",
""
],
[
"Khedekar",
"Nikhil",
""
],
[
"Mascarich",
"Frank",
""
],
[
"Alexis",
"Kostas",
""
]
] |
new_dataset
| 0.999286 |
2202.11061
|
Paul G\"olz
|
Paul G\"olz, Dominik Peters, Ariel D. Procaccia
|
In This Apportionment Lottery, the House Always Wins
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Apportionment is the problem of distributing $h$ indivisible seats across
states in proportion to the states' populations. In the context of the US House
of Representatives, this problem has a rich history and is a prime example of
interactions between mathematical analysis and political practice. Grimmett
suggested to apportion seats in a randomized way such that each state receives
exactly their proportional share $q_i$ of seats in expectation (ex ante
proportionality) and receives either $\lfloor q_i \rfloor$ or $\lceil q_i
\rceil$ many seats ex post (quota). However, there is a vast space of
randomized apportionment methods satisfying these two axioms, and so we
additionally consider prominent axioms from the apportionment literature. Our
main result is a randomized method satisfying quota, ex ante proportionality
and house monotonicity - a property that prevents paradoxes when the number of
seats changes and which we require to hold ex post. This result is based on a
generalization of dependent rounding on bipartite graphs, which we call
cumulative rounding and which might be of independent interest, as we
demonstrate via applications beyond apportionment.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 17:46:11 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Gölz",
"Paul",
""
],
[
"Peters",
"Dominik",
""
],
[
"Procaccia",
"Ariel D.",
""
]
] |
new_dataset
| 0.956122 |
2202.11066
|
Robert Mieth
|
Samuel Eckstrom, Graham Murphy, Eileen Ye, Samrat Acharya, Robert
Mieth, Yury Dvorkin
|
Outing Power Outages: Real-time and Predictive Socio-demographic
Analytics for New York City
|
Accepted for the 2022 IEEE PES General Meeting
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electrical outages continue to occur despite technological innovations and
improvements to electric power distribution infrastructure. In this paper, we
describe a tool that was designed to acquire and collect data on electric power
outages in New York City since July 2020. The electrical outages are then
displayed on a front-end application, which is publicly available. We use the
collected outage data to analyze these outages and their socio-economic impacts
on electricity-vulnerable population groups. We determined that there was a
slightly negative linear relationship between income and number of outages.
Finally, a Markov Influence Graph was created to better understand the spatial
and temporal relationships between outages.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 17:51:00 GMT"
}
] | 2022-02-23T00:00:00 |
[
[
"Eckstrom",
"Samuel",
""
],
[
"Murphy",
"Graham",
""
],
[
"Ye",
"Eileen",
""
],
[
"Acharya",
"Samrat",
""
],
[
"Mieth",
"Robert",
""
],
[
"Dvorkin",
"Yury",
""
]
] |
new_dataset
| 0.989407 |
1912.12779
|
Zachary Neal
|
Rachel Domagalski, Zachary Neal, Bruce Sagan
|
backbone: An R Package for extracting the backbone of bipartite
projections
| null |
Plos one, 16(1), e0244363 (2021)
|
10.1371/journal.pone.0244363
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bipartite projections are used in a wide range of network contexts including
politics (bill co-sponsorship), genetics (gene co-expression), economics
(executive board co-membership), and innovation (patent co-authorship).
However, because bipartite projections are always weighted graphs, which are
inherently challenging to analyze and visualize, it is often useful to examine
the 'backbone', an unweighted subgraph containing only the most significant
edges. In this paper, we introduce the R package backbone for extracting the
backbone of weighted bipartite projections, and use bill sponsorship data from
the 114th session of the United States Senate to demonstrate its functionality.
|
[
{
"version": "v1",
"created": "Mon, 30 Dec 2019 01:38:51 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jul 2020 15:56:28 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Aug 2020 21:30:43 GMT"
},
{
"version": "v4",
"created": "Wed, 25 Nov 2020 15:04:14 GMT"
},
{
"version": "v5",
"created": "Mon, 14 Dec 2020 18:54:10 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Domagalski",
"Rachel",
""
],
[
"Neal",
"Zachary",
""
],
[
"Sagan",
"Bruce",
""
]
] |
new_dataset
| 0.960901 |
2011.00892
|
Yuting Fu
|
Yuting Fu, Andrei Terechko, Jan Friso Groote, Arash Khabbaz Saberi
|
A Formally Verified Fail-Operational Safety Concept for Automated
Driving
|
11 pages, 5 figures, 3 tables
|
SAE International Journal of Connected and Automated Vehicles 2022
|
10.4271/12-05-01-0002
| null |
cs.RO cs.DC cs.FL cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern Automated Driving (AD) systems rely on safety measures to handle
faults and to bring the vehicle to a safe state. To eradicate lethal road
accidents, car manufacturers are constantly introducing new perception as well
as control systems. Contemporary automotive design and safety engineering best
practices are suitable for analyzing system components in isolation, whereas
today's highly complex and interdependent AD systems require a novel approach to
ensure resilience to multi-point failures. We present a holistic safety concept
unifying advanced safety measures for handling multiple-point faults. Our
proposed approach enables designers to focus on more pressing issues such as
handling fault-free hazardous behavior associated with system performance
limitations. To verify our approach, we developed an executable model of the
safety concept in the formal specification language mCRL2. The model behavior
is governed by a four-mode degradation policy controlling distributed
processors, redundant communication networks, and virtual machines. To keep the
vehicle as safe as possible our degradation policy can reduce driving comfort
or AD system's availability using additional low-cost driving channels. We
formalized five safety requirements in the modal mu-calculus and proved them
against our mCRL2 model, which is intractable to accomplish exhaustively using
traditional road tests or simulation techniques. In conclusion, our formally
proven safety concept defines a holistic design pattern for designing AD
systems.
|
[
{
"version": "v1",
"created": "Mon, 2 Nov 2020 11:05:09 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Nov 2020 16:21:26 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Fu",
"Yuting",
""
],
[
"Terechko",
"Andrei",
""
],
[
"Groote",
"Jan Friso",
""
],
[
"Saberi",
"Arash Khabbaz",
""
]
] |
new_dataset
| 0.988856 |
2103.07534
|
Sergey Feldman
|
Shivashankar Subramanian, Daniel King, Doug Downey and Sergey Feldman
|
S2AND: A Benchmark and Evaluation System for Author Name Disambiguation
| null |
JCDL 2021
| null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Author Name Disambiguation (AND) is the task of resolving which author
mentions in a bibliographic database refer to the same real-world person, and
is a critical ingredient of digital library applications such as search and
citation analysis. While many AND algorithms have been proposed, comparing them
is difficult because they often employ distinct features and are evaluated on
different datasets. In response to this challenge, we present S2AND, a unified
benchmark dataset for AND on scholarly papers, as well as an open-source
reference model implementation. Our dataset harmonizes eight disparate AND
datasets into a uniform format, with a single rich feature set drawn from the
Semantic Scholar (S2) database. Our evaluation suite for S2AND reports
performance split by facets like publication year and number of papers,
allowing researchers to track both global performance and measures of fairness
across facet values. Our experiments show that because previous datasets tend
to cover idiosyncratic and biased slices of the literature, algorithms trained
to perform well on one of them may generalize poorly to others. By contrast, we
show how training on a union of datasets in S2AND results in more robust models
that perform well even on datasets unseen in training. The resulting AND model
also substantially improves over the production algorithm in S2, reducing error
by over 50% in terms of $B^3$ F1. We release our unified dataset, model code,
trained models, and evaluation suite to the research community.
https://github.com/allenai/S2AND/
|
[
{
"version": "v1",
"created": "Fri, 12 Mar 2021 21:22:36 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jul 2021 16:17:13 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Feb 2022 17:54:15 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Subramanian",
"Shivashankar",
""
],
[
"King",
"Daniel",
""
],
[
"Downey",
"Doug",
""
],
[
"Feldman",
"Sergey",
""
]
] |
new_dataset
| 0.99982 |
2103.13003
|
Tobias Schlagenhauf
|
Tobias Schlagenhauf, Magnus Landwehr, Juergen Fleischer
|
Industrial Machine Tool Component Surface Defect Dataset
|
7 pages, 13 figures
|
Data in Brief, 39, 107643 (2021)
|
10.1016/j.dib.2021.107643
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Using machine learning (ML) techniques in general and deep learning
techniques in particular requires a certain amount of data, often not available in
large quantities in technical domains. The manual inspection of machine tool
components and the manual end-of-line check of products are labor-intensive
tasks in industrial applications that companies often want to automate. To
automate classification processes and develop reliable and robust machine
learning-based classification and wear prognostics models, one needs real-world
datasets to train and test the models. The dataset is available under
https://doi.org/10.5445/IR/1000129520.
|
[
{
"version": "v1",
"created": "Wed, 24 Mar 2021 06:17:21 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Schlagenhauf",
"Tobias",
""
],
[
"Landwehr",
"Magnus",
""
],
[
"Fleischer",
"Juergen",
""
]
] |
new_dataset
| 0.999799 |
2105.14211
|
Zhu Zhang
|
Zhu Zhang, Jianxin Ma, Chang Zhou, Rui Men, Zhikang Li, Ming Ding, Jie
Tang, Jingren Zhou, and Hongxia Yang
|
M6-UFC: Unifying Multi-Modal Controls for Conditional Image Synthesis
via Non-Autoregressive Generative Transformers
|
Accepted by NeurIPS21
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conditional image synthesis aims to create an image according to some
multi-modal guidance in the forms of textual descriptions, reference images,
and image blocks to preserve, as well as their combinations. In this paper,
instead of investigating these control signals separately, we propose a new
two-stage architecture, M6-UFC, to unify any number of multi-modal controls. In
M6-UFC, both the diverse control signals and the synthesized image are
uniformly represented as a sequence of discrete tokens to be processed by
Transformer. Different from existing two-stage autoregressive approaches such
as DALL-E and VQGAN, M6-UFC adopts non-autoregressive generation (NAR) at the
second stage to enhance the holistic consistency of the synthesized image, to
support preserving specified image blocks, and to improve the synthesis speed.
Further, we design a progressive algorithm that iteratively improves the
non-autoregressively generated image, with the help of two estimators developed
for evaluating the compliance with the controls and evaluating the fidelity of
the synthesized image, respectively. Extensive experiments on a newly collected
large-scale clothing dataset M2C-Fashion and a facial dataset Multi-Modal
CelebA-HQ verify that M6-UFC can synthesize high-fidelity images that comply
with flexible multi-modal controls.
|
[
{
"version": "v1",
"created": "Sat, 29 May 2021 04:42:07 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Aug 2021 09:55:00 GMT"
},
{
"version": "v3",
"created": "Fri, 26 Nov 2021 13:43:04 GMT"
},
{
"version": "v4",
"created": "Sat, 19 Feb 2022 17:12:14 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Zhang",
"Zhu",
""
],
[
"Ma",
"Jianxin",
""
],
[
"Zhou",
"Chang",
""
],
[
"Men",
"Rui",
""
],
[
"Li",
"Zhikang",
""
],
[
"Ding",
"Ming",
""
],
[
"Tang",
"Jie",
""
],
[
"Zhou",
"Jingren",
""
],
[
"Yang",
"Hongxia",
""
]
] |
new_dataset
| 0.999803 |
2106.09672
|
Matthijs Douze
|
Matthijs Douze and Giorgos Tolias and Ed Pizzi and Zo\"e Papakipos and
Lowik Chanussot and Filip Radenovic and Tomas Jenicek and Maxim Maximov and
Laura Leal-Taix\'e and Ismail Elezi and Ond\v{r}ej Chum and Cristian Canton
Ferrer
|
The 2021 Image Similarity Dataset and Challenge
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a new benchmark for large-scale image similarity
detection. This benchmark is used for the Image Similarity Challenge at
NeurIPS'21 (ISC2021). The goal is to determine whether a query image is a
modified copy of any image in a reference corpus of size 1 million. The
benchmark features a variety of image transformations such as automated
transformations, hand-crafted image edits and machine-learning based
manipulations. This mimics real-life cases appearing in social media, for
example for integrity-related problems dealing with misinformation and
objectionable content. The strength of the image manipulations, and therefore
the difficulty of the benchmark, is calibrated according to the performance of
a set of baseline approaches. Both the query and reference set contain a
majority of "distractor" images that do not match, which corresponds to a
real-life needle-in-haystack setting, and the evaluation metric reflects that.
We expect the DISC21 benchmark to promote image copy detection as an important
and challenging computer vision task and refresh the state of the art. Code and
data are available at https://github.com/facebookresearch/isc2021
|
[
{
"version": "v1",
"created": "Thu, 17 Jun 2021 17:23:59 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jul 2021 20:58:36 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Jan 2022 17:05:58 GMT"
},
{
"version": "v4",
"created": "Mon, 21 Feb 2022 10:12:06 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Douze",
"Matthijs",
""
],
[
"Tolias",
"Giorgos",
""
],
[
"Pizzi",
"Ed",
""
],
[
"Papakipos",
"Zoë",
""
],
[
"Chanussot",
"Lowik",
""
],
[
"Radenovic",
"Filip",
""
],
[
"Jenicek",
"Tomas",
""
],
[
"Maximov",
"Maxim",
""
],
[
"Leal-Taixé",
"Laura",
""
],
[
"Elezi",
"Ismail",
""
],
[
"Chum",
"Ondřej",
""
],
[
"Ferrer",
"Cristian Canton",
""
]
] |
new_dataset
| 0.999835 |
2107.07243
|
Marco Camurri
|
David Wisth, Marco Camurri, Maurice Fallon
|
VILENS: Visual, Inertial, Lidar, and Leg Odometry for All-Terrain Legged
Robots
|
Video: https://youtu.be/NG4pkjJKhus
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present VILENS (Visual Inertial Lidar Legged Navigation System), an
odometry system for legged robots based on factor graphs. The key novelty is
the tight fusion of four different sensor modalities to achieve reliable
operation when the individual sensors would otherwise produce degenerate
estimation. To minimize leg odometry drift, we extend the robot's state with a
linear velocity bias term which is estimated online. This bias is observable
because of the tight fusion of this preintegrated velocity factor with vision,
lidar, and IMU factors. Extensive experimental validation on different ANYmal
quadruped robots is presented, for a total duration of 2 h and 1.8 km traveled.
The experiments involved dynamic locomotion over loose rocks, slopes, and mud
which caused challenges like slippage and terrain deformation. Perceptual
challenges included dark and dusty underground caverns, and open and
feature-deprived areas. We show an average improvement of 62% translational and
51% rotational errors compared to a state-of-the-art loosely coupled approach.
To demonstrate its robustness, VILENS was also integrated with a perceptive
controller and a local path planner.
|
[
{
"version": "v1",
"created": "Thu, 15 Jul 2021 11:05:00 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 10:02:24 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Wisth",
"David",
""
],
[
"Camurri",
"Marco",
""
],
[
"Fallon",
"Maurice",
""
]
] |
new_dataset
| 0.994447 |
2107.12576
|
Xovee Xu
|
Xovee Xu, Fan Zhou, Kunpeng Zhang, Siyuan Liu
|
CCGL: Contrastive Cascade Graph Learning
|
IEEE TKDE, including 15 pages, 7 figures, and 12 tables
|
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2022
|
10.1109/TKDE.2022.3151829
| null |
cs.SI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised learning, while prevalent for information cascade modeling, often
requires abundant labeled data in training, and the trained model does not
easily generalize across tasks and datasets. It often learns task-specific
representations, which can easily result in overfitting for downstream tasks.
Recently, self-supervised learning has been designed to alleviate these two
fundamental issues in linguistic and visual tasks. However, its direct
applicability for information cascade modeling, especially graph cascade
related tasks, remains underexplored. In this work, we present Contrastive
Cascade Graph Learning (CCGL), a novel framework for information cascade graph
learning in a contrastive, self-supervised, and task-agnostic way. In
particular, CCGL first designs an effective data augmentation strategy to
capture variation and uncertainty by simulating the information diffusion in
graphs. Second, it learns a generic model for graph cascade tasks via
self-supervised contrastive pre-training using both unlabeled and labeled data.
Third, CCGL learns a task-specific cascade model via fine-tuning using labeled
data. Finally, to make the model transferable across datasets and cascade
applications, CCGL further enhances the model via distillation using a
teacher-student architecture. We demonstrate that CCGL significantly
outperforms its supervised and semi-supervised counterparts for several
downstream tasks.
|
[
{
"version": "v1",
"created": "Tue, 27 Jul 2021 03:37:50 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Feb 2022 13:23:57 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Xu",
"Xovee",
""
],
[
"Zhou",
"Fan",
""
],
[
"Zhang",
"Kunpeng",
""
],
[
"Liu",
"Siyuan",
""
]
] |
new_dataset
| 0.994785 |
2109.02160
|
Arnab Dey
|
Arnab Dey, Andrew Heger and Darin England
|
Urban Fire Station Location Planning using Predicted Demand and Service
Quality Index
| null | null | null | null |
cs.LG cs.IR math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we propose a systematic approach for fire station location
planning. We develop machine learning models, based on Random Forest and
Extreme Gradient Boosting, for demand prediction and utilize the models further
to define a generalized index to measure quality of fire service in urban
settings. Our model is built upon spatial data collected from multiple
different sources. Efficacy of proper facility planning depends on choice of
candidates where fire stations can be located along with existing stations, if
any. Also, the travel time from these candidates to demand locations needs to
be accounted for to maintain fire safety standards. Here, we propose a travel
time based clustering technique to identify suitable candidates. Finally, we
develop an optimization problem to select the best locations to install new
fire stations.
Our optimization problem is built upon maximum coverage problem, based on
integer programming. We further develop a two-stage stochastic optimization
model to characterize the confidence in our decision outcome. We present a
detailed experimental study of our proposed approach in collaboration with city
of Victoria Fire Department, MN, USA. Our demand prediction model achieves a
true positive rate of approximately 80% and a false positive rate of
approximately 20%. We aid
Victoria Fire Department to select a location for a new fire station using our
approach. We present detailed results on improvement statistics by locating a
new facility, as suggested by our methodology, in the city of Victoria.
|
[
{
"version": "v1",
"created": "Sun, 5 Sep 2021 19:59:26 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Feb 2022 19:40:36 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Dey",
"Arnab",
""
],
[
"Heger",
"Andrew",
""
],
[
"England",
"Darin",
""
]
] |
new_dataset
| 0.985795 |
2109.09035
|
Jiawei Mo
|
Jiawei Mo and Junaed Sattar
|
Continuous-Time Spline Visual-Inertial Odometry
|
ICRA 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a continuous-time spline-based formulation for visual-inertial
odometry (VIO). Specifically, we model the poses as a cubic spline, whose
temporal derivatives are used to synthesize linear acceleration and angular
velocity, which are compared to the measurements from the inertial measurement
unit (IMU) for optimal state estimation. The spline boundary conditions create
constraints between the camera and the IMU, with which we formulate VIO as a
constrained nonlinear optimization problem. Continuous-time pose representation
makes it possible to address many VIO challenges, e.g., rolling shutter
distortion and sensors that may lack synchronization. We conduct experiments on
two publicly available datasets that demonstrate the state-of-the-art accuracy
and real-time computational efficiency of our method.
|
[
{
"version": "v1",
"created": "Sun, 19 Sep 2021 00:40:54 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Feb 2022 23:18:00 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Mo",
"Jiawei",
""
],
[
"Sattar",
"Junaed",
""
]
] |
new_dataset
| 0.998601 |
2110.00768
|
Vaibhav Adlakha
|
Vaibhav Adlakha, Shehzaad Dhuliawala, Kaheer Suleman, Harm de Vries,
Siva Reddy
|
TopiOCQA: Open-domain Conversational Question Answering with Topic
Switching
|
accepted at TACL
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In a conversational question answering scenario, a questioner seeks to
extract information about a topic through a series of interdependent questions
and answers. As the conversation progresses, they may switch to related topics,
a phenomenon commonly observed in information-seeking search sessions. However,
current datasets for conversational question answering are limiting in two
ways: 1) they do not contain topic switches; and 2) they assume the reference
text for the conversation is given, i.e., the setting is not open-domain. We
introduce TopiOCQA (pronounced Tapioca), an open-domain conversational dataset
with topic switches on Wikipedia. TopiOCQA contains 3,920 conversations with
information-seeking questions and free-form answers. On average, a conversation
in our dataset spans 13 question-answer turns and involves four topics
(documents). TopiOCQA poses a challenging test-bed for models, where efficient
retrieval is required on multiple turns of the same conversation, in
conjunction with constructing valid responses using conversational history. We
evaluate several baselines, by combining state-of-the-art document retrieval
methods with neural reader models. Our best model achieves F1 of 55.8, falling
short of human performance by 14.2 points, indicating the difficulty of our
dataset. Our dataset and code are available at
https://mcgill-nlp.github.io/topiocqa
|
[
{
"version": "v1",
"created": "Sat, 2 Oct 2021 09:53:48 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jan 2022 22:31:27 GMT"
},
{
"version": "v3",
"created": "Sun, 20 Feb 2022 22:28:32 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Adlakha",
"Vaibhav",
""
],
[
"Dhuliawala",
"Shehzaad",
""
],
[
"Suleman",
"Kaheer",
""
],
[
"de Vries",
"Harm",
""
],
[
"Reddy",
"Siva",
""
]
] |
new_dataset
| 0.973349 |
2110.14789
|
Mingsheng Yin
|
Mingsheng Yin (1), Akshaj Veldanda (1), Amee Trivedi (2), Jeff Zhang
(3), Kai Pfeiffer (1), Yaqi Hu (1), Siddharth Garg (1), Elza Erkip (1),
Ludovic Righetti (1), Sundeep Rangan (1) ((1) NYU Tandon School of
Engineering, (2) University of British Columbia, (3) Harvard University)
|
Millimeter Wave Wireless Assisted Robot Navigation with Link State
Classification
| null | null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
The millimeter wave (mmWave) bands have attracted considerable attention for
high precision localization applications due to the ability to capture high
angular and temporal resolution measurements. This paper explores mmWave-based
positioning for a target localization problem where a fixed target broadcasts
mmWave signals and a mobile robotic agent attempts to capture the signals to
locate and navigate to the target. A three-stage procedure is proposed: First,
the mobile agent uses tensor decomposition methods to detect the multipath
channel components and estimate their parameters. Second, a machine-learning
trained classifier is then used to predict the link state, meaning if the
strongest path is line-of-sight (LOS) or non-LOS (NLOS). For the NLOS case, the
link state predictor also determines if the strongest path arrived via one or
more reflections. Third, based on the link state, the agent either follows the
estimated angles or uses computer vision or other sensors to explore and map the
environment. The method is demonstrated on a large dataset of indoor
environments supplemented with ray tracing to simulate the wireless
propagation. The path estimation and link state classification are also
integrated into a state-of-the-art neural simultaneous localization and mapping
(SLAM) module to augment camera and LIDAR-based navigation. It is shown that
the link state classifier can successfully generalize to completely new
environments outside the training set. In addition, the neural-SLAM module with
the wireless path estimation and link state classifier provides rapid
navigation to the target, close to a baseline that knows the target location.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 21:43:53 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Nov 2021 17:54:39 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Feb 2022 22:23:16 GMT"
},
{
"version": "v4",
"created": "Sat, 19 Feb 2022 01:23:57 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Yin",
"Mingsheng",
""
],
[
"Veldanda",
"Akshaj",
""
],
[
"Trivedi",
"Amee",
""
],
[
"Zhang",
"Jeff",
""
],
[
"Pfeiffer",
"Kai",
""
],
[
"Hu",
"Yaqi",
""
],
[
"Garg",
"Siddharth",
""
],
[
"Erkip",
"Elza",
""
],
[
"Righetti",
"Ludovic",
""
],
[
"Rangan",
"Sundeep",
""
]
] |
new_dataset
| 0.999194 |
2112.02569
|
Zhengchun Zhou
|
Li Xu and Zhengchun Zhou and Jun Zhang and Sihem Mesnager
|
Optimal quaternary $(r,\delta)$-Locally Recoverable Codes: Their
Structures and Complete Classification
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aiming to recover the data from several concurrent node failures, linear
$r$-LRC codes with locality $r$ were extended into $(r, \delta)$-LRC codes with
locality $(r, \delta)$ which can enable the local recovery of a failed node in
case of more than one node failure. Optimal LRC codes are those whose
parameters achieve the generalized Singleton bound with equality. In the
present paper, we are interested in studying optimal LRC codes over small
fields and, more precisely, over $\mathbb{F}_4$. We shall adopt an approach by
investigating optimal quaternary $(r,\delta)$-LRC codes through their
parity-check matrices.
Our study includes determining the structural properties of optimal
$(r,\delta)$-LRC codes, their constructions, and their complete classification
over $\mathbb{F}_4$ by browsing all possible parameters. We emphasize that the
precise structure of optimal quaternary $(r,\delta)$-LRC codes and their
classification are obtained via the parity-check matrix approach, using proof
techniques
different from those used recently for optimal binary and ternary
$(r,\delta)$-LRC codes obtained by Hao et al. in [IEEE Trans. Inf. Theory,
2020, 66(12): 7465-7474].
|
[
{
"version": "v1",
"created": "Sun, 5 Dec 2021 13:43:42 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Dec 2021 05:11:58 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Feb 2022 11:59:22 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Xu",
"Li",
""
],
[
"Zhou",
"Zhengchun",
""
],
[
"Zhang",
"Jun",
""
],
[
"Mesnager",
"Sihem",
""
]
] |
new_dataset
| 0.992811 |
2112.05997
|
Souvik Sur
|
Souvik Sur
|
Two Sequential Squaring Verifiable Delay Function
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Verifiable Delay Function (VDF) is a function that takes a specified
sequential time to be evaluated, but can be efficiently verified. VDFs are
useful in several applications ranging from randomness beacons to sustainable
blockchains but are really rare in practice. Most of the VDFs are based on
algebraic assumptions like time-lock puzzle in unknown group orders [6, 8] and
isogenies over pairing groups [4]. The number of modulo squarings required for
verification in the time-lock puzzle based VDFs is proportional to their
security parameter. This paper proposes a verifiable delay function that
requires only two modulo squarings for verification. Thus, the sequential effort
required for verification is independent of the security parameter.
|
[
{
"version": "v1",
"created": "Sat, 11 Dec 2021 14:40:19 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Feb 2022 12:10:42 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Feb 2022 14:38:16 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Sur",
"Souvik",
""
]
] |
new_dataset
| 0.950522 |
2202.04361
|
Shijie Wang
|
Shijie Wang, Xi Chen, Chao Zhao, Yuxin Kong, Baojun Lin, Yongyi Wu,
Zhaozhao Bi, Ziyi Xuan, Tao Li, Yuxiang Li, Wei Zhang, En Ma, Zhongrui Wang,
Wei Ma
|
Molecular-scale Integration of Multi-modal Sensing and Neuromorphic
Computing with Organic Electrochemical Transistors
|
17 pages, 4 figures
| null | null | null |
cs.ET cond-mat.mtrl-sci cond-mat.soft
|
http://creativecommons.org/licenses/by/4.0/
|
Bionic learning with fused sensing, memory and processing functions
outperforms artificial neural networks running on silicon chips in terms of
efficiency and footprint. However, digital hardware implementation of bionic
learning suffers from device heterogeneity in sensors and processing cores,
which incurs large hardware, energy and time overheads. Here, we present a
universal solution to simultaneously perform multi-modal sensing, memory and
processing using organic electrochemical transistors with designed architecture
and tailored channel morphology, selective ion injection into the
crystalline/amorphous regions. The resultant device work as either a volatile
receptor that shows multi-modal sensing, or a non-volatile synapse that
features record-high 10-bit analog states, low switching stochasticity and good
retention without the integration of any extra devices. Homogeneous integration
of such devices enables bionic learning functions such as conditioned reflex
and real-time cardiac disease diagnosis via reservoir computing, illustrating
the promise for future smart edge health informatics.
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 09:50:31 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Feb 2022 04:18:20 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Wang",
"Shijie",
""
],
[
"Chen",
"Xi",
""
],
[
"Zhao",
"Chao",
""
],
[
"Kong",
"Yuxin",
""
],
[
"Lin",
"Baojun",
""
],
[
"Wu",
"Yongyi",
""
],
[
"Bi",
"Zhaozhao",
""
],
[
"Xuan",
"Ziyi",
""
],
[
"Li",
"Tao",
""
],
[
"Li",
"Yuxiang",
""
],
[
"Zhang",
"Wei",
""
],
[
"Ma",
"En",
""
],
[
"Wang",
"Zhongrui",
""
],
[
"Ma",
"Wei",
""
]
] |
new_dataset
| 0.958858 |
2202.06034
|
Hao-Wen Dong
|
Hao-Wen Dong, Cong Zhou, Taylor Berg-Kirkpatrick, Julian McAuley
|
Deep Performer: Score-to-Audio Music Performance Synthesis
|
ICASSP 2022 final version with appendix
| null | null | null |
cs.SD cs.LG cs.MM eess.AS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Music performance synthesis aims to synthesize a musical score into a natural
performance. In this paper, we borrow recent advances in text-to-speech
synthesis and present the Deep Performer -- a novel system for score-to-audio
music performance synthesis. Unlike speech, music often contains polyphony and
long notes. Hence, we propose two new techniques for handling polyphonic inputs
and providing a fine-grained conditioning in a transformer encoder-decoder
model. To train our proposed system, we present a new violin dataset consisting
of paired recordings and scores along with estimated alignments between them.
We show that our proposed model can synthesize music with clear polyphony and
harmonic structures. In a listening test, we achieve competitive quality
against the baseline model, a conditional generative audio model, in terms of
pitch accuracy, timbre and noise level. Moreover, our proposed model
significantly outperforms the baseline on an existing piano dataset in overall
quality.
|
[
{
"version": "v1",
"created": "Sat, 12 Feb 2022 10:36:52 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 03:29:43 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Dong",
"Hao-Wen",
""
],
[
"Zhou",
"Cong",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
],
[
"McAuley",
"Julian",
""
]
] |
new_dataset
| 0.999782 |