id stringlengths 9-10 | submitter stringlengths 2-52 ⌀ | authors stringlengths 4-6.51k | title stringlengths 4-246 | comments stringlengths 1-523 ⌀ | journal-ref stringlengths 4-345 ⌀ | doi stringlengths 11-120 ⌀ | report-no stringlengths 2-243 ⌀ | categories stringlengths 5-98 | license stringclasses 9 values | abstract stringlengths 33-3.33k | versions list | update_date timestamp[s] | authors_parsed list | prediction stringclasses 1 value | probability float64 0.95-1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2202.07021
|
Burak Han Demirbilek
|
Burak Han Demirbilek
|
QuadSim: A Quadcopter Rotational Dynamics Simulation Framework For
Reinforcement Learning Algorithms
|
For source code, please visit https://github.com/BurakDmb/quadsim
|
Proceedings of the International CAIAC'21 Conference (2021) pp.
33-38, ISBN: 978-605-7902-60-3
| null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study focuses on designing and developing a mathematically based
quadcopter rotational dynamics simulation framework for testing reinforcement
learning (RL) algorithms in many flexible configurations. The design of the
simulation framework aims to simulate both linear and nonlinear representations
of a quadcopter by solving initial value problems for ordinary differential
equation (ODE) systems. In addition, the simulation environment can be made
deterministic or stochastic by adding random Gaussian noise in the form of
process and measurement noise. To ensure that the scope of this simulation
environment is not limited to our own RL algorithms, it has been made
compatible with the OpenAI Gym toolkit. The framework also supports
multiprocessing to run simulation environments in parallel. To test
these capabilities, many state-of-the-art deep RL algorithms were trained in
this simulation framework and the results were compared in detail.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 20:34:08 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Demirbilek",
"Burak Han",
""
]
] |
new_dataset
| 0.99735 |
2202.07049
|
Stephen Ninan
|
Stephen Ninan and Sivakumar Rathinam
|
LIDAR data based Segmentation and Localization using Open Street Maps
for Rural Roads
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Accurate pose estimation is a fundamental ability that all mobile robots must
possess in order to traverse robustly in a given environment. Much like a
human's, this ability depends on the robot's understanding of a given scene.
For Autonomous Vehicles (AVs), detailed 3D maps created beforehand are widely used
to augment the perceptive abilities and estimate pose based on current sensor
measurements. This approach however is less suited for rural communities that
are sparsely connected and cover large areas. To deal with the challenge of
localizing a vehicle in a rural setting, this paper presents a dataset of
rural road scenes, along with an approach for fast segmentation of roads using
LIDAR point clouds. The segmented point cloud, in concert with road network
information from Open Street Maps (OSM), is used for pose estimation. We propose
two measurement models, which are compared with state-of-the-art methods for
localization on OSM for both tracking and global localization. The results
show that the proposed algorithm is able to estimate pose within a 2 sq. km
area with a mean accuracy of 6.5 meters.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 21:41:16 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Feb 2022 02:18:05 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Ninan",
"Stephen",
""
],
[
"Rathinam",
"Sivakumar",
""
]
] |
new_dataset
| 0.996709 |
2202.08620
|
Jianwen Luo
|
Yueheng Zhou, Ming Liu, Chaoyang Song, Jianwen Luo
|
Kirin: A Quadruped Robot with High Payload Carrying Capability
|
We found some errors in the presented results and wish to withdraw this
submission; we hope to resubmit it later after we have carefully checked it.
We apologize for the inconvenience and appreciate the technical support
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The quadruped robot is a versatile mobile platform with the potential for high
payload carrying. However, most existing quadruped robots aim at high
maneuverability and highly dynamic, agile locomotion. Nevertheless, payload
carrying remains an indispensable ability for quadruped robots, and the design
of a quadruped robot with high payload capacity has yet to be deeply explored.
In this study, a 50 kg electrically-actuated quadruped robot, Kirin, is
presented to leverage payload carrying capability. Kirin is characterized by a
prismatic quasi-direct-drive (QDD) leg. This mechanism greatly augments the
payload carrying capability. This study presents several design principles for
payload-carrying-oriented quadruped robots, including the mechanical design,
actuator parameter selection, and locomotion control method. The theoretical
analysis implies that the lifting task tends to be a bottleneck for existing
robots with articulated knee joints. By using the prismatic QDD leg, Kirin's
payload carrying capability is greatly enhanced. To demonstrate this
capability, in a preliminary experiment, up to 125 kg payload lifting in static
stance and 50 kg payload carrying in dynamic trotting are tested. Whole-body
compliance with payload carrying is also demonstrated.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 12:06:16 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 12:25:09 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Zhou",
"Yueheng",
""
],
[
"Liu",
"Ming",
""
],
[
"Song",
"Chaoyang",
""
],
[
"Luo",
"Jianwen",
""
]
] |
new_dataset
| 0.999775 |
2202.08730
|
Jialin Yu
|
Jialin Yu, Huogen Wang, Ming Chen
|
Colonoscopy polyp detection with massive endoscopic images
|
13 pages, 10 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We improved an existing end-to-end polyp detection model, achieving better
average precision validated on different datasets with trivial cost to
detection speed. Our previous work on detecting polyps within colonoscopy
provided an efficient end-to-end solution to alleviate doctors' examination
overhead. However, our later experiments found this framework is not as robust
as expected when the conditions of polyp capture vary. In this work, we
conducted several studies on the dataset, identifying the main issues that
cause a low precision rate in the task of polyp detection. We used an optimized
anchor generation method to obtain better anchor box shapes, and more boxes are
used for detection, as we believe this is necessary for small object detection.
An alternative backbone is used to compensate for the heavy time cost
introduced by dense anchor box regression. With the use of the attention gate
module, our model can achieve state-of-the-art polyp detection performance
while still maintaining real-time detection speed.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 16:07:59 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 11:05:55 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Yu",
"Jialin",
""
],
[
"Wang",
"Huogen",
""
],
[
"Chen",
"Ming",
""
]
] |
new_dataset
| 0.997693 |
2202.09444
|
Jianping Zeng
|
Jianping Zeng, Hongjune Kim, Jaejin Lee, Changhee Jung
|
Lightweight Soft Error Resilience for In-Order Cores
|
13 pages and 26 figures
| null |
10.1145/3466752.3480042
| null |
cs.AR cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Acoustic-sensor-based soft error resilience is particularly promising, since
it can verify the absence of soft errors and eliminate silent data corruptions
at a low hardware cost. However, the state-of-the-art work incurs a significant
performance overhead for in-order cores due to frequent structural/data hazards
during the verification. To address the problem, this paper presents Turnpike,
a compiler/architecture co-design scheme that can achieve lightweight yet
guaranteed soft error resilience for in-order cores. The key idea is that many
of the data computed in the core can bypass the soft error verification without
compromising the resilience. Along with simple microarchitectural support for
realizing the idea, Turnpike leverages compiler optimizations to further reduce
the performance overhead. Experimental results with 36 benchmarks demonstrate
that Turnpike incurs only a 0-14% run-time overhead on average, while the
state of the art incurs a 29-84% overhead, when the worst-case latency of the
sensor-based error detection is 10-50 cycles.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 21:56:25 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Zeng",
"Jianping",
""
],
[
"Kim",
"Hongjune",
""
],
[
"Lee",
"Jaejin",
""
],
[
"Jung",
"Changhee",
""
]
] |
new_dataset
| 0.979056 |
2202.09452
|
Pedro Ortiz Suarez
|
Simon Gabay, Pedro Ortiz Suarez, Alexandre Bartz, Alix Chagu\'e,
Rachel Bawden, Philippe Gambette, Beno\^it Sagot
|
From FreEM to D'AlemBERT: a Large Corpus and a Language Model for Early
Modern French
|
8 pages, 2 figures, 4 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Language models for historical states of language are becoming increasingly
important to allow the optimal digitisation and analysis of old textual
sources. Because these historical states are at the same time more complex to
process and more scarce in the corpora available, specific efforts are
necessary to train natural language processing (NLP) tools adapted to the data.
In this paper, we present our efforts to develop NLP tools for Early Modern
French (historical French from the 16$^\text{th}$ to the 18$^\text{th}$
centuries). We present the $\text{FreEM}_{\text{max}}$ corpus of Early Modern
French and D'AlemBERT, a RoBERTa-based language model trained on
$\text{FreEM}_{\text{max}}$. We evaluate the usefulness of D'AlemBERT by
fine-tuning it on a part-of-speech tagging task, outperforming previous work on
the test set. Importantly, we find evidence for the transfer learning capacity
of the language model, since its performance on lesser-resourced time periods
appears to have been boosted by the more resourced ones. We release D'AlemBERT
and the open-sourced subpart of the $\text{FreEM}_{\text{max}}$ corpus.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 22:17:22 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Gabay",
"Simon",
""
],
[
"Suarez",
"Pedro Ortiz",
""
],
[
"Bartz",
"Alexandre",
""
],
[
"Chagué",
"Alix",
""
],
[
"Bawden",
"Rachel",
""
],
[
"Gambette",
"Philippe",
""
],
[
"Sagot",
"Benoît",
""
]
] |
new_dataset
| 0.999096 |
2202.09495
|
Ryuhei Uehara
|
Takehiro Ito, Jun Kawahara, Shin-ichi Minato, Yota Otachi, Toshiki
Saitoh, Akira Suzuki, Ryuhei Uehara, Takeaki Uno, Katsuhisa Yamanaka, Ryo
Yoshinaka
|
Sorting Balls and Water: Equivalence and Computational Complexity
|
17 pages, 10 figures
| null | null | null |
cs.CC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Various forms of sorting problems have been studied over the years. Recently,
two kinds of sorting puzzle apps have become popular. In these puzzles, we are
given a set of bins filled with colored units, balls or water, and some empty
bins. These puzzles allow us to move colored units from one bin to another when
the colors involved match in some way or the target bin is empty. The goal of
these puzzles is to sort all the colored units. We investigate the
computational complexity of these puzzles. We first show that the two puzzles
are essentially the same from the viewpoint of solvability; that is, an
instance is sortable by ball-moves if and only if it is sortable by
water-moves. We also show that every yes-instance has a solution of polynomial
length, which implies that these puzzles belong to NP. We then show that
these puzzles are NP-complete. For some special cases, we give polynomial-time
algorithms. We finally consider the number of empty bins sufficient for making
all instances solvable and give non-trivial upper and lower bounds in terms of
the number of filled bins and the capacity of bins.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 02:18:52 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Ito",
"Takehiro",
""
],
[
"Kawahara",
"Jun",
""
],
[
"Minato",
"Shin-ichi",
""
],
[
"Otachi",
"Yota",
""
],
[
"Saitoh",
"Toshiki",
""
],
[
"Suzuki",
"Akira",
""
],
[
"Uehara",
"Ryuhei",
""
],
[
"Uno",
"Takeaki",
""
],
[
"Yamanaka",
"Katsuhisa",
""
],
[
"Yoshinaka",
"Ryo",
""
]
] |
new_dataset
| 0.978917 |
2202.09509
|
Kenan Tang
|
Kenan Tang (The University of Chicago)
|
PETCI: A Parallel English Translation Dataset of Chinese Idioms
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Idioms are an important language phenomenon in Chinese, but idiom translation
is notoriously hard. Current machine translation models perform poorly on idiom
translation, while idioms are sparse in many translation datasets. We present
PETCI, a parallel English translation dataset of Chinese idioms, aiming to
improve idiom translation by both humans and machines. The dataset is built by
leveraging human and machine effort. Baseline generation models show
unsatisfactory abilities to improve translation, but structure-aware
classification models show good performance on distinguishing good
translations. Furthermore, the size of PETCI can be easily increased without
expertise. Overall, PETCI can be helpful to language learners and machine
translation systems.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 03:16:20 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Tang",
"Kenan",
"",
"The University of Chicago"
]
] |
new_dataset
| 0.999779 |
2202.09511
|
Xiongfei Zhao
|
Xiongfei Zhao and Yain-Whar Si
|
NFTCert: NFT-Based Certificates With Online Payment Gateway
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, academic certificates are still widely issued in paper format.
Traditional certificate verification is a lengthy, manually intensive, and
sometimes expensive process. In this paper, we propose a novel NFT-based
certificate framework called NFTCert, which enables the establishment of links
between a legitimate certificate and its owner through a Blockchain. In this
paper, we describe the implementation of the NFTCert framework, including
schema definition, minting, verification, and revocation of NFT-based
certificates. We also introduce a payment gateway into the minting process,
which enables NFTCert to be used by a wider audience. Therefore, participants
in NFTCert do not need to rely on cryptocurrency for transactions. Overall,
the proposed framework is designed to achieve usability, authenticity,
confidentiality, transparency, and availability properties when it is compared
to existing Blockchain-based systems.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 03:18:21 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Zhao",
"Xiongfei",
""
],
[
"Si",
"Yain-Whar",
""
]
] |
new_dataset
| 0.999079 |
2202.09580
|
Sanghyun Yoo
|
Sanghyun Yoo, Ohyun Kwon, Hoshik Lee
|
Image-to-Graph Transformers for Chemical Structure Recognition
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
For several decades, chemical knowledge has been published as written text,
and there have been many attempts to make it accessible, for example, by
transforming such natural language text into a structured format. Although the
discovered chemical itself, commonly represented as an image, is the most
important part, correctly recognizing the molecular structure from images in
the literature remains a hard problem, since structures are often abbreviated
to reduce complexity and are drawn in many different styles. In this paper, we
present a deep learning model to extract molecular structures from images. The
proposed model is designed to transform the molecular image directly into the
corresponding graph, which makes it capable of handling non-atomic symbols for
abbreviations. Also, through an end-to-end learning approach, it can fully
utilize open image-molecule pair data from various sources, and hence it is
more robust to image style variation than other tools. The experimental results
show that the proposed model outperforms existing models with 17.1% and 12.8%
relative improvements on well-known benchmark datasets and on large molecular
images that we collected from the literature, respectively.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 11:33:54 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Yoo",
"Sanghyun",
""
],
[
"Kwon",
"Ohyun",
""
],
[
"Lee",
"Hoshik",
""
]
] |
new_dataset
| 0.998458 |
2202.09583
|
Laura Perez-Beltrachini
|
Laura Perez-Beltrachini and Mirella Lapata
|
Models and Datasets for Cross-Lingual Summarisation
|
EMNLP 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a cross-lingual summarisation corpus with long documents in a
source language associated with multi-sentence summaries in a target language.
The corpus covers twelve language pairs and directions for four European
languages, namely Czech, English, French and German, and the methodology for
its creation can be applied to several other languages. We derive cross-lingual
document-summary instances from Wikipedia by combining lead paragraphs and
articles' bodies from language-aligned Wikipedia titles. We analyse the
proposed cross-lingual summarisation task with automatic metrics and validate
it with a human study. To illustrate the utility of our dataset we report
experiments with multi-lingual pre-trained models in supervised, zero- and
few-shot, and out-of-domain scenarios.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 11:55:40 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Perez-Beltrachini",
"Laura",
""
],
[
"Lapata",
"Mirella",
""
]
] |
new_dataset
| 0.993861 |
2202.09675
|
Clelia De Felice
|
Clelia De Felice
|
Finite maximal codes and factorizations of cyclic groups
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Variable-length codes are the bases of the free submonoids of a free monoid.
There are some important longstanding open questions about the structure of
finite maximal codes, namely the factorization conjecture and the triangle
conjecture, proposed by Perrin and Sch\"{u}tzenberger. The latter concerns
finite codes $Y$ which are subsets of $a^* B a^*$, where $a$ is a letter and
$B$ is an alphabet not containing $a$. A structural property of finite maximal
codes has recently been shown by Zhang and Shum. It exhibits a relationship
between finite maximal codes and factorizations of cyclic groups. With the aim
of highlighting the links between this result and other older ones on maximal
and factorizing codes, we give a simpler and a new proof of this result. As a
consequence, we prove that for any finite maximal code $X \subseteq (B \cup \{a
\})^*$ containing the word $a^{pq}$, where $p,q$ are prime numbers, $X \cap a^*
B a^*$ satisfies the triangle conjecture. Let $n$ be a positive integer that is
a product of at most two prime numbers. We also prove that it is decidable
whether a finite code $Y \cup a^{n} \subseteq a^* B a^* \cup a^*$ is included
in a finite maximal code and that, if this holds, $Y \cup a^{n}$ is included in
a code that also satisfies the factorization conjecture.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 20:33:11 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"De Felice",
"Clelia",
""
]
] |
new_dataset
| 0.980304 |
2202.09686
|
Paul Zhang
|
Paul Zhang, Judy (Hsin-Hui) Chiang, Xinyi (Cynthia) Fan, Klara
Mundilova
|
Local Decomposition of Hexahedral Singular Nodes into Singular Curves
| null | null | null | null |
cs.CG cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Hexahedral (hex) meshing is a long studied topic in geometry processing with
many fascinating and challenging associated problems. Hex meshes vary in
complexity from structured to unstructured depending on application or domain
of interest. Fully structured meshes require that all interior mesh edges are
adjacent to exactly four hexes. Edges not satisfying this criterion are
considered singular and indicate an unstructured hex mesh. Singular edges join
together into singular curves that either form closed cycles, end on the mesh
boundary, or end at a singular node, a complex junction of more than two
singular curves. While all hex meshes with singularities are unstructured,
those with more complex singular nodes tend to have more distorted elements and
smaller scaled Jacobian values. In this work, we study the topology of singular
nodes. We show that all eight of the most common singular nodes are
decomposable into just singular curves. We further show that all singular
nodes, regardless of edge valence, are locally decomposable. Finally we
demonstrate these decompositions on hex meshes, thereby decreasing their
distortion and converting all singular nodes into singular curves. With this
decomposition, the enigmatic complexity of 3D singular nodes becomes
effectively 2D.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 22:07:49 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Zhang",
"Paul",
""
],
[
"Chiang",
"Judy",
"",
"Hsin-Hui"
],
[
"Fan",
"Xinyi",
"",
"Cynthia"
],
[
"Mundilova",
"Klara",
""
]
] |
new_dataset
| 0.979912 |
2202.09694
|
Amir Pouran Ben Veyseh
|
Amir Pouran Ben Veyseh, Nicole Meister, Seunghyun Yoon, Rajiv Jain,
Franck Dernoncourt, Thien Huu Nguyen
|
MACRONYM: A Large-Scale Dataset for Multilingual and Multi-Domain
Acronym Extraction
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Acronym extraction (AE) is the task of identifying acronyms and their expanded
forms in text, which is necessary for various NLP applications. Despite major
progress on this task in recent years, one limitation of existing AE research
is that it is restricted to the English language and certain domains (i.e.,
scientific and biomedical). As such, the challenges of AE in other languages
and domains are mainly unexplored. The lack of annotated datasets in multiple
languages and domains has been a major issue hindering research in this area.
To address this limitation, we propose a new dataset for multilingual,
multi-domain AE. Specifically, 27,200 sentences in 6 typologically different
languages and 2 domains, i.e., legal and scientific, are manually annotated
for AE. Our extensive experiments on the proposed dataset show that AE in
different languages and different learning settings has unique challenges,
emphasizing the necessity of further research on multilingual and multi-domain AE.
|
[
{
"version": "v1",
"created": "Sat, 19 Feb 2022 23:08:38 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Veyseh",
"Amir Pouran Ben",
""
],
[
"Meister",
"Nicole",
""
],
[
"Yoon",
"Seunghyun",
""
],
[
"Jain",
"Rajiv",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Nguyen",
"Thien Huu",
""
]
] |
new_dataset
| 0.999856 |
2202.09715
|
Yuqing Lan
|
Yuqing Lan, Yao Duan, Chenyi Liu, Chenyang Zhu, Yueshan Xiong, Hui
Huang, Kai Xu
|
ARM3D: Attention-based relation module for indoor 3D object detection
|
16 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Relation context has proven useful for many challenging vision tasks. In the
field of 3D object detection, previous methods have taken advantage of context
encoding, graph embedding, or explicit relation reasoning to extract relation
context. However, there is inevitably redundant relation context due to noisy
or low-quality proposals. In fact, invalid relation context usually indicates
underlying scene misunderstanding and ambiguity, which may, on the contrary,
reduce performance in complex scenes. Inspired by recent attention mechanisms
such as the Transformer, we propose a novel 3D attention-based relation module
(ARM3D). It encompasses object-aware
relation reasoning to extract pair-wise relation contexts among qualified
proposals and an attention module to distribute attention weights towards
different relation contexts. In this way, ARM3D can take full advantage of
useful relation context and filter out less relevant or even confusing
contexts, which mitigates ambiguity in detection. We have evaluated the
effectiveness of ARM3D by plugging it into several state-of-the-art 3D object
detectors and showing more accurate and robust detection results. Extensive
experiments show the capability and generalization of ARM3D on 3D object
detection. Our source code is available at https://github.com/lanlan96/ARM3D.
|
[
{
"version": "v1",
"created": "Sun, 20 Feb 2022 02:43:42 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Lan",
"Yuqing",
""
],
[
"Duan",
"Yao",
""
],
[
"Liu",
"Chenyi",
""
],
[
"Zhu",
"Chenyang",
""
],
[
"Xiong",
"Yueshan",
""
],
[
"Huang",
"Hui",
""
],
[
"Xu",
"Kai",
""
]
] |
new_dataset
| 0.990674 |
2202.09747
|
Kewei Cheng
|
Kewei Cheng, Xian Li, Yifan Ethan Xu, Xin Luna Dong, Yizhou Sun
|
PGE: Robust Product Graph Embedding Learning for Error Detection
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Although product graphs (PGs) have gained increasing attention in recent
years for their successful applications in product search and recommendation,
the extensive power of PGs can be limited by the inevitable involvement of
various kinds of errors. Thus, it is critical to validate the correctness of
triples in PGs to improve their reliability. Knowledge graph (KG) embedding
methods have strong error detection abilities. Yet, existing KG embedding
methods may not be directly applicable to a PG due to its distinct
characteristics: (1) PG contains rich textual signals, which necessitates a
joint exploration of both text information and graph structure; (2) PG contains
a large number of attribute triples, in which attribute values are represented
by free text. Since free text is too flexible to define entities in KGs, the
traditional way of mapping entities to their embeddings using IDs is no longer
appropriate for attribute value representation; (3) noisy triples in a PG
mislead the embedding learning and significantly hurt the performance of error
detection. To address the aforementioned challenges, we propose an end-to-end
noise-tolerant embedding learning framework, PGE, to jointly leverage both text
information and graph structure in PG to learn embeddings for error detection.
Experimental results on a real-world product graph demonstrate the
effectiveness of the proposed framework compared with state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Sun, 20 Feb 2022 07:16:09 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Cheng",
"Kewei",
""
],
[
"Li",
"Xian",
""
],
[
"Xu",
"Yifan Ethan",
""
],
[
"Dong",
"Xin Luna",
""
],
[
"Sun",
"Yizhou",
""
]
] |
new_dataset
| 0.997481 |
2202.09855
|
Varun Chandola
|
Amol Salunkhe, Dwyer Deighan, Paul DesJardin, Varun Chandola
|
ChemTab: A Physics Guided Chemistry Modeling Framework
| null | null | null | null |
cs.LG cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
Modeling a turbulent combustion system requires modeling both the underlying
chemistry and the turbulent flow. Solving both systems simultaneously is
computationally prohibitive. Instead, given the difference in scales at which
the two sub-systems evolve, the two sub-systems are typically (re)solved
separately. Popular approaches such as the Flamelet Generated Manifolds (FGM)
use a two-step strategy where the governing reaction kinetics are pre-computed
and mapped to a low-dimensional manifold, characterized by a few reaction
progress variables (model reduction) and the manifold is then "looked-up"
during the run-time to estimate the high-dimensional system state by the flow
system. While existing works have focused on these two steps independently, we
show that joint learning of the progress variables and the look-up model, can
yield more accurate results. We propose a deep neural network architecture,
called ChemTab, customized for the joint learning task and experimentally
demonstrate its superiority over existing state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 20 Feb 2022 16:21:13 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Salunkhe",
"Amol",
""
],
[
"Deighan",
"Dwyer",
""
],
[
"DesJardin",
"Paul",
""
],
[
"Chandola",
"Varun",
""
]
] |
new_dataset
| 0.996967 |
2202.09935
|
Alexis E. Block
|
Alexis E. Block and Hasti Seifi and Otmar Hilliges and Roger Gassert
and Katherine J. Kuchenbecker
|
In the Arms of a Robot: Designing Autonomous Hugging Robots with
Intra-Hug Gestures
|
48 pages, 22 figures, 4 supplementary videos
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Hugs are complex affective interactions that often include gestures like
squeezes. We present six new guidelines for designing interactive hugging
robots, which we validate through two studies with our custom robot. To achieve
autonomy, we investigated robot responses to four human intra-hug gestures:
holding, rubbing, patting, and squeezing. Thirty-two users each exchanged and
rated sixteen hugs with an experimenter-controlled HuggieBot 2.0. The
microphone and pressure sensor in the robot's inflated torso collected data
from the subjects' demonstrations, which were used to develop a perceptual
algorithm that classifies user actions with 88\% accuracy. Users enjoyed robot
squeezes regardless of their own performed action, valued variety in the
robot's responses, and appreciated robot-initiated intra-hug gestures. From
average user ratings, we
created a probabilistic behavior algorithm that chooses robot responses in real
time. We implemented improvements to the robot platform to create HuggieBot 3.0
and then validated its gesture perception system and behavior algorithm with
sixteen users. The robot's responses and proactive gestures were greatly
enjoyed. Users found the robot more natural, enjoyable, and intelligent in the
last phase of the experiment than in the first. After the study, they felt more
understood by the robot and thought robots were nicer to hug.
|
[
{
"version": "v1",
"created": "Sun, 20 Feb 2022 23:47:21 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Block",
"Alexis E.",
""
],
[
"Seifi",
"Hasti",
""
],
[
"Hilliges",
"Otmar",
""
],
[
"Gassert",
"Roger",
""
],
[
"Kuchenbecker",
"Katherine J.",
""
]
] |
new_dataset
| 0.998311 |
2202.09977
|
Ke Sun
|
Ke Sun, Stephen Chaves, Paul Martin, Vijay Kumar
|
RTGNN: A Novel Approach to Model Stochastic Traffic Dynamics
|
Accepted by ICRA 2022
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Modeling stochastic traffic dynamics is critical to developing self-driving
cars. Because it is difficult to develop first-principles models of cars
driven by humans, there is great potential for using data-driven approaches to
develop traffic dynamics models. While there is extensive literature on
this subject, previous works mainly address the prediction accuracy of
data-driven models. Moreover, it is often difficult to apply these models to
common planning frameworks since they fail to meet the assumptions therein. In
this work, we propose a new stochastic traffic model, Recurrent Traffic Graph
Neural Network (RTGNN), by enforcing additional structures on the model so that
the proposed model can be seamlessly integrated with existing motion planning
algorithms. RTGNN is a Markovian model and is able to infer future traffic
states conditioned on the motion of the ego vehicle. Specifically, RTGNN uses a
definition of the traffic state that includes the state of all players in a
local region and is therefore able to make joint predictions for all agents of
interest. Meanwhile, we explicitly model the hidden states of agents,
"intentions," as part of the traffic state to reflect the inherent partial
observability of traffic dynamics. The above-mentioned properties are critical
for integrating RTGNN with motion planning algorithms coupling prediction and
decision making. Despite the additional structures, we show that RTGNN is able
to achieve state-of-the-art accuracy through comparisons with other similar
works.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 03:55:00 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Sun",
"Ke",
""
],
[
"Chaves",
"Stephen",
""
],
[
"Martin",
"Paul",
""
],
[
"Kumar",
"Vijay",
""
]
] |
new_dataset
| 0.955995 |
2202.10025
|
Yong Lai
|
Yong Lai, Kuldeep S. Meel, Roland H. C. Yap
|
CCDD: A Tractable Representation for Model Counting and Uniform Sampling
| null | null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge compilation concerns the compilation of representation
languages into target languages supporting a wide range of tractable operations
arising from diverse areas of computer science. Tractable target compilation
languages are usually achieved by restrictions on the internal nodes of the
NNF. In this paper, we propose a new representation language CCDD, which
introduces new restrictions on conjunction nodes to capture equivalent
literals. We show that CCDD supports two key queries, model counting and
uniform sampling, in polytime. We present algorithms and a compiler to compile
propositional formulas expressed in CNF into CCDD. Experiments over a large set
of benchmarks show that our compilation times are better with smaller
representation than state-of-the-art Decision-DNNF, SDD and OBDD[AND] compilers.
We apply our techniques to model counting and uniform sampling, and develop a
model counter and a uniform sampler for CNF. Our empirical evaluation demonstrates the
following significant improvements: our model counter can solve 885 instances
while the prior state of the art solved only 843 instances, representing an
improvement of 43 instances; and our uniform sampler can solve 780 instances
while the prior state of the art solved only 648 instances, representing an
improvement of 132 instances.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 07:44:44 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Lai",
"Yong",
""
],
[
"Meel",
"Kuldeep S.",
""
],
[
"Yap",
"Roland H. C.",
""
]
] |
new_dataset
| 0.97549 |
2202.10057
|
Alessandro Sestini
|
Alessandro Sestini, Linus Gissl\'en, Joakim Bergdahl, Konrad Tollmar
and Andrew D. Bagdanov
|
CCPT: Automatic Gameplay Testing and Validation with
Curiosity-Conditioned Proximal Trajectories
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel deep reinforcement learning algorithm to perform
automatic analysis and detection of gameplay issues in complex 3D navigation
environments. The Curiosity-Conditioned Proximal Trajectories (CCPT) method
combines curiosity and imitation learning to train agents to methodically
explore in the proximity of known trajectories derived from expert
demonstrations. We show how CCPT can explore complex environments, discover
gameplay issues and design oversights in the process, and recognize and
highlight them directly to game designers. We further demonstrate the
effectiveness of the algorithm in a novel 3D navigation environment which
reflects the complexity of modern AAA video games. Our results show a higher
level of coverage and bug discovery than baseline methods, and hence can
provide a valuable tool for game designers to identify issues in game design
automatically.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 09:08:33 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Sestini",
"Alessandro",
""
],
[
"Gisslén",
"Linus",
""
],
[
"Bergdahl",
"Joakim",
""
],
[
"Tollmar",
"Konrad",
""
],
[
"Bagdanov",
"Andrew D.",
""
]
] |
new_dataset
| 0.958722 |
2202.10221
|
Fl\'avio Ca\c{c}\~ao
|
Fl\'avio Nakasato Ca\c{c}\~ao, Anna Helena Reali Costa, Natalie
Unterstell, Liuca Yonaha, Taciana Stec and F\'abio Ishisaki
|
Tracking environmental policy changes in the Brazilian Federal Official
Gazette
|
Accepted at the 15th International Conference on the Computational
Processing of Portuguese (PROPOR 2022)
| null | null | null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Even though most of its energy generation comes from renewable sources,
Brazil is one of the largest emitters of greenhouse gases in the world, due to
intense farming and deforestation of biomes such as the Amazon Rainforest,
whose preservation is essential for compliance with the Paris Agreement. Still,
regardless of lobbies or prevailing political orientation, all government legal
actions are published daily in the Brazilian Federal Official Gazette (BFOG, or
"Di\'ario Oficial da Uni\~ao" in Portuguese). However, with hundreds of decrees
issued every day by the authorities, it is absolutely burdensome to manually
analyze all these processes and find out which ones can pose serious
environmental hazards. In this paper, we present a strategy to compose
automated techniques and domain expert knowledge to process all the data from
the BFOG. We also provide the Government Actions Tracker, a highly curated
dataset, in Portuguese, annotated by domain experts, on federal government acts
about the Brazilian environmental policies. Finally, we build and compared four
different NLP models on the classfication task in this dataset. Our best model
achieved a F1-score of $0.714 \pm 0.031$. In the future, this system should
serve to scale up the high-quality tracking of all oficial documents with a
minimum of human supervision and contribute to increasing society's awareness
of government actions.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 21:06:13 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Cação",
"Flávio Nakasato",
""
],
[
"Costa",
"Anna Helena Reali",
""
],
[
"Unterstell",
"Natalie",
""
],
[
"Yonaha",
"Liuca",
""
],
[
"Stec",
"Taciana",
""
],
[
"Ishisaki",
"Fábio",
""
]
] |
new_dataset
| 0.997655 |
2202.10228
|
Shahar Kvatinsky Prof.
|
Wei Wang, Loai Danial, Eric Herbelin, Barak Hoffer, Batel Oved,
Tzofnat Greenberg-Toledo, Evgeny Pikhay, Yakov Roizin and Shahar Kvatinsky
|
Physical based compact model of Y-Flash memristor for neuromorphic
computation
| null | null |
10.1063/5.0069116
| null |
cs.ET physics.app-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Y-Flash memristors utilize the mature technology of single polysilicon
floating gate non-volatile memories (NVM). They can be operated in a two-terminal
configuration similar to the other emerging memristive devices, i.e., resistive
random-access memory (RRAM), phase-change memory (PCM), etc. Fabricated in
production complementary metal-oxide-semiconductor (CMOS) technology, Y-Flash
memristors allow excellent reproducibility reflected in high neuromorphic
product yields. Working in the subthreshold region, the device can be
programmed to a large number of fine-tuned intermediate states in an analog
fashion and allows low readout currents (1 nA to 5 $\mu$A). However, currently,
there are no accurate models to describe the dynamic switching in this type of
memristive device and account for multiple operational configurations. In this
paper, we provide a physical-based compact model that describes Y-Flash
memristor performance both in DC and AC regimes, and consistently describes the
dynamic program and erase operations. The model is integrated into the
commercial circuit design tools and is ready to be used in applications related
to neuromorphic computation.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 08:28:02 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Wang",
"Wei",
""
],
[
"Danial",
"Loai",
""
],
[
"Herbelin",
"Eric",
""
],
[
"Hoffer",
"Barak",
""
],
[
"Oved",
"Batel",
""
],
[
"Greenberg-Toledo",
"Tzofnat",
""
],
[
"Pikhay",
"Evgeny",
""
],
[
"Roizin",
"Yakov",
""
],
[
"Kvatinsky",
"Shahar",
""
]
] |
new_dataset
| 0.98426 |
2202.10277
|
Song Ren Wang
|
Song-Ren Wang, Hong-Yang Shih, Zheng-Yi Shen, and Wen-Kai Tai
|
End-to-End High Accuracy License Plate Recognition Based on Depthwise
Separable Convolution Networks
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic license plate recognition plays a crucial role in modern
transportation systems such as for traffic monitoring and vehicle violation
detection. In real-world scenarios, license plate recognition still faces many
challenges and is impaired by unpredictable interference such as weather or
lighting conditions. Many machine learning based ALPR solutions have been
proposed to solve such challenges in recent years. However, most are not
convincing, either because their results are evaluated on small or simple
datasets that lack diverse surroundings, or because they require powerful
hardware to achieve a reasonable frame rate in real-world applications.
In this paper, we propose a novel segmentation-free framework for license plate
recognition and introduce NP-ALPR, a diverse and challenging dataset which
resembles real-world scenarios. The proposed network model consists of the
latest deep learning methods and state-of-the-art ideas, and benefits from a
novel network architecture. It achieves higher accuracy with lower
computational requirements than previous works. We evaluate the effectiveness
of the proposed method on three different datasets and show a recognition
accuracy of over 99% and over 70 fps, demonstrating that our method is not only
robust but also computationally efficient.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 14:45:03 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Wang",
"Song-Ren",
""
],
[
"Shih",
"Hong-Yang",
""
],
[
"Shen",
"Zheng-Yi",
""
],
[
"Tai",
"Wen-Kai",
""
]
] |
new_dataset
| 0.999656 |
2202.10297
|
Troels Henriksen
|
Robert Schenck, Ola R{\o}nning, Troels Henriksen, Cosmin E. Oancea
|
AD for an Array Language with Nested Parallelism
| null | null | null | null |
cs.PL cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a technique for applying (forward and) reverse-mode automatic
differentiation (AD) on a non-recursive second-order functional array language
that supports nested parallelism and is primarily aimed at efficient GPU
execution. The key idea is to eliminate the need for a "tape" by relying on
redundant execution to bring into each new scope all program variables that may
be needed by the differentiated code. Efficient execution is enabled by the
observation that perfectly-nested scopes do not introduce re-execution, and
such perfect nests are produced by known compiler transformations, e.g.,
flattening. Our technique differentiates loops and bulk-parallel operators,
such as map, reduce, histogram, scan, scatter, by specific rewrite rules, and
aggressively optimizes the resulting nested-parallel code. We report an
experimental evaluation that compares with established AD solutions and
demonstrates competitive performance on nine common benchmarks from recent
applied AD literature.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 15:25:18 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Schenck",
"Robert",
""
],
[
"Rønning",
"Ola",
""
],
[
"Henriksen",
"Troels",
""
],
[
"Oancea",
"Cosmin E.",
""
]
] |
new_dataset
| 0.952 |
2202.10318
|
Leonardo Bonati
|
Leonardo Bonati, Michele Polese, Salvatore D'Oro, Stefano Basagni,
Tommaso Melodia
|
OpenRAN Gym: An Open Toolbox for Data Collection and Experimentation
with AI in O-RAN
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open Radio Access Network (RAN) architectures will enable interoperability,
openness, and programmatic data-driven control in next generation cellular
networks. However, developing scalable and efficient data-driven algorithms
that can generalize across diverse deployments and optimize RAN performance is
a complex feat, largely unaddressed as of today. Specifically, the ability to
design efficient data-driven algorithms for network control and inference
requires at a minimum (i) access to large, rich, and heterogeneous datasets;
(ii) testing at scale in controlled but realistic environments, and (iii)
software pipelines to automate data collection and experimentation. To
facilitate these tasks, in this paper we propose OpenRAN Gym, a practical,
open, experimental toolbox that provides end-to-end design, data collection,
and testing workflows for intelligent control in next generation Open RAN
systems. OpenRAN Gym builds on software frameworks for the collection of large
datasets and RAN control, and on a lightweight O-RAN environment for
experimental wireless platforms. We first provide an overview of OpenRAN Gym
and then describe how it can be used to collect data, to design and train
artificial intelligence and machine learning-based O-RAN applications (xApps),
and to test xApps on a softwarized RAN. Then, we provide an example of two
xApps designed with OpenRAN Gym and used to control a large-scale network with
7 base stations and 42 users deployed on the Colosseum testbed. OpenRAN Gym and
its software components are open source and publicly-available to the research
community.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 15:42:37 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Bonati",
"Leonardo",
""
],
[
"Polese",
"Michele",
""
],
[
"D'Oro",
"Salvatore",
""
],
[
"Basagni",
"Stefano",
""
],
[
"Melodia",
"Tommaso",
""
]
] |
new_dataset
| 0.994619 |
2202.10354
|
Joaquin Garcia-Alfaro
|
Michel Barbeau and Joaquin Garcia-Alfaro
|
Cyber-Physical Defense in the Quantum Era
|
14 pages, 7 figures, 1 table, 4 boxes
|
Scientific Reports, Nature Publishing Group, 12(1):1905, February
2022
|
10.1038/s41598-022-05690-1
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Networked-Control Systems (NCSs), a type of cyber-physical systems, consist
of tightly integrated computing, communication and control technologies. While
being very flexible environments, they are vulnerable to computing and
networking attacks. Recent NCS hacking incidents have had major impact. They call
for more research on cyber-physical security. Fears about the use of quantum
computing to break current cryptosystems make matters worse. While the quantum
threat motivated the creation of new disciplines to handle the issue, such as
post-quantum cryptography, other fields have overlooked the existence of
quantum-enabled adversaries. This is the case of cyber-physical defense
research, a distinct but complementary discipline to cyber-physical protection.
Cyber-physical defense refers to the capability to detect and react in response
to cyber-physical attacks. Concretely, it involves the integration of
mechanisms to identify adverse events and prepare response plans, during and
after incidents occur. In this paper, we make the assumption that the
eventually available quantum computer will provide an advantage to adversaries
against defenders, unless they also adopt this technology. We envision the
necessity for a paradigm shift, where an increase of adversarial resources
because of quantum supremacy does not translate into higher likelihood of
disruptions. Consistently with current system design practices in other areas,
such as the use of artificial intelligence for the reinforcement of attack
detection tools, we outline a vision for next generation cyber-physical defense
layers leveraging ideas from quantum computing and machine learning. Through an
example, we show that defenders of NCSs can learn and improve their strategies
to anticipate and recover from attacks.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 16:52:50 GMT"
}
] | 2022-02-22T00:00:00 |
[
[
"Barbeau",
"Michel",
""
],
[
"Garcia-Alfaro",
"Joaquin",
""
]
] |
new_dataset
| 0.998435 |
1810.03772
|
Denis Krotov
|
Denis S. Krotov (Sobolev Institute of Mathematics, Novosibirsk,
Russia)
|
The existence of perfect codes in Doob graphs
|
5 IEEE pages. V.2: accepted version; the introduction has been
extended by a mini-survey
|
IEEE Trans. Inf. Theory 66(3) 2020, 1423-1427
|
10.1109/TIT.2019.2946612
| null |
cs.IT cs.DM math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We solve the problem of existence of perfect codes in the Doob graph. It is
shown that 1-perfect codes in the Doob graph D(m,n) exist if and only if
6m+3n+1 is a power of 2; that is, if the size of a 1-ball divides the number of
vertices. Keywords: perfect codes, distance-regular graphs, Doob graphs,
Eisenstein-Jacobi integers.
|
[
{
"version": "v1",
"created": "Tue, 9 Oct 2018 02:09:13 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Feb 2022 22:40:19 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Krotov",
"Denis S.",
"",
"Sobolev Institute of Mathematics, Novosibirsk,\n Russia"
]
] |
new_dataset
| 0.9996 |
2010.11887
|
Matthijs V\'ak\'ar
|
Maria I. Gorinova, Andrew D. Gordon, Charles Sutton, Matthijs
V\'ak\'ar
|
Conditional independence by typing
| null |
ACM Transactions on Programming Languages and Systems, Volume 44,
Issue 1, March 2022, Article No 4, pp 1-54
|
10.1145/3490421
| null |
cs.PL cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A central goal of probabilistic programming languages (PPLs) is to separate
modelling from inference. However, this goal is hard to achieve in practice.
Users are often forced to re-write their models in order to improve efficiency
of inference or meet restrictions imposed by the PPL. Conditional independence
(CI) relationships among parameters are a crucial aspect of probabilistic
models that capture a qualitative summary of the specified model and can
facilitate more efficient inference. We present an information flow type system
for probabilistic programming that captures conditional independence (CI)
relationships, and show that, for a well-typed program in our system, the
distribution it implements is guaranteed to have certain CI-relationships.
Further, by using type inference, we can statically deduce which CI-properties
are present in a specified model. As a practical application, we consider the
problem of how to perform inference on models with mixed discrete and
continuous parameters. Inference on such models is challenging in many existing
PPLs, but can be improved through a workaround, where the discrete parameters
are used implicitly, at the expense of manual model re-writing. We present a
source-to-source semantics-preserving transformation, which uses our CI-type
system to automate this workaround by eliminating the discrete parameters from
a probabilistic program. The resulting program can be seen as a hybrid
inference algorithm on the original program, where continuous parameters can be
drawn using efficient gradient-based inference methods, while the discrete
parameters are inferred using variable elimination. We implement our CI-type
system and its example application in SlicStan: a compositional variant of
Stan.
|
[
{
"version": "v1",
"created": "Thu, 22 Oct 2020 17:27:22 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Feb 2022 14:19:22 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Gorinova",
"Maria I.",
""
],
[
"Gordon",
"Andrew D.",
""
],
[
"Sutton",
"Charles",
""
],
[
"Vákár",
"Matthijs",
""
]
] |
new_dataset
| 0.982866 |
2109.14478
|
Hedongliang Liu
|
Hedongliang Liu, Lukas Holzbaur, Nikita Polyanskii, Sven Puchinger,
Antonia Wachter-Zeh
|
Quadratic-Curve-Lifted Reed-Solomon Codes
|
16 pages, 2 figures. A short version is accepted by WCC 2022 (12th
International Workshop on Coding and Cryptography)
| null | null | null |
cs.IT math.AG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lifted codes are a class of evaluation codes attracting more attention due to
good locality and intermediate availability. In this work we introduce and
study quadratic-curve-lifted Reed-Solomon (QC-LRS) codes, where the codeword
symbols whose coordinates are on a quadratic curve form a codeword of a
Reed-Solomon code. We first develop a necessary and sufficient condition on the
monomials which form a basis of the code. Based on this condition, we give upper
and lower bounds on the dimension and show that the asymptotic rate of a QC-LRS
code over $\mathbb{F}_q$ with local redundancy $r$ is
$1-\Theta(q/r)^{-0.2284}$. Moreover, we provide analytical results on the
minimum distance of this class of codes and compare QC-LRS codes with lifted
Reed-Solomon codes by simulations in terms of the local recovery capability
against erasures. For short lengths, QC-LRS codes have better performance in
local recovery for erasures than LRS codes of the same dimension.
|
[
{
"version": "v1",
"created": "Wed, 29 Sep 2021 15:10:07 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Nov 2021 16:17:28 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Feb 2022 14:19:33 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Liu",
"Hedongliang",
""
],
[
"Holzbaur",
"Lukas",
""
],
[
"Polyanskii",
"Nikita",
""
],
[
"Puchinger",
"Sven",
""
],
[
"Wachter-Zeh",
"Antonia",
""
]
] |
new_dataset
| 0.993535 |
2201.06618
|
Mengshu Sun
|
Mengshu Sun, Haoyu Ma, Guoliang Kang, Yifan Jiang, Tianlong Chen,
Xiaolong Ma, Zhangyang Wang, Yanzhi Wang
|
VAQF: Fully Automatic Software-Hardware Co-Design Framework for Low-Bit
Vision Transformer
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transformer architectures with attention mechanisms have obtained success
in Natural Language Processing (NLP), and Vision Transformers (ViTs) have
recently extended the application domains to various vision tasks. While
achieving high performance, ViTs suffer from large model size and high
computation complexity that hinder their deployment on edge devices. To
achieve high throughput on hardware and preserve the model accuracy
simultaneously, we propose VAQF, a framework that builds inference accelerators
on FPGA platforms for quantized ViTs with binary weights and low-precision
activations. Given the model structure and the desired frame rate, VAQF will
automatically output the required quantization precision for activations as
well as the optimized parameter settings of the accelerator that fulfill the
hardware requirements. The implementations are developed with Vivado High-Level
Synthesis (HLS) on the Xilinx ZCU102 FPGA board, and the evaluation results
with the DeiT-base model indicate that a frame rate requirement of 24 frames
per second (FPS) is satisfied with 8-bit activation quantization, and a target
of 30 FPS is met with 6-bit activation quantization. To the best of our
knowledge, this is the first time quantization has been incorporated into ViT
acceleration on FPGAs with the help of a fully automatic framework to guide the
quantization strategy on the software side and the accelerator implementations
on the hardware side given the target frame rate. Very small compilation time
cost is incurred compared with quantization training, and the generated
accelerators show the capability of achieving real-time execution for
state-of-the-art ViT models on FPGAs.
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 20:27:52 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Feb 2022 18:54:59 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Sun",
"Mengshu",
""
],
[
"Ma",
"Haoyu",
""
],
[
"Kang",
"Guoliang",
""
],
[
"Jiang",
"Yifan",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Ma",
"Xiaolong",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Wang",
"Yanzhi",
""
]
] |
new_dataset
| 0.988309 |
2201.12585
|
Meng Ai
|
Meng Ai, Biao Li, Heyang Gong, Qingwei Yu, Shengjie Xue, Yuan Zhang,
Yunzhou Zhang, Peng Jiang
|
LBCF: A Large-Scale Budget-Constrained Causal Forest Algorithm
|
Published in Web Conference 2022 (WWW'2022)
| null |
10.1145/3485447.3512103
| null |
cs.LG cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Offering incentives (e.g., coupons at Amazon, discounts at Uber and video
bonuses at Tiktok) to user is a common strategy used by online platforms to
increase user engagement and platform revenue. Despite its proven
effectiveness, these marketing incentives incur an inevitable cost and might
result in a low ROI (Return on Investment) if not used properly. On the other
hand, different users respond differently to these incentives, for instance,
some users never buy certain products without coupons, while others do anyway.
Thus, how to select the right amount of incentives (i.e. treatment) to each
user under budget constraints is an important research problem with great
practical implications. In this paper, we refer to such a problem as a
budget-constrained treatment selection (BTS) problem.
The challenge is how to efficiently solve BTS problem on a Large-Scale
dataset and achieve improved results over the existing techniques. We propose a
novel tree-based treatment selection technique under budget constraints, called
Large-Scale Budget-Constrained Causal Forest (LBCF) algorithm, which is also an
efficient treatment selection algorithm suitable for modern distributed
computing systems. A novel offline evaluation method is also proposed to
overcome an intrinsic challenge in assessing solutions' performance for BTS
problem in randomized control trials (RCT) data. We deploy our approach in a
real-world scenario on a large-scale video platform, where the platform gives
away bonuses in order to increase users' campaign engagement duration. The
simulation analysis, offline and online experiments all show that our method
outperforms various tree-based state-of-the-art baselines. The proposed
approach is currently serving hundreds of millions of users on the
platform and has achieved substantial improvements over recent
months.
|
[
{
"version": "v1",
"created": "Sat, 29 Jan 2022 13:21:07 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Feb 2022 12:37:07 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Ai",
"Meng",
""
],
[
"Li",
"Biao",
""
],
[
"Gong",
"Heyang",
""
],
[
"Yu",
"Qingwei",
""
],
[
"Xue",
"Shengjie",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Zhang",
"Yunzhou",
""
],
[
"Jiang",
"Peng",
""
]
] |
new_dataset
| 0.999184 |
2202.08933
|
Varun Nalam
|
Chinmay Shah, Aaron Fleming, Varun Nalam and He (Helen) Huang
|
Design of EMG-driven Musculoskeletal Model for Volitional Control of a
Robotic Ankle Prosthesis
|
6 page conference submission pre-print
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Existing robotic lower-limb prostheses use autonomous control to address
cyclic, locomotive tasks, but they are inadequate to operate the prosthesis for
daily activities that are non-cyclic and unpredictable. To address this
challenge, this study aims to design a novel electromyography (EMG)-driven
musculoskeletal model for volitional control of a robotic ankle-foot
prosthesis. This controller places the user in continuous control of the
device, allowing them to freely manipulate the prosthesis behavior at will. The
Hill-type muscle model was used to model a dorsiflexor and a plantarflexor,
which functioned around a virtual ankle joint. The model parameters were
determined by fitting the model prediction to the experimental data collected
from an able-bodied subject. EMG signals recorded from ankle agonist and
antagonist muscle pair were used to activate the virtual muscle models. This
model was validated via offline simulations and real-time prosthesis control.
Additionally, the feasibility of the proposed prosthesis control on assisting
the user's functional tasks was demonstrated. The present control may further
improve the function of robotic prosthesis for supporting versatile activities
in individuals with lower-limb amputations.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 23:22:12 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Shah",
"Chinmay",
""
],
[
"Fleming",
"Aaron",
""
],
[
"Nalam",
"Varun",
""
],
[
"Huang",
"He (Helen)",
""
]
] |
new_dataset
| 0.996011 |
2202.08948
|
Camille Coti
|
Camille Coti and Allen D. Malony
|
SKaMPI-OpenSHMEM: Measuring OpenSHMEM Communication Routines
|
17 pages, OpenSHMEM workshop 2021
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Benchmarking is an important challenge in HPC, in particular for tuning the
basic blocks of the software environment used by applications. The
communication library and distributed run-time environment are among the most
critical ones. In particular, many of the routines provided by communication
libraries can be adjusted using parameters such as buffer sizes and
communication algorithm. As a consequence, being able to measure accurately the
time taken by these routines is crucial in order to optimize them and achieve
the best performance. For instance, the SKaMPI library was designed to measure
the time taken by MPI routines, relying on MPI's two-sided communication model
to measure one-sided and two-sided peer-to-peer communication and collective
routines. In this paper, we discuss the benchmarking challenges specific to
OpenSHMEM's communication model, mainly to avoid inter-call pipelining and
overlapping when measuring the time taken by its routines. We extend SKaMPI for
OpenSHMEM for this purpose and demonstrate measurement algorithms that address
OpenSHMEM's communication model in practice. Scaling experiments are run on the
Summit platform to compare different benchmarking approaches on the SKaMPI
benchmark operations. These show the advantages of our techniques for more
accurate performance characterization.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 00:53:07 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Coti",
"Camille",
""
],
[
"Malony",
"Allen D.",
""
]
] |
new_dataset
| 0.99955 |
2202.09035
|
Shaahin Angizi
|
Shaahin Angizi, Sepehr Tabrizchi, Arman Roohi
|
PISA: A Binary-Weight Processing-In-Sensor Accelerator for Edge Image
Processing
|
11 pages, 16 figures
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work proposes a Processing-In-Sensor Accelerator, namely PISA, as a
flexible, energy-efficient, and high-performance solution for real-time and
smart image processing in AI devices. PISA intrinsically implements a
coarse-grained convolution operation in Binarized-Weight Neural Networks
(BWNNs) leveraging a novel compute-pixel with non-volatile weight storage at
the sensor side. This remarkably reduces the power consumption of data
conversion and transmission to an off-chip processor. The design is completed
with a bit-wise near-sensor processing-in-DRAM computing unit to process the
remaining network layers. Once the object is detected, PISA switches to typical
sensing mode to capture the image for a fine-grained convolution using only the
near-sensor processing unit. Our circuit-to-application co-simulation results
on a BWNN acceleration demonstrate acceptable accuracy on various image
datasets in coarse-grained evaluation compared to baseline BWNN models, while
PISA achieves a frame rate of 1000 and an efficiency of ~1.74 TOp/s/W. Lastly,
PISA substantially reduces data conversion and transmission energy by ~84%
compared to a baseline CPU-sensor design.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 06:02:27 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Angizi",
"Shaahin",
""
],
[
"Tabrizchi",
"Sepehr",
""
],
[
"Roohi",
"Arman",
""
]
] |
new_dataset
| 0.984877 |
2202.09108
|
Yuling Gu
|
Yuling Gu, Nancy F. Chen
|
Large-Scale Acoustic Characterization of Singaporean Children's English
Pronunciation
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this work, we investigate pronunciation differences in English spoken by
Singaporean children in relation to their American and British counterparts by
conducting K-means clustering and archetypal analysis on selected vowel pairs
and approximants. Given that Singapore adopts British English as the
institutional standard due to historical reasons, one might expect Singaporean
children to follow British pronunciation patterns. Indeed, Singaporean and
British children are more similar in their production of syllable-final /r/ --
they do not lower their third formant nearly as much as American children do,
suggesting a lack of rhoticity. Interestingly, Singaporean children also
present similar patterns to American children when it comes to their fronting
of vowels as demonstrated across various vowels including TRAP-BATH split
vowels. Singaporean children's English also demonstrated characteristics that
do not resemble any of the other two populations. We observe that Singaporean
children's vowel height characteristics are distinct from both that of American
and British children. In tense and lax vowel pairs, we also consistently
observe that the distinction is less conspicuous for Singaporean children
compared to the other speaker groups. Further, while American and British
children demonstrate lowering of F1 and F2 formants in transitions into
syllable-final /l/s, a wide gap between F2 and F3 formants, and small
difference between F1 and F2 formants, all of these are not exhibited in
Singaporean children's pronunciation. These findings point towards potential
sociolinguistic implications of how Singapore English might be evolving to
embody more than British pronunciation characteristics. Furthermore, these
findings also suggest that Singapore English could have been influenced by
languages beyond American and British English, potentially due to Singapore's
multilingual environment.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 10:18:09 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Gu",
"Yuling",
""
],
[
"Chen",
"Nancy F.",
""
]
] |
new_dataset
| 0.999059 |
2202.09136
|
Xavier Salleras
|
Xavier Salleras, Sergi Rovira, Vanesa Daza
|
FORT: Right-proving and Attribute-blinding Self-sovereign Authentication
| null | null |
10.3390/math10040617
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, there is a plethora of services that are provided and paid for
online, like video streaming subscriptions, car or parking sharing, purchasing
tickets for events, etc. Online services usually issue tokens directly related
to the identities of their users after signing up into their platform, and the
users need to authenticate using the same credentials each time they are
willing to use the service. Likewise, when using in-person services like going
to a concert, after paying for this service the user usually gets a ticket
which proves that he/she has the right to use that service. In both scenarios,
the main concerns are the centralization of the systems, and that they do not
ensure customers' privacy. The involved Service Providers are Trusted Third
Parties, authorities that offer services and handle private data about users.
In this paper, we design and implement FORT, a decentralized system that allows
customers to prove their right to use specific services (either online or
in-person) without revealing sensitive information. To achieve decentralization
we propose a solution where all the data is handled by a Blockchain. We
describe and uniquely identify users' rights using Non-Fungible Tokens (NFTs),
and possession of these rights is demonstrated by using Zero-Knowledge Proofs,
cryptographic primitives that allow us to guarantee customers' privacy.
Furthermore, we provide benchmarks of FORT which show that our protocol is
efficient enough to be used in devices with low computing resources, like
smartphones or smartwatches, which are the kind of devices commonly used in our
use case scenario.
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 11:37:34 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Salleras",
"Xavier",
""
],
[
"Rovira",
"Sergi",
""
],
[
"Daza",
"Vanesa",
""
]
] |
new_dataset
| 0.998457 |
2202.09210
|
\v{S}imon Schierreich
|
Robert Ganian, Thekla Hamm, Du\v{s}an Knop, \v{S}imon Schierreich,
Ond\v{r}ej Such\'y
|
Hedonic Diversity Games: A Complexity Picture with More than Two Colors
|
Accepted to AAAI '22
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hedonic diversity games are a variant of the classical Hedonic games designed
to better model a variety of questions concerning diversity and fairness.
Previous works mainly targeted the case with two diversity classes (represented
as colors in the model) and provided some initial complexity-theoretic and
existential results concerning Nash and individually stable outcomes. Here, we
design new algorithms accompanied with lower bounds which provide a complete
parameterized-complexity picture for computing Nash and individually stable
outcomes with respect to the most natural parameterizations of the problem.
Crucially, our results hold for general Hedonic diversity games where the
number of colors is not necessarily restricted to two, and show that -- apart
from two trivial cases -- a necessary condition for tractability in this
setting is that the number of colors is bounded by the parameter. Moreover, for
the special case of two colors we resolve an open question asked in previous
work (Boehmer and Elkind, AAAI 2020).
|
[
{
"version": "v1",
"created": "Fri, 18 Feb 2022 14:16:33 GMT"
}
] | 2022-02-21T00:00:00 |
[
[
"Ganian",
"Robert",
""
],
[
"Hamm",
"Thekla",
""
],
[
"Knop",
"Dušan",
""
],
[
"Schierreich",
"Šimon",
""
],
[
"Suchý",
"Ondřej",
""
]
] |
new_dataset
| 0.998194 |
1901.08435
|
Egor Zuev
|
Egor Zuev
|
Mokka: BFT consensus
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mokka is a partially synchronous, strongly consistent BFT consensus algorithm
for reaching consensus about a certain value in open networks. This algorithm
shares some approaches with RAFT, but its nature and design make Mokka a better
solution for DLT (distributed ledgers).
|
[
{
"version": "v1",
"created": "Thu, 24 Jan 2019 14:49:31 GMT"
},
{
"version": "v2",
"created": "Wed, 8 May 2019 16:54:44 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Sep 2019 20:10:22 GMT"
},
{
"version": "v4",
"created": "Mon, 13 Apr 2020 18:31:09 GMT"
},
{
"version": "v5",
"created": "Sun, 8 Aug 2021 06:47:37 GMT"
},
{
"version": "v6",
"created": "Wed, 18 Aug 2021 13:50:11 GMT"
},
{
"version": "v7",
"created": "Thu, 17 Feb 2022 18:36:47 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Zuev",
"Egor",
""
]
] |
new_dataset
| 0.998159 |
1908.11568
|
Aastha Mehta
|
Aastha Mehta, Mohamed Alzayat, Roberta de Viti, Bj\"orn B.
Brandenburg, Peter Druschel, Deepak Garg
|
Pacer: Comprehensive Network Side-Channel Mitigation in the Cloud
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network side channels (NSCs) leak secrets through packet timing and packet
sizes. They are of particular concern in public IaaS Clouds, where any tenant
may be able to colocate and indirectly observe a victim's traffic shape. We
present Pacer, the first system that eliminates NSC leaks in public IaaS Clouds
end-to-end. It builds on the principled technique of shaping guest traffic
outside the guest to make the traffic shape independent of secrets by design.
However, Pacer also addresses important concerns that have not been considered
in prior work -- it prevents internal side-channel leaks from affecting
reshaped traffic, and it respects network flow control, congestion control and
loss recovery signals. Pacer is implemented as a paravirtualizing extension to
the host hypervisor, requiring modest changes to the hypervisor and the guest
kernel, and only optional, minimal changes to applications. We present Pacer's
key abstraction of a cloaked tunnel, describe its design and implementation,
prove the security of important design aspects through a formal model, and show
through an experimental evaluation that Pacer imposes moderate overheads on
bandwidth, client latency, and server throughput, while thwarting attacks based
on state-of-the-art CNN classifiers.
|
[
{
"version": "v1",
"created": "Fri, 30 Aug 2019 07:13:29 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2019 23:48:35 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Sep 2020 01:20:52 GMT"
},
{
"version": "v4",
"created": "Sat, 6 Feb 2021 22:57:25 GMT"
},
{
"version": "v5",
"created": "Thu, 17 Feb 2022 18:49:59 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Mehta",
"Aastha",
""
],
[
"Alzayat",
"Mohamed",
""
],
[
"de Viti",
"Roberta",
""
],
[
"Brandenburg",
"Björn B.",
""
],
[
"Druschel",
"Peter",
""
],
[
"Garg",
"Deepak",
""
]
] |
new_dataset
| 0.998887 |
2009.04938
|
Simon Praetorius
|
Simon Praetorius and Florian Stenger
|
Dune-CurvedGrid -- A Dune module for surface parametrization
|
26 pages
|
Arch. Num. Soft., 2022, 6(1)
|
10.11588/ans.2022.1.75917
| null |
cs.MS cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce and describe an implementation of curved (surface)
geometries within the Dune framework for grid-based discretizations. Therefore,
we employ the abstraction of geometries as local-functions bound to a grid
element, and the abstraction of a grid as connectivity of elements together
with a grid-function that can be localized to the elements to provide element
local parametrizations of the curved surface.
|
[
{
"version": "v1",
"created": "Thu, 10 Sep 2020 15:32:42 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Oct 2020 15:59:27 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Feb 2022 14:46:01 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Praetorius",
"Simon",
""
],
[
"Stenger",
"Florian",
""
]
] |
new_dataset
| 0.993594 |
2101.01925
|
Christina Katsamaki
|
Christina Katsamaki, Fabrice Rouillier, Elias Tsigaridas
|
PTOPO: Computing the Geometry and the Topology of Parametric Curves
| null | null | null | null |
cs.SC
|
http://creativecommons.org/licenses/by/4.0/
|
We consider the problem of computing the topology and describing the geometry
of a parametric curve in $\mathbb{R}^n$. We present an algorithm, PTOPO, that
constructs an abstract graph that is isotopic to the curve in the embedding
space. Our method exploits the benefits of the parametric representation and
does not resort to implicitization.
Most importantly, we perform all computations in the parameter space and not
in the implicit space. When the parametrization involves polynomials of degree
at most $d$ and maximum bitsize of coefficients $\tau$, then the worst case bit
complexity of PTOPO is $
\tilde{\mathcal{O}}_B(nd^6+nd^5\tau+d^4(n^2+n\tau)+d^3(n^2\tau+
n^3)+n^3d^2\tau)$. This bound matches the current record bound
$\tilde{\mathcal{O}}_B(d^6+d^5\tau)$ for the problem of computing the topology
of a plane algebraic curve given in implicit form. For plane and space curves,
if $N = \max\{d, \tau \}$, the complexity of PTOPO becomes
$\tilde{\mathcal{O}}_B(N^6)$, which improves the state-of-the-art result, due
to Alc\'azar and D\'iaz-Toca [CAGD'10], by a factor of $N^{10}$. In the same
time complexity, we obtain a graph whose straight-line embedding is isotopic to
the curve. However, visualizing the curve on top of the abstract graph
construction increases the bound to $\tilde{\mathcal{O}}_B(N^7)$. For curves
of general dimension, we can also distinguish between ordinary and non-ordinary
real singularities and determine their multiplicities in the same expected
complexity of PTOPO by employing the algorithm of Blasco and P\'erez-D\'iaz
[CAGD'19]. We have implemented PTOPO in Maple for the case of plane and space
curves. Our experiments illustrate its practical nature.
|
[
{
"version": "v1",
"created": "Wed, 6 Jan 2021 08:48:25 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Feb 2022 17:26:42 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Katsamaki",
"Christina",
""
],
[
"Rouillier",
"Fabrice",
""
],
[
"Tsigaridas",
"Elias",
""
]
] |
new_dataset
| 0.969554 |
2109.08652
|
Kaiwen Cai
|
Kaiwen Cai, Bing Wang, Chris Xiaoxuan Lu
|
AutoPlace: Robust Place Recognition with Single-chip Automotive Radar
|
Accepted by IEEE Conference on Robotics and Automation (ICRA), 8
pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel place recognition approach to autonomous vehicles
by using low-cost, single-chip automotive radar. Aimed at improving recognition
robustness and fully exploiting the rich information provided by this emerging
automotive radar, our approach follows a principled pipeline that comprises (1)
dynamic points removal from instant Doppler measurement, (2) spatial-temporal
feature embedding on radar point clouds, and (3) retrieved candidates
refinement from Radar Cross Section measurement. Extensive experimental results
on the public nuScenes dataset demonstrate that existing visual/LiDAR/spinning
radar place recognition approaches are less suitable for single-chip automotive
radar. In contrast, our purpose-built approach for automotive radar
consistently outperforms a variety of baseline methods via a comprehensive set
of metrics, providing insights into the efficacy when used in a realistic
system.
|
[
{
"version": "v1",
"created": "Fri, 17 Sep 2021 17:16:09 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Feb 2022 09:09:04 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Cai",
"Kaiwen",
""
],
[
"Wang",
"Bing",
""
],
[
"Lu",
"Chris Xiaoxuan",
""
]
] |
new_dataset
| 0.999669 |
2109.10506
|
A. Feder Cooper
|
A. Feder Cooper, Maria Antoniak, Christopher De Sa, Marilyn Migiel and
David Mimno
|
Tecnologica cosa: Modeling Storyteller Personalities in Boccaccio's
Decameron
|
The 5th Joint SIGHUM Workshop on Computational Linguistics for
Cultural Heritage, Social Sciences, Humanities and Literature (co-located
with EMNLP 2021)
| null |
10.18653/v1/2021.latechclfl-1.17
| null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We explore Boccaccio's Decameron to see how digital humanities tools can be
used for tasks that have limited data in a language no longer in contemporary
use: medieval Italian. We focus our analysis on the question: Do the different
storytellers in the text exhibit distinct personalities? To answer this
question, we curate and release a dataset based on the authoritative edition of
the text. We use supervised classification methods to predict storytellers
based on the stories they tell, confirming the difficulty of the task, and
demonstrate that topic modeling can extract thematic storyteller "profiles."
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 03:42:14 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Cooper",
"A. Feder",
""
],
[
"Antoniak",
"Maria",
""
],
[
"De Sa",
"Christopher",
""
],
[
"Migiel",
"Marilyn",
""
],
[
"Mimno",
"David",
""
]
] |
new_dataset
| 0.998208 |
2112.00209
|
Yuki Okamoto
|
Yuki Okamoto, Shota Horiguchi, Masaaki Yamamoto, Keisuke Imoto, Yohei
Kawaguchi
|
Environmental Sound Extraction Using Onomatopoeic Words
|
Accepted to ICASSP2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An onomatopoeic word, which is a character sequence that phonetically
imitates a sound, is effective in expressing characteristics of sound such as
duration, pitch, and timbre. We propose an environmental-sound-extraction
method using onomatopoeic words to specify the target sound to be extracted. By
this method, we estimate a time-frequency mask from an input mixture
spectrogram and an onomatopoeic word using a U-Net architecture, then extract
the corresponding target sound by masking the spectrogram. Experimental results
indicate that the proposed method can extract only the target sound
corresponding to the onomatopoeic word and performs better than conventional
methods that use sound-event classes to specify the target sound.
|
[
{
"version": "v1",
"created": "Wed, 1 Dec 2021 01:18:06 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Dec 2021 03:55:40 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Feb 2022 10:27:02 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Feb 2022 04:41:59 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Okamoto",
"Yuki",
""
],
[
"Horiguchi",
"Shota",
""
],
[
"Yamamoto",
"Masaaki",
""
],
[
"Imoto",
"Keisuke",
""
],
[
"Kawaguchi",
"Yohei",
""
]
] |
new_dataset
| 0.992051 |
2201.02065
|
Cleison Correia de Amorim
|
Cleison Correia de Amorim and Cleber Zanchettin
|
ASL-Skeleton3D and ASL-Phono: Two Novel Datasets for the American Sign
Language
| null |
The paper is under consideration at Pattern Recognition Letters
(2022) (under the manuscript number PRLETTERS-D-22-00140)
| null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sign language is an essential resource enabling access to communication and
proper socioemotional development for individuals suffering from disabling
hearing loss. As this population is expected to reach 700 million by 2050, the
importance of the language becomes even more essential as it plays a critical
role to ensure the inclusion of such individuals in society. The Sign Language
Recognition field aims to bridge the gap between users and non-users of sign
languages. However, the scarcity in quantity and quality of datasets is one of
the main challenges limiting the exploration of novel approaches that could
lead to significant advancements in this research area. Thus, this paper
contributes by introducing two new datasets for the American Sign Language: the
first is composed of the three-dimensional representation of the signers and,
the second, by an unprecedented linguistics-based representation containing a
set of phonological attributes of the signs.
|
[
{
"version": "v1",
"created": "Thu, 6 Jan 2022 14:10:03 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"de Amorim",
"Cleison Correia",
""
],
[
"Zanchettin",
"Cleber",
""
]
] |
new_dataset
| 0.999623 |
2201.13361
|
Nils Koster
|
Nils Koster, Oliver Grothe and Achim Rettinger
|
Signing the Supermask: Keep, Hide, Invert
|
ICLR 2022 camera ready
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The exponential growth in numbers of parameters of neural networks over the
past years has been accompanied by an increase in performance across several
fields. However, due to their sheer size, the networks not only became
difficult to interpret but also problematic to train and use in real-world
applications, since hardware requirements increased accordingly. Tackling both
issues, we present a novel approach that either drops a neural network's
initial weights or inverts their respective sign. Put simply, a network is
trained by weight selection and inversion without changing their absolute
values. Our contribution extends previous work on masking by additionally
sign-inverting the initial weights and follows the findings of the Lottery
Ticket Hypothesis. Through this extension and adaptations of initialization
methods, we achieve a pruning rate of up to 99%, while still matching or
exceeding the performance of various baseline and previous models. Our approach
has two main advantages. First, and most notable, signed Supermask models
drastically simplify a model's structure, while still performing well on given
tasks. Second, by reducing the neural network to its very foundation, we gain
insights into which weights matter for performance. The code is available on
GitHub.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 17:17:37 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Feb 2022 13:32:27 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Koster",
"Nils",
""
],
[
"Grothe",
"Oliver",
""
],
[
"Rettinger",
"Achim",
""
]
] |
new_dataset
| 0.998625 |
2202.03643
|
Junqiang Li
|
Junqiang Li, Senyi Li, Gang Sun, Ting Chen, and Hongfang Yu
|
SNPSFuzzer: A Fast Greybox Fuzzer for Stateful Network Protocols using
Snapshots
| null | null | null | null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Greybox fuzzing has been widely used in stateless programs and has achieved
great success. However, most state-of-the-art greybox fuzzers generally have
the problems of slow speed and shallow state depth coverage in the process of
fuzzing stateful network protocol programs which are able to remember and store
details of the interactions. The existing greybox fuzzers for network protocol
programs send a series of well-defined prefix sequences of input messages first
and then send mutated messages to test the target state of a stateful network
protocol. The process mentioned above causes a high time cost. In this paper,
we propose SNPSFuzzer, a fast greybox fuzzer for stateful network protocol
using snapshots. SNPSFuzzer dumps the context information when the network
protocol program is under a specific state and restores it when the state needs
to be fuzzed. Furthermore, we design a message chain analysis algorithm to
explore more and deeper network protocol states. Our evaluation shows that,
compared with the state-of-the-art network protocol greybox fuzzer AFLNET,
SNPSFuzzer increases the speed of network protocol fuzzing by 112.0%-168.9% and
improves path coverage by 21.4%-27.5% within 24 hours. Moreover, SNPSFuzzer
exposes a previously unreported vulnerability in program Tinydtls.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 04:53:36 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Feb 2022 03:34:18 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Li",
"Junqiang",
""
],
[
"Li",
"Senyi",
""
],
[
"Sun",
"Gang",
""
],
[
"Chen",
"Ting",
""
],
[
"Yu",
"Hongfang",
""
]
] |
new_dataset
| 0.997578 |
2202.08267
|
Huiyuan Yang
|
Huiyuan Yang, Han Yu, Kusha Sridhar, Thomas Vaessen, Inez Myin-Germeys
and Akane Sano
|
More to Less (M2L): Enhanced Health Recognition in the Wild with Reduced
Modality of Wearable Sensors
|
4 pages, two figures and three tables
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately recognizing health-related conditions from wearable data is
crucial for improved healthcare outcomes. To improve the recognition accuracy,
various approaches have focused on how to effectively fuse information from
multiple sensors. Fusing multiple sensors is a common scenario in many
applications, but may not always be feasible in real-world scenarios. For
example, although combining bio-signals from multiple sensors (i.e., a chest
pad sensor and a wrist wearable sensor) has been proved effective for improved
performance, wearing multiple devices might be impractical in the free-living
context. To solve the challenges, we propose an effective more to less (M2L)
learning framework to improve testing performance with reduced sensors through
leveraging the complementary information of multiple modalities during
training. More specifically, different sensors may carry different but
complementary information, and our model is designed to enforce collaborations
among different modalities, where positive knowledge transfer is encouraged and
negative knowledge transfer is suppressed, so that better representation is
learned for individual modalities. Our experimental results show that our
framework achieves comparable performance when compared with the full
modalities. Our code and results will be available at
https://github.com/compwell-org/More2Less.git.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 18:23:29 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Yang",
"Huiyuan",
""
],
[
"Yu",
"Han",
""
],
[
"Sridhar",
"Kusha",
""
],
[
"Vaessen",
"Thomas",
""
],
[
"Myin-Germeys",
"Inez",
""
],
[
"Sano",
"Akane",
""
]
] |
new_dataset
| 0.989615 |
2202.08320
|
Zhaocheng Zhu
|
Zhaocheng Zhu, Chence Shi, Zuobai Zhang, Shengchao Liu, Minghao Xu,
Xinyu Yuan, Yangtian Zhang, Junkun Chen, Huiyu Cai, Jiarui Lu, Chang Ma,
Runcheng Liu, Louis-Pascal Xhonneux, Meng Qu, Jian Tang
|
TorchDrug: A Powerful and Flexible Machine Learning Platform for Drug
Discovery
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning has huge potential to revolutionize the field of drug
discovery and is attracting increasing attention in recent years. However,
lacking domain knowledge (e.g., which tasks to work on), standard benchmarks
and data preprocessing pipelines are the main obstacles for machine learning
researchers to work in this domain. To facilitate the progress of machine
learning for drug discovery, we develop TorchDrug, a powerful and flexible
machine learning platform for drug discovery built on top of PyTorch. TorchDrug
benchmarks a variety of important tasks in drug discovery, including molecular
property prediction, pretrained molecular representations, de novo molecular
design and optimization, retrosynthesis prediction, and biomedical knowledge
graph reasoning. State-of-the-art techniques based on geometric deep learning
(or graph machine learning), deep generative models, reinforcement learning and
knowledge graph reasoning are implemented for these tasks. TorchDrug features a
hierarchical interface that facilitates customization from both novices and
experts in this domain. Tutorials, benchmark results and documentation are
available at https://torchdrug.ai. Code is released under Apache License 2.0.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 20:24:02 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Zhu",
"Zhaocheng",
""
],
[
"Shi",
"Chence",
""
],
[
"Zhang",
"Zuobai",
""
],
[
"Liu",
"Shengchao",
""
],
[
"Xu",
"Minghao",
""
],
[
"Yuan",
"Xinyu",
""
],
[
"Zhang",
"Yangtian",
""
],
[
"Chen",
"Junkun",
""
],
[
"Cai",
"Huiyu",
""
],
[
"Lu",
"Jiarui",
""
],
[
"Ma",
"Chang",
""
],
[
"Liu",
"Runcheng",
""
],
[
"Xhonneux",
"Louis-Pascal",
""
],
[
"Qu",
"Meng",
""
],
[
"Tang",
"Jian",
""
]
] |
new_dataset
| 0.9981 |
2202.08341
|
Samet Akcay
|
Samet Akcay, Dick Ameln, Ashwin Vaidya, Barath Lakshmanan, Nilesh
Ahuja, Utku Genc
|
Anomalib: A Deep Learning Library for Anomaly Detection
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces anomalib, a novel library for unsupervised anomaly
detection and localization. With reproducibility and modularity in mind, this
open-source library provides algorithms from the literature and a set of tools
to design custom anomaly detection algorithms via a plug-and-play approach.
Anomalib comprises state-of-the-art anomaly detection algorithms that achieve
top performance on the benchmarks and that can be used off-the-shelf. In
addition, the library provides components to design custom algorithms that
could be tailored towards specific needs. Additional tools, including
experiment trackers, visualizers, and hyper-parameter optimizers, make it
simple to design and implement anomaly detection models. The library also
supports OpenVINO model optimization and quantization for real-time deployment.
Overall, anomalib is an extensive library for the design, implementation, and
deployment of unsupervised anomaly detection models from data to the edge.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 21:15:59 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Akcay",
"Samet",
""
],
[
"Ameln",
"Dick",
""
],
[
"Vaidya",
"Ashwin",
""
],
[
"Lakshmanan",
"Barath",
""
],
[
"Ahuja",
"Nilesh",
""
],
[
"Genc",
"Utku",
""
]
] |
new_dataset
| 0.99022 |
2202.08418
|
Jinseok Bae
|
Jinseok Bae, Hojun Jang, Cheol-Hui Min, Hyungun Choi, Young Min Kim
|
Neural Marionette: Unsupervised Learning of Motion Skeleton and Latent
Dynamics from Volumetric Video
|
7 pages (main), 10 pages (appendix) and to be appeared in AAAI2022
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Neural Marionette, an unsupervised approach that discovers the
skeletal structure from a dynamic sequence and learns to generate diverse
motions that are consistent with the observed motion dynamics. Given a video
stream of point cloud observation of an articulated body under arbitrary
motion, our approach discovers the unknown low-dimensional skeletal
relationship that can effectively represent the movement. Then the discovered
structure is utilized to encode the motion priors of dynamic sequences in a
latent structure, which can be decoded to the relative joint rotations to
represent the full skeletal motion. Our approach works without any prior
knowledge of the underlying motion or skeletal structure, and we demonstrate
that the discovered structure is even comparable to the hand-labeled ground
truth skeleton in representing a 4D sequence of motion. The skeletal structure
embeds the general semantics of possible motion space that can generate motions
for diverse scenarios. We verify that the learned motion prior is generalizable
to the multi-modal sequence generation, interpolation of two poses, and motion
retargeting to a different skeletal structure.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 02:44:16 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Bae",
"Jinseok",
""
],
[
"Jang",
"Hojun",
""
],
[
"Min",
"Cheol-Hui",
""
],
[
"Choi",
"Hyungun",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.990772 |
2202.08432
|
Liu Liu
|
Liu Liu, Wenqiang Xu, Haoyuan Fu, Sucheng Qian, Yang Han, Cewu Lu
|
AKB-48: A Real-World Articulated Object Knowledge Base
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human life is populated with articulated objects. A comprehensive
understanding of articulated objects, namely appearance, structure, physics
property, and semantics, will benefit many research communities. Current
articulated object understanding solutions are usually based on synthetic
object datasets with CAD models lacking physics properties, which prevents
satisfactory generalization from simulation to real-world applications in
visual and robotics tasks. To bridge the gap, we present AKB-48: a large-scale
Articulated object Knowledge Base which consists of 2,037 real-world 3D
articulated object models of 48 categories. Each object is described by a
knowledge graph ArtiKG. To build the AKB-48, we present a fast articulation
knowledge modeling (FArM) pipeline, which can fulfill the ArtiKG for an
articulated object within 10-15 minutes, and largely reduce the cost for object
modeling in the real world. Using our dataset, we propose AKBNet, a novel
integral pipeline for Category-level Visual Articulation Manipulation (C-VAM)
task, in which we benchmark three sub-tasks, namely pose estimation, object
reconstruction and manipulation. Dataset, codes, and models will be publicly
available at https://liuliu66.github.io/articulationobjects/.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 03:24:07 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Liu",
"Liu",
""
],
[
"Xu",
"Wenqiang",
""
],
[
"Fu",
"Haoyuan",
""
],
[
"Qian",
"Sucheng",
""
],
[
"Han",
"Yang",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.999888 |
2202.08450
|
Xinyang Geng
|
Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine
|
Design-Bench: Benchmarks for Data-Driven Offline Model-Based
Optimization
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Black-box model-based optimization (MBO) problems, where the goal is to find
a design input that maximizes an unknown objective function, are ubiquitous in
a wide range of domains, such as the design of proteins, DNA sequences,
aircraft, and robots. Solving model-based optimization problems typically
requires actively querying the unknown objective function on design proposals,
which means physically building the candidate molecule, aircraft, or robot,
testing it, and storing the result. This process can be expensive and time
consuming, and one might instead prefer to optimize for the best design using
only the data one already has. This setting -- called offline MBO -- poses
substantial and different algorithmic challenges than more commonly studied
online techniques. A number of recent works have demonstrated success with
offline MBO for high-dimensional optimization problems using high-capacity deep
neural networks. However, the lack of standardized benchmarks in this emerging
field is making progress difficult to track. To address this, we present
Design-Bench, a benchmark for offline MBO with a unified evaluation protocol
and reference implementations of recent methods. Our benchmark includes a suite
of diverse and realistic tasks derived from real-world optimization problems in
biology, materials science, and robotics that present distinct challenges for
offline MBO. Our benchmark and reference implementations are released at
github.com/rail-berkeley/design-bench and
github.com/rail-berkeley/design-baselines.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 05:33:27 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Trabucco",
"Brandon",
""
],
[
"Geng",
"Xinyang",
""
],
[
"Kumar",
"Aviral",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.999342 |
2202.08453
|
Zixu Zhao
|
Zixu Zhao, Yueming Jin, Pheng-Ann Heng
|
TraSeTR: Track-to-Segment Transformer with Contrastive Query for
Instance-level Instrument Segmentation in Robotic Surgery
|
Accepted by ICRA 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Surgical instrument segmentation -- in general a pixel classification task --
is fundamentally crucial for promoting cognitive intelligence in robot-assisted
surgery (RAS). However, previous methods are struggling with discriminating
instrument types and instances. To address the above issues, we explore a mask
classification paradigm that produces per-segment predictions. We propose
TraSeTR, a novel Track-to-Segment Transformer that wisely exploits tracking
cues to assist surgical instrument segmentation. TraSeTR jointly reasons about
the instrument type, location, and identity with instance-level predictions,
i.e., a set of class-bbox-mask pairs, by decoding query embeddings.
Specifically, we introduce the prior query, which is encoded with previous temporal
knowledge, to transfer tracking signals to current instances via identity
matching. A contrastive query learning strategy is further applied to reshape
the query feature space, which greatly alleviates the tracking difficulty
caused by large temporal variations. The effectiveness of our method is
demonstrated with state-of-the-art instrument type segmentation results on
three public datasets, including two RAS benchmarks from EndoVis Challenges and
one cataract surgery dataset CaDIS.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 05:52:18 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Zhao",
"Zixu",
""
],
[
"Jin",
"Yueming",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
new_dataset
| 0.998759 |
2202.08487
|
Jiashi Zhang
|
Jiashi Zhang, Chengyang Zhang, Jun Wu, Jianxiang Jin, Qiuguo Zhu
|
LiDAR-Inertial 3D SLAM with Plane Constraint for Multi-story Building
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ubiquitous planes and structural consistency are the most apparent
features of indoor multi-story buildings compared with outdoor environments. In
this paper, we propose a tightly coupled LiDAR-Inertial 3D SLAM framework with
plane features for the multi-story building. The framework we proposed is
mainly composed of three parts: tightly coupled LiDAR-Inertial odometry,
extraction of representative planes of the structure, and factor graph
optimization. By building a local map and inertial measurement unit (IMU)
pre-integration, we get LiDAR scan-to-local-map matching and IMU measurements,
respectively. We minimize the joint cost function to obtain the LiDAR-Inertial
odometry. Once a new keyframe is added to the graph, all the planes
of this keyframe that can represent structural features are extracted to find
the constraint between different poses and stories. A keyframe-based factor
graph is conducted with the constraint of planes, and LiDAR-Inertial odometry
for keyframe poses refinement. The experimental results show that our algorithm
has outstanding performance in accuracy compared with the state-of-the-art
algorithms.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 07:42:25 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Zhang",
"Jiashi",
""
],
[
"Zhang",
"Chengyang",
""
],
[
"Wu",
"Jun",
""
],
[
"Jin",
"Jianxiang",
""
],
[
"Zhu",
"Qiuguo",
""
]
] |
new_dataset
| 0.977565 |
2202.08517
|
Haihan Tang
|
Haihan Tang, Yi Wang, Lap-Pui Chau
|
TAFNet: A Three-Stream Adaptive Fusion Network for RGB-T Crowd Counting
|
This work has been accepted by IEEE International Symposium on
Circuits and Systems (ISCAS) 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a three-stream adaptive fusion network named
TAFNet, which uses paired RGB and thermal images for crowd counting.
Specifically, TAFNet is divided into one main stream and two auxiliary streams.
We combine a pair of RGB and thermal images to constitute the input of the main
stream. The two auxiliary streams respectively exploit the RGB image and thermal image
to extract modality-specific features. Besides, we propose an Information
Improvement Module (IIM) to fuse the modality-specific features into the main
stream adaptively. Experiment results on the RGBT-CC dataset show that our method
achieves more than 20% improvement in mean average error and root mean squared
error compared with the state-of-the-art method. The source code will be publicly
available at https://github.com/TANGHAIHAN/TAFNet.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 08:43:10 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Tang",
"Haihan",
""
],
[
"Wang",
"Yi",
""
],
[
"Chau",
"Lap-Pui",
""
]
] |
new_dataset
| 0.98614 |
2202.08533
|
Boli Chen
|
Boli Chen, Guangwei Xu, Xiaobin Wang, Pengjun Xie, Meishan Zhang, Fei
Huang
|
AISHELL-NER: Named Entity Recognition from Chinese Speech
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Named Entity Recognition (NER) from speech is one of the Spoken Language
Understanding (SLU) tasks, aiming to extract semantic information from the
speech signal. NER from speech is usually made through a two-step pipeline that
consists of (1) processing the audio using an Automatic Speech Recognition
(ASR) system and (2) applying an NER tagger to the ASR outputs. Recent works
have shown the capability of the End-to-End (E2E) approach for NER from English
and French speech, which is essentially entity-aware ASR. However, due to the
many homophones and polyphones that exist in Chinese, NER from Chinese speech
is effectively a more challenging task. In this paper, we introduce a new
dataset, AISHELL-NER, for NER from Chinese speech. Extensive experiments are
conducted to explore the performance of several state-of-the-art methods. The
results demonstrate that the performance could be improved by combining
entity-aware ASR and pretrained NER tagger, which can be easily applied to the
modern SLU pipeline. The dataset is publicly available at
github.com/Alibaba-NLP/AISHELL-NER.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 09:18:48 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Chen",
"Boli",
""
],
[
"Xu",
"Guangwei",
""
],
[
"Wang",
"Xiaobin",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Zhang",
"Meishan",
""
],
[
"Huang",
"Fei",
""
]
] |
new_dataset
| 0.977751 |
2202.08774
|
Ozan Alp Topal
|
Ozan Alp Topal, Mustafa Ozger, Dominic Schupke, Emil Bj\"ornson, Cicek
Cavdar
|
mmWave Communications for Indoor Dense Spaces: Ray-Tracing Based Channel
Characterization and Performance Comparison
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, the indoor dense space (IDS) channel at 28 GHz is
characterized through extensive Ray-Tracing (RT) simulations. We consider IDS
as a specific type of indoor environment with confined geometry and packed with
humans, such as aircraft cabins and train wagons. Based on RT simulations, we
characterize path loss, shadow fading, root-mean-square delay spread, Rician
K-factor, azimuth/elevation angular spread of arrival/departure considering
different RT simulation scenarios of the fuselage geometry, material, and human
presence. While the large-scale fading parameters are similar to the
state-of-the-art channel models, the small-scale fading parameters demonstrate
richer multipath scattering in IDS, resulting in poorer bit error rate
performance in comparison to the 3GPP indoor channel model.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 17:21:17 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Topal",
"Ozan Alp",
""
],
[
"Ozger",
"Mustafa",
""
],
[
"Schupke",
"Dominic",
""
],
[
"Björnson",
"Emil",
""
],
[
"Cavdar",
"Cicek",
""
]
] |
new_dataset
| 0.999153 |
2202.08791
|
Yiran Zhong
|
Zhen Qin, Weixuan Sun, Hui Deng, Dongxu Li, Yunshen Wei, Baohong Lv,
Junjie Yan, Lingpeng Kong, Yiran Zhong
|
cosFormer: Rethinking Softmax in Attention
|
Accepted to ICLR2022. Yiran Zhong is the corresponding author. Zhen
Qin, Weixuan Sun, Hui Deng contributed equally to this work
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer has shown great successes in natural language processing,
computer vision, and audio processing. As one of its core components, the
softmax attention helps to capture long-range dependencies yet prohibits its
scale-up due to the quadratic space and time complexity to the sequence length.
Kernel methods are often adopted to reduce the complexity by approximating the
softmax operator. Nevertheless, due to the approximation errors, their
performances vary in different tasks/corpus and suffer crucial performance
drops when compared with the vanilla softmax attention. In this paper, we
propose a linear transformer called cosFormer that can achieve comparable or
better accuracy than the vanilla transformer in both causal and cross attentions.
cosFormer is based on two key properties of softmax attention: i).
non-negativeness of the attention matrix; ii). a non-linear re-weighting scheme
that can concentrate the distribution of the attention matrix. As its linear
substitute, cosFormer fulfills these properties with a linear operator and a
cosine-based distance re-weighting mechanism. Extensive experiments on language
modeling and text understanding tasks demonstrate the effectiveness of our
method. We further examine our method on long sequences and achieve
state-of-the-art performance on the Long-Range Arena benchmark. The source code
is available at https://github.com/OpenNLPLab/cosFormer.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 17:53:48 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Qin",
"Zhen",
""
],
[
"Sun",
"Weixuan",
""
],
[
"Deng",
"Hui",
""
],
[
"Li",
"Dongxu",
""
],
[
"Wei",
"Yunshen",
""
],
[
"Lv",
"Baohong",
""
],
[
"Yan",
"Junjie",
""
],
[
"Kong",
"Lingpeng",
""
],
[
"Zhong",
"Yiran",
""
]
] |
new_dataset
| 0.986817 |
2202.08805
|
Hussam Habib
|
Hussam Habib, Padmini Srinivasan and Rishab Nithyanand
|
Making a Radical Misogynist: How online social engagement with the
Manosphere influences traits of radicalization
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The algorithms and the interactions facilitated by online platforms have been
used by radical groups to recruit vulnerable individuals to their cause. This
has resulted in the sharp growth of violent events and deteriorating online
discourse. The Manosphere, a collection of radical anti-feminist communities,
is one such group which has attracted attention due to their rapid growth and
increasingly violent real world outbursts. In this paper, we examine the social
engagements between Reddit users who have participated in feminist discourse
and the Manosphere communities on Reddit to understand the process of
development of traits associated with the adoption of extremist ideologies. By
using existing research on the psychology of radicalization we track how
specific types of social engagement with the Manosphere influence the
development of traits associated with radicalization. Our findings show that:
(1) participation, even by the simple act of joining the Manosphere, has a
significant influence on the language and outlook traits of a user, (2)
Manosphere elites are extremely effective propagators of radical traits and
cause their increase even outside the Manosphere, and (3) community perception
can heavily influence a user's behavior. Finally, we examine how our findings
can help draft community and platform moderation policies to help mitigate the
problem of online radicalization.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 18:17:16 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Habib",
"Hussam",
""
],
[
"Srinivasan",
"Padmini",
""
],
[
"Nithyanand",
"Rishab",
""
]
] |
new_dataset
| 0.99592 |
2202.08837
|
Jan-Nico Zaech
|
Jan-Nico Zaech, Alexander Liniger, Martin Danelljan, Dengxin Dai, Luc
Van Gool
|
Adiabatic Quantum Computing for Multi Object Tracking
|
16 Pages
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Object Tracking (MOT) is most often approached in the
tracking-by-detection paradigm, where object detections are associated through
time. The association step naturally leads to discrete optimization problems.
As these optimization problems are often NP-hard, they can only be solved
exactly for small instances on current hardware. Adiabatic quantum computing
(AQC) offers a solution for this, as it has the potential to provide a
considerable speedup on a range of NP-hard optimization problems in the near
future. However, current MOT formulations are unsuitable for quantum computing
due to their scaling properties. In this work, we therefore propose the first
MOT formulation designed to be solved with AQC. We employ an Ising model that
represents the quantum mechanical system implemented on the AQC. We show that
our approach is competitive compared with state-of-the-art optimization-based
approaches, even when using off-the-shelf integer programming solvers. Finally,
we demonstrate that our MOT problem is already solvable on the current
generation of real quantum computers for small examples, and analyze the
properties of the measured solutions.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 18:59:20 GMT"
}
] | 2022-02-18T00:00:00 |
[
[
"Zaech",
"Jan-Nico",
""
],
[
"Liniger",
"Alexander",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Dai",
"Dengxin",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.996942 |
2012.05362
|
Adrian R\"ofer M.Sc.
|
Adrian R\"ofer, Georg Bartels, Wolfram Burgard, Abhinav Valada,
Michael Beetz
|
Kineverse: A Symbolic Articulation Model Framework for Model-Agnostic
Mobile Manipulation
|
8 pages, 8 figures, Published in: IEEE Robotics and Automation
Letters ( Volume: 7, Issue: 2, April 2022)
|
IEEE Robotics and Automation Letters, 7 (2022) 3372-3379
|
10.1109/LRA.2022.3146515
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Service robots in the future need to execute abstract instructions such as
"fetch the milk from the fridge". To translate such instructions into
actionable plans, robots require in-depth background knowledge. With regards to
interactions with doors and drawers, robots require articulation models that
they can use for state estimation and motion planning. Existing frameworks
model articulated connections as abstract concepts such as prismatic or
revolute, but do not provide a parameterized model of these connections for
computation. In this paper, we introduce a novel framework that uses symbolic
mathematical expressions to model articulated structures -- robots and objects
alike -- in a unified and extensible manner. We provide a theoretical
description of this framework, and the operations that are supported by its
models, and introduce an architecture to exchange our models in robotic
applications, making them as flexible as any other environmental observation.
To demonstrate the utility of our approach, we employ our practical
implementation Kineverse for solving common robotics tasks from state
estimation and mobile manipulation, and use it further in real-world mobile
robot manipulation.
|
[
{
"version": "v1",
"created": "Wed, 9 Dec 2020 23:16:44 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Sep 2021 06:08:30 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Oct 2021 10:01:59 GMT"
},
{
"version": "v4",
"created": "Wed, 16 Feb 2022 15:05:17 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Röfer",
"Adrian",
""
],
[
"Bartels",
"Georg",
""
],
[
"Burgard",
"Wolfram",
""
],
[
"Valada",
"Abhinav",
""
],
[
"Beetz",
"Michael",
""
]
] |
new_dataset
| 0.996378 |
2102.03625
|
Daniel Oliveira
|
Daniel Oliveira, Tiago Gomes, and Sandro Pinto
|
uTango: an open-source TEE for IoT devices
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security is one of the main challenges of the Internet of Things (IoT). IoT
devices are mainly powered by low-cost microcontrollers (MCUs) that typically
lack basic hardware security mechanisms to separate security-critical
applications from less critical components. Recently, Arm has started to
release Cortex-M MCUs enhanced with TrustZone technology (i.e., TrustZone-M), a
system-wide security solution aiming at providing robust protection for IoT
devices. Trusted Execution Environments (TEEs) relying on TrustZone hardware
have been perceived as safe havens for securing mobile devices. However, for
the past few years, considerable effort has gone into unveiling hundreds of
vulnerabilities and proposing a collection of relevant defense techniques to
address several issues. While new TEE solutions built on TrustZone-M start
flourishing, the lessons gathered from the research community appear to be
falling short, as these new systems fall into the same pitfalls of the
past.
In this paper, we present uTango, the first multi-world TEE for modern IoT
devices. uTango proposes a novel architecture aiming at tackling the major
architectural deficiencies currently affecting TrustZone(-M)-assisted TEEs. In
particular, we leverage the very same TrustZone hardware primitives used by
dual-world implementations to create multiple and equally secure execution
environments within the normal world. We demonstrate the benefits of uTango by
conducting an extensive evaluation on a real TrustZone-M hardware platform,
i.e., Arm Musca-B1. uTango will be open-sourced and freely available on GitHub
in hopes of engaging academia and industry on securing the foreseeable trillion
IoT devices.
|
[
{
"version": "v1",
"created": "Sat, 6 Feb 2021 17:55:47 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 16:45:41 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Oliveira",
"Daniel",
""
],
[
"Gomes",
"Tiago",
""
],
[
"Pinto",
"Sandro",
""
]
] |
new_dataset
| 0.999057 |
2105.08559
|
Aaron Ong
|
David Doty and Aaron Ong
|
Simulating 3-symbol Turing machines with SIMD||DNA
| null | null | null | null |
cs.ET q-bio.MN
|
http://creativecommons.org/licenses/by/4.0/
|
SIMD||DNA is a model of DNA strand displacement allowing parallel in-memory
computation on DNA storage. We show how to simulate an arbitrary 3-symbol
space-bounded Turing machine with a SIMD||DNA program, giving a more direct and
efficient route to general-purpose information manipulation on DNA storage than
the Rule 110 simulation of [Wang, Chalk, Soloveichik, DNA 2019]. We also
develop software that can simulate SIMD||DNA programs and produce SVG figures.
|
[
{
"version": "v1",
"created": "Tue, 18 May 2021 14:42:25 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jul 2021 22:03:29 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Jul 2021 21:26:38 GMT"
},
{
"version": "v4",
"created": "Mon, 27 Sep 2021 15:34:36 GMT"
},
{
"version": "v5",
"created": "Wed, 16 Feb 2022 02:21:17 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Doty",
"David",
""
],
[
"Ong",
"Aaron",
""
]
] |
new_dataset
| 0.997936 |
2105.09041
|
Giovanni Interdonato
|
Carmen D'Andrea, Giovanni Interdonato and Stefano Buzzi
|
User-centric Handover in mmWave Cell-Free Massive MIMO with User
Mobility
|
Paper accepted for publication in the proceedings of the 2021
European Signal Processing Conference (EUSIPCO), 23-27 August 2021, Dublin,
Ireland
| null |
10.23919/EUSIPCO54536.2021.9616361
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The coupling between cell-free massive multiple-input multiple-output (MIMO)
systems operating at millimeter-wave (mmWave) carrier frequencies and user
mobility is considered in this paper. First of all, a mmWave channel is
introduced taking into account the user mobility and the impact of the channel
aging. Then, three beamforming techniques are proposed in the considered
scenario, along with a dynamic user association technique (handover): starting
from a user-centric association between each mobile device and a cluster of
access points (APs), a rule for updating the APs cluster is formulated and
analyzed. Numerical results reveal that the proposed beamforming and user
association techniques are effective in the considered scenario.
|
[
{
"version": "v1",
"created": "Wed, 19 May 2021 10:13:29 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"D'Andrea",
"Carmen",
""
],
[
"Interdonato",
"Giovanni",
""
],
[
"Buzzi",
"Stefano",
""
]
] |
new_dataset
| 0.99164 |
2108.13105
|
Bjorn Lindqvist Mr.
|
Bj\"orn Lindqvist, Christoforos Kanellakis, Sina Sharif Mansouri,
Ali-akbar Agha-mohammadi, George Nikolakopoulos
|
COMPRA: A COMPact Reactive Autonomy framework for subterranean MAV based
search-and-rescue operations
|
27 pages, 21 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work establishes COMPRA, a compact and reactive autonomy framework for
fast deployment of Micro Aerial Vehicles (MAVs) in subterranean
Search-and-Rescue (SAR) missions. A COMPRA-enabled MAV is able to autonomously
explore previously unknown areas while specific mission criteria are considered,
e.g., an object of interest is identified and localized, the remaining useful
battery life, or the overall desired exploration mission duration. The proposed
architecture follows a low-complexity algorithmic design to facilitate fully
on-board computations, including nonlinear control, state-estimation,
navigation, exploration behavior and object localization capabilities. The
framework is mainly structured around a reactive local avoidance planner, based
on enhanced Potential Field concepts and using instantaneous 3D pointclouds, as
well as a computationally efficient heading regulation technique, based on
depth images from an instantaneous camera stream. These techniques decouple
collision-free path generation from dependence on a global map and are
capable of handling occasional imprecise localization. Field experimental
verification of the overall architecture is performed in relevant unknown
Global Positioning System (GPS)-denied environments.
|
[
{
"version": "v1",
"created": "Mon, 30 Aug 2021 10:28:10 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 11:15:12 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Lindqvist",
"Björn",
""
],
[
"Kanellakis",
"Christoforos",
""
],
[
"Mansouri",
"Sina Sharif",
""
],
[
"Agha-mohammadi",
"Ali-akbar",
""
],
[
"Nikolakopoulos",
"George",
""
]
] |
new_dataset
| 0.999551 |
2109.10231
|
Yunlong Wang
|
Yunlong Wang, Jiaying Liu, Homin Park, Jordan Schultz-McArdle,
Stephanie Rosenthal, Judy Kay, Brian Y. Lim
|
SalienTrack: providing salient information for semi-automated
self-tracking feedback with model explanations
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Self-tracking can improve people's awareness of their unhealthy behaviors and
support reflection to inform behavior change. Increasingly, new technologies
make tracking easier, leading to large amounts of tracked data. However, much
of that information is not salient for reflection and self-awareness. To tackle
this burden for reflection, we created the SalienTrack framework, which aims to
1) identify salient tracking events, 2) select the salient details of those
events, 3) explain why they are informative, and 4) present the details as
manually elicited or automatically shown feedback. We implemented SalienTrack
in the context of nutrition tracking. To do this, we first conducted a field
study to collect photo-based mobile food tracking data over 1-5 weeks. We then
report how we used this data to train an explainable-AI model of salience.
Finally, we created interfaces to present salient information and conducted a
formative user study to gain insights about how SalienTrack could be integrated
into an interface for reflection. Our key contributions are the SalienTrack
framework, a demonstration of its implementation for semi-automated feedback in
an important and challenging self-tracking context and a discussion of the
broader uses of the framework.
|
[
{
"version": "v1",
"created": "Tue, 21 Sep 2021 14:53:47 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Nov 2021 09:39:06 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Feb 2022 12:33:16 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Wang",
"Yunlong",
""
],
[
"Liu",
"Jiaying",
""
],
[
"Park",
"Homin",
""
],
[
"Schultz-McArdle",
"Jordan",
""
],
[
"Rosenthal",
"Stephanie",
""
],
[
"Kay",
"Judy",
""
],
[
"Lim",
"Brian Y.",
""
]
] |
new_dataset
| 0.999099 |
2112.06402
|
Constantinos Chamzas
|
Constantinos Chamzas, Carlos Quintero-Pe\~na, Zachary Kingston,
Andreas Orthey, Daniel Rakita, Michael Gleicher, Marc Toussaint, Lydia E.
Kavraki
|
MotionBenchMaker: A Tool to Generate and Benchmark Motion Planning
Datasets
|
accepted in IEEE Robotics and Automation Letters (RAL), 2022.
Supplementary video: https://youtu.be/t96Py0QX0NI Code:
https://github.com/KavrakiLab/motion_bench_maker
| null |
10.1109/LRA.2021.3133603
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, there has been a wealth of development in motion planning for
robotic manipulation: new motion planners are continuously proposed, each with
their own unique strengths and weaknesses. However, evaluating new planners is
challenging and researchers often create their own ad-hoc problems for
benchmarking, which is time-consuming, prone to bias, and does not directly
compare against other state-of-the-art planners. We present MotionBenchMaker,
an open-source tool to generate benchmarking datasets for realistic robot
manipulation problems. MotionBenchMaker is designed to be an extensible,
easy-to-use tool that allows users to both generate datasets and benchmark them
by comparing motion planning algorithms. Empirically, we show the benefit of
using MotionBenchMaker as a tool to procedurally generate datasets which helps
in the fair evaluation of planners. We also present a suite of 40 prefabricated
datasets, with 5 different commonly used robots in 8 environments, to serve as
a common ground to accelerate motion planning research.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 03:39:01 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 20:30:35 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Chamzas",
"Constantinos",
""
],
[
"Quintero-Peña",
"Carlos",
""
],
[
"Kingston",
"Zachary",
""
],
[
"Orthey",
"Andreas",
""
],
[
"Rakita",
"Daniel",
""
],
[
"Gleicher",
"Michael",
""
],
[
"Toussaint",
"Marc",
""
],
[
"Kavraki",
"Lydia E.",
""
]
] |
new_dataset
| 0.999685 |
2201.11479
|
Koteswar Rao Jerripothula
|
Sharik Ali Ansari, Koteswar Rao Jerripothula, Pragya Nagpal, Ankush
Mittal
|
Eye-focused Detection of Bell's Palsy in Videos
|
Published in the Proceedings of the 34th Canadian Conference on
Artificial Intelligence. Please cite this paper in the following manner: S.
A. Ansari, K. R. Jerripothula, P. Nagpal, and A. Mittal. "Eye-focused
Detection of Bell's Palsy in Videos". In: Proceedings of the 34th Canadian
Conference on Artificial Intelligence (June 8, 2021). doi:
10.21428/594757db.d2f8342b
| null |
10.21428/594757db.d2f8342b
| null |
cs.CV cs.AI cs.MM eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present how Bell's Palsy, a neurological disorder, can be
detected just from a subject's eyes in a video. We notice that Bell's Palsy
patients often struggle to blink their eyes on the affected side. As a result,
we can observe a clear contrast between the blinking patterns of the two eyes.
Although previous works did utilize images/videos to detect this disorder, none
have explicitly focused on the eyes. Most of them require the entire face. One
obvious advantage of having an eye-focused detection system is that subjects'
anonymity is not at risk. Also, our AI decisions based on simple blinking
patterns make them explainable and straightforward. Specifically, we develop a
novel feature called blink similarity, which measures the similarity between
the two blinking patterns. Our extensive experiments demonstrate that the
proposed feature is quite robust, for it helps in Bell's Palsy detection even
with very few labels. Our proposed eye-focused detection system is not only
cheaper but also more convenient than several existing methods.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 12:34:35 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Ansari",
"Sharik Ali",
""
],
[
"Jerripothula",
"Koteswar Rao",
""
],
[
"Nagpal",
"Pragya",
""
],
[
"Mittal",
"Ankush",
""
]
] |
new_dataset
| 0.995109 |
2201.12133
|
Mingsong Chen
|
Yanhong Fei, Yingjie Liu, Xian Wei, Mingsong Chen
|
O-ViT: Orthogonal Vision Transformer
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by the tremendous success of the self-attention mechanism in natural
language processing, the Vision Transformer (ViT) creatively applies it to
image patch sequences and achieves incredible performance. However, the scaled
dot-product self-attention of ViT brings about scale ambiguity to the structure
of the original feature space. To address this problem, we propose a novel
method named Orthogonal Vision Transformer (O-ViT), to optimize ViT from the
geometric perspective. O-ViT limits parameters of self-attention blocks to be
on the norm-keeping orthogonal manifold, which can keep the geometry of the
feature space. Moreover, O-ViT achieves both orthogonal constraints and cheap
optimization overhead by adopting a surjective mapping between the orthogonal
group and its Lie algebra. We have conducted comparative experiments on image
recognition tasks to demonstrate O-ViT's validity and experiments show that
O-ViT can boost the performance of ViT by up to 3.6%.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 14:18:52 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 13:49:43 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Fei",
"Yanhong",
""
],
[
"Liu",
"Yingjie",
""
],
[
"Wei",
"Xian",
""
],
[
"Chen",
"Mingsong",
""
]
] |
new_dataset
| 0.963806 |
2202.06839
|
Shuo Niu
|
Shuo Niu, Hugh S. Manon, Ava Bartolome, Nguyen B. Ha, Keegan Veazey
|
Close-up and Whispering: An Understanding of Multimodal and Parasocial
Interactions in YouTube ASMR videos
|
4 pages
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
ASMR (Autonomous Sensory Meridian Response) has grown to immense popularity
on YouTube and drawn HCI designers' attention to its effects and applications
in design. YouTube ASMR creators incorporate visual elements, sounds, motifs of
touching and tasting, and other scenarios in multisensory video interactions to
deliver enjoyable and relaxing experiences to their viewers. ASMRtists engage
viewers through social, physical, and task attractions. Research has identified the
benefits of ASMR in mental wellbeing. However, ASMR remains an understudied
phenomenon in the HCI community, constraining designers' ability to incorporate
ASMR in video-based designs. This work annotates and analyzes the interaction
modalities and parasocial attractions of 2663 videos to identify unique
experiences. YouTube comment sections are also analyzed to compare viewers'
responses to different ASMR interactions. We find that ASMR videos are
experiences of multimodal social connection, relaxing physical intimacy, and
sensory-rich activity observation. Design implications are discussed to foster
future ASMR-augmented video interactions.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 16:21:52 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 15:37:36 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Niu",
"Shuo",
""
],
[
"Manon",
"Hugh S.",
""
],
[
"Bartolome",
"Ava",
""
],
[
"Ha",
"Nguyen B.",
""
],
[
"Veazey",
"Keegan",
""
]
] |
new_dataset
| 0.951055 |
2202.07569
|
Rasoul Akhavan Mahdavi
|
Rasoul Akhavan Mahdavi and Florian Kerschbaum
|
Constant-weight PIR: Single-round Keyword PIR via Constant-weight
Equality Operators
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Equality operators are an essential building block in tasks over secure
computation such as private information retrieval. In private information
retrieval (PIR), a user queries a database such that the server does not learn
which element is queried. In this work, we propose \emph{equality operators for
constant-weight codewords}. A constant-weight code is a collection of codewords
that share the same Hamming weight. Constant-weight equality operators have a
multiplicative depth that depends only on the Hamming weight of the code, not
the bit-length of the elements. In our experiments, we show how these equality
operators are up to 10 times faster than existing equality operators.
Furthermore, we propose PIR using the constant-weight equality operator or
\emph{constant-weight PIR}, which is a PIR protocol using an approach
previously deemed impractical. We show that for private retrieval of large,
streaming data, constant-weight PIR has a smaller communication complexity and
lower runtime compared to SEALPIR and MulPIR, respectively, which are two
state-of-the-art solutions for PIR. Moreover, we show how constant-weight PIR
can be extended to keyword PIR. In keyword PIR, the desired element is
retrieved by a unique identifier pertaining to the sought item, e.g., the name
of a file. Previous solutions to keyword PIR require one or multiple rounds of
communication to reduce the problem to normal PIR. We show that constant-weight
PIR is the first practical single-round solution to single-server keyword PIR.
|
[
{
"version": "v1",
"created": "Tue, 15 Feb 2022 16:54:14 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 05:12:51 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Mahdavi",
"Rasoul Akhavan",
""
],
[
"Kerschbaum",
"Florian",
""
]
] |
new_dataset
| 0.985314 |
2202.07704
|
Safras Iqbal
|
Safras Iqbal, Peter Ball, Muhammad H Kamarudin, Andrew Bradley
|
Simulating Malicious Attacks on VANETs for Connected and Autonomous
Vehicle Cybersecurity: A Machine Learning Dataset
|
12 page, 13 figures, 3 tables, conference CSNDSP 2022
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
  Connected and Autonomous Vehicles (CAVs) rely on Vehicular Ad hoc Networks (VANETs)
with wireless communication between vehicles and roadside infrastructure to
support safe operation. However, cybersecurity attacks pose a threat to VANETs
and the safe operation of CAVs. This study proposes the use of simulation for
modelling typical communication scenarios which may be subject to malicious
attacks. The Eclipse MOSAIC simulation framework is used to model two typical
road scenarios, including messaging between the vehicles and infrastructure -
and both replay and bogus information cybersecurity attacks are introduced. The
model demonstrates the impact of these attacks, and provides an open dataset to
inform the development of machine learning algorithms to provide anomaly
detection and mitigation solutions for enhancing secure communications and safe
deployment of CAVs on the road.
|
[
{
"version": "v1",
"created": "Tue, 15 Feb 2022 20:08:58 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Iqbal",
"Safras",
""
],
[
"Ball",
"Peter",
""
],
[
"Kamarudin",
"Muhammad H",
""
],
[
"Bradley",
"Andrew",
""
]
] |
new_dataset
| 0.999609 |
2202.07843
|
Pranav Kadam
|
Pranav Kadam, Qingyang Zhou, Shan Liu, C.-C. Jay Kuo
|
PCRP: Unsupervised Point Cloud Object Retrieval and Pose Estimation
|
8 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An unsupervised point cloud object retrieval and pose estimation method,
called PCRP, is proposed in this work. It is assumed that there exists a
gallery point cloud set that contains point cloud objects with given pose
orientation information. PCRP attempts to register the unknown point cloud
object with those in the gallery set so as to achieve content-based object
retrieval and pose estimation jointly, where the point cloud registration task
is built upon an enhanced version of the unsupervised R-PointHop method.
Experiments on the ModelNet40 dataset demonstrate the superior performance of
PCRP in comparison with traditional and learning based methods.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 03:37:43 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Kadam",
"Pranav",
""
],
[
"Zhou",
"Qingyang",
""
],
[
"Liu",
"Shan",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
new_dataset
| 0.998676 |
2202.07844
|
Reza Soltani
|
Reza Soltani, Uyen Trang Nguyen and Aijun An
|
Data Capsule: A Self-Contained Data Model as an Access Policy
Enforcement Strategy
| null | null |
10.1109/BRAINS52497.2021.9569788
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a data capsule model, a self-contained and
self-enforcing data container based on emerging self-sovereign identity
standards, blockchain, and attribute-based encryption. A data capsule allows
for a transparent, privacy-respecting, and secure exchange of personal data,
enabling a progressive trust scheme in a semi-trusted environment. Each data
capsule is bundled with its own access policy structure and verifiable data,
drastically reducing the number of interactions needed among the user, the
service providers, and data custodians. Moreover, by relying on the
decentralized nature of blockchain and attribute-based encryption our proposed
model ensures the access policies published by service providers are public,
transparent, and strictly followed.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 03:40:13 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Soltani",
"Reza",
""
],
[
"Nguyen",
"Uyen Trang",
""
],
[
"An",
"Aijun",
""
]
] |
new_dataset
| 0.971569 |
2202.07858
|
Thinh Hung Truong
|
Thinh Hung Truong, Yulia Otmakhova, Rahmad Mahendra, Timothy Baldwin,
Jey Han Lau, Trevor Cohn, Lawrence Cavedon, Damiano Spina, Karin Verspoor
|
ITTC @ TREC 2021 Clinical Trials Track
|
7 pages
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the submissions of the Natural Language Processing (NLP)
team from the Australian Research Council Industrial Transformation Training
Centre (ITTC) for Cognitive Computing in Medical Technologies to the TREC 2021
Clinical Trials Track. The task focuses on the problem of matching eligible
clinical trials to topics constituting a summary of a patient's admission
notes. We explore different ways of representing trials and topics using NLP
techniques, and then use a common retrieval model to generate the ranked list
of relevant trials for each topic. The results from all our submitted runs are
well above the median scores for all topics, but there is still plenty of scope
for improvement.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 04:56:47 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Truong",
"Thinh Hung",
""
],
[
"Otmakhova",
"Yulia",
""
],
[
"Mahendra",
"Rahmad",
""
],
[
"Baldwin",
"Timothy",
""
],
[
"Lau",
"Jey Han",
""
],
[
"Cohn",
"Trevor",
""
],
[
"Cavedon",
"Lawrence",
""
],
[
"Spina",
"Damiano",
""
],
[
"Verspoor",
"Karin",
""
]
] |
new_dataset
| 0.981659 |
2202.07882
|
Mohamed Nabeel
|
Shehan Edirimannage, Mohamed Nabeel, Charith Elvitigala, Chamath
Keppitiyagama
|
PhishChain: A Decentralized and Transparent System to Blacklist Phishing
URLs
|
phishing blockchain blocklisting
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Blacklists are a widely-used Internet security mechanism to protect Internet
users from financial scams, malicious web pages and other cyber attacks based
on blacklisted URLs. In this demo, we introduce PhishChain, a transparent and
decentralized system for blacklisting phishing URLs. At present, public/private
domain blacklists, such as PhishTank, CryptoScamDB, and APWG, are maintained by
a centralized authority, but operate in a crowdsourcing fashion to create a
manually verified blacklist periodically. In addition to being a single point
of failure, the blacklisting process utilized by such systems is not
transparent. We utilize the blockchain technology to support transparency and
decentralization, where no single authority is controlling the blacklist and
all operations are recorded in an immutable distributed ledger. Further, we
design a PageRank-based truth discovery algorithm to assign a phishing score
to each URL based on crowdsourced assessments of URLs. As an incentive for
voluntary participation, we assign skill points to each user based on their
participation in URL verification.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 06:16:53 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Edirimannage",
"Shehan",
""
],
[
"Nabeel",
"Mohamed",
""
],
[
"Elvitigala",
"Charith",
""
],
[
"Keppitiyagama",
"Chamath",
""
]
] |
new_dataset
| 0.981769 |
2202.07883
|
Mohamed Nabeel
|
Wathsara Daluwatta, Ravindu De Silva, Sanduni Kariyawasam, Mohamed
Nabeel, Charith Elvitigala, Kasun De Zoysa, Chamath Keppitiyagama
|
CGraph: Graph Based Extensible Predictive Domain Threat Intelligence
Platform
|
threat intelligence graph investigation
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
  The ability to effectively investigate indicators of compromise and associated
network resources involved in cyber attacks is paramount not only to identify
affected network resources but also to detect related malicious resources.
Today, most of the cyber threat intelligence platforms are reactive in that
they can identify attack resources only after the attack is carried out.
Further, these systems have limited functionality to investigate associated
network resources. In this work, we propose an extensible predictive cyber
threat intelligence platform called cGraph that addresses the above
limitations. cGraph is built as a graph-first system where investigators can
explore network resources utilizing a graph based API. Further, cGraph provides
real-time predictive capabilities based on state-of-the-art inference
algorithms to predict malicious domains from network graphs with a few known
malicious and benign seeds. To the best of our knowledge, cGraph is the only
threat intelligence platform to do so. cGraph is extensible in that additional
network resources can be added to the system transparently.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 06:28:07 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Daluwatta",
"Wathsara",
""
],
[
"De Silva",
"Ravindu",
""
],
[
"Kariyawasam",
"Sanduni",
""
],
[
"Nabeel",
"Mohamed",
""
],
[
"Elvitigala",
"Charith",
""
],
[
"De Zoysa",
"Kasun",
""
],
[
"Keppitiyagama",
"Chamath",
""
]
] |
new_dataset
| 0.994267 |
2202.07896
|
Jiamin Li
|
Jiamin Li, Hong Xu, Yibo Zhu, Zherui Liu, Chuanxiong Guo, Cong Wang
|
Aryl: An Elastic Cluster Scheduler for Deep Learning
| null | null | null | null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Companies build separate training and inference GPU clusters for deep
learning, and use separate schedulers to manage them. This leads to problems
for both training and inference: inference clusters have low GPU utilization
when the traffic load is low; training jobs often experience long queueing time
due to lack of resources. We introduce Aryl, a new cluster scheduler to address
these problems. Aryl introduces capacity loaning to loan idle inference GPU
servers for training jobs. It further exploits elastic scaling that scales a
training job's GPU allocation to better utilize loaned resources. Capacity
loaning and elastic scaling create new challenges to cluster management. When
the loaned servers need to be returned, we need to minimize the number of job
preemptions; when more GPUs become available, we need to allocate them to
elastic jobs and minimize the job completion time (JCT). Aryl addresses these
combinatorial problems using principled heuristics. It introduces the notion of
server preemption cost which it greedily reduces during server reclaiming. It
further relies on the JCT reduction value defined for each additional worker
for an elastic job to solve the scheduling problem as a multiple-choice
knapsack problem. Prototype implementation on a 64-GPU testbed and large-scale
simulation with 15-day traces of over 50,000 production jobs show that Aryl
brings 1.53x and 1.50x reductions in average queuing time and JCT, and improves
cluster usage by up to 26.9% over the cluster scheduler without capacity
loaning or elastic scaling.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 07:03:25 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Li",
"Jiamin",
""
],
[
"Xu",
"Hong",
""
],
[
"Zhu",
"Yibo",
""
],
[
"Liu",
"Zherui",
""
],
[
"Guo",
"Chuanxiong",
""
],
[
"Wang",
"Cong",
""
]
] |
new_dataset
| 0.970644 |
2202.08103
|
Jesus Malo
|
Jesus Malo
|
Paraphrasing Magritte's Observation
|
Keywords: Visual stimuli generation, Image representation in
Surrealism, Cartoon-like images
| null | null | null |
cs.CV q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Contrast Sensitivity of the human visual system can be explained from certain
low-level vision tasks (like retinal noise and optical blur removal), but not
from others (like chromatic adaptation or pure reconstruction after simple
bottlenecks). This conclusion still holds even under substantial change in
stimulus statistics, as for instance considering cartoon-like images as opposed
to natural images (Li et al. Journal of Vision, 2022, Preprint
arXiv:2103.00481).
In this note we present a method to generate original cartoon-like images
compatible with the statistical training used in (Li et al., 2022). Following
the classical observation in (Magritte, 1929), the stimuli generated by the
proposed method certainly are not what they represent: Ceci n'est pas une pipe.
The clear distinction between representation (the stimuli generated by the
proposed method) and reality (the actual object) avoids eventual problems for
the use of the generated stimuli in academic, non-profit, publications.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 00:20:04 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Malo",
"Jesus",
""
]
] |
new_dataset
| 0.990419 |
2202.08112
|
Nimish Magre
|
Nimish Magre, Nicholas Brown
|
Typography-MNIST (TMNIST): an MNIST-Style Image Dataset to Categorize
Glyphs and Font-Styles
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
  We present Typography-MNIST (TMNIST), a dataset comprising 565,292
MNIST-style grayscale images representing 1,812 unique glyphs in varied styles
of 1,355 Google-fonts. The glyph-list contains common characters from over 150
of the modern and historical language scripts with symbol sets, and each
font-style represents varying subsets of the total unique glyphs. The dataset
has been developed as part of the CognitiveType project which aims to develop
eye-tracking tools for real-time mapping of type to cognition and to create
computational tools that allow for the easy design of typefaces with cognitive
properties such as readability. The dataset and scripts to generate MNIST-style
images for glyphs in different font styles are freely available at
https://github.com/aiskunks/CognitiveType.
|
[
{
"version": "v1",
"created": "Sat, 12 Feb 2022 21:01:39 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Magre",
"Nimish",
""
],
[
"Brown",
"Nicholas",
""
]
] |
new_dataset
| 0.999876 |
2202.08118
|
Mayukh Bagchi
|
Mayukh Bagchi
|
Smart Cities, Smart Libraries and Smart Knowledge Managers: Ushering in
the neo-Knowledge Society
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The emergence of smart cities as a specific concept is not very old. In
simple terms, it refers to cities which are sustainable and driven
predominantly by their Information and Communication Technology (ICT)
infrastructure. Smart libraries and smart knowledge managers, alongside its
other smart component-entities, are vital for their emergence, sustenance and
progress. The paper attempts at deducing a symbiosis amongst smart cities,
smart libraries and smart knowledge managers. It further elaborates on how
these will usher in the neo-knowledge society, and the opportunities it will
offer vis-\`a-vis Library and Information Science (LIS). Finally, it concludes
on an optimistic note, mentioning possible future research activities in this
regard.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 14:53:18 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Bagchi",
"Mayukh",
""
]
] |
new_dataset
| 0.978719 |
2202.08134
|
Bruno Jos\'e Olivieri de Souza
|
Thiago Lamenza, Marcelo Paulon, Breno Perricone, Bruno Olivieri,
Markus Endler
|
GrADyS-SIM -- A OMNET++/INET simulation framework for Internet of Flying
things
| null | null | null | null |
cs.NI cs.DC cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This technical report describes GrADyS-SIM, a framework for simulating
cooperating swarms of UAVs on joint missions in a hypothetical landscape and
communicating through RF radios. The framework was created to aid and verify
the communication, coordination and context-awareness protocols being developed
in the GrADyS project. GrADyS-SIM uses the OMNeT++ simulation library and its
INET model suite, and allows for the addition of modified or customized versions
of some simulated components, network configurations and vehicle coordination,
so that new coordination protocols can be developed and tested through the
framework. The framework simulates UAV movement dictated by a file containing
some MAVLink instructions and affected on the fly by different network
situations. The UAV swarm coordination protocol emerges from individual
interactions between UAVs and has the objective of optimizing the collection of
sensor data over an area. It also allows for the simulation of some types of
failures to test the protocol adaptability. Every node in the simulation is
highly configurable, making it quick to test different network topologies,
coordination protocols, node hardware configurations, and more.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 15:22:34 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Lamenza",
"Thiago",
""
],
[
"Paulon",
"Marcelo",
""
],
[
"Perricone",
"Breno",
""
],
[
"Olivieri",
"Bruno",
""
],
[
"Endler",
"Markus",
""
]
] |
new_dataset
| 0.998127 |
2202.08153
|
Thinura Ariyaratne
|
U.H.D. Thinura Nethpiya Ariyaratne, V. Diyon Yasaswin Vitharana, L.H.
Don Ranul Deelaka, H.M. Sumudu Maduranga Herath
|
IoT Smart Plant Monitoring, Watering and Security System
|
11 pages, 1 table, 3 figures
| null | null | null |
cs.CY cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Interest in home gardening has burgeoned since governments around the
world imposed lockdowns to suppress the spread of COVID-19. Nowadays, most
families start to do gardening during this lockdown season because they can
grow vegetables and fruits or any other plants that they want in their
day-to-day life. So, they can survive without spending money on online grocery
shopping for fruits and vegetables during this lockdown season. In Sri Lanka,
home gardening was a trend during the past couple of months due to this
pandemic. Most of the families were trying to do gardening for their needs. But
the problem is, nowadays the government is trying to lift those restrictions
to start day-to-day work in Sri Lanka. With this situation, people are starting
to do their jobs and they do not have time to spend in their gardens continuing
their gardening. We thought about this problem and tried to find a solution to
continue the gardening work while doing their jobs. The major concern is people
cannot monitor their plants every time and protect their garden. So, we decided
to automate the garden work. With our new solution, gardeners can monitor some
important factors like the plant's healthiness, soil moisture level, air
humidity level, and the surrounding temperature and water their garden from
anywhere in the world at any time by using our app. Plant health has a
significant impact on plant development, production, and quality of
agricultural goods. The goal of this study is to create an automated system
that can identify the presence of illness in plants based on variations in
plant leaf health state is created utilizing sensors such as temperature,
humidity, and color....
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 15:51:14 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Ariyaratne",
"U. H. D. Thinura Nethpiya",
""
],
[
"Vitharana",
"V. Diyon Yasaswin",
""
],
[
"Deelaka",
"L. H. Don Ranul",
""
],
[
"Herath",
"H. M. Sumudu Maduranga",
""
]
] |
new_dataset
| 0.996712 |
2202.08156
|
Munesh Kumari
|
Kalika Prasad, Hrishikesh Mahato and Munesh Kumari
|
A novel public key cryptography based on generalized Lucas matrices
|
  14 pages
| null | null | null |
cs.CR cs.DM math.CO math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we have proposed a generalized Lucas matrix (recursive
matrix of higher order) having relation with generalized Fibonacci sequences
and established many special properties in addition to that usual matrix
algebra. Further, we have proposed a modified public key cryptography using
these matrices as keys in Affine cipher and key agreement for
encryption-decryption with the combination of terms of generalized Lucas
sequences under residue operations. In this scheme, instead of exchanging the
whole key matrix, only a pair of numbers (parameters) needs to be exchanged,
which reduces the time complexity as well as space complexity of the key
transmission and has a large key-space.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 15:55:16 GMT"
}
] | 2022-02-17T00:00:00 |
[
[
"Prasad",
"Kalika",
""
],
[
"Mahato",
"Hrishikesh",
""
],
[
"Kumari",
"Munesh",
""
]
] |
new_dataset
| 0.997494 |
2006.07540
|
Hae Beom Lee
|
Jeongun Ryu and Jaewoong Shin and Hae Beom Lee and Sung Ju Hwang
|
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and
Architectures
| null | null | null |
Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
|
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regularization and transfer learning are two popular techniques to enhance
generalization on unseen data, which is a fundamental problem of machine
learning. Regularization techniques are versatile, as they are task- and
architecture-agnostic, but they do not exploit the large amount of available
data. Transfer learning methods learn to transfer knowledge from one
domain to another, but may not generalize across tasks and architectures, and
may introduce new training cost for adapting to the target task. To bridge the
gap between the two, we propose a transferable perturbation, MetaPerturb, which
is meta-learned to improve generalization performance on unseen data.
MetaPerturb is implemented as a set-based lightweight network that is agnostic
to the size and the order of the input, which is shared across the layers.
Then, we propose a meta-learning framework, to jointly train the perturbation
function over heterogeneous tasks in parallel. As MetaPerturb is a set-function
trained over diverse distributions across layers and tasks, it can generalize
to heterogeneous tasks and architectures. We validate the efficacy and
generality of MetaPerturb trained on a specific source domain and architecture,
by applying it to the training of diverse neural architectures on heterogeneous
target datasets against various regularizers and fine-tuning. The results show
that the networks trained with MetaPerturb significantly outperform the
baselines on most of the tasks and architectures, with a negligible increase in
the parameter size and no hyperparameters to tune.
|
[
{
"version": "v1",
"created": "Sat, 13 Jun 2020 02:54:59 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Dec 2021 13:19:40 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Feb 2022 13:56:01 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Ryu",
"Jeongun",
""
],
[
"Shin",
"Jaewoong",
""
],
[
"Lee",
"Hae Beom",
""
],
[
"Hwang",
"Sung Ju",
""
]
] |
new_dataset
| 0.961974 |
2102.11938
|
Kanishk Gandhi
|
Kanishk Gandhi, Gala Stojnic, Brenden M. Lake, Moira R. Dillon
|
Baby Intuitions Benchmark (BIB): Discerning the goals, preferences, and
actions of others
|
Published in Advances in Neural Information Processing Systems
(NeurIPS) 34
| null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
To achieve human-like common sense about everyday life, machine learning
systems must understand and reason about the goals, preferences, and actions of
other agents in the environment. By the end of their first year of life, human
infants intuitively achieve such common sense, and these cognitive achievements
lay the foundation for humans' rich and complex understanding of the mental
states of others. Can machines achieve generalizable, commonsense reasoning
about other agents like human infants? The Baby Intuitions Benchmark (BIB)
challenges machines to predict the plausibility of an agent's behavior based on
the underlying causes of its actions. Because BIB's content and paradigm are
adopted from developmental cognitive science, BIB allows for direct comparison
between human and machine performance. Nevertheless, recently proposed,
deep-learning-based agency reasoning models fail to show infant-like reasoning,
leaving BIB an open challenge.
|
[
{
"version": "v1",
"created": "Tue, 23 Feb 2021 21:01:06 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Nov 2021 06:44:39 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Dec 2021 19:32:26 GMT"
},
{
"version": "v4",
"created": "Fri, 11 Feb 2022 22:57:16 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Gandhi",
"Kanishk",
""
],
[
"Stojnic",
"Gala",
""
],
[
"Lake",
"Brenden M.",
""
],
[
"Dillon",
"Moira R.",
""
]
] |
new_dataset
| 0.971878 |
2103.16909
|
Tian Xu
|
Xu Chen, Bangguo Yin, Songqiang Chen, Haifeng Li and Tian Xu
|
Generating Multi-scale Maps from Remote Sensing Images via Series
Generative Adversarial Networks
| null | null |
10.1109/LGRS.2021.3129285
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Considering the success of generative adversarial networks (GANs) for
image-to-image translation, researchers have attempted to translate remote
sensing images (RSIs) to maps (rs2map) through GAN for cartography. However,
these studies involved limited scales, which hinders multi-scale map creation.
By extending their method, multi-scale RSIs can be trivially translated to
multi-scale maps (multi-scale rs2map translation) through scale-wise rs2map
models trained for certain scales (parallel strategy). However, this strategy
has two theoretical limitations. First, inconsistency between various spatial
resolutions of multi-scale RSIs and object generalization on multi-scale maps
(RS-m inconsistency) increasingly complicate the extraction of geographical
information from RSIs for rs2map models with decreasing scale. Second, as
rs2map translation is cross-domain, generators incur high computation costs to
transform the RSI pixel distribution to that on maps. Thus, we designed a
series strategy of generators for multi-scale rs2map translation to address
these limitations. In this strategy, high-resolution RSIs are inputted to an
rs2map model to output large-scale maps, which are translated to multi-scale
maps through series multi-scale map translation models. The series strategy
avoids RS-m inconsistency as inputs are high-resolution large-scale RSIs, and
reduces the distribution gap in multi-scale map generation through similar
pixel distributions among multi-scale maps. Our experimental results showed
better quality multi-scale map generation with the series strategy, as shown by
average increases of 11.69%, 53.78%, 55.42%, and 72.34% in the structural
similarity index, edge structural similarity index, intersection over union
(road), and intersection over union (water) for data from Mexico City and Tokyo
at zoom levels 17-13.
|
[
{
"version": "v1",
"created": "Wed, 31 Mar 2021 08:58:37 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Chen",
"Xu",
""
],
[
"Yin",
"Bangguo",
""
],
[
"Chen",
"Songqiang",
""
],
[
"Li",
"Haifeng",
""
],
[
"Xu",
"Tian",
""
]
] |
new_dataset
| 0.982022 |
2104.02846
|
Yuansheng Hua
|
Yuansheng Hua, Lichao Mou, Pu Jin, Xiao Xiang Zhu
|
MultiScene: A Large-scale Dataset and Benchmark for Multi-scene
Recognition in Single Aerial Images
| null | null |
10.1109/TGRS.2021.3110314
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aerial scene recognition is a fundamental research problem in interpreting
high-resolution aerial imagery. Over the past few years, most studies focus on
classifying an image into one scene category, while in real-world scenarios a
single image often contains multiple scenes. Therefore, in this
paper, we investigate a more practical yet underexplored task -- multi-scene
recognition in single images. To this end, we create a large-scale dataset,
called MultiScene, composed of 100,000 unconstrained high-resolution aerial
images. Considering that manually labeling such images is extremely arduous, we
resort to low-cost annotations from crowdsourcing platforms, e.g.,
OpenStreetMap (OSM). However, OSM data might suffer from incompleteness and
incorrectness, which introduce noise into image labels. To address this issue,
we visually inspect 14,000 images and correct their scene labels, yielding a
subset of cleanly-annotated images, named MultiScene-Clean. With it, we can
develop and evaluate deep networks for multi-scene recognition using clean
data. Moreover, we provide crowdsourced annotations of all images for the
purpose of studying network learning with noisy labels. We conduct experiments
with extensive baseline models on both MultiScene-Clean and MultiScene to offer
benchmarks for multi-scene recognition in single images and learning from noisy
labels for this task, respectively. To facilitate progress, we make our dataset
and trained models available on
https://gitlab.lrz.de/ai4eo/reasoning/multiscene.
|
[
{
"version": "v1",
"created": "Wed, 7 Apr 2021 01:09:12 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Aug 2021 09:39:23 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Sep 2021 13:02:45 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Hua",
"Yuansheng",
""
],
[
"Mou",
"Lichao",
""
],
[
"Jin",
"Pu",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.999902 |
2104.04572
|
Le Xia
|
Le Xia, Yao Sun, Rafiq Swash, Lina Mohjazi, Lei Zhang, and Muhammad
Ali Imran
|
Smart and Secure CAV Networks Empowered by AI-Enabled Blockchain: The
Next Frontier for Intelligent Safe Driving Assessment
|
This article has been accepted for publication by IEEE Network.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null |
10.1109/MNET.101.2100387
| null |
cs.NI cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Securing safe driving for connected and autonomous vehicles (CAVs) continues
to be a widespread concern, despite various sophisticated functions delivered
by artificial intelligence for in-vehicle devices. Diverse malicious network
attacks are ubiquitous, along with the worldwide implementation of the Internet
of Vehicles, which exposes a range of reliability and privacy threats for
managing data in CAV networks. Combined with the fact that the capability of
existing CAVs in handling intensive computation tasks is limited, this implies
a need for designing an efficient assessment system to guarantee autonomous
driving safety without compromising data security. In this article we propose a
novel framework, namely Blockchain-enabled intElligent Safe-driving assessmenT
(BEST), which offers a smart and reliable approach for conducting safe driving
supervision while protecting vehicular information. Specifically, a promising
solution that exploits a long short-term memory model is introduced to assess
the safety level of the moving CAVs. Then we investigate how a distributed
blockchain obtains adequate trustworthiness and robustness for CAV data by
adopting a byzantine fault tolerance-based delegated proof-of-stake consensus
mechanism. Simulation results demonstrate that our presented BEST gains better
data credibility with a higher prediction accuracy for vehicular safety
assessment when compared with existing schemes. Finally, we discuss several
open challenges that need to be addressed in future CAV networks.
|
[
{
"version": "v1",
"created": "Fri, 9 Apr 2021 19:08:34 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Jul 2021 11:03:20 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Oct 2021 20:01:47 GMT"
},
{
"version": "v4",
"created": "Sat, 20 Nov 2021 21:42:55 GMT"
},
{
"version": "v5",
"created": "Fri, 11 Feb 2022 19:33:38 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Xia",
"Le",
""
],
[
"Sun",
"Yao",
""
],
[
"Swash",
"Rafiq",
""
],
[
"Mohjazi",
"Lina",
""
],
[
"Zhang",
"Lei",
""
],
[
"Imran",
"Muhammad Ali",
""
]
] |
new_dataset
| 0.998703 |
2104.13772
|
Jinchao Zhou
|
Qi Xuan, Jinchao Zhou, Kunfeng Qiu, Dongwei Xu, Shilian Zheng and
Xiaoniu Yang
|
CLPVG: Circular limited penetrable visibility graph as a new network
model for time series
|
9 pages, 9 figures
| null |
10.1063/5.0048243
| null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visibility Graph (VG) transforms time series into graphs, facilitating signal
processing by advanced graph data mining algorithms. In this paper, based on
the classic Limited Penetrable Visibility Graph (LPVG) method, we propose a
novel nonlinear mapping method named Circular Limited Penetrable Visibility
Graph (CLPVG). The testing on degree distribution and clustering coefficient on
the generated graphs of typical time series validates that our CLPVG is able to
effectively capture the important features of time series and has better
anti-noise ability than traditional LPVG. The experiments on real-world
time-series datasets of radio signal and electroencephalogram (EEG) also
suggest that the structural features provided by CLPVG, rather than LPVG, are
more useful for time-series classification, leading to higher accuracy. And
this classification performance can be further enhanced through structural
feature expansion by adopting Subgraph Networks (SGN). All of these results
validate the effectiveness of our CLPVG model.
|
[
{
"version": "v1",
"created": "Mon, 1 Mar 2021 03:13:58 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Xuan",
"Qi",
""
],
[
"Zhou",
"Jinchao",
""
],
[
"Qiu",
"Kunfeng",
""
],
[
"Xu",
"Dongwei",
""
],
[
"Zheng",
"Shilian",
""
],
[
"Yang",
"Xiaoniu",
""
]
] |
new_dataset
| 0.990368 |
2105.07364
|
Yu Shen
|
Yu Shen, Sijie Zhu, Taojiannan Yang, Chen Chen, Delu Pan, Jianyu Chen,
Liang Xiao, Qian Du
|
BDANet: Multiscale Convolutional Neural Network with Cross-directional
Attention for Building Damage Assessment from Satellite Images
|
arXiv admin note: text overlap with arXiv:2010.14014
| null |
10.1109/TGRS.2021.3080580
| null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fast and effective responses are required when a natural disaster (e.g.,
earthquake, hurricane, etc.) strikes. Building damage assessment from satellite
imagery is critical before relief effort is deployed. With a pair of pre- and
post-disaster satellite images, building damage assessment aims at predicting
the extent of damage to buildings. With the powerful ability of feature
representation, deep neural networks have been successfully applied to building
damage assessment. Most existing works simply concatenate pre- and
post-disaster images as input of a deep neural network without considering
their correlations. In this paper, we propose a novel two-stage convolutional
neural network for Building Damage Assessment, called BDANet. In the first
stage, a U-Net is used to extract the locations of buildings. Then the network
weights from the first stage are shared in the second stage for building damage
assessment. In the second stage, a two-branch multi-scale U-Net is employed as
backbone, where pre- and post-disaster images are fed into the network
separately. A cross-directional attention module is proposed to explore the
correlations between pre- and post-disaster images. Moreover, CutMix data
augmentation is exploited to tackle the challenge of difficult classes. The
proposed method achieves state-of-the-art performance on a large-scale dataset
-- xBD. The code is available at
https://github.com/ShaneShen/BDANet-Building-Damage-Assessment.
|
[
{
"version": "v1",
"created": "Sun, 16 May 2021 06:13:28 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Shen",
"Yu",
""
],
[
"Zhu",
"Sijie",
""
],
[
"Yang",
"Taojiannan",
""
],
[
"Chen",
"Chen",
""
],
[
"Pan",
"Delu",
""
],
[
"Chen",
"Jianyu",
""
],
[
"Xiao",
"Liang",
""
],
[
"Du",
"Qian",
""
]
] |
new_dataset
| 0.994176 |
2106.00880
|
Pourya Shamsolmoali
|
Pourya Shamsolmoali, Masoumeh Zareapoor, Jocelyn Chanussot, Huiyu
Zhou, and Jie Yang
|
Rotation Equivariant Feature Image Pyramid Network for Object Detection
in Optical Remote Sensing Imagery
| null | null |
10.1109/TGRS.2021.3112481
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detection of objects is extremely important in various aerial vision-based
applications. Over the last few years, the methods based on convolution neural
networks have made substantial progress. However, because of the large variety
of object scales, densities, and arbitrary orientations, the current detectors
struggle with the extraction of semantically strong features for small-scale
objects by a predefined convolution kernel. To address this problem, we propose
the rotation equivariant feature image pyramid network (REFIPN), an image
pyramid network based on rotation equivariance convolution. The proposed model
adopts single-shot detector in parallel with a lightweight image pyramid module
to extract representative features and generate regions of interest in an
optimization approach. The proposed network extracts features in a wide range of
scales and orientations by using novel convolution filters. These features are
used to generate vector fields and determine the weight and angle of the
highest-scoring orientation for all spatial locations on an image. By this
approach, the performance for small-sized object detection is enhanced without
sacrificing the performance for large-sized object detection. The performance
of the proposed model is validated on two commonly used aerial benchmarks and
the results show our proposed model can achieve state-of-the-art performance
with satisfactory efficiency.
|
[
{
"version": "v1",
"created": "Wed, 2 Jun 2021 01:33:49 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Jun 2021 01:16:48 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Sep 2021 03:09:00 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Shamsolmoali",
"Pourya",
""
],
[
"Zareapoor",
"Masoumeh",
""
],
[
"Chanussot",
"Jocelyn",
""
],
[
"Zhou",
"Huiyu",
""
],
[
"Yang",
"Jie",
""
]
] |
new_dataset
| 0.959282 |
2110.01098
|
Michael Coblenz
|
Michael Coblenz, Michelle Mazurek, Michael Hicks
|
Garbage Collection Makes Rust Easier to Use: A Randomized Controlled
Trial of the Bronze Garbage Collector
|
Michael Coblenz, Michelle L. Mazurek, and Michael Hicks. 2022.
Garbage Collection Makes Rust Easier to Use: A Randomized Controlled Trial of
the Bronze Garbage Collector. In 44th International Conference on Software
Engineering (ICSE '22), May 21-29, 2022, Pittsburgh, PA, USA. ACM, New York,
NY, USA, 12 pages. https://doi.org/10.1145/3510003.3510107
| null |
10.1145/3510003.3510107
| null |
cs.SE cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rust is a general-purpose programming language that is both type- and
memory-safe. Rust does not use a garbage collector, but rather achieves these
properties through a sophisticated, but complex, type system. Doing so makes
Rust very efficient, but makes Rust relatively hard to learn and use. We
designed Bronze, an optional, library-based garbage collector for Rust. To see
whether Bronze could make Rust more usable, we conducted a randomized
controlled trial with volunteers from a 633-person class, collecting data from
428 students in total. We found that for a task that required managing complex
aliasing, Bronze users were more likely to complete the task in the time
available, and those who did so required only about a third as much time (4
hours vs. 12 hours). We found no significant difference in total time, even
though Bronze users re-did the task without Bronze afterward. Surveys indicated
that ownership, borrowing, and lifetimes were primary causes of the challenges
that users faced when using Rust.
|
[
{
"version": "v1",
"created": "Sun, 3 Oct 2021 20:26:24 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 14:55:41 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Coblenz",
"Michael",
""
],
[
"Mazurek",
"Michelle",
""
],
[
"Hicks",
"Michael",
""
]
] |
new_dataset
| 0.996339 |
2110.11499
|
Ho-Hsiang Wu
|
Ho-Hsiang Wu, Prem Seetharaman, Kundan Kumar, Juan Pablo Bello
|
Wav2CLIP: Learning Robust Audio Representations From CLIP
|
Copyright 2022 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Wav2CLIP, a robust audio representation learning method by
distilling from Contrastive Language-Image Pre-training (CLIP). We
systematically evaluate Wav2CLIP on a variety of audio tasks including
classification, retrieval, and generation, and show that Wav2CLIP can
outperform several publicly available pre-trained audio representation
algorithms. Wav2CLIP projects audio into a shared embedding space with images
and text, which enables multimodal applications such as zero-shot
classification, and cross-modal retrieval. Furthermore, Wav2CLIP needs just
~10% of the data to achieve competitive performance on downstream tasks
compared with fully supervised models, and is more efficient to pre-train than
competing methods as it does not require learning a visual model in concert
with an auditory model. Finally, we demonstrate image generation from Wav2CLIP
as qualitative assessment of the shared embedding space. Our code and model
weights are open sourced and made available for further applications.
|
[
{
"version": "v1",
"created": "Thu, 21 Oct 2021 22:00:13 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 13:06:57 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Wu",
"Ho-Hsiang",
""
],
[
"Seetharaman",
"Prem",
""
],
[
"Kumar",
"Kundan",
""
],
[
"Bello",
"Juan Pablo",
""
]
] |
new_dataset
| 0.99887 |
2201.03115
|
Rohitash Chandra
|
Rohitash Chandra, Venkatesh Kulkarni
|
Semantic and sentiment analysis of selected Bhagavad Gita translations
using BERT-based language framework
| null |
IEEE Access, 2022
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
It is well known that translations of songs and poems not only break rhythm
and rhyming patterns, but can also result in loss of semantic information. The
Bhagavad Gita is an ancient Hindu philosophical text originally written in
Sanskrit that features a conversation between Lord Krishna and Arjuna prior to
the Mahabharata war. The Bhagavad Gita is also one of the key sacred texts in
Hinduism and is known as the forefront of the Vedic corpus of Hinduism. In the
last two centuries, there has been a lot of interest in Hindu philosophy from
western scholars; hence, the Bhagavad Gita has been translated in a number of
languages. However, there is not much work that validates the quality of the
English translations. Recent progress of language models powered by deep
learning has enabled not only translations but a better understanding of
language and texts with semantic and sentiment analysis. Our work is motivated
by the recent progress of language models powered by deep learning methods. In
this paper, we present a framework that compares selected translations (from
Sanskrit to English) of the Bhagavad Gita using semantic and sentiment
analyses. We use hand-labelled sentiment dataset for tuning state-of-art deep
learning-based language model known as bidirectional encoder representations
from transformers (BERT). We provide sentiment and semantic analysis for
selected chapters and verses across translations. Our results show that
although the style and vocabulary in the respective translations vary widely,
the sentiment analysis and semantic similarity show that the messages conveyed
are mostly similar.
|
[
{
"version": "v1",
"created": "Sun, 9 Jan 2022 23:59:11 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 10:22:32 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Chandra",
"Rohitash",
""
],
[
"Kulkarni",
"Venkatesh",
""
]
] |
new_dataset
| 0.99497 |
2201.12086
|
Junnan Li Dr
|
Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi
|
BLIP: Bootstrapping Language-Image Pre-training for Unified
Vision-Language Understanding and Generation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Vision-Language Pre-training (VLP) has advanced the performance for many
vision-language tasks. However, most existing pre-trained models only excel in
either understanding-based tasks or generation-based tasks. Furthermore,
performance improvement has been largely achieved by scaling up the dataset
with noisy image-text pairs collected from the web, which is a suboptimal
source of supervision. In this paper, we propose BLIP, a new VLP framework
which transfers flexibly to both vision-language understanding and generation
tasks. BLIP effectively utilizes the noisy web data by bootstrapping the
captions, where a captioner generates synthetic captions and a filter removes
the noisy ones. We achieve state-of-the-art results on a wide range of
vision-language tasks, such as image-text retrieval (+2.7% in average
recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score).
BLIP also demonstrates strong generalization ability when directly transferred
to video-language tasks in a zero-shot manner. Code, models, and datasets are
released at https://github.com/salesforce/BLIP.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 12:49:48 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 05:43:32 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Li",
"Junnan",
""
],
[
"Li",
"Dongxu",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Hoi",
"Steven",
""
]
] |
new_dataset
| 0.998659 |
2202.01624
|
Tianchi Liu
|
Tianchi Liu, Rohan Kumar Das, Kong Aik Lee, Haizhou Li
|
MFA: TDNN with Multi-scale Frequency-channel Attention for
Text-independent Speaker Verification with Short Utterances
|
Accepted by ICASSP 2022
| null | null | null |
cs.SD cs.CL eess.AS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The time delay neural network (TDNN) represents one of the state-of-the-art
neural solutions to text-independent speaker verification. However, it
requires a large number of filters to capture the speaker characteristics at any
local frequency region. In addition, the performance of such systems may
degrade under short utterance scenarios. To address these issues, we propose a
multi-scale frequency-channel attention (MFA), where we characterize speakers
at different scales through a novel dual-path design which consists of a
convolutional neural network and TDNN. We evaluate the proposed MFA on the
VoxCeleb database and observe that the proposed framework with MFA can achieve
state-of-the-art performance while reducing parameters and computation
complexity. Further, the MFA mechanism is found to be effective for speaker
verification with short test utterances.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 14:57:05 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 15:39:24 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Feb 2022 17:09:04 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Liu",
"Tianchi",
""
],
[
"Das",
"Rohan Kumar",
""
],
[
"Lee",
"Kong Aik",
""
],
[
"Li",
"Haizhou",
""
]
] |
new_dataset
| 0.998903 |
2202.04996
|
Siamak Mehrkanoon
|
Yimin Yang and Siamak Mehrkanoon
|
AA-TransUNet: Attention Augmented TransUNet For Nowcasting Tasks
|
8 pages, 8 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Data driven modeling based approaches have recently gained a lot of attention
in many challenging meteorological applications including weather element
forecasting. This paper introduces a novel data-driven predictive model based
on TransUNet for precipitation nowcasting task. The TransUNet model which
combines the Transformer and U-Net models has been previously successfully
applied in medical segmentation tasks. Here, TransUNet is used as a core model
and is further equipped with Convolutional Block Attention Modules (CBAM) and
Depthwise-separable Convolution (DSC). The proposed Attention Augmented
TransUNet (AA-TransUNet) model is evaluated on two distinct datasets: the Dutch
precipitation map dataset and the French cloud cover dataset. The obtained
results show that the proposed model outperforms other examined models on both
tested datasets. Furthermore, the uncertainty analysis of the proposed
AA-TransUNet is provided to give additional insights on its predictions.
|
[
{
"version": "v1",
"created": "Thu, 10 Feb 2022 12:48:50 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 08:40:31 GMT"
}
] | 2022-02-16T00:00:00 |
[
[
"Yang",
"Yimin",
""
],
[
"Mehrkanoon",
"Siamak",
""
]
] |
new_dataset
| 0.966409 |