id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
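Records in this shape are straightforward to filter programmatically. A minimal sketch, assuming each row is available as a JSON object keyed by the column names above (the `parse_records`/`high_confidence` helpers and the one-record-per-line layout are illustrative assumptions, not part of the dataset):

```python
import json

def parse_records(lines):
    """Parse one JSON record per line (columns as listed in the header above)."""
    return [json.loads(line) for line in lines if line.strip()]

def high_confidence(records, threshold=0.95):
    """Keep only rows whose classifier probability meets the threshold."""
    return [r for r in records if float(r["probability"]) >= threshold]

# Two minimal illustrative rows (only the fields used here; values are made up
# except the first id, which appears in the table below).
rows = [
    '{"id": "2205.05590", "prediction": "new_dataset", "probability": 0.981468}',
    '{"id": "0000.00000", "prediction": "new_dataset", "probability": 0.40}',
]
kept = high_confidence(parse_records(rows))
```

With the default threshold of 0.95, only the first row survives the filter.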
2205.05590
|
Kai Wei
|
Kai Wei, Dillon Knox, Martin Radfar, Thanh Tran, Markus Muller, Grant
P. Strimel, Nathan Susanj, Athanasios Mouchtaris, Maurizio Omologo
|
A neural prosody encoder for end-to-end dialogue act classification
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Dialogue act classification (DAC) is a critical task for spoken language
understanding in dialogue systems. Prosodic features such as energy and pitch
have been shown to be useful for DAC. Despite their importance, little research
has explored neural approaches to integrate prosodic features into end-to-end
(E2E) DAC models which infer dialogue acts directly from audio signals. In this
work, we propose an E2E neural architecture that takes into account the need
for characterizing prosodic phenomena co-occurring at different levels inside
an utterance. A novel part of this architecture is a learnable gating mechanism
that assesses the importance of prosodic features and selectively retains core
information necessary for E2E DAC. Our proposed model improves DAC accuracy by
1.07% absolute across three publicly available benchmark datasets.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 16:01:06 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Wei",
"Kai",
""
],
[
"Knox",
"Dillon",
""
],
[
"Radfar",
"Martin",
""
],
[
"Tran",
"Thanh",
""
],
[
"Muller",
"Markus",
""
],
[
"Strimel",
"Grant P.",
""
],
[
"Susanj",
"Nathan",
""
],
[
"Mouchtaris",
"Athanasios",
""
],
[
"Omologo",
"Maurizio",
""
]
] |
new_dataset
| 0.981468 |
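Each `authors_parsed` entry in these records is a `[last, first, suffix]` triple. A small illustrative helper (`display_name` is hypothetical, not part of the dataset) rebuilds display names from it:

```python
def display_name(entry):
    """Rebuild 'First Last, Suffix' from a [last, first, suffix] triple."""
    last, first, suffix = entry
    name = f"{first} {last}".strip()
    return f"{name}, {suffix}" if suffix else name

# The authors_parsed value of the record above, abbreviated to two entries.
authors_parsed = [["Wei", "Kai", ""], ["Knox", "Dillon", ""]]
names = [display_name(a) for a in authors_parsed]
# names == ["Kai Wei", "Dillon Knox"]
```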
2205.05594
|
Ivo Maffei
|
Ivo Maffei and A. W. Roscoe
|
Delay Encryption by Cubing
|
30 pages
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Delay Encryption (often called Timed-Release Encryption) is a scheme in which
a message is sent into the future by ensuring its confidentiality only for a
given amount of time. We propose a new scheme based on a novel time-lock
puzzle. This puzzle relies on the assumption that repeated squaring is an
inherently sequential process. We perform an extensive and practical analysis
of many classical and quantum attacks on our scheme and conclude that it is
secure given some precautions.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 16:03:50 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Maffei",
"Ivo",
""
],
[
"Roscoe",
"A. W.",
""
]
] |
new_dataset
| 0.964883 |
2205.05675
|
Yawei Li
|
Yawei Li and Kai Zhang and Radu Timofte and Luc Van Gool and Fangyuan
Kong and Mingxi Li and Songwei Liu and Zongcai Du and Ding Liu and Chenhui
Zhou and Jingyi Chen and Qingrui Han and Zheyuan Li and Yingqi Liu and
Xiangyu Chen and Haoming Cai and Yu Qiao and Chao Dong and Long Sun and
Jinshan Pan and Yi Zhu and Zhikai Zong and Xiaoxiao Liu and Zheng Hui and Tao
Yang and Peiran Ren and Xuansong Xie and Xian-Sheng Hua and Yanbo Wang and
Xiaozhong Ji and Chuming Lin and Donghao Luo and Ying Tai and Chengjie Wang
and Zhizhong Zhang and Yuan Xie and Shen Cheng and Ziwei Luo and Lei Yu and
Zhihong Wen and Qi Wu and Youwei Li and Haoqiang Fan and Jian Sun and
Shuaicheng Liu and Yuanfei Huang and Meiguang Jin and Hua Huang and Jing Liu
and Xinjian Zhang and Yan Wang and Lingshun Long and Gen Li and Yuanfan Zhang
and Zuowei Cao and Lei Sun and Panaetov Alexander and Yucong Wang and Minjie
Cai and Li Wang and Lu Tian and Zheyuan Wang and Hongbing Ma and Jie Liu and
Chao Chen and Yidong Cai and Jie Tang and Gangshan Wu and Weiran Wang and
Shirui Huang and Honglei Lu and Huan Liu and Keyan Wang and Jun Chen and Shi
Chen and Yuchun Miao and Zimo Huang and Lefei Zhang and Mustafa Ayazo\u{g}lu
and Wei Xiong and Chengyi Xiong and Fei Wang and Hao Li and Ruimian Wen and
Zhijing Yang and Wenbin Zou and Weixin Zheng and Tian Ye and Yuncheng Zhang
and Xiangzhen Kong and Aditya Arora and Syed Waqas Zamir and Salman Khan and
Munawar Hayat and Fahad Shahbaz Khan and Dandan Gao and Dengwen Zhou and Qian
Ning and Jingzhu Tang and Han Huang and Yufei Wang and Zhangheng Peng and
Haobo Li and Wenxue Guan and Shenghua Gong and Xin Li and Jun Liu and Wanjun
Wang and Dengwen Zhou and Kun Zeng and Hanjiang Lin and Xinyu Chen and
Jinsheng Fang
|
NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results
|
Validation code of the baseline model is available at
https://github.com/ofsoundof/IMDN. Validation of all submitted models is
available at https://github.com/ofsoundof/NTIRE2022_ESR
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper reviews the NTIRE 2022 challenge on efficient single image
super-resolution with a focus on the proposed solutions and results. The task of
the challenge was to super-resolve an input image with a magnification factor
of $\times$4 based on pairs of low and corresponding high resolution images.
The aim was to design a network for single image super-resolution that achieved
improvement of efficiency measured according to several metrics including
runtime, parameters, FLOPs, activations, and memory consumption while at least
maintaining a PSNR of 29.00 dB on the DIV2K validation set. IMDN is set as the
baseline for efficiency measurement. The challenge had 3 tracks including the
main track (runtime), sub-track one (model complexity), and sub-track two
(overall performance). In the main track, the practical runtime performance of
the submissions was evaluated. The ranking of the teams was determined directly
by the absolute value of the average runtime on the validation set and test
set. In sub-track one, the number of parameters and FLOPs were considered, and
the individual rankings of the two metrics were summed up to determine a final
ranking in this track. In sub-track two, all of the five metrics mentioned in
the description of the challenge including runtime, parameter count, FLOPs,
activations, and memory consumption were considered. Similar to sub-track one,
the rankings of the five metrics were summed up to determine a final ranking.
The challenge had 303 registered participants, and 43 teams made valid
submissions. These results gauge the state of the art in efficient single image
super-resolution.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 17:58:54 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Li",
"Yawei",
""
],
[
"Zhang",
"Kai",
""
],
[
"Timofte",
"Radu",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Kong",
"Fangyuan",
""
],
[
"Li",
"Mingxi",
""
],
[
"Liu",
"Songwei",
""
],
[
"Du",
"Zongcai",
""
],
[
"Liu",
"Ding",
""
],
[
"Zhou",
"Chenhui",
""
],
[
"Chen",
"Jingyi",
""
],
[
"Han",
"Qingrui",
""
],
[
"Li",
"Zheyuan",
""
],
[
"Liu",
"Yingqi",
""
],
[
"Chen",
"Xiangyu",
""
],
[
"Cai",
"Haoming",
""
],
[
"Qiao",
"Yu",
""
],
[
"Dong",
"Chao",
""
],
[
"Sun",
"Long",
""
],
[
"Pan",
"Jinshan",
""
],
[
"Zhu",
"Yi",
""
],
[
"Zong",
"Zhikai",
""
],
[
"Liu",
"Xiaoxiao",
""
],
[
"Hui",
"Zheng",
""
],
[
"Yang",
"Tao",
""
],
[
"Ren",
"Peiran",
""
],
[
"Xie",
"Xuansong",
""
],
[
"Hua",
"Xian-Sheng",
""
],
[
"Wang",
"Yanbo",
""
],
[
"Ji",
"Xiaozhong",
""
],
[
"Lin",
"Chuming",
""
],
[
"Luo",
"Donghao",
""
],
[
"Tai",
"Ying",
""
],
[
"Wang",
"Chengjie",
""
],
[
"Zhang",
"Zhizhong",
""
],
[
"Xie",
"Yuan",
""
],
[
"Cheng",
"Shen",
""
],
[
"Luo",
"Ziwei",
""
],
[
"Yu",
"Lei",
""
],
[
"Wen",
"Zhihong",
""
],
[
"Wu",
"Qi",
""
],
[
"Li",
"Youwei",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Sun",
"Jian",
""
],
[
"Liu",
"Shuaicheng",
""
],
[
"Huang",
"Yuanfei",
""
],
[
"Jin",
"Meiguang",
""
],
[
"Huang",
"Hua",
""
],
[
"Liu",
"Jing",
""
],
[
"Zhang",
"Xinjian",
""
],
[
"Wang",
"Yan",
""
],
[
"Long",
"Lingshun",
""
],
[
"Li",
"Gen",
""
],
[
"Zhang",
"Yuanfan",
""
],
[
"Cao",
"Zuowei",
""
],
[
"Sun",
"Lei",
""
],
[
"Alexander",
"Panaetov",
""
],
[
"Wang",
"Yucong",
""
],
[
"Cai",
"Minjie",
""
],
[
"Wang",
"Li",
""
],
[
"Tian",
"Lu",
""
],
[
"Wang",
"Zheyuan",
""
],
[
"Ma",
"Hongbing",
""
],
[
"Liu",
"Jie",
""
],
[
"Chen",
"Chao",
""
],
[
"Cai",
"Yidong",
""
],
[
"Tang",
"Jie",
""
],
[
"Wu",
"Gangshan",
""
],
[
"Wang",
"Weiran",
""
],
[
"Huang",
"Shirui",
""
],
[
"Lu",
"Honglei",
""
],
[
"Liu",
"Huan",
""
],
[
"Wang",
"Keyan",
""
],
[
"Chen",
"Jun",
""
],
[
"Chen",
"Shi",
""
],
[
"Miao",
"Yuchun",
""
],
[
"Huang",
"Zimo",
""
],
[
"Zhang",
"Lefei",
""
],
[
"Ayazoğlu",
"Mustafa",
""
],
[
"Xiong",
"Wei",
""
],
[
"Xiong",
"Chengyi",
""
],
[
"Wang",
"Fei",
""
],
[
"Li",
"Hao",
""
],
[
"Wen",
"Ruimian",
""
],
[
"Yang",
"Zhijing",
""
],
[
"Zou",
"Wenbin",
""
],
[
"Zheng",
"Weixin",
""
],
[
"Ye",
"Tian",
""
],
[
"Zhang",
"Yuncheng",
""
],
[
"Kong",
"Xiangzhen",
""
],
[
"Arora",
"Aditya",
""
],
[
"Zamir",
"Syed Waqas",
""
],
[
"Khan",
"Salman",
""
],
[
"Hayat",
"Munawar",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Gao",
"Dandan",
""
],
[
"Zhou",
"Dengwen",
""
],
[
"Ning",
"Qian",
""
],
[
"Tang",
"Jingzhu",
""
],
[
"Huang",
"Han",
""
],
[
"Wang",
"Yufei",
""
],
[
"Peng",
"Zhangheng",
""
],
[
"Li",
"Haobo",
""
],
[
"Guan",
"Wenxue",
""
],
[
"Gong",
"Shenghua",
""
],
[
"Li",
"Xin",
""
],
[
"Liu",
"Jun",
""
],
[
"Wang",
"Wanjun",
""
],
[
"Zhou",
"Dengwen",
""
],
[
"Zeng",
"Kun",
""
],
[
"Lin",
"Hanjiang",
""
],
[
"Chen",
"Xinyu",
""
],
[
"Fang",
"Jinsheng",
""
]
] |
new_dataset
| 0.984139 |
2205.05678
|
Chuang Gan
|
Pingchuan Ma, Tao Du, Joshua B. Tenenbaum, Wojciech Matusik, Chuang
Gan
|
RISP: Rendering-Invariant State Predictor with Differentiable Simulation
and Rendering for Cross-Domain Parameter Estimation
|
ICLR Oral. Project page: http://risp.csail.mit.edu
| null | null | null |
cs.CV cs.AI cs.GR cs.LG cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This work considers identifying parameters characterizing a physical system's
dynamic motion directly from a video whose rendering configurations are
inaccessible. Existing solutions require massive training data or lack
generalizability to unknown rendering configurations. We propose a novel
approach that marries domain randomization and differentiable rendering
gradients to address this problem. Our core idea is to train a
rendering-invariant state-prediction (RISP) network that transforms image
differences into state differences independent of rendering configurations,
e.g., lighting, shadows, or material reflectance. To train this predictor, we
formulate a new loss on rendering variances using gradients from differentiable
rendering. Moreover, we present an efficient, second-order method to compute
the gradients of this loss, allowing it to be integrated seamlessly into modern
deep learning frameworks. We evaluate our method in rigid-body and
deformable-body simulation environments using four tasks: state estimation,
system identification, imitation learning, and visuomotor control. We further
demonstrate the efficacy of our approach on a real-world example: inferring the
state and action sequences of a quadrotor from a video of its motion sequences.
Compared with existing methods, our approach achieves significantly lower
reconstruction errors and has better generalizability among unknown rendering
configurations.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 17:59:51 GMT"
}
] | 2022-05-12T00:00:00 |
[
[
"Ma",
"Pingchuan",
""
],
[
"Du",
"Tao",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Matusik",
"Wojciech",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.996693 |
1702.05420
|
Junya Yamauchi Mr.
|
J. Yamauchi, M.W.S. Atman, T. Hatanaka, N. Chopra and M. Fujita
|
Passivity-Based Control of Human-Robotic Networks with Inter-Robot
Communication Delays and Experimental Verification
| null | null |
10.1109/AIM.2017.8014087
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present experimental studies on a cooperative control
system for human-robotic networks with inter-robot communication delays. We
first design a cooperative controller to be implemented on each robot so that
their motions are synchronized to a reference motion desired by a human
operator, and then point out that each robot motion ensures passivity.
Inter-robot communication channels are then designed via the so-called
scattering transformation, a technique to passify the delayed channel. The
resulting robotic network is then connected with the human operator based on
passivity theory. In order to demonstrate the present control architecture, we
build an experimental testbed consisting of multiple robots and a tablet. In
particular, we analyze the effects of the communication delays on the human
operator's behavior.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2017 05:04:49 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2017 12:04:07 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Yamauchi",
"J.",
""
],
[
"Atman",
"M. W. S.",
""
],
[
"Hatanaka",
"T.",
""
],
[
"Chopra",
"N.",
""
],
[
"Fujita",
"M.",
""
]
] |
new_dataset
| 0.995257 |
2101.00204
|
Rifat Shahriyar
|
Abhik Bhattacharjee, Tahmid Hasan, Wasi Uddin Ahmad, Kazi Samin, Md
Saiful Islam, Anindya Iqbal, M. Sohel Rahman, Rifat Shahriyar
|
BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource
Language Understanding Evaluation in Bangla
|
Findings of North American Chapter of the Association for
Computational Linguistics, NAACL 2022 (camera-ready)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce BanglaBERT, a BERT-based Natural Language
Understanding (NLU) model pretrained in Bangla, a widely spoken yet
low-resource language in the NLP literature. To pretrain BanglaBERT, we collect
27.5 GB of Bangla pretraining data (dubbed `Bangla2B+') by crawling 110 popular
Bangla sites. We introduce two downstream task datasets on natural language
inference and question answering and benchmark on four diverse NLU tasks
covering text classification, sequence labeling, and span prediction. In the
process, we bring them under the first-ever Bangla Language Understanding
Benchmark (BLUB). BanglaBERT achieves state-of-the-art results outperforming
multilingual and monolingual models. We are making the models, datasets, and a
leaderboard publicly available at https://github.com/csebuetnlp/banglabert to
advance Bangla NLP.
|
[
{
"version": "v1",
"created": "Fri, 1 Jan 2021 09:28:45 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Aug 2021 15:23:27 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Apr 2022 08:11:55 GMT"
},
{
"version": "v4",
"created": "Tue, 10 May 2022 05:30:12 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Bhattacharjee",
"Abhik",
""
],
[
"Hasan",
"Tahmid",
""
],
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Samin",
"Kazi",
""
],
[
"Islam",
"Md Saiful",
""
],
[
"Iqbal",
"Anindya",
""
],
[
"Rahman",
"M. Sohel",
""
],
[
"Shahriyar",
"Rifat",
""
]
] |
new_dataset
| 0.995384 |
2102.04009
|
Liang Ding
|
Di Wu, Liang Ding, Shuo Yang, Mingyang Li
|
MirrorAlign: A Super Lightweight Unsupervised Word Alignment Model via
Cross-Lingual Contrastive Learning
|
IWSLT 2022 (oral)
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Word alignment is essential for the downstream cross-lingual language
understanding and generation tasks. Recently, the performance of the neural
word alignment models has exceeded that of statistical models. However, they
heavily rely on sophisticated translation models. In this study, we propose a
super lightweight unsupervised word alignment model named MirrorAlign, in which
bidirectional symmetric attention trained with a contrastive learning objective
is introduced, and an agreement loss is employed to bind the attention maps,
such that the alignments follow a mirror-like symmetry hypothesis. Experimental
results on several public benchmarks demonstrate that our model achieves
competitive, if not better, performance compared to the state of the art in
word alignment while significantly reducing the training and decoding time on
average. Further ablation analysis and case studies show the superiority of our
proposed MirrorAlign. Notably, we regard our model as a pioneering attempt to
unify bilingual word embedding and word alignments. Encouragingly, our approach
achieves {16.4X speedup} against GIZA++, and {50X parameter compression}
compared with the Transformer-based alignment methods. We release our code to
facilitate the community: https://github.com/moore3930/MirrorAlign.
|
[
{
"version": "v1",
"created": "Mon, 8 Feb 2021 05:54:11 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Mar 2021 17:35:07 GMT"
},
{
"version": "v3",
"created": "Tue, 10 May 2022 13:39:38 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Wu",
"Di",
""
],
[
"Ding",
"Liang",
""
],
[
"Yang",
"Shuo",
""
],
[
"Li",
"Mingyang",
""
]
] |
new_dataset
| 0.997001 |
2103.08908
|
Jingyu Feng
|
Wenbo Zhang, Jing Zhang, Yifei Shi and Jingyu Feng
|
Blockchain-assisted Undisclosed IIoT Vulnerabilities Trusted Sharing
Protection with Dynamic Token
|
10 pages, 12 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the large-scale deployment of industrial internet of things (IIoT)
devices, the number of vulnerabilities that threaten IIoT security is also
growing dramatically, including a mass of undisclosed IIoT vulnerabilities that
lack mitigation measures. Coordination Vulnerabilities Disclosure (CVD) is one
of the most popular vulnerabilities sharing solutions, in which some security
workers (SWs) can develop undisclosed vulnerabilities patches together.
However, CVD assumes that sharing participants (SWs) are all honest, thus
offering chances for dishonest SWs to leak undisclosed IIoT vulnerabilities. To
combat such threats, we propose an Undisclosed IIoT Vulnerabilities Trusted
Sharing Protection (UIV-TSP) scheme with dynamic token. In this article, a
dynamic token is an implicit access credential for an SW to acquire
undisclosed vulnerability information; it is held only by the system and
constantly updated as the SW accesses it. Meanwhile, the latest updated token
can be
stealthily sneaked into the acquired information as the traceability token.
Once the undisclosed vulnerability information leaves the SW host, the embedded
self-destruct program will be automatically triggered to prevent leaks since
the destination MAC address in the traceability token has changed. To quickly
distinguish dishonest SWs, a trust mechanism is adopted to evaluate the trust
value of SWs. Moreover, we design a blockchain-assisted continuous logs storage
method to achieve the tamper-proofing of dynamic token and the transparency of
undisclosed IIoT vulnerabilities sharing. The simulation results indicate that
our proposed scheme effectively suppresses dishonest SWs and protects
undisclosed IIoT vulnerabilities.
|
[
{
"version": "v1",
"created": "Tue, 16 Mar 2021 08:30:33 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Oct 2021 03:40:49 GMT"
},
{
"version": "v3",
"created": "Tue, 10 May 2022 00:44:21 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Zhang",
"Wenbo",
""
],
[
"Zhang",
"Jing",
""
],
[
"Shi",
"Yifei",
""
],
[
"Feng",
"Jingyu",
""
]
] |
new_dataset
| 0.995195 |
2104.00969
|
Jiaojiao Zhao
|
Jiaojiao Zhao, Yanyi Zhang, Xinyu Li, Hao Chen, Shuai Bing, Mingze Xu,
Chunhui Liu, Kaustav Kundu, Yuanjun Xiong, Davide Modolo, Ivan Marsic, Cees
G.M. Snoek, Joseph Tighe
|
TubeR: Tubelet Transformer for Video Action Detection
|
Accepted at CVPR 2022 (Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose TubeR: a simple solution for spatio-temporal video action
detection. Different from existing methods that depend on either an off-line
actor detector or hand-designed actor-positional hypotheses like proposals or
anchors, we propose to directly detect an action tubelet in a video by
simultaneously performing action localization and recognition from a single
representation. TubeR learns a set of tubelet-queries and utilizes a
tubelet-attention module to model the dynamic spatio-temporal nature of a video
clip, which effectively reinforces the model capacity compared to using
actor-positional hypotheses in the spatio-temporal space. For videos containing
transitional states or scene changes, we propose a context aware classification
head to utilize short-term and long-term context to strengthen action
classification, and an action switch regression head for detecting the precise
temporal action extent. TubeR directly produces action tubelets with variable
lengths and even maintains good results for long video clips. TubeR outperforms
the previous state-of-the-art on commonly used action detection datasets AVA,
UCF101-24 and JHMDB51-21.
|
[
{
"version": "v1",
"created": "Fri, 2 Apr 2021 10:21:22 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Apr 2021 12:22:14 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Dec 2021 09:19:47 GMT"
},
{
"version": "v4",
"created": "Fri, 15 Apr 2022 12:42:21 GMT"
},
{
"version": "v5",
"created": "Tue, 10 May 2022 07:39:03 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Zhao",
"Jiaojiao",
""
],
[
"Zhang",
"Yanyi",
""
],
[
"Li",
"Xinyu",
""
],
[
"Chen",
"Hao",
""
],
[
"Bing",
"Shuai",
""
],
[
"Xu",
"Mingze",
""
],
[
"Liu",
"Chunhui",
""
],
[
"Kundu",
"Kaustav",
""
],
[
"Xiong",
"Yuanjun",
""
],
[
"Modolo",
"Davide",
""
],
[
"Marsic",
"Ivan",
""
],
[
"Snoek",
"Cees G. M.",
""
],
[
"Tighe",
"Joseph",
""
]
] |
new_dataset
| 0.999521 |
2104.05503
|
Shyam Sundar Kannan
|
Shyam Sundar Kannan and Byung-Cheol Min
|
Autonomous Drone Delivery to Your Door and Yard
|
Accepted for publication in International Conference on Unmanned
Aircraft Systems (ICUAS) 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present a system that enables delivery drones to
autonomously navigate and deliver packages at various locations around a house
according to the desire of the recipient and without the need for any external
markers as currently used. This development is motivated by recent advancements
in deep learning that can potentially supplant the specialized markers
presently used by delivery drones for identifying sites at which to deliver
packages. The proposed system is more natural in that it takes instruction on
where to deliver the package as input, similar to the instructions provided to
human couriers. First, we propose a semantic image segmentation-based
descending location estimator that enables the drone to find a safe spot around
the house at which it can descend from higher altitudes. Following this, we
propose a strategy for visually routing the drone from the descent location to
a specific site at which it is to deliver the package, such as the front door.
We extensively evaluate this approach in a simulated environment and
demonstrate that with our system, a delivery drone can deliver a package to the
front door and also to other specified locations around a house. Relative to a
frontier exploration-based strategy, drones using the proposed system found and
reached the front doors of the 20 test houses 161% faster.
|
[
{
"version": "v1",
"created": "Mon, 12 Apr 2021 14:32:36 GMT"
},
{
"version": "v2",
"created": "Tue, 10 May 2022 12:12:46 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Kannan",
"Shyam Sundar",
""
],
[
"Min",
"Byung-Cheol",
""
]
] |
new_dataset
| 0.999419 |
2107.13592
|
Leon Derczynski
|
Erida Nurce, Jorgel Keci, Leon Derczynski
|
Detecting Abusive Albanian
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The ever-growing usage of social media in recent years has had a direct
impact on the increased presence of hate speech and offensive speech on online
platforms. Research on effective detection of such content has mainly focused
on English and a few other widespread languages, while most other languages
have not received the same attention and thus cannot benefit from the
steady advancements made in the field. In this paper we present \textsc{Shaj},
an annotated Albanian dataset for hate speech and offensive speech that has
been constructed from user-generated content on various social media platforms.
Its annotation follows the hierarchical schema introduced in OffensEval. The
dataset is tested using three different classification models, the best of
which achieves an F1 score of 0.77 for the identification of offensive
language, 0.64 F1 score for the automatic categorization of offensive types and
lastly, 0.52 F1 score for the offensive language target identification.
|
[
{
"version": "v1",
"created": "Wed, 28 Jul 2021 18:47:32 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jul 2021 19:50:13 GMT"
},
{
"version": "v3",
"created": "Tue, 10 May 2022 11:37:44 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Nurce",
"Erida",
""
],
[
"Keci",
"Jorgel",
""
],
[
"Derczynski",
"Leon",
""
]
] |
new_dataset
| 0.999412 |
2107.14352
|
Bradley Hauer
|
Bradley Hauer, Grzegorz Kondrak
|
WiC = TSV = WSD: On the Equivalence of Three Semantic Tasks
|
To be published in the proceedings of NAACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Word-in-Context (WiC) task has attracted considerable attention in the
NLP community, as demonstrated by the popularity of the recent MCL-WiC SemEval
shared task. Systems and lexical resources from word sense disambiguation (WSD)
are often used for the WiC task and WiC dataset construction. In this paper, we
establish the exact relationship between WiC and WSD, as well as the related
task of target sense verification (TSV). Building upon a novel hypothesis on
the equivalence of sense and meaning distinctions, we demonstrate through the
application of tools from theoretical computer science that these three
semantic classification problems can be pairwise reduced to each other, and
therefore are equivalent. The results of experiments that involve systems and
datasets for both WiC and WSD provide strong empirical evidence that our
problem reductions work in practice.
|
[
{
"version": "v1",
"created": "Thu, 29 Jul 2021 22:16:32 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Dec 2021 00:47:02 GMT"
},
{
"version": "v3",
"created": "Mon, 9 May 2022 19:35:01 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Hauer",
"Bradley",
""
],
[
"Kondrak",
"Grzegorz",
""
]
] |
new_dataset
| 0.996884 |
2109.03571
|
Shardul Suryawanshi
|
Shardul Suryawanshi, Bharathi Raja Chakravarthi, Mihael Arcan, Suzanne
Little, Paul Buitelaar
|
TrollsWithOpinion: A Dataset for Predicting Domain-specific Opinion
Manipulation in Troll Memes
| null | null | null | null |
cs.SI cs.CL cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Research into the classification of Image with Text (IWT) troll memes has
recently become popular. Since the online community utilizes the refuge of
memes to express themselves, there is an abundance of data in the form of
memes. These memes have the potential to demean, harass, or bully targeted
individuals. Moreover, the targeted individual could fall prey to opinion
manipulation. To comprehend the use of memes in opinion manipulation, we define
three specific domains (product, political or others) which we classify into
troll or not-troll, with or without opinion manipulation. To enable this
analysis, we enhanced an existing dataset by annotating the data with our
defined classes, resulting in a dataset of 8,881 IWT or multimodal memes in the
English language (TrollsWithOpinion dataset). We perform baseline experiments
on the annotated dataset, and our result shows that existing state-of-the-art
techniques could only reach a weighted-average F1-score of 0.37. This shows the
need to develop a specific technique to deal with multimodal troll
memes.
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 12:12:13 GMT"
},
{
"version": "v2",
"created": "Tue, 10 May 2022 15:47:20 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Suryawanshi",
"Shardul",
""
],
[
"Chakravarthi",
"Bharathi Raja",
""
],
[
"Arcan",
"Mihael",
""
],
[
"Little",
"Suzanne",
""
],
[
"Buitelaar",
"Paul",
""
]
] |
new_dataset
| 0.99974 |
2110.02258
|
Ava Chen
|
Ava Chen, Lauren Winterbottom, Sangwoo Park, Jingxi Xu, Dawn Nilsen,
Joel Stein, Matei Ciocarlie
|
Thumb Stabilization and Assistance in a Robotic Hand Orthosis for
Post-Stroke Hemiparesis
|
7 pages, 6 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a dual-cable method of stabilizing the thumb in the context of a
hand orthosis designed for individuals with upper extremity hemiparesis after
stroke. This cable network adds opposition/reposition capabilities to the
thumb, and increases the likelihood of forming a hand pose that can
successfully manipulate objects. In addition to a passive-thumb version (where
both cables are of fixed length), our approach also allows for a
single-actuator active-thumb version (where the extension cable is actuated
while the abductor remains passive), which allows a range of motion intended to
facilitate creating and maintaining grasps. We performed experiments with five
chronic stroke survivors consisting of unimanual resistive-pull tasks and
bimanual twisting tasks with simulated real-world objects; these explored the
effects of thumb assistance on grasp stability and functional range of motion.
Our results show that both active- and passive-thumb versions achieved similar
performance in terms of improving grasp force generation over a no-device
baseline, but active thumb stabilization enabled users to maintain grasps for
longer durations.
|
[
{
"version": "v1",
"created": "Tue, 5 Oct 2021 18:08:27 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 18:33:19 GMT"
},
{
"version": "v3",
"created": "Tue, 10 May 2022 16:59:49 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Chen",
"Ava",
""
],
[
"Winterbottom",
"Lauren",
""
],
[
"Park",
"Sangwoo",
""
],
[
"Xu",
"Jingxi",
""
],
[
"Nilsen",
"Dawn",
""
],
[
"Stein",
"Joel",
""
],
[
"Ciocarlie",
"Matei",
""
]
] |
new_dataset
| 0.988142 |
2110.14994
|
Liang Xu
|
Liang Xu, Cuiling Lan, Wenjun Zeng, Cewu Lu
|
Skeleton-Based Mutually Assisted Interacted Object Localization and
Human Action Recognition
|
Accepted to the IEEE Transactions on Multimedia 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Skeleton data carries valuable motion information and is widely explored in
human action recognition. However, not only the motion information but also the
interaction with the environment provides discriminative cues to recognize the
action of persons. In this paper, we propose a joint learning framework for
mutually assisted "interacted object localization" and "human action
recognition" based on skeleton data. The two tasks are serialized together and
collaborate to promote each other, where preliminary action type derived from
skeleton alone helps improve interacted object localization, which in turn
provides valuable cues for the final human action recognition. Besides, we
explore the temporal consistency of the interacted object as a constraint to
better localize the interacted object in the absence of ground-truth labels.
Extensive experiments on the datasets of SYSU-3D, NTU60 RGB+D,
Northwestern-UCLA and UAV-Human show that our method achieves the best or
competitive performance with the state-of-the-art methods for human action
recognition. Visualization results show that our method can also provide
reasonable interacted object localization results.
|
[
{
"version": "v1",
"created": "Thu, 28 Oct 2021 10:09:34 GMT"
},
{
"version": "v2",
"created": "Tue, 10 May 2022 07:34:14 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Xu",
"Liang",
""
],
[
"Lan",
"Cuiling",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.995782 |
2111.07997
|
Maarten Sap
|
Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi,
Noah A. Smith
|
Annotators with Attitudes: How Annotator Beliefs And Identities Bias
Toxic Language Detection
|
NAACL 2022 Camera Ready
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The perceived toxicity of language can vary based on someone's identity and
beliefs, but this variation is often ignored when collecting toxic language
datasets, resulting in dataset and model biases. We seek to understand the who,
why, and what behind biases in toxicity annotations. In two online studies with
demographically and politically diverse participants, we investigate the effect
of annotator identities (who) and beliefs (why), drawing from social psychology
research about hate speech, free speech, racist beliefs, political leaning, and
more. We disentangle what is annotated as toxic by considering posts with three
characteristics: anti-Black language, African American English (AAE) dialect,
and vulgarity. Our results show strong associations between annotator identity
and beliefs and their ratings of toxicity. Notably, more conservative
annotators and those who scored highly on our scale for racist beliefs were
less likely to rate anti-Black language as toxic, but more likely to rate AAE
as toxic. We additionally present a case study illustrating how a popular
toxicity detection system's ratings inherently reflect only specific beliefs
and perspectives. Our findings call for contextualizing toxicity labels in
social variables, which raises immense implications for toxic language
annotation and detection.
|
[
{
"version": "v1",
"created": "Mon, 15 Nov 2021 18:58:20 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2022 23:58:07 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Sap",
"Maarten",
""
],
[
"Swayamdipta",
"Swabha",
""
],
[
"Vianna",
"Laura",
""
],
[
"Zhou",
"Xuhui",
""
],
[
"Choi",
"Yejin",
""
],
[
"Smith",
"Noah A.",
""
]
] |
new_dataset
| 0.987036 |
2112.06447
|
Yicheng Qian
|
Yicheng Qian, Weixin Luo, Dongze Lian, Xu Tang, Peilin Zhao, Shenghua
Gao
|
SVIP: Sequence VerIfication for Procedures in Videos
|
Accepted by CVPR2022. For the included dataset, see
https://svip-lab.github.io/dataset/CSV_dataset.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel sequence verification task that aims to
distinguish positive video pairs performing the same action sequence from
negative ones with step-level transformations but still conducting the same
task. Such a challenging task resides in an open-set setting without prior
action detection or segmentation that requires event-level or even frame-level
annotations. To that end, we carefully reorganize two publicly available
action-related datasets with step-procedure-task structure. To fully
investigate the effectiveness of any method, we collect a scripted video
dataset enumerating all kinds of step-level transformations in chemical
experiments. Besides, a novel evaluation metric Weighted Distance Ratio is
introduced to ensure equivalence for different step-level transformations
during evaluation. In the end, a simple but effective baseline based on the
transformer encoder with a novel sequence alignment loss is introduced to
better characterize long-term dependency between steps, which outperforms other
action recognition methods. Codes and data will be released.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 07:03:36 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Dec 2021 06:29:12 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Apr 2022 13:56:10 GMT"
},
{
"version": "v4",
"created": "Tue, 10 May 2022 13:40:49 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Qian",
"Yicheng",
""
],
[
"Luo",
"Weixin",
""
],
[
"Lian",
"Dongze",
""
],
[
"Tang",
"Xu",
""
],
[
"Zhao",
"Peilin",
""
],
[
"Gao",
"Shenghua",
""
]
] |
new_dataset
| 0.993743 |
2112.08001
|
Bruno Sauvalle
|
Bruno Sauvalle and Arnaud de La Fortelle
|
Autoencoder-based background reconstruction and foreground segmentation
with background noise estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Even after decades of research, dynamic scene background reconstruction and
foreground object segmentation are still considered open problems due to
various challenges such as illumination changes, camera movements, or
background noise caused by air turbulence or moving trees. We propose in this
paper to model the background of a frame sequence as a low dimensional manifold
using an autoencoder and compare the reconstructed background provided by this
autoencoder with the original image to compute the foreground/background
segmentation masks. The main novelty of the proposed model is that the
autoencoder is also trained to predict the background noise, which makes it
possible to compute for each frame a pixel-dependent threshold to perform the foreground
segmentation. Although the proposed model does not use any temporal or motion
information, it exceeds the state of the art for unsupervised background
subtraction on the CDnet 2014 and LASIESTA datasets, with a significant
improvement on videos where the camera is moving. It is also able to perform
background reconstruction on some non-video image datasets.
|
[
{
"version": "v1",
"created": "Wed, 15 Dec 2021 09:51:00 GMT"
},
{
"version": "v2",
"created": "Tue, 10 May 2022 15:52:53 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Sauvalle",
"Bruno",
""
],
[
"de La Fortelle",
"Arnaud",
""
]
] |
new_dataset
| 0.999029 |
2205.03663
|
Shuming Jiao
|
Shuming Jiao, Jiaxiang Li, Wei Huang, Zibang Zhang
|
Playing Tic-Tac-Toe Games with Intelligent Single-pixel Imaging
| null | null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Single-pixel imaging (SPI) is a novel optical imaging technique that replaces
a two-dimensional pixelated sensor with a single-pixel detector and pattern
illuminations. SPI has been extensively used for various tasks related to
image acquisition and processing. In this work, a novel non-image-based task of
playing Tic-Tac-Toe games interactively is merged into the framework of SPI. An
optoelectronic artificial intelligence (AI) player with minimal digital
computation can detect the game states, generate optimal moves and display
output results mainly by pattern illumination and single-pixel detection.
Simulated and experimental results demonstrate the feasibility of the proposed
scheme and its unbeatable performance against human players.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 14:45:54 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Jiao",
"Shuming",
""
],
[
"Li",
"Jiaxiang",
""
],
[
"Huang",
"Wei",
""
],
[
"Zhang",
"Zibang",
""
]
] |
new_dataset
| 0.999339 |
2205.04502
|
Zeyu Ma
|
Zeyu Ma, Zachary Teed, Jia Deng
|
Multiview Stereo with Cascaded Epipolar RAFT
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address multiview stereo (MVS), an important 3D vision task that
reconstructs a 3D model such as a dense point cloud from multiple calibrated
images. We propose CER-MVS (Cascaded Epipolar RAFT Multiview Stereo), a new
approach based on the RAFT (Recurrent All-Pairs Field Transforms) architecture
developed for optical flow. CER-MVS introduces five new changes to RAFT:
epipolar cost volumes, cost volume cascading, multiview fusion of cost volumes,
dynamic supervision, and multiresolution fusion of depth maps. CER-MVS is
significantly different from prior work in multiview stereo. Unlike prior work,
which operates by updating a 3D cost volume, CER-MVS operates by updating a
disparity field. Furthermore, we propose an adaptive thresholding method to
balance the completeness and accuracy of the reconstructed point clouds.
Experiments show that our approach achieves competitive performance on DTU (the
second best among known results) and state-of-the-art performance on the
Tanks-and-Temples benchmark (both the intermediate and advanced set). Code is
available at https://github.com/princeton-vl/CER-MVS
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 18:17:05 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Ma",
"Zeyu",
""
],
[
"Teed",
"Zachary",
""
],
[
"Deng",
"Jia",
""
]
] |
new_dataset
| 0.998294 |
2205.04538
|
Ahmet-Serdar Karakaya
|
Ahmet-Serdar Karakaya, Konstantin K\"ohler, Julian Heinovski, Falko
Dressler, David Bermbach
|
A Realistic Cyclist Model for SUMO Based on the SimRa Dataset
|
Accepted for the 20th Mediterranean Communication and Computer
Networking Conference (MedComNet 2022)
| null | null | null |
cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Increasing the modal share of bicycle traffic to reduce carbon emissions,
reduce urban car traffic, and to improve the health of citizens, requires a
shift away from car-centric city planning. For this, traffic planners often
rely on simulation tools such as SUMO which allow them to study the effects of
construction changes before implementing them. Similarly, studies of vulnerable
road users, here cyclists, also use such models to assess the performance of
communication-based road traffic safety systems. The cyclist model in SUMO,
however, is very imprecise as SUMO cyclists behave either like slow cars or
fast pedestrians, thus, casting doubt on simulation results for bicycle
traffic. In this paper, we analyze acceleration, velocity, and intersection
left-turn behavior of cyclists in a large dataset of real-world cycle tracks.
We use the results to derive an improved cyclist model and implement it in
SUMO.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 19:32:08 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Karakaya",
"Ahmet-Serdar",
""
],
[
"Köhler",
"Konstantin",
""
],
[
"Heinovski",
"Julian",
""
],
[
"Dressler",
"Falko",
""
],
[
"Bermbach",
"David",
""
]
] |
new_dataset
| 0.999754 |
2205.04565
|
HyunJun Jung
|
HyunJun Jung, Patrick Ruhkamp, Guangyao Zhai, Nikolas Brasch, Yitong
Li, Yannick Verdie, Jifei Song, Yiren Zhou, Anil Armagan, Slobodan Ilic, Ales
Leonardis, Benjamin Busam
|
Is my Depth Ground-Truth Good Enough? HAMMER -- Highly Accurate
Multi-Modal Dataset for DEnse 3D Scene Regression
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depth estimation is a core task in 3D computer vision. Recent methods
investigate the task of monocular depth trained with various depth sensor
modalities. Every sensor has its advantages and drawbacks caused by the nature
of estimates. In the literature, mostly mean average error of the depth is
investigated and sensor capabilities are typically not discussed. Indoor
environments in particular, however, pose challenges for some devices. Textureless
regions pose challenges for structure from motion, reflective materials are
problematic for active sensing, and distances for translucent material are
intricate to measure with existing sensors. This paper proposes HAMMER, a
dataset comprising depth estimates from multiple commonly used sensors for
indoor depth estimation, namely ToF, stereo, structured light together with
monocular RGB+P data. We construct highly reliable ground truth depth maps with
the help of 3D scanners and aligned renderings. A popular depth estimator is
trained on this data and on typical depth sensors. The estimates are extensively
analyzed on different scene structures. We notice generalization issues arising
from various sensor technologies in household environments with challenging but
everyday scene content. HAMMER, which we make publicly available, provides a
reliable base to pave the way to targeted depth improvements and sensor fusion
approaches.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 21:25:09 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Jung",
"HyunJun",
""
],
[
"Ruhkamp",
"Patrick",
""
],
[
"Zhai",
"Guangyao",
""
],
[
"Brasch",
"Nikolas",
""
],
[
"Li",
"Yitong",
""
],
[
"Verdie",
"Yannick",
""
],
[
"Song",
"Jifei",
""
],
[
"Zhou",
"Yiren",
""
],
[
"Armagan",
"Anil",
""
],
[
"Ilic",
"Slobodan",
""
],
[
"Leonardis",
"Ales",
""
],
[
"Busam",
"Benjamin",
""
]
] |
new_dataset
| 0.966912 |
2205.04567
|
Michael Dikshtein
|
Michael Dikshtein, Nir Weinberger, and Shlomo Shamai (Shitz)
|
The Compound Information Bottleneck Outlook
|
This work has been submitted to the IEEE for possible publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We formulate and analyze the compound information bottleneck programming problem. In
this problem, a Markov chain $ \mathsf{X} \rightarrow \mathsf{Y} \rightarrow
\mathsf{Z} $ is assumed with fixed marginal distributions
$\mathsf{P}_{\mathsf{X}}$ and $\mathsf{P}_{\mathsf{Y}}$, and the mutual
information between $ \mathsf{X} $ and $ \mathsf{Z} $ is sought to be maximized
over the choice of conditional probability of $\mathsf{Z}$ given $\mathsf{Y}$
from a given class, under the \textit{worst choice} of the joint probability of
the pair $(\mathsf{X},\mathsf{Y})$ from a different class. We consider several
classes based on extremes of: mutual information; minimal correlation; total
variation; and the relative entropy class. We provide values, bounds, and
various characterizations for specific instances of this problem: the binary
symmetric case, the scalar Gaussian case, the vector Gaussian case and the
symmetric modulo-additive case. Finally, for the general case, we propose a
Blahut-Arimoto type of alternating iterations algorithm to find a consistent
solution to this problem.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 21:27:45 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Dikshtein",
"Michael",
"",
"Shitz"
],
[
"Weinberger",
"Nir",
"",
"Shitz"
],
[
"Shamai",
"Shlomo",
"",
"Shitz"
]
] |
new_dataset
| 0.9861 |
2205.04575
|
Yicheng Gao
|
Yicheng Gao and Giuliano Casale
|
JCSP: Joint Caching and Service Placement for Edge Computing Systems
| null | null | null | null |
cs.PF cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
With constrained resources, what, where, and how to cache at the edge is one
of the key challenges for edge computing systems. The cached items include not
only the application data contents but also the local caching of edge services
that handle incoming requests. However, current systems separate the contents
and services without considering the latency interplay of caching and queueing.
Therefore, in this paper, we propose a novel class of stochastic models that
enable the optimization of content caching and service placement decisions
jointly. We first explain how to apply layered queueing networks (LQNs) models
for edge service placement and show that combining this with genetic algorithms
provides higher accuracy in resource allocation than an established baseline.
Next, we extend LQNs with caching components to establish a joint modeling
method for content caching and service placement (JCSP) and present analytical
methods to analyze the resulting model. Finally, we simulate real-world Azure
traces to evaluate the JCSP method and find that JCSP achieves up to 35%
improvement in response time and a 500MB reduction in memory usage compared to
baseline heuristics for edge caching resource allocation.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 21:47:08 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Gao",
"Yicheng",
""
],
[
"Casale",
"Giuliano",
""
]
] |
new_dataset
| 0.993144 |
2205.04612
|
Serena Mou
|
Serena Mou, Dorian Tsai and Matthew Dunbabin
|
Reconfigurable Robots for Scaling Reef Restoration
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coral reefs are under increasing threat from the impacts of climate change.
Whilst current restoration approaches are effective, they require significant
human involvement and equipment, and have limited deployment scale. Harvesting
wild coral spawn from mass spawning events, rearing them to the larval stage
and releasing the larvae onto degraded reefs is an emerging solution for reef
restoration known as coral reseeding. This paper presents a reconfigurable
autonomous surface vehicle system that can eliminate risky diving, cover
greater areas with coral larvae, has a sensory suite for additional data
measurement, and requires minimal non-technical expert training. A key feature
is an on-board real-time benthic substrate classification model that predicts
when to release larvae to increase settlement rate and ultimately,
survivability. The presented robot design is reconfigurable, lightweight,
scalable, and easy to transport. Results from restoration deployments at Lizard
Island demonstrate improved coral larvae release onto appropriate coral
substrate, while also achieving 21.8 times more area coverage compared to
manual methods.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 01:15:01 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Mou",
"Serena",
""
],
[
"Tsai",
"Dorian",
""
],
[
"Dunbabin",
"Matthew",
""
]
] |
new_dataset
| 0.999702 |
2205.04621
|
Alex Dytso
|
Martina Cardone and Alex Dytso and Cynthia Rush
|
Entropic CLT for Order Statistics
|
Accepted to the 2022 IEEE International Symposium on Information
Theory (ISIT)
| null | null | null |
cs.IT math.IT math.ST stat.ML stat.TH
|
http://creativecommons.org/licenses/by/4.0/
|
It is well known that central order statistics exhibit a central limit
behavior and converge to a Gaussian distribution as the sample size grows. This
paper strengthens this known result by establishing an entropic version of the
CLT that ensures a stronger mode of convergence using the relative entropy. In
particular, an order $O(1/\sqrt{n})$ rate of convergence is established under
mild conditions on the parent distribution of the sample generating the order
statistics. To prove this result, ancillary results on order statistics are
derived, which might be of independent interest.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 01:37:55 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Cardone",
"Martina",
""
],
[
"Dytso",
"Alex",
""
],
[
"Rush",
"Cynthia",
""
]
] |
new_dataset
| 0.991175 |
2205.04651
|
Radityo Eko Prasojo
|
Alham Fikri Aji, Tirana Noor Fatyanosa, Radityo Eko Prasojo, Philip
Arthur, Suci Fitriany, Salma Qonitah, Nadhifa Zulfa, Tomi Santoso, Mahendra
Data
|
ParaCotta: Synthetic Multilingual Paraphrase Corpora from the Most
Diverse Translation Sample Pair
|
10 pages, 3 figures, 6 tables. Accepted at PACLIC 2021. (ACL
Anthology link: https://aclanthology.org/2021.paclic-1.56/)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We release our synthetic parallel paraphrase corpus across 17 languages:
Arabic, Catalan, Czech, German, English, Spanish, Estonian, French, Hindi,
Indonesian, Italian, Dutch, Romanian, Russian, Swedish, Vietnamese, and
Chinese. Our method relies only on monolingual data and a neural machine
translation system to generate paraphrases, hence simple to apply. We generate
multiple translation samples using beam search and choose the most lexically
diverse pair according to their sentence BLEU. We compare our generated corpus
with the \texttt{ParaBank2}. According to our evaluation, our synthetic
paraphrase pairs are semantically similar and lexically diverse.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 03:40:14 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Aji",
"Alham Fikri",
""
],
[
"Fatyanosa",
"Tirana Noor",
""
],
[
"Prasojo",
"Radityo Eko",
""
],
[
"Arthur",
"Philip",
""
],
[
"Fitriany",
"Suci",
""
],
[
"Qonitah",
"Salma",
""
],
[
"Zulfa",
"Nadhifa",
""
],
[
"Santoso",
"Tomi",
""
],
[
"Data",
"Mahendra",
""
]
] |
new_dataset
| 0.999731 |
2205.04685
|
Rachit Agarwal
|
Rohit Kumar Sachan, Rachit Agarwal, Sandeep Kumar Shukla
|
DNS based In-Browser Cryptojacking Detection
|
Submitted
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The metadata aspect of Domain Names (DNs) enables us to perform a behavioral
study of DNs and detect if a DN is involved in in-browser cryptojacking. Thus,
we are motivated to study different temporal and behavioral aspects of DNs
involved in cryptojacking. We use temporal features such as query frequency and
query burst along with graph-based features such as degree and diameter, and
non-temporal features such as string-based features to detect if a DN is
suspected to be involved in in-browser cryptojacking. Then, we use them to
train Machine Learning (ML) algorithms over different temporal granularities,
such as 2-hour datasets and the complete dataset. Our results show that the
DecisionTree classifier performs best, with 59.5% Recall on cryptojacked DNs,
while for unsupervised learning, K-Means with K=2 performs best. Similarity analysis
of the features reveals a minimal divergence between the cryptojacking DNs and
other already known malicious DNs. It also reveals the need for improvements in
the feature set of state-of-the-art methods to improve their accuracy in
detecting in-browser cryptojacking. As added analysis, our signature-based
analysis identifies that none of the Indian Government websites were involved
in cryptojacking during October-December 2021. However, based on the resource
utilization, we identify 10 DNs with different properties than others.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 05:40:17 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Sachan",
"Rohit Kumar",
""
],
[
"Agarwal",
"Rachit",
""
],
[
"Shukla",
"Sandeep Kumar",
""
]
] |
new_dataset
| 0.998013 |
2205.04759
|
Soonchan Park
|
Soonchan Park, Jinah Park
|
WG-VITON: Wearing-Guide Virtual Try-On for Top and Bottom Clothes
|
5 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Studies of virtual try-on (VITON) have shown its effectiveness in
utilizing generative neural networks for virtually exploring fashion
products, and some recent VITON research has attempted to synthesize a human
image wearing given multiple types of garments (e.g., top and bottom clothes).
However, when replacing the top and bottom clothes of the target human,
numerous wearing styles are possible with a certain combination of the clothes.
In this paper, we address the problem of variation in wearing style when
simultaneously replacing the top and bottom clothes of the model. We introduce
Wearing-Guide VITON (i.e., WG-VITON), which utilizes an additional input binary
mask to control the wearing styles of the generated image. Our experiments show
that WG-VITON effectively generates an image of the model wearing given top and
bottom clothes, and creates complicated wearing styles such as partly tucking
the top into the bottom.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 09:09:02 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Park",
"Soonchan",
""
],
[
"Park",
"Jinah",
""
]
] |
new_dataset
| 0.99585 |
2205.04802
|
David C. Kutner
|
David C. Kutner and Sun\v{c}ica Had\v{z}idedi\'c
|
Vibration-based communication for deafblind people
|
6 pages, 3 figures Accepted at the IEEE Haptics Symposium 2022
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Deafblind people have both hearing and visual impairments, which makes
communication with other people often dependent on expensive technologies e.g.,
Braille displays, or on caregivers acting as interpreters. This paper presents
Morse I/O (MIO), a vibrotactile interface for Android, evaluated through
experiments and interviews with deafblind participants. MIO was shown to enable
consistent text entry and recognition after only a few hours of practice. The
participants were willing to continue using the interface, although there were
perceived difficulties in learning to use it. Overall, MIO is a cost-effective,
portable interface for deafblind people without access to Braille displays or
similar.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 11:02:29 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Kutner",
"David C.",
""
],
[
"Hadžidedić",
"Sunčica",
""
]
] |
new_dataset
| 0.995253 |
2205.04831
|
Christopher Csikszentmihalyi
|
Christopher Cs\'ikszentmih\'alyi
|
An Engineer's Nightmare: 102 Years of Critical Robotics
|
Presented at the "Re-Configuring Human-Robot Interaction" workshop,
HRI'22
| null | null | null |
cs.HC cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A critical and re-configured HRI might look to the arts, where another
history of robots has been unfolding since the Czech artist Karel Capek's
critical robotic labor parable of 1921, in which the word robot was coined in
its modern usage. This paper explores several vectors by which artist-created
robots, both physical and imaginary, have offered pronounced contrasts to
robots-as-usual, and offers directions as to how these more emancipated cousins
might be useful to the field of HRI.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 17:52:47 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Csíkszentmihályi",
"Christopher",
""
]
] |
new_dataset
| 0.998552 |
2205.04841
|
Ganesh Bagler Dr
|
Deepanshu Pandey, Purva Parmar, Gauri Toshniwal, Mansi Goel, Vishesh
Agrawal, Shivangi Dhiman, Lavanya Gupta and Ganesh Bagler
|
Object Detection in Indian Food Platters using Transfer Learning with
YOLOv4
|
6 pages, 7 figures, 38th IEEE International Conference on Data
Engineering, 2022, DECOR Workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Object detection is a well-known problem in computer vision. Despite this,
its usage and pervasiveness in the traditional Indian food dishes has been
limited. Particularly, recognizing Indian food dishes present in a single photo
is challenging due to three reasons: 1. Lack of annotated Indian food datasets
2. Non-distinct boundaries between the dishes 3. High intra-class variation. We
solve these issues by providing a comprehensively labelled Indian food dataset-
IndianFood10, which contains 10 food classes that appear frequently in a staple
Indian meal and using transfer learning with YOLOv4 object detector model. Our
model is able to achieve an overall mAP score of 91.8% and f1-score of 0.90 for
our 10 class dataset. We also provide an extension of our 10 class dataset-
IndianFood20, which contains 10 more traditional Indian food classes.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 12:28:01 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Pandey",
"Deepanshu",
""
],
[
"Parmar",
"Purva",
""
],
[
"Toshniwal",
"Gauri",
""
],
[
"Goel",
"Mansi",
""
],
[
"Agrawal",
"Vishesh",
""
],
[
"Dhiman",
"Shivangi",
""
],
[
"Gupta",
"Lavanya",
""
],
[
"Bagler",
"Ganesh",
""
]
] |
new_dataset
| 0.993564 |
2205.04898
|
Cristina Menghini
|
Cristina Menghini, Justin Uhr, Shahrzad Haddadan, Ashley Champagne,
Bjorn Sandstede, Sohini Ramachandran
|
The Drift of #MyBodyMyChoice Discourse on Twitter
|
Accepted at WebSci'22
| null |
10.1145/3501247.3531570
| null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
#MyBodyMyChoice is a well-known hashtag originally created to advocate for
women's rights, often used in discourse about abortion and bodily autonomy. The
Covid-19 outbreak prompted governments to take containment measures such as
vaccination campaigns and mask mandates. Population groups opposed to such
measures started to use the slogan "My Body My Choice" to claim their bodily
autonomy. In this paper, we investigate whether the discourse around the
hashtag #MyBodyMyChoice on Twitter changed its usage after the Covid-19
outbreak. We observe that the conversation around the hashtag changed in two
ways. First, semantically, the hashtag #MyBodyMyChoice drifted towards
conversations around Covid-19, especially in messages opposed to containment
measures. Second, while before the pandemic users used to share content
produced by experts and authorities, after Covid-19 the users' attention has
shifted towards individuals.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 13:43:56 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Menghini",
"Cristina",
""
],
[
"Uhr",
"Justin",
""
],
[
"Haddadan",
"Shahrzad",
""
],
[
"Champagne",
"Ashley",
""
],
[
"Sandstede",
"Bjorn",
""
],
[
"Ramachandran",
"Sohini",
""
]
] |
new_dataset
| 0.994855 |
2205.04961
|
Vinod Ganapathy
|
Gokulnath Pillai, Eikansh Gupta, Ajith Suresh, Vinod Ganapathy, Arpita
Patra
|
Privadome: Protecting Citizen Privacy from Delivery Drones
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As e-commerce companies begin to consider using delivery drones for customer
fulfillment, there are growing concerns around citizen privacy. Drones are
equipped with cameras, and the video feed from these cameras is often required
as part of routine navigation, be it for semi-autonomous or fully-autonomous
drones. Footage of ground-based citizens may be captured in this video feed,
thereby leading to privacy concerns.
This paper presents Privadome, a system that implements the vision of a
virtual privacy dome centered around the citizen. Privadome is designed to be
integrated with city-scale regulatory authorities that oversee delivery drone
operations and realizes this vision through two components, PD-MPC and PD-ROS.
PD-MPC allows citizens equipped with a mobile device to identify drones that
have captured their footage. It uses secure two-party computation to achieve
this goal without compromising the privacy of the citizen's location.
PD-ROS allows the citizen to communicate with such drones and obtain an audit
trail showing how the drone uses their footage and determine if
privacy-preserving steps are taken to sanitize the footage. An experimental
evaluation of Privadome using our prototype implementations of PD-MPC and
PD-ROS shows that the system scales to near-term city-scale delivery drone
deployments (hundreds of drones). We show that with PD-MPC the mobile data
usage on the citizen's mobile device is comparable to that of routine
activities on the device, such as streaming videos. We also show that the
workflow of PD-ROS consumes a modest amount of additional CPU resources and
power on our experimental platform.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 15:22:52 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Pillai",
"Gokulnath",
""
],
[
"Gupta",
"Eikansh",
""
],
[
"Suresh",
"Ajith",
""
],
[
"Ganapathy",
"Vinod",
""
],
[
"Patra",
"Arpita",
""
]
] |
new_dataset
| 0.999753 |
2205.05028
|
Kris Oosthoek
|
Kris Oosthoek and Jack Cable and Georgios Smaragdakis
|
A Tale of Two Markets: Investigating the Ransomware Payments Economy
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Ransomware attacks are among the most severe cyber threats. They have made
headlines in recent years by threatening the operation of governments, critical
infrastructure, and corporations. Collecting and analyzing ransomware data is
an important step towards understanding the spread of ransomware and designing
effective defense and mitigation mechanisms. We report on our experience
operating Ransomwhere, an open crowdsourced ransomware payment tracker to
collect information from victims of ransomware attacks. With Ransomwhere, we
have gathered 13.5k ransom payments to more than 87 ransomware criminal actors
with total payments of more than $101 million. Leveraging the transparent
nature of Bitcoin, the cryptocurrency used for most ransomware payments, we
characterize the evolving ransomware criminal structure and ransom laundering
strategies. Our analysis shows that there are two parallel ransomware criminal
markets: commodity ransomware and Ransomware as a Service (RaaS). We notice
that there are striking differences between the two markets in the way that
cryptocurrency resources are utilized, revenue per transaction, and ransom
laundering efficiency. Although it is relatively easy to identify choke points
in commodity ransomware payment activity, it is more difficult to do the same
for RaaS.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 16:41:26 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Oosthoek",
"Kris",
""
],
[
"Cable",
"Jack",
""
],
[
"Smaragdakis",
"Georgios",
""
]
] |
new_dataset
| 0.999541 |
2205.05032
|
Juliane Fonseca De Oliveira
|
N\'ivea B. da Silva, Luis Iv\'an O. Valencia, F\'abio M. H. S. Filho,
Andressa C. S. Ferreira, Felipe A. C. Pereira, Guilherme L. de Oliveira,
Paloma F. Oliveira, Moreno S. Rodrigues, Pablo I. P. Ramos, Juliane F.
Oliveira
|
Brazilian COVID-19 data streaming
|
12 pages, 6 figures, 2 tables
| null | null | null |
cs.DB cs.DL q-bio.PE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We collected individualized (unidentifiable) and aggregated openly available
data from various sources related to suspected/confirmed SARS-CoV-2 infections,
vaccinations, non-pharmaceutical government interventions, human mobility, and
levels of population inequality in Brazil. In addition, a data structure
allowing real-time data collection, curation, integration, and
extract-transform-load processes for different objectives was developed. The
granularity of this dataset (state- and municipality-wide) enables its
application to individualized and ecological epidemiological studies,
statistical, mathematical, and computational modeling, data visualization as
well as the scientific dissemination of information on the COVID-19 pandemic in
Brazil.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 16:44:56 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"da Silva",
"Nívea B.",
""
],
[
"Valencia",
"Luis Iván O.",
""
],
[
"Filho",
"Fábio M. H. S.",
""
],
[
"Ferreira",
"Andressa C. S.",
""
],
[
"Pereira",
"Felipe A. C.",
""
],
[
"de Oliveira",
"Guilherme L.",
""
],
[
"Oliveira",
"Paloma F.",
""
],
[
"Rodrigues",
"Moreno S.",
""
],
[
"Ramos",
"Pablo I. P.",
""
],
[
"Oliveira",
"Juliane F.",
""
]
] |
new_dataset
| 0.997479 |
2205.05039
|
Sergey Loyka
|
Sergey Loyka, Charalambos D. Charalambous
|
On the Capacity of Gaussian MIMO Channels with Memory
|
accepted by IEEE Comm. Letters
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The operational capacity of Gaussian MIMO channels with memory was obtained
by Brandenburg and Wyner in [9] under certain mild assumptions on the channel
impulse response and its noise covariance matrix, which essentially require
channel memory to be not too strong. This channel was also considered by
Tsybakov in [10] and its information capacity was obtained in some cases. It
was further conjectured, based on numerical evidence, that these capacities are
the same in all cases. This conjecture is proved here. An explicit closed-form
expression for the optimal input power spectral density matrix is also given.
The obtained result is further extended to the case of joint constraints,
including per-antenna and interference power constraints as well as energy
harvesting constraints. These results imply the information-theoretic
optimality of OFDM-type transmission systems for such channels with memory.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 16:55:09 GMT"
}
] | 2022-05-11T00:00:00 |
[
[
"Loyka",
"Sergey",
""
],
[
"Charalambous",
"Charalambos D.",
""
]
] |
new_dataset
| 0.993271 |
1912.13347
|
R Jaberi
|
Raed Jaberi
|
$2$-edge-twinless blocks
| null |
Bulletin des Sciences Math\'ematiques 168 May 2021, 102969
|
10.1016/j.bulsci.2021.102969
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $G=(V,E)$ be a directed graph. A $2$-edge-twinless block in $G$ is a
maximal vertex set $C^{t}\subseteq V$ with $|C^{t}|>1$ such that for any
distinct vertices $v,w \in C^{t}$, and for every edge $e\in E$, the vertices
$v,w$ are in the same twinless strongly connected component of $G\setminus\left
\lbrace e \right\rbrace $.
In this paper we study this concept and describe algorithms for computing
$2$-edge-twinless blocks.
|
[
{
"version": "v1",
"created": "Tue, 31 Dec 2019 15:12:35 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jan 2020 10:20:53 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Jaberi",
"Raed",
""
]
] |
new_dataset
| 0.996855 |
2008.00496
|
R Jaberi
|
Raed Jaberi
|
Minimum $2$-vertex strongly biconnected spanning directed subgraph
problem
| null |
Discrete Mathematics Letters (DML) 7 (2021) 40-73
|
10.47443/dml.2021.0024
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A directed graph $G=(V,E)$ is strongly biconnected if $G$ is strongly
connected and its underlying graph is biconnected. A strongly biconnected
directed graph $G=(V,E)$ is called $2$-vertex-strongly biconnected if $|V|\geq
3$ and the induced subgraph on $V\setminus\left\lbrace w\right\rbrace $ is
strongly biconnected for every vertex $w\in V$. In this paper we study the
following problem.
Given a $2$-vertex-strongly biconnected directed graph $G=(V,E)$, compute an
edge subset $E^{2sb} \subseteq E$ of minimum size such that the subgraph
$(V,E^{2sb})$ is $2$-vertex-strongly biconnected.
|
[
{
"version": "v1",
"created": "Sun, 2 Aug 2020 14:50:17 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Jaberi",
"Raed",
""
]
] |
new_dataset
| 0.987535 |
2012.03750
|
Paul Maxwell
|
Paul Maxwell, David Niblick, and Daniel C. Ruiz
|
Using Side Channel Information and Artificial Intelligence for Malware
Detection
|
7 pages
|
2021 IEEE International Conference on Artificial Intelligence and
Computer Applications (ICAICA)
|
10.1109/ICAICA52286.2021.9498094
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cybersecurity continues to be a difficult issue for society especially as the
number of networked systems grows. Techniques to protect these systems range
from rules-based to artificial intelligence-based intrusion detection systems
and anti-virus tools. These systems rely upon the information contained in the
network packets and download executables to function. Side channel information
leaked from hardware has been shown to reveal secret information in systems
such as encryption keys. This work demonstrates that side channel information
can be used to detect malware running on a computing platform without access to
the code involved.
|
[
{
"version": "v1",
"created": "Thu, 3 Dec 2020 18:38:53 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Maxwell",
"Paul",
""
],
[
"Niblick",
"David",
""
],
[
"Ruiz",
"Daniel C.",
""
]
] |
new_dataset
| 0.998322 |
2104.12601
|
Freddie Hong
|
Freddie Hong, Luca Tendera, Connor Myant, David Boyle
|
Vacuum-formed 3D printed electronics: fabrication of thin, rigid and
free-form interactive surfaces
|
9 pages, 14 figures
| null |
10.1007/s42979-022-01174-1
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vacuum-forming is a common manufacturing technique for constructing thin
plastic shell products by pressing heated plastic sheets onto a mold using
atmospheric pressure. Vacuum-forming is ubiquitous in packaging and casing
products in industry spanning fast moving consumer goods to connected devices.
Integrating advanced functionality, which may include sensing, computation and
communication, within thin structures is desirable for various next-generation
interactive devices. Hybrid additive manufacturing techniques like
thermoforming are becoming popular for prototyping freeform electronics given
its design flexibility, speed and cost-effectiveness. In this paper, we present
a new hybrid method for constructing thin, rigid and free-form interconnected
surfaces via fused deposition modelling (FDM) 3D printing and vacuum-forming.
While 3D printing a mold for vacuum-forming has been explored by many,
utilising 3D printing to construct sheet materials remains unexplored. 3D
printing the sheet material allows embedding conductive traces within thin
layers of the substrate, which can be vacuum-formed but remain conductive and
insulated. We characterise the behaviour of the vacuum-formed 3D printed sheet,
analyse the electrical performance of 3D printed traces after vacuum-forming,
and showcase a range of examples constructed using the technique. We
demonstrate a new design interface specifically for designing conformal
interconnects, which allows designers to draw conductive patterns in 3D and
export pre-distorted sheet models ready to be 3D printed.
|
[
{
"version": "v1",
"created": "Mon, 26 Apr 2021 14:03:33 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Apr 2021 10:25:35 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Hong",
"Freddie",
""
],
[
"Tendera",
"Luca",
""
],
[
"Myant",
"Connor",
""
],
[
"Boyle",
"David",
""
]
] |
new_dataset
| 0.999462 |
2108.06863
|
Chao-Yu Chen
|
Cheng-Yu Pai, Zilong Liu, You-Qi Zhao, Zhen-Ming Huang, and Chao-Yu
Chen
|
Designing Two-Dimensional Complete Complementary Codes for
Omnidirectional Transmission in Massive MIMO Systems
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an efficient construction of two-dimensional (2D)
complete complementary codes (CCCs) for their modern application as
omnidirectional precoding matrices in massive MIMO systems to attain enhanced
cell coverage. Unlike the traditional 1D CCCs, little progress has been made on
efficient and systematic constructions of the 2D counterpart. In contrast to
the existing recursive constructions with the aid of various sequence
operations, certain 1D seed sequences or 2D arrays, we propose to use 2D
generalized Boolean functions for direct synthesis of 2D CCCs. Simulation
results show that the proposed 2D CCCs appear to be good candidates for
precoding matrices to achieve omnidirectional transmission in massive MIMO
systems.
|
[
{
"version": "v1",
"created": "Mon, 16 Aug 2021 02:40:14 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 06:57:47 GMT"
},
{
"version": "v3",
"created": "Mon, 9 May 2022 02:53:15 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Pai",
"Cheng-Yu",
""
],
[
"Liu",
"Zilong",
""
],
[
"Zhao",
"You-Qi",
""
],
[
"Huang",
"Zhen-Ming",
""
],
[
"Chen",
"Chao-Yu",
""
]
] |
new_dataset
| 0.985888 |
2109.00122
|
Zhiyu Chen
|
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova,
Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge,
William Yang Wang
|
FinQA: A Dataset of Numerical Reasoning over Financial Data
|
EMNLP 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The sheer volume of financial statements makes it difficult for humans to
access and analyze a business's financials. Robust numerical reasoning likewise
faces unique challenges in this domain. In this work, we focus on answering
deep questions over financial data, aiming to automate the analysis of a large
corpus of financial documents. In contrast to existing tasks on general domain,
the finance domain includes complex numerical reasoning and understanding of
heterogeneous representations. To facilitate analytical progress, we propose a
new large-scale dataset, FinQA, with Question-Answering pairs over Financial
reports, written by financial experts. We also annotate the gold reasoning
programs to ensure full explainability. We further introduce baselines and
conduct comprehensive experiments in our dataset. The results demonstrate that
popular, large, pre-trained models fall far short of expert humans in acquiring
finance knowledge and in complex multi-step numerical reasoning on that
knowledge. Our dataset -- the first of its kind -- should therefore enable
significant, new community research into complex application domains. The
dataset and code are publicly available at \url{https://github.com/czyssrs/FinQA}.
|
[
{
"version": "v1",
"created": "Wed, 1 Sep 2021 00:08:14 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Sep 2021 16:54:38 GMT"
},
{
"version": "v3",
"created": "Sat, 7 May 2022 07:52:39 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Chen",
"Zhiyu",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Smiley",
"Charese",
""
],
[
"Shah",
"Sameena",
""
],
[
"Borova",
"Iana",
""
],
[
"Langdon",
"Dylan",
""
],
[
"Moussa",
"Reema",
""
],
[
"Beane",
"Matt",
""
],
[
"Huang",
"Ting-Hao",
""
],
[
"Routledge",
"Bryan",
""
],
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.999688 |
2112.08326
|
Qing Lyu
|
Qing Lyu, Hua Zheng, Daoxin Li, Li Zhang, Marianna Apidianaki, Chris
Callison-Burch
|
Is "My Favorite New Movie" My Favorite Movie? Probing the Understanding
of Recursive Noun Phrases
|
NAACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recursive noun phrases (NPs) have interesting semantic properties. For
example, "my favorite new movie" is not necessarily my favorite movie, whereas
"my new favorite movie" is. This is common sense to humans, yet it is unknown
whether language models have such knowledge. We introduce the Recursive Noun
Phrase Challenge (RNPC), a dataset of three textual inference tasks involving
textual entailment and event plausibility comparison, precisely targeting the
understanding of recursive NPs. When evaluated on RNPC, state-of-the-art
Transformer models only perform around chance. Still, we show that such
knowledge is learnable with appropriate data. We further probe the models for
relevant linguistic features that can be learned from our tasks, including
modifier semantic category and modifier scope. Finally, models trained on RNPC
achieve strong zero-shot performance on an extrinsic Harm Detection evaluation
task, showing the usefulness of the understanding of recursive NPs in
downstream applications.
|
[
{
"version": "v1",
"created": "Wed, 15 Dec 2021 18:20:02 GMT"
},
{
"version": "v2",
"created": "Sun, 8 May 2022 16:15:28 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Lyu",
"Qing",
""
],
[
"Zheng",
"Hua",
""
],
[
"Li",
"Daoxin",
""
],
[
"Zhang",
"Li",
""
],
[
"Apidianaki",
"Marianna",
""
],
[
"Callison-Burch",
"Chris",
""
]
] |
new_dataset
| 0.999682 |
2112.10482
|
Shuhei Kurita
|
Daichi Azuma, Taiki Miyanishi, Shuhei Kurita and Motoaki Kawanabe
|
ScanQA: 3D Question Answering for Spatial Scene Understanding
|
CVPR2022. The first three authors are equally contributed. Project
page: https://github.com/ATR-DBI/ScanQA
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new 3D spatial understanding task of 3D Question Answering
(3D-QA). In the 3D-QA task, models receive visual information from the entire
3D scene of the rich RGB-D indoor scan and answer the given textual questions
about the 3D scene. Unlike 2D question answering in VQA, conventional
2D-QA models suffer from problems with spatial understanding of object
alignment and directions, and fail at object identification from the textual
questions in 3D-QA. We propose a baseline model for 3D-QA, named ScanQA model,
where the model learns a fused descriptor from 3D object proposals and encoded
sentence embeddings. This learned descriptor correlates the language
expressions with the underlying geometric features of the 3D scan and
facilitates the regression of 3D bounding boxes to determine described objects
in textual questions and outputs correct answers. We collected human-edited
question-answer pairs with free-form answers that are grounded to 3D objects in
each 3D scene. Our new ScanQA dataset contains over 40K question-answer pairs
from the 800 indoor scenes drawn from the ScanNet dataset. To the best of our
knowledge, the proposed 3D-QA task is the first large-scale effort to perform
object-grounded question-answering in 3D environments.
|
[
{
"version": "v1",
"created": "Mon, 20 Dec 2021 12:30:55 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 06:57:25 GMT"
},
{
"version": "v3",
"created": "Sat, 7 May 2022 21:55:42 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Azuma",
"Daichi",
""
],
[
"Miyanishi",
"Taiki",
""
],
[
"Kurita",
"Shuhei",
""
],
[
"Kawanabe",
"Motoaki",
""
]
] |
new_dataset
| 0.990067 |
2203.03796
|
Yunhao Du
|
Yunhao Du, Zhihang Tong, Junfeng Wan, Binyu Zhang, and Yanyun Zhao
|
PAMI-AD: An Activity Detector Exploiting Part-attention and Motion
Information in Surveillance Videos
|
ICME 2022 Workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Activity detection in surveillance videos is a challenging task caused by
small objects, complex activity categories, its untrimmed nature, etc. Existing
methods are generally limited in performance due to inaccurate proposals, poor
classifiers or inadequate post-processing methods. In this work, we propose a
comprehensive and effective activity detection system in untrimmed surveillance
videos for person-centered and vehicle-centered activities. It consists of four
modules, i.e., object localizer, proposal filter, activity classifier and
activity refiner. For person-centered activities, a novel part-attention
mechanism is proposed to explore detailed features in different body parts. As
for vehicle-centered activities, we propose a localization masking method to
jointly encode motion and foreground attention features. We conduct experiments
on the large-scale activity detection datasets VIRAT, and achieve the best
results for both groups of activities. Furthermore, our team won the 1st place
in the TRECVID 2021 ActEV challenge.
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 01:36:26 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2022 11:34:14 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Du",
"Yunhao",
""
],
[
"Tong",
"Zhihang",
""
],
[
"Wan",
"Junfeng",
""
],
[
"Zhang",
"Binyu",
""
],
[
"Zhao",
"Yanyun",
""
]
] |
new_dataset
| 0.984983 |
2203.09384
|
Vincent Richard Pascuzzi
|
Vincent R. Pascuzzi, Mehdi Goli
|
Benchmarking a Proof-of-Concept Performance Portable SYCL-based Fast
Fourier Transformation Library
|
12 pages, 6 figures, submitted to IWOCL 2022
|
IWOCL'22: International Workshop on OpenCL, May 2022, Article No.:
20, Pages 1-9
|
10.1145/3529538.3529996
| null |
cs.DC cs.MS cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present an early version of a SYCL-based FFT library,
capable of running on all major vendor hardware, including CPUs and GPUs from
AMD, ARM, Intel and NVIDIA. Although preliminary, the aim of this work is to
seed further developments for a rich set of features for calculating FFTs. It
has the advantage over existing portable FFT libraries in that it is
single-source, and therefore removes the complexities that arise due to
abundant use of pre-process macros and auto-generated kernels to target
different architectures. We exercise two SYCL-enabled compilers, Codeplay
ComputeCpp and Intel's open-source LLVM project, to evaluate performance
portability of our SYCL-based FFT on various heterogeneous architectures. The
current limitation of our library is that it supports single-dimension FFTs up to
$2^{11}$ in length and base-2 input sequences. We compare our results with
highly optimized vendor specific FFT libraries and provide a detailed analysis
to demonstrate a fair level of performance, as well as potential sources of
performance bottlenecks.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 15:20:56 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Pascuzzi",
"Vincent R.",
""
],
[
"Goli",
"Mehdi",
""
]
] |
new_dataset
| 0.999051 |
2203.09494
|
Charlie Nash
|
Charlie Nash, Jo\~ao Carreira, Jacob Walker, Iain Barr, Andrew Jaegle,
Mateusz Malinowski, Peter Battaglia
|
Transframer: Arbitrary Frame Prediction with Generative Models
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a general-purpose framework for image modelling and vision tasks
based on probabilistic frame prediction. Our approach unifies a broad range of
tasks, from image segmentation, to novel view synthesis and video
interpolation. We pair this framework with an architecture we term Transframer,
which uses U-Net and Transformer components to condition on annotated context
frames, and outputs sequences of sparse, compressed image features. Transframer
is the state-of-the-art on a variety of video generation benchmarks, is
competitive with the strongest models on few-shot view synthesis, and can
generate coherent 30 second videos from a single image without any explicit
geometric information. A single generalist Transframer simultaneously produces
promising results on 8 tasks, including semantic segmentation, image
classification and optical flow prediction with no task-specific architectural
components, demonstrating that multi-task computer vision can be tackled using
probabilistic image models. Our approach can in principle be applied to a wide
range of applications that require learning the conditional structure of
annotated image-formatted data.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 17:48:32 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 10:34:43 GMT"
},
{
"version": "v3",
"created": "Mon, 9 May 2022 17:02:49 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Nash",
"Charlie",
""
],
[
"Carreira",
"João",
""
],
[
"Walker",
"Jacob",
""
],
[
"Barr",
"Iain",
""
],
[
"Jaegle",
"Andrew",
""
],
[
"Malinowski",
"Mateusz",
""
],
[
"Battaglia",
"Peter",
""
]
] |
new_dataset
| 0.955497 |
2204.04611
|
Chiyu Zhang
|
Chiyu Zhang, Muhammad Abdul-Mageed, El Moatez Billah Nagoudi
|
Decay No More: A Persistent Twitter Dataset for Learning Social Meaning
|
1st Workshop on Novel Evaluation Approaches for Text Classification
Systems on Social Media (NEATCLasS) colocated at ICWSM 2022. arXiv admin
note: text overlap with arXiv:2108.00356
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the proliferation of social media, many studies resort to social media
to construct datasets for developing social meaning understanding systems. For
the popular case of Twitter, most researchers distribute tweet IDs without the
actual text contents due to the data distribution policy of the platform. One
issue is that the posts become increasingly inaccessible over time, which leads
to unfair comparisons and a temporal bias in social media research. To
alleviate this challenge of data decay, we leverage a paraphrase model to
propose a new persistent English Twitter dataset for social meaning (PTSM).
PTSM consists of $17$ social meaning datasets in $10$ categories of tasks. We
experiment with two SOTA pre-trained language models and show that our PTSM can
substitute the actual tweets with paraphrases with marginal performance loss.
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2022 06:07:54 GMT"
},
{
"version": "v2",
"created": "Sat, 7 May 2022 08:35:29 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Zhang",
"Chiyu",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
],
[
"Nagoudi",
"El Moatez Billah",
""
]
] |
new_dataset
| 0.99975 |
2204.13569
|
Ana-Maria Bucur
|
Ana-Maria Bucur, Adrian Cosma, Liviu P. Dinu
|
Life is not Always Depressing: Exploring the Happy Moments of People
Diagnosed with Depression
|
Accepted to LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we explore the relationship between depression and
manifestations of happiness in social media. While the majority of works
surrounding depression focus on symptoms, psychological research shows that
there is a strong link between seeking happiness and being diagnosed with
depression. We make use of Positive-Unlabeled learning paradigm to
automatically extract happy moments from social media posts of both controls
and users diagnosed with depression, and qualitatively analyze them with
linguistic tools such as LIWC and keyness information. We show that the life of
depressed individuals is not always bleak, with positive events related to
friends and family being more noteworthy to their lives compared to the more
mundane happy events reported by control users.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 15:32:04 GMT"
},
{
"version": "v2",
"created": "Sun, 8 May 2022 16:37:10 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Bucur",
"Ana-Maria",
""
],
[
"Cosma",
"Adrian",
""
],
[
"Dinu",
"Liviu P.",
""
]
] |
new_dataset
| 0.999113 |
2204.14040
|
Michael Bekos
|
Michael A. Bekos, Martin Gronemann, Fabrizio Montecchiani, Antonios
Symvonis
|
Convex Grid Drawings of Planar Graphs with Constant Edge-Vertex
Resolution
| null | null | null | null |
cs.DS cs.CG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We continue the study of the area requirement of convex straight-line grid
drawings of 3-connected plane graphs, which has been intensively investigated
in the last decades. Motivated by applications, such as graph editors, we
additionally require the obtained drawings to have bounded edge-vertex
resolution, that is, the closest distance between a vertex and any non-incident
edge is lower bounded by a constant that does not depend on the size of the
graph. We present a drawing algorithm that takes as input a 3-connected plane
graph with n vertices and f internal faces and computes a convex straight-line
drawing with edge-vertex resolution at least 1/2 on an integer grid of size
(n-2+a)x(n-2+a), where a=min{n-3,f}. Our result improves the previously
best-known area bound of (3n-7)x(3n-7)/2 by Chrobak, Goodrich and Tamassia.
|
[
{
"version": "v1",
"created": "Fri, 29 Apr 2022 12:25:34 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2022 15:37:40 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Bekos",
"Michael A.",
""
],
[
"Gronemann",
"Martin",
""
],
[
"Montecchiani",
"Fabrizio",
""
],
[
"Symvonis",
"Antonios",
""
]
] |
new_dataset
| 0.990604 |
2205.01041
|
Scott McLachlan Dr
|
Scott McLachlan, Kudakwashe Dube, Burkhard Schafer, Anthony Gillespie,
Norman Fenton
|
The Chaotic State of UK Drone Regulation
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
In December 2020 the law for drone pilots and unmanned aerial vehicle (UAV)
use went into a transition phase in preparation for new EU international UAV
regulation. That EU regulation comes into full effect as the transition periods
defined in the United Kingdom's Civil Aviation Authority Air Policy CAP722
expire during December 2022 (CAA, 2020). However, international homologation
regulation will not address the patchwork of inconsistent drone use regulations
that exist in the United Kingdom from the layering of local and subordinate
authority byelaws over UK aviation law. We provide an extensive review of local
authority regulation of drone use on public open and green spaces, finding that
many local authorities are unaware of the issues being created through: (i)
inappropriately couched or poorly framed byelaws; (ii) multiple byelaws
covering the same area by virtue of overlapping jurisdictions; or (iii) the
lack of readily identifiable policies for drone use on public land.
Overregulation, inconsistent regulation and regulatory disharmony are causing
confusion for recreational drone enthusiasts such that it is never clear which
public or crown-owned open and green spaces they are allowed to, or prohibited
from, flying. While the government and local authorities might like them to,
drones are not going away. Therefore, we conclude, the easiest way to ensure
citizens stay within the bounds of drone law that is intended to ensure public
safety, is to make that law comprehensible, consistent and easy to comply with.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 07:37:42 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 08:09:54 GMT"
},
{
"version": "v3",
"created": "Sat, 7 May 2022 14:01:26 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"McLachlan",
"Scott",
""
],
[
"Dube",
"Kudakwashe",
""
],
[
"Schafer",
"Burkhard",
""
],
[
"Gillespie",
"Anthony",
""
],
[
"Fenton",
"Norman",
""
]
] |
new_dataset
| 0.999395 |
2205.02071
|
Zhenyue Qin
|
Zhenyue Qin, Yang Liu, Madhawa Perera, Tom Gedeon, Pan Ji, Dongwoo
Kim, Saeed Anwar
|
ANUBIS: Skeleton Action Recognition Dataset, Review, and Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Skeleton-based action recognition, as a subarea of action recognition, is
swiftly accumulating attention and popularity. The task is to recognize actions
performed by human articulation points. Compared with other data modalities, 3D
human skeleton representations have extensive unique desirable characteristics,
including succinctness, robustness, racial-impartiality, and many more. We aim
to provide a roadmap on the landscape of skeleton-based action recognition
for new and existing researchers. To this
end, we present a review in the form of a taxonomy on existing works of
skeleton-based action recognition. We partition them into four major
categories: (1) datasets; (2) extracting spatial features; (3) capturing
temporal patterns; (4) improving signal quality. For each method, we provide
concise yet informatively-sufficient descriptions. To promote more fair and
comprehensive evaluation on existing approaches of skeleton-based action
recognition, we collect ANUBIS, a large-scale human skeleton dataset. Compared
with previously collected datasets, ANUBIS is advantageous in the following
four aspects: (1) employing more recently released sensors; (2) containing
novel back view; (3) encouraging high enthusiasm of subjects; (4) including
actions of the COVID pandemic era. Using ANUBIS, we comparably benchmark
performance of current skeleton-based action recognizers. At the end of this
paper, we outlook future development of skeleton-based action recognition by
listing several new technical problems. We believe they are valuable to solve
in order to commercialize skeleton-based action recognition in the near future.
The dataset of ANUBIS is available at:
http://hcc-workshop.anu.edu.au/webs/anu101/home.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 14:03:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2022 01:06:52 GMT"
},
{
"version": "v3",
"created": "Sun, 8 May 2022 04:36:52 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Qin",
"Zhenyue",
""
],
[
"Liu",
"Yang",
""
],
[
"Perera",
"Madhawa",
""
],
[
"Gedeon",
"Tom",
""
],
[
"Ji",
"Pan",
""
],
[
"Kim",
"Dongwoo",
""
],
[
"Anwar",
"Saeed",
""
]
] |
new_dataset
| 0.999881 |
2205.03467
|
Levi Burner
|
Levi Burner, Anton Mitrokhin, Cornelia Ferm\"uller, Yiannis Aloimonos
|
EVIMO2: An Event Camera Dataset for Motion Segmentation, Optical Flow,
Structure from Motion, and Visual Inertial Odometry in Indoor Scenes with
Monocular or Stereo Algorithms
|
5 pages, 3 figures, 1 table
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
A new event camera dataset, EVIMO2, is introduced that improves on the
popular EVIMO dataset by providing more data, from better cameras, in more
complex scenarios. As with its predecessor, EVIMO2 provides labels in the form
of per-pixel ground truth depth and segmentation as well as camera and object
poses. All sequences use data from physical cameras and many sequences feature
multiple independently moving objects. Typically, such labeled data is
unavailable in physical event camera datasets. Thus, EVIMO2 will serve as a
challenging benchmark for existing algorithms and rich training set for the
development of new algorithms. In particular, EVIMO2 is suited for supporting
research in motion and object segmentation, optical flow, structure from
motion, and visual (inertial) odometry in both monocular or stereo
configurations.
EVIMO2 consists of 41 minutes of data from three 640$\times$480 event
cameras, one 2080$\times$1552 classical color camera, inertial measurements
from two six axis inertial measurement units, and millimeter accurate object
poses from a Vicon motion capture system. The dataset's 173 sequences are
arranged into three categories. 3.75 minutes of independently moving household
objects, 22.55 minutes of static scenes, and 14.85 minutes of basic motions in
shallow scenes. Some sequences were recorded in low-light conditions where
conventional cameras fail. Depth and segmentation are provided at 60 Hz for the
event cameras and 30 Hz for the classical camera. The masks can be regenerated
using open-source code up to rates as high as 200 Hz.
This technical report briefly describes EVIMO2. The full documentation is
available online. Videos of individual sequences can be sampled on the download
page.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 20:09:18 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Burner",
"Levi",
""
],
[
"Mitrokhin",
"Anton",
""
],
[
"Fermüller",
"Cornelia",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
new_dataset
| 0.999874 |
2205.03468
|
Daniel Zhang
|
Daniel Zhang, Nestor Maslej, Erik Brynjolfsson, John Etchemendy, Terah
Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Michael Sellitto, Ellie
Sakhaee, Yoav Shoham, Jack Clark, Raymond Perrault
|
The AI Index 2022 Annual Report
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Welcome to the fifth edition of the AI Index Report! The latest edition
includes data from a broad set of academic, private, and nonprofit
organizations as well as more self-collected data and original analysis than
any previous edition, including an expanded technical performance chapter, a
new survey of robotics researchers around the world, data on global AI
legislation records in 25 countries, and a new chapter with an in-depth
analysis of technical AI ethics metrics.
The AI Index Report tracks, collates, distills, and visualizes data related
to artificial intelligence. Its mission is to provide unbiased, rigorously
vetted, and globally sourced data for policymakers, researchers, executives,
journalists, and the general public to develop a more thorough and nuanced
understanding of the complex field of AI. The report aims to be the world's
most credible and authoritative source for data and insights about AI.
|
[
{
"version": "v1",
"created": "Mon, 2 May 2022 20:59:33 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Zhang",
"Daniel",
""
],
[
"Maslej",
"Nestor",
""
],
[
"Brynjolfsson",
"Erik",
""
],
[
"Etchemendy",
"John",
""
],
[
"Lyons",
"Terah",
""
],
[
"Manyika",
"James",
""
],
[
"Ngo",
"Helen",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Sellitto",
"Michael",
""
],
[
"Sakhaee",
"Ellie",
""
],
[
"Shoham",
"Yoav",
""
],
[
"Clark",
"Jack",
""
],
[
"Perrault",
"Raymond",
""
]
] |
new_dataset
| 0.95386 |
2205.03472
|
Sebastian Schuster
|
Sebastian Schuster, Tal Linzen
|
When a sentence does not introduce a discourse entity, Transformer-based
models still sometimes refer to it
|
To appear at NAACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding longer narratives or participating in conversations requires
tracking of discourse entities that have been mentioned. Indefinite noun
phrases (NPs), such as 'a dog', frequently introduce discourse entities but
this behavior is modulated by sentential operators such as negation. For
example, 'a dog' in 'Arthur doesn't own a dog' does not introduce a discourse
entity due to the presence of negation. In this work, we adapt the
psycholinguistic assessment of language models paradigm to higher-level
linguistic phenomena and introduce an English evaluation suite that targets the
knowledge of the interactions between sentential operators and indefinite NPs.
We use this evaluation suite for a fine-grained investigation of the entity
tracking abilities of the Transformer-based models GPT-2 and GPT-3. We find
that while the models are to a certain extent sensitive to the interactions we
investigate, they are all challenged by the presence of multiple NPs and their
behavior is not systematic, which suggests that even models at the scale of
GPT-3 do not fully acquire basic entity tracking abilities.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 20:49:27 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Schuster",
"Sebastian",
""
],
[
"Linzen",
"Tal",
""
]
] |
new_dataset
| 0.998214 |
2205.03491
|
Amir Yazdani
|
Amir Yazdani, Roya Sabbagh Novin, Andrew Merryweather, Tucker Hermans
|
DULA and DEBA: Differentiable Ergonomic Risk Models for Postural
Assessment and Optimization in Ergonomically Intelligent pHRI
|
Submitted to IROS 2022. arXiv admin note: substantial text overlap
with arXiv:2108.05971
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ergonomics and human comfort are essential concerns in physical human-robot
interaction applications. Defining an accurate and easy-to-use ergonomic
assessment model stands as an important step in providing feedback for postural
correction to improve operator health and comfort. Common practical methods in
the area suffer from inaccurate ergonomics models in performing postural
optimization. In order to retain assessment quality, while improving
computational considerations, we propose a novel framework for postural
assessment and optimization for ergonomically intelligent physical human-robot
interaction. We introduce DULA and DEBA, differentiable and continuous
ergonomics models learned to replicate the popular and scientifically validated
RULA and REBA assessments with more than 99% accuracy. We show that DULA and
DEBA provide assessment comparable to RULA and REBA while providing
computational benefits when being used in postural optimization. We evaluate
our framework through human and simulation experiments. We highlight DULA and
DEBA's strength in a demonstration of postural optimization for a simulated
pHRI task.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 22:24:01 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Yazdani",
"Amir",
""
],
[
"Novin",
"Roya Sabbagh",
""
],
[
"Merryweather",
"Andrew",
""
],
[
"Hermans",
"Tucker",
""
]
] |
new_dataset
| 0.993425 |
2205.03509
|
Manav Nitin Kapadnis
|
Ankan Mullick, Abhilash Nandy, Manav Nitin Kapadnis, Sohan Patnaik, R
Raghav
|
Fine-grained Intent Classification in the Legal Domain
|
4 pages, 7 tables, 1 figure, appeared in the AAAI-22 workshop on
Scientific Document Understanding
| null | null | null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
A law practitioner has to go through a lot of long legal case proceedings. To
understand the motivation behind the actions of different parties/individuals
in a legal case, it is essential that the parts of the document that express an
intent corresponding to the case be clearly understood. In this paper, we
introduce a dataset of 93 legal documents, belonging to the case categories of
either Murder, Land Dispute, Robbery, or Corruption, where phrases expressing
intent same as the category of the document are annotated. Also, we annotate
fine-grained intents for each such phrase to enable a deeper understanding of
the case for a reader. Finally, we analyze the performance of several
transformer-based models in automating the process of extracting intent phrases
(both at a coarse and a fine-grained level), and classifying a document into
one of the 4 possible categories, and observe that our dataset is challenging,
especially in the case of fine-grained intent classification.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 23:57:17 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Mullick",
"Ankan",
""
],
[
"Nandy",
"Abhilash",
""
],
[
"Kapadnis",
"Manav Nitin",
""
],
[
"Patnaik",
"Sohan",
""
],
[
"Raghav",
"R",
""
]
] |
new_dataset
| 0.999798 |
2205.03532
|
Yashraj Narang
|
Yashraj Narang, Kier Storey, Iretiayo Akinola, Miles Macklin, Philipp
Reist, Lukasz Wawrzyniak, Yunrong Guo, Adam Moravanszky, Gavriel State,
Michelle Lu, Ankur Handa, Dieter Fox
|
Factory: Fast Contact for Robotic Assembly
|
Accepted to Robotics: Science and Systems (RSS) 2022
| null | null | null |
cs.RO cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic assembly is one of the oldest and most challenging applications of
robotics. In other areas of robotics, such as perception and grasping,
simulation has rapidly accelerated research progress, particularly when
combined with modern deep learning. However, accurately, efficiently, and
robustly simulating the range of contact-rich interactions in assembly remains
a longstanding challenge. In this work, we present Factory, a set of physics
simulation methods and robot learning tools for such applications. We achieve
real-time or faster simulation of a wide range of contact-rich scenes,
including simultaneous simulation of 1000 nut-and-bolt interactions. We provide
60 carefully-designed part models, 3 robotic assembly environments, and 7
robot controllers for training and testing virtual robots. Finally, we train
and evaluate proof-of-concept reinforcement learning policies for nut-and-bolt
assembly. We aim for Factory to open the doors to using simulation for robotic
assembly, as well as many other contact-rich applications in robotics. Please
see https://sites.google.com/nvidia.com/factory for supplementary content,
including videos.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 03:27:30 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Narang",
"Yashraj",
""
],
[
"Storey",
"Kier",
""
],
[
"Akinola",
"Iretiayo",
""
],
[
"Macklin",
"Miles",
""
],
[
"Reist",
"Philipp",
""
],
[
"Wawrzyniak",
"Lukasz",
""
],
[
"Guo",
"Yunrong",
""
],
[
"Moravanszky",
"Adam",
""
],
[
"State",
"Gavriel",
""
],
[
"Lu",
"Michelle",
""
],
[
"Handa",
"Ankur",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.998807 |
2205.03566
|
David Navarro-Alarcon
|
Maria Victorova, Heidi Hin Ting Lau, Timothy Tin-Yan Lee, David
Navarro-Alarcon and Yongping Zheng
|
Reliability of Robotic Ultrasound Scanning for Scoliosis Assessment in
Comparison with Manual Scanning
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Background: Ultrasound (US) imaging for scoliosis assessment is challenging
for a non-experienced operator. The robotic scanning was developed to follow a
spinal curvature with deep learning and apply consistent forces to the patient's
back. Methods: 23 scoliosis patients were scanned with US devices, both
robotically and manually. Two human raters measured each subject's spinous
process angles (SPA) on robotic and manual coronal images. Results: The robotic
method showed high intra- (ICC > 0.85) and inter-rater (ICC > 0.77)
reliabilities. Compared with the manual method, the robotic approach showed no
significant difference (p < 0.05) when measuring coronal deformity angles. The
MAD for intra-rater analysis lies within an acceptable range from 0 deg to 5
deg for a minimum of 86% and a maximum of 97% of the total number of measured
angles. Conclusions: This study demonstrated that scoliosis deformity angles
measured on ultrasound images obtained with robotic scanning are comparable to
those obtained by manual scanning.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 06:14:16 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Victorova",
"Maria",
""
],
[
"Lau",
"Heidi Hin Ting",
""
],
[
"Lee",
"Timothy Tin-Yan",
""
],
[
"Navarro-Alarcon",
"David",
""
],
[
"Zheng",
"Yongping",
""
]
] |
new_dataset
| 0.994549 |
2205.03582
|
Yanxiang Gong
|
Yanxiang Gong, Linjie Deng, Shuai Tao, Xinchen Lu, Peicheng Wu, Zhiwei
Xie, Zheng Ma, Mei Xie
|
Unified Chinese License Plate Detection and Recognition with High
Efficiency
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, deep learning-based methods have reached an excellent performance
on License Plate (LP) detection and recognition tasks. However, it is still
challenging to build a robust model for Chinese LPs since there are not enough
large and representative datasets. In this work, we propose a new dataset named
Chinese Road Plate Dataset (CRPD) that contains multi-objective Chinese LP
images as a supplement to the existing public benchmarks. The images are mainly
captured with electronic monitoring systems with detailed annotations. To our
knowledge, CRPD is the largest public multi-objective Chinese LP dataset with
annotations of vertices. With CRPD, a unified detection and recognition network
with high efficiency is presented as the baseline. The network is end-to-end
trainable with fully real-time inference efficiency (30 fps at 640p). The
experiments on several public benchmarks demonstrate that our method has
reached competitive performance. The code and dataset will be publicly
available at https://github.com/yxgong0/CRPD.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 07:35:51 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Gong",
"Yanxiang",
""
],
[
"Deng",
"Linjie",
""
],
[
"Tao",
"Shuai",
""
],
[
"Lu",
"Xinchen",
""
],
[
"Wu",
"Peicheng",
""
],
[
"Xie",
"Zhiwei",
""
],
[
"Ma",
"Zheng",
""
],
[
"Xie",
"Mei",
""
]
] |
new_dataset
| 0.999774 |
2205.03684
|
Yiwen Xu
|
Yiwen Xu, Liangtao Huang, Tiesong Zhao, Liqun Lin, Ying Fang
|
Timestamp-independent Haptic-Visual Synchronization
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The booming haptic data significantly improves the users' immersion during
multimedia interaction. As a result, the study of the Haptic, Audio-Visual
Environment (HAVE) has attracted the attention of the multimedia community. To
realize such a system, a challenging task is the synchronization of multiple
sensorial signals, which is critical to user experience. Despite audio-visual
synchronization efforts, there is still a lack of haptic-aware multimedia
synchronization model. In this work, we propose a timestamp-independent
synchronization for haptic-visual signal transmission. First, we exploit the
sequential correlations during delivery and playback of a haptic-visual
communication system. Second, we develop a key sample extraction of haptic
signals based on the force feedback characteristics, and a key frame extraction
of visual signals based on deep object detection. Third, we combine the key
samples and frames to synchronize the corresponding haptic-visual signals.
Without timestamps in signal flow, the proposed method is still effective and
more robust to complicated network conditions. Subjective evaluation also shows
a significant improvement of user experience with the proposed method.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 16:56:08 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Xu",
"Yiwen",
""
],
[
"Huang",
"Liangtao",
""
],
[
"Zhao",
"Tiesong",
""
],
[
"Lin",
"Liqun",
""
],
[
"Fang",
"Ying",
""
]
] |
new_dataset
| 0.961278 |
2205.03688
|
Igor Morawski
|
Igor Morawski and Yu-An Chen and Yu-Sheng Lin and Shusil Dangi and Kai
He and Winston H. Hsu
|
GenISP: Neural ISP for Low-Light Machine Cognition
|
Accepted to CVPR 2022 Workshop NTIRE: New Trends in Image Restoration
and Enhancement workshop and Challenges
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Object detection in low-light conditions remains a challenging but important
problem with many practical implications. Some recent works show that, in
low-light conditions, object detectors using raw image data are more robust
than detectors using image data processed by a traditional ISP pipeline. To
improve detection performance in low-light conditions, one can fine-tune the
detector to use raw image data or use a dedicated low-light neural pipeline
trained with paired low- and normal-light data to restore and enhance the
image. However, different camera sensors have different spectral sensitivity
and learning-based models using raw images process data in the sensor-specific
color space. Thus, once trained, they do not guarantee generalization to other
camera sensors. We propose to improve generalization to unseen camera sensors
by implementing a minimal neural ISP pipeline for machine cognition, named
GenISP, that explicitly incorporates Color Space Transformation to a
device-independent color space. We also propose a two-stage color processing
implemented by two image-to-parameter modules that take a down-sized image as
input and regress global color correction parameters. Moreover, we propose to
train our proposed GenISP under the guidance of a pre-trained object detector
and avoid making assumptions about perceptual quality of the image, but rather
optimize the image representation for machine cognition. At the inference
stage, GenISP can be paired with any object detector. We perform extensive
experiments to compare our method to other low-light image restoration and
enhancement methods in an extrinsic task-based evaluation and validate that
GenISP can generalize to unseen sensors and object detectors. Finally, we
contribute a low-light dataset of 7K raw images annotated with 46K bounding
boxes for task-based benchmarking of future low-light image restoration and
object detection.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 17:17:24 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Morawski",
"Igor",
""
],
[
"Chen",
"Yu-An",
""
],
[
"Lin",
"Yu-Sheng",
""
],
[
"Dangi",
"Shusil",
""
],
[
"He",
"Kai",
""
],
[
"Hsu",
"Winston H.",
""
]
] |
new_dataset
| 0.998529 |
2205.03695
|
Chengsheng Mao
|
Chengsheng Mao, Liang Yao and Yuan Luo
|
AKI-BERT: a Pre-trained Clinical Language Model for Early Prediction of
Acute Kidney Injury
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Acute kidney injury (AKI) is a common clinical syndrome characterized by a
sudden episode of kidney failure or kidney damage within a few hours or a few
days. Accurate early prediction of AKI for patients in ICU who are more likely
than others to have AKI can enable timely interventions, and reduce the
complications of AKI. Much of the clinical information relevant to AKI is
captured in clinical notes that are largely unstructured text and requires
advanced natural language processing (NLP) for useful information extraction.
On the other hand, pre-trained contextual language models such as Bidirectional
Encoder Representations from Transformers (BERT) have recently improved
performance on many NLP tasks in the general domain. However, few have explored
BERT on
disease-specific medical domain tasks such as AKI early prediction. In this
paper, we try to apply BERT to specific diseases and present an AKI
domain-specific pre-trained language model based on BERT (AKI-BERT) that could
be used to mine the clinical notes for early prediction of AKI. AKI-BERT is a
BERT model pre-trained on the clinical notes of patients having risks for AKI.
Our experiments on Medical Information Mart for Intensive Care III (MIMIC-III)
dataset demonstrate that AKI-BERT can yield performance improvements for early
AKI prediction, thus expanding the utility of the BERT model from the general
clinical domain to a disease-specific domain.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 18:04:31 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Mao",
"Chengsheng",
""
],
[
"Yao",
"Liang",
""
],
[
"Luo",
"Yuan",
""
]
] |
new_dataset
| 0.997242 |
2205.03719
|
Laura Sisson
|
Laura Sisson
|
Odor Descriptor Understanding through Prompting
|
14 pages, 6 figures, 5 tables
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Embeddings from contemporary natural language processing (NLP) models are
commonly used as numerical representations for words or sentences. However,
odor descriptor words, like "leather" or "fruity", vary significantly between
their commonplace usage and their olfactory usage, as a result traditional
methods for generating these embeddings do not suffice. In this paper, we
present two methods to generate embeddings for odor words that are more closely
aligned with their olfactory meanings when compared to off-the-shelf
embeddings. These generated embeddings outperform the previous state-of-the-art
and contemporary fine-tuning/prompting methods on a pre-existing zero-shot
odor-specific NLP benchmark.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 20:44:22 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Sisson",
"Laura",
""
]
] |
new_dataset
| 0.986831 |
2205.03739
|
Eva Agapaki Dr.
|
Eva Agapaki
|
Airport Digital Twins for Resilient Disaster Management Response
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Airports are constantly facing a variety of hazards and threats from natural
disasters to cybersecurity attacks and airport stakeholders are confronted with
making operational decisions under irregular conditions. We introduce the
concept of the foundational twin, which can serve as a resilient data platform,
incorporating multiple data sources and enabling the interaction between an
umbrella of twins. We then focus on providing data sources and metrics for each
foundational twin, with an emphasis on the environmental airport twin for major
US airports.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2022 23:26:16 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Agapaki",
"Eva",
""
]
] |
new_dataset
| 0.999159 |
2205.03745
|
Mattia Paccamiccio
|
Fadi Al-Turjman, Diletta Cacciagrano, Leonardo Mostarda, Mattia
Paccamiccio, Zaib Ullah
|
Light Communication for Controlling Industrial Robots
| null |
First EAI International Conference, FoNeS - IoT 2020, Virtual
Event, October 1-2, 2020, Proceedings
|
10.1007/978-3-030-69431-9_9
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical Wireless Communication (OWC) is regarded as an auspicious
communication approach that can outperform existing wireless technology. It
utilizes LED lights, whose subtle variations in radiant intensity generate a
binary data stream. This is perceived by a photodiode, which converts it into
electric signals for further interpretation. This article explores the use of
this emerging technology to wirelessly control industrial robots, removing the
need for wires, especially in environments where radio waves do not work due
to environmental factors or are not allowed for safety
reasons. We performed experiments to ensure the suitability and efficiency of
OWC based technology for the aforementioned scope and "in vitro" tests in
various Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) configurations to
observe the system throughput and reliability. The technology performance in
the "clear LoS" and in the presence of a transparent barrier, were also
analyzed.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 00:30:52 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Al-Turjman",
"Fadi",
""
],
[
"Cacciagrano",
"Diletta",
""
],
[
"Mostarda",
"Leonardo",
""
],
[
"Paccamiccio",
"Mattia",
""
],
[
"Ullah",
"Zaib",
""
]
] |
new_dataset
| 0.987378 |
2205.03757
|
Naoki Matsumoto
|
Naoki Matsumoto and Yuuki Takai
|
Cover time of graphs with bounded genus
|
17 pages
| null | null | null |
cs.DM math.CO math.PR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The cover time of a finite connected graph is the expected number of steps
needed for a simple random walk on the graph to visit all vertices of the
graph. It is known that the cover time of any finite connected $n$-vertex graph
is at least $(1 + o(1)) n \log n$ and at most $(1 + o(1)) \frac{4}{27} n^3$. By
Jonasson and Schramm, the cover time of any bounded-degree finite connected
$n$-vertex planar graph is at least $c n(\log n)^2$ and at most $6n^2$, where
$c$ is a positive constant depending only on the maximal degree of the graph.
In particular, the lower bound is established via the use of circle packing of
planar graphs on the Riemann sphere. In this paper, we show that the cover time
of any finite $n$-vertex graph $G$ with maximum degree $\Delta$ on the compact
Riemann surface $S$ of given genus $g$ is at least $c n(\log n)^2/ \Delta(g +
1)$ and at most $(6 + o(1))n^2$, where $c$ is an absolute constant, provided
that $n$ is sufficiently large and three sufficient conditions hold for $S$ and
a circle packing of $G$ filling $S$.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 02:07:19 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Matsumoto",
"Naoki",
""
],
[
"Takai",
"Yuuki",
""
]
] |
new_dataset
| 0.983716 |
2205.03759
|
Chi-Luen Feng
|
Chi-Luen Feng, Po-chun Hsu, Hung-yi Lee
|
Silence is Sweeter Than Speech: Self-Supervised Model Using Silence to
Store Speaker Information
| null | null | null | null |
cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Self-Supervised Learning (SSL) has made great strides recently. SSL speech
models achieve decent performance on a wide range of downstream tasks,
suggesting that they extract different aspects of information from speech.
However, how SSL models store various information in hidden representations
without interfering is still poorly understood. Taking the recently successful
SSL model, HuBERT, as an example, we explore how the SSL model processes and
stores speaker information in the representation. We found that HuBERT stores
speaker information in representations whose positions correspond to silences
in a waveform. There are several pieces of evidence. (1) We find that the
utterances with more silent parts in the waveforms have better Speaker
Identification (SID) accuracy. (2) If we use the whole utterances for SID, the
silence part always contributes more to the SID task. (3) If we only use the
representation of a part of the utterance for SID, the silenced part has higher
accuracy than the other parts. Our findings not only contribute to a better
understanding of SSL models but also improve performance. By simply adding
silence to the original waveform, HuBERT improved its accuracy on SID by nearly
2%.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 02:10:39 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Feng",
"Chi-Luen",
""
],
[
"Hsu",
"Po-chun",
""
],
[
"Lee",
"Hung-yi",
""
]
] |
new_dataset
| 0.998917 |
2205.03774
|
Eileen Wang
|
Eileen Wang, Caren Han, Josiah Poon
|
RoViST: Learning Robust Metrics for Visual Storytelling
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual storytelling (VST) is the task of generating a story paragraph that
describes a given image sequence. Most existing storytelling approaches have
evaluated their models using traditional natural language generation metrics
like BLEU or CIDEr. However, such metrics based on n-gram matching tend to have
poor correlation with human evaluation scores and do not explicitly consider
other criteria necessary for storytelling such as sentence structure or topic
coherence. Moreover, a single score is not enough to assess a story as it does
not inform us about what specific errors were made by the model. In this paper,
we propose 3 evaluation metric sets that analyse which aspects we would look
for in a good story: 1) visual grounding, 2) coherence, and 3) non-redundancy.
We measure the reliability of our metric sets by analysing their correlation
with human judgement scores on a sample of machine stories obtained from 4
state-of-the-art models trained on the Visual Storytelling Dataset (VIST). Our
metric sets outperform other metrics on human correlation, and could serve
as a learning-based evaluation metric set that is complementary to existing
rule-based metrics.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 03:51:22 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Wang",
"Eileen",
""
],
[
"Han",
"Caren",
""
],
[
"Poon",
"Josiah",
""
]
] |
new_dataset
| 0.992649 |
2205.03786
|
Muhao Chen
|
Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Bryan
Hooi
|
GRAPHCACHE: Message Passing as Caching for Sentence-Level Relation
Extraction
|
NAACL 2022 (Findings)
| null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Entity types and textual context are essential properties for sentence-level
relation extraction (RE). Existing work only encodes these properties within
individual instances, which limits the performance of RE given the insufficient
features in a single sentence. In contrast, we model these properties from the
whole dataset and use the dataset-level information to enrich the semantics of
every instance. We propose the GRAPHCACHE (Graph Neural Network as Caching)
module, that propagates the features across sentences to learn better
representations for RE. GRAPHCACHE aggregates the features from sentences in
the whole dataset to learn global representations of properties, and uses them
to augment the local features within individual sentences. The global property
features act as dataset-level prior knowledge for RE, and a complement to the
sentence-level features. Inspired by the classical caching technique in
computer systems, we develop GRAPHCACHE to update the property representations
in an online manner. Overall, GRAPHCACHE yields significant effectiveness gains
on RE and enables efficient message passing across all sentences in the
dataset.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 05:30:19 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Wang",
"Yiwei",
""
],
[
"Chen",
"Muhao",
""
],
[
"Zhou",
"Wenxuan",
""
],
[
"Cai",
"Yujun",
""
],
[
"Liang",
"Yuxuan",
""
],
[
"Hooi",
"Bryan",
""
]
] |
new_dataset
| 0.992891 |
2205.03804
|
Orith Toledo-Ronen
|
Orith Toledo-Ronen, Matan Orbach, Yoav Katz, Noam Slonim
|
Multi-Domain Targeted Sentiment Analysis
|
Accepted to NAACL 2022 (long paper)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Targeted Sentiment Analysis (TSA) is a central task for generating insights
from consumer reviews. Such content is extremely diverse, with sites like
Amazon or Yelp containing reviews on products and businesses from many
different domains. A real-world TSA system should gracefully handle that
diversity. This can be achieved by a multi-domain model -- one that is robust
to the domain of the analyzed texts, and performs well on various domains. To
address this scenario, we present a multi-domain TSA system based on augmenting
a given training set with diverse weak labels from assorted domains. These are
obtained through self-training on the Yelp reviews corpus. Extensive
experiments with our approach on three evaluation datasets across different
domains demonstrate the effectiveness of our solution. We further analyze how
restrictions imposed on the available labeled data affect the performance, and
compare the proposed method to the costly alternative of manually gathering
diverse TSA labeled data. Our results and analysis show that our approach is a
promising step towards a practical domain-robust TSA system.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 07:40:36 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Toledo-Ronen",
"Orith",
""
],
[
"Orbach",
"Matan",
""
],
[
"Katz",
"Yoav",
""
],
[
"Slonim",
"Noam",
""
]
] |
new_dataset
| 0.961211 |
2205.03817
|
Siyang Jiang Leon
|
Siyang Jiang, Wei Ding, Hsi-Wen Chen, Ming-Syan Chen
|
PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning
Under the Support-Query Shift
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Few-shot learning methods aim to embed the data into a low-dimensional
embedding space and then classify the unseen query data to the seen support
set. While these works assume that the support set and the query set lie in the
same embedding space, a distribution shift usually occurs between the support
set and the query set, i.e., the Support-Query Shift, in the real world. Though
optimal transportation has shown convincing results in aligning different
distributions, we find that the small perturbations in the images would
significantly misguide the optimal transportation and thus degrade the model
performance. To relieve the misalignment, we first propose a novel adversarial
data augmentation method, namely Perturbation-Guided Adversarial Alignment
(PGADA), which generates the hard examples in a self-supervised manner. In
addition, we introduce Regularized Optimal Transportation to derive a smooth
optimal transportation plan. Extensive experiments on three benchmark datasets
demonstrate that our framework significantly outperforms eleven
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 09:15:58 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Jiang",
"Siyang",
""
],
[
"Ding",
"Wei",
""
],
[
"Chen",
"Hsi-Wen",
""
],
[
"Chen",
"Ming-Syan",
""
]
] |
new_dataset
| 0.982766 |
2205.03890
|
Boris Ryabko
|
Boris Ryabko
|
Entropically secure cipher for messages generated by Markov chains with
unknown statistics
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In 2002, Russell and Wang proposed a definition of entropically security that
was developed within the framework of secret key cryptography. An
entropically-secure system is unconditionally secure, that is, unbreakable,
regardless of the adversary's computing power. In 2004, Dodis and Smith extended
the results of Russell and Wang and, in particular, argued that the concept of
an entropically secure symmetric encryption scheme is extremely important for
cryptography, since it is possible to construct entropically secure symmetric
encryption schemes with keys much shorter than the length of the input data,
which makes it possible to bypass Shannon's famous lower bound on key length.
In this report, we propose an entropically secure scheme for the
case where the encrypted message is generated by a Markov chain with unknown
statistics. The length of the required secret key is proportional to the
logarithm of the length of the message (as opposed to the length of the message
itself for the one-time pad).
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 15:01:50 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Ryabko",
"Boris",
""
]
] |
new_dataset
| 0.973363 |
2205.04022
|
Maria N\u{a}dejde
|
Maria N\u{a}dejde, Anna Currey, Benjamin Hsu, Xing Niu, Marcello
Federico, Georgiana Dinu
|
CoCoA-MT: A Dataset and Benchmark for Contrastive Controlled MT with
Application to Formality
|
NAACL 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The machine translation (MT) task is typically formulated as that of
returning a single translation for an input segment. However, in many cases,
multiple different translations are valid and the appropriate translation may
depend on the intended target audience, characteristics of the speaker, or even
the relationship between speakers. Specific problems arise when dealing with
honorifics, particularly translating from English into languages with formality
markers. For example, the sentence "Are you sure?" can be translated in German
as "Sind Sie sich sicher?" (formal register) or "Bist du dir sicher?"
(informal). Using wrong or inconsistent tone may be perceived as inappropriate
or jarring for users of certain cultures and demographics. This work addresses
the problem of learning to control target language attributes, in this case
formality, from a small amount of labeled contrastive data. We introduce an
annotated dataset (CoCoA-MT) and an associated evaluation metric for training
and evaluating formality-controlled MT models for six diverse target languages.
We show that we can train formality-controlled models by fine-tuning on labeled
contrastive data, achieving high accuracy (82% in-domain and 73% out-of-domain)
while maintaining overall quality.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 04:05:36 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Nădejde",
"Maria",
""
],
[
"Currey",
"Anna",
""
],
[
"Hsu",
"Benjamin",
""
],
[
"Niu",
"Xing",
""
],
[
"Federico",
"Marcello",
""
],
[
"Dinu",
"Georgiana",
""
]
] |
new_dataset
| 0.999797 |
2205.04164
|
Iason Katsamenis
|
Iason Katsamenis, Matthaios Bimpas, Eftychios Protopapadakis,
Charalampos Zafeiropoulos, Dimitris Kalogeras, Anastasios Doulamis, Nikolaos
Doulamis, Carlos Mart\'in-Portugu\'es Montoliu, Yannis Handanos, Franziska
Schmidt, Lionel Ott, Miquel Cantero, Rafael Lopez
|
Robotic Maintenance of Road Infrastructures: The HERON Project
|
13 pages, 6 figures, 1 table
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Of all public assets, road infrastructure tops the list. Roads are crucial
for economic development and growth, providing access to education, health, and
employment. The maintenance, repair, and upgrade of roads are therefore vital
to road users' health and safety as well as to a well-functioning and
prosperous modern economy. The EU-funded HERON project will develop an
integrated automated system to adequately maintain road infrastructure. In
turn, this will reduce accidents, lower maintenance costs, and increase road
network capacity and efficiency. To coordinate maintenance works, the project
will design an autonomous ground robotic vehicle that will be supported by
autonomous drones. Sensors and scanners for 3D mapping will be used in addition
to artificial intelligence toolkits to help coordinate road maintenance and
upgrade workflows.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 10:17:36 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Katsamenis",
"Iason",
""
],
[
"Bimpas",
"Matthaios",
""
],
[
"Protopapadakis",
"Eftychios",
""
],
[
"Zafeiropoulos",
"Charalampos",
""
],
[
"Kalogeras",
"Dimitris",
""
],
[
"Doulamis",
"Anastasios",
""
],
[
"Doulamis",
"Nikolaos",
""
],
[
"Montoliu",
"Carlos Martín-Portugués",
""
],
[
"Handanos",
"Yannis",
""
],
[
"Schmidt",
"Franziska",
""
],
[
"Ott",
"Lionel",
""
],
[
"Cantero",
"Miquel",
""
],
[
"Lopez",
"Rafael",
""
]
] |
new_dataset
| 0.99799 |
2205.04185
|
Mustafa Melih Mutlu
|
M. Melih Mutlu, Arzucan \"Ozg\"ur
|
A Dataset and BERT-based Models for Targeted Sentiment Analysis on
Turkish Texts
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Targeted Sentiment Analysis aims to extract sentiment towards a particular
target from a given text. It is a field that is attracting attention due to the
increasing accessibility of the Internet, which leads people to generate an
enormous amount of data. Sentiment analysis, which in general requires
annotated data for training, is a well-researched area for widely studied
languages such as English. For low-resource languages such as Turkish, there is
a lack of such annotated data. We present an annotated Turkish dataset suitable
for targeted sentiment analysis. We also propose BERT-based models with
different architectures to accomplish the task of targeted sentiment analysis.
The results demonstrate that the proposed models outperform the traditional
sentiment analysis models for the targeted sentiment analysis task.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 10:57:39 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Mutlu",
"M. Melih",
""
],
[
"Özgür",
"Arzucan",
""
]
] |
new_dataset
| 0.999653 |
2205.04187
|
Brendon McBain
|
Brendon McBain, Emanuele Viterbo, James Saunderson
|
Finite-State Semi-Markov Channels for Nanopore Sequencing
|
6 pages. 4 figures. To appear in the Proceedings of the 2022 IEEE
International Symposium on Information Theory (ISIT)
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Nanopore sequencing is an emerging DNA sequencing technology that has been
proposed for use in DNA storage systems. We propose the noisy nanopore channel
model for nanopore sequencing. This model captures duplications, inter-symbol
interference, and noisy measurements by concatenating an i.i.d. duplication
channel with a finite-state semi-Markov channel. Compared to previous models,
this channel models the dominant distortions of the nanopore while remaining
tractable. Anticipating future coding schemes, we derive MAP detection
algorithms and estimate achievable rates. Given that finite-state semi-Markov
channels are a subclass of channels with memory, we conjecture that the
achievable rate of the noisy nanopore channel can be optimised using a
variation of the generalised Blahut-Arimoto algorithm.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 11:05:23 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"McBain",
"Brendon",
""
],
[
"Viterbo",
"Emanuele",
""
],
[
"Saunderson",
"James",
""
]
] |
new_dataset
| 0.996866 |
2205.04189
|
Jorge Mart\'in-P\'erez
|
Milan Groshev, Jorge Mart\'in-P\'erez, Carlos Guimar\~aes, Antonio de
la Oliva, Carlos J. Bernardos
|
FoReCo: a forecast-based recovery mechanism for real-time remote control
of robotic manipulators
|
10 figures, 12 pages, journal, submitted to IEEE TNSM
| null | null | null |
cs.NI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless communications represent a game changer for future manufacturing
plants, enabling flexible production chains as machinery and other components
are not restricted to a location by the rigid wired connections on the factory
floor. However, the presence of electromagnetic interference in the wireless
spectrum may result in packet loss and delay, making it a challenging
environment to meet the extreme reliability requirements of industrial
applications. In such conditions, achieving real-time remote control, either
from the Edge or Cloud, becomes complex. In this paper, we investigate a
forecast-based recovery mechanism for real-time remote control of robotic
manipulators (FoReCo) that uses Machine Learning (ML) to infer lost commands
caused by interference in the wireless channel. FoReCo is evaluated through
both simulation and experimentation in interference prone IEEE 802.11 wireless
links, and using a commercial research robot that performs pick-and-place
tasks. Results show that, in the case of interference, the FoReCo trajectory
error is decreased 18-fold in simulation and 2-fold in experimentation, and that
FoReCo is sufficiently lightweight to be deployed on hardware already
used in existing solutions.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 11:08:45 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Groshev",
"Milan",
""
],
[
"Martín-Pérez",
"Jorge",
""
],
[
"Guimarães",
"Carlos",
""
],
[
"de la Oliva",
"Antonio",
""
],
[
"Bernardos",
"Carlos J.",
""
]
] |
new_dataset
| 0.984632 |
2205.04193
|
Oliver Gasser
|
Victor-Alexandru P\u{a}durean, Oliver Gasser, Randy Bush, Anja
Feldmann
|
SRv6: Is There Anybody Out There?
|
Accepted at WTMC 2022
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segment routing is a modern form of source-based routing, i.e., a routing
technique where all or part of the routing decision is predetermined by the
source or a hop on the path. Since initial standardization efforts in 2013,
segment routing seems to have garnered substantial industry and operator
support. Especially segment routing over IPv6 (SRv6) is advertised as having
several advantages for easy deployment and flexibility in operations in
networks. Many people, however, argue that the deployment of segment routing
and SRv6 in particular poses a significant security threat if not done with the
utmost care. In this paper we conduct a first empirical analysis of SRv6
deployment in the Internet. First, we analyze SRv6 behavior in an emulation
environment and find that different SRv6 implementations have the potential to
leak information to the outside. Second, we search for signs of SRv6 deployment
in publicly available route collector data, but could not find any traces.
Third, we run large-scale traceroute campaigns to investigate possible SRv6
deployments. In this first empirical study on SRv6 we are unable to find traces
of SRv6 deployment even for companies that claim to have it deployed in their
networks. This lack of leakage might be an indication of good security
practices being followed by network operators when deploying SRv6.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 11:14:56 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Pădurean",
"Victor-Alexandru",
""
],
[
"Gasser",
"Oliver",
""
],
[
"Bush",
"Randy",
""
],
[
"Feldmann",
"Anja",
""
]
] |
new_dataset
| 0.999815 |
2205.04197
|
James C. A. Main
|
James C. A. Main, Mickael Randour, Jeremy Sproston
|
Timed Games with Bounded Window Parity Objectives
|
44 pages
| null | null | null |
cs.GT cs.FL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The window mechanism, introduced by Chatterjee et al. for mean-payoff and
total-payoff objectives in two-player turn-based games on graphs, refines
long-term objectives with time bounds. This mechanism has proven useful in a
variety of settings, and most recently in timed systems.
In the timed setting, the so-called fixed timed window parity objectives have
been studied. A fixed timed window parity objective is defined with respect to
some time bound and requires that, at all times, we witness a time frame, i.e.,
a window, of size less than the fixed bound in which the smallest priority is
even. In this work, we focus on the bounded timed window parity objective. Such
an objective is satisfied if there exists some bound for which the fixed
objective is satisfied. The satisfaction of bounded objectives is robust to
modeling choices such as constants appearing in constraints, unlike fixed
objectives, for which the choice of constants may affect the satisfaction for a
given bound.
We show that verification of bounded timed window objectives in timed
automata can be performed in polynomial space, and that timed games with these
objectives can be solved in exponential time, even for multi-objective
extensions. This matches the complexity classes of the fixed case. We also
provide a comparison of the different variants of window parity objectives.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 11:30:51 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Main",
"James C. A.",
""
],
[
"Randour",
"Mickael",
""
],
[
"Sproston",
"Jeremy",
""
]
] |
new_dataset
| 0.968443 |
2205.04210
|
Adam Hamilton
|
Adam Hamilton, Matthew Roughan, and Giang T. Nguyen
|
Boolean Expressions in Firewall Analysis
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Firewall policies are an important line of defence in cybersecurity,
specifying which packets are allowed to pass through a network and which are
not. These firewall policies are made up of a list of interacting rules. In
practice, firewalls can consist of hundreds or thousands of rules. This can be
very difficult for a human to configure correctly. One proposed solution is to
model firewall policies as Boolean expressions and use existing computer
programs such as SAT solvers to verify that the firewall satisfies certain
conditions. This paper takes an in-depth look at the Boolean expressions that
represent firewall policies. We present an algorithm that translates a list of
firewall rules into a Boolean expression in conjunctive normal form (CNF) or
disjunctive normal form (DNF). We also place an upper bound on the size of the
CNF and DNF that is polynomial in the number of rules in the firewall policy.
This shows that the combinatorial explosion suggested by past results when
converting a Boolean expression from CNF to DNF does not occur in
the context of firewall analysis.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 23:46:04 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Hamilton",
"Adam",
""
],
[
"Roughan",
"Matthew",
""
],
[
"Nguyen",
"Giang T.",
""
]
] |
new_dataset
| 0.968084 |
2205.04251
|
Huanghao Feng
|
Huanghao Feng, Mohammad H. Mahoor and Francesca Dino
|
A Music-Therapy Robotic Platform for Children with Autism: A Pilot Study
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Children with Autism Spectrum Disorder (ASD) experience deficits in verbal
and nonverbal communication skills including motor control, turn-taking, and
emotion recognition. Innovative technology, such as socially assistive robots,
has shown to be a viable method for Autism therapy. This paper presents a novel
robot-based music-therapy platform for modeling and improving the social
responses and behaviors of children with ASD. Our autonomous social interactive
system consists of three modules. We adopted Short-time Fourier Transform and
Levenshtein distance to fulfill the design requirements: a) "music detection"
and b) "smart scoring and feedback", which allows NAO to understand music and
provide additional practice and oral feedback to the users as applicable. We
designed and implemented six Human-Robot-Interaction (HRI) sessions including
four intervention sessions. Nine children with ASD and seven typically
developing children participated in a total of fifty HRI experimental sessions. Using
our platform, we collected and analyzed data on social behavioral changes and
emotion recognition using Electrodermal Activity (EDA) signals. The results of
our experiments demonstrate most of the participants were able to complete
motor control tasks with ~70% accuracy. Six out of the 9 ASD participants
showed stable turn-taking behavior when playing music. The results of automated
emotion classification using Support Vector Machines illustrate that emotional
arousal in the ASD group can be detected and well recognized via EDA
bio-signals. In summary, the results of our data analyses, including emotion
classification using EDA signals, indicate that the proposed robot-music based
therapy platform is an attractive and promising assistive tool to facilitate
the improvement of fine motor control and turn-taking skills in children with
ASD.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 13:03:56 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Feng",
"Huanghao",
""
],
[
"Mahoor",
"Mohammad H.",
""
],
[
"Dino",
"Francesca",
""
]
] |
new_dataset
| 0.971494 |
2205.04257
|
Kawsar Haghshenas
|
Kawsar Haghshenas, Brian Setz, Yannis Bloch, and Marco Aiello
|
Enough Hot Air: The Role of Immersion Cooling
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Air cooling is the traditional solution to chill servers in data centers.
However, the continuous increase in global data center energy consumption
combined with the increase of the racks' power dissipation calls for the use of
more efficient alternatives. Immersion cooling is one such alternative. In this
paper, we quantitatively examine and compare air cooling and immersion cooling
solutions. The examined characteristics include power usage efficiency (PUE),
computing and power density, cost, and maintenance overheads. A direct
comparison shows a reduction of about 50% in energy consumption and a reduction
of about two-thirds of the occupied space, by using immersion cooling. In
addition, the higher heat capacity of used liquids in immersion cooling
compared to air allows for much higher rack power densities. Moreover,
immersion cooling requires less capital and operational expenditures. However,
challenging maintenance procedures together with the increased number of IT
failures are the main downsides. By selecting immersion cooling, cloud
providers must trade off the decrease in energy and cost and the increase in
power density against the higher maintenance and reliability concerns. Finally, we
argue that retrofitting an air-cooled data center with immersion cooling will
result in high costs and is generally not recommended.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 13:18:04 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Haghshenas",
"Kawsar",
""
],
[
"Setz",
"Brian",
""
],
[
"Bloch",
"Yannis",
""
],
[
"Aiello",
"Marco",
""
]
] |
new_dataset
| 0.978148 |
2205.04362
|
Jason Harris
|
Jason Harris, Danny Driess, Marc Toussaint
|
FC$^3$: Feasibility-Based Control Chain Coordination
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hierarchical coordination of controllers often uses symbolic state
representations that fully abstract their underlying low-level controllers,
treating them as "black boxes" to the symbolic action abstraction. This paper
proposes a framework to realize robust behavior, which we call
Feasibility-based Control Chain Coordination (FC$^3$). Our controllers expose
the geometric features and constraints they operate on. Based on this, FC$^3$
can reason over the controllers' feasibility and their sequence feasibility.
For a given task, FC$^3$ first automatically constructs a library of potential
controller chains using a symbolic action tree, which is then used to
coordinate controllers in a chain, evaluate task feasibility, as well as
switching between controller chains if necessary. In several real-world
experiments we demonstrate FC$^3$'s robustness and awareness of the task's
feasibility through its own actions and gradual responses to different
interferences.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 15:03:53 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Harris",
"Jason",
""
],
[
"Driess",
"Danny",
""
],
[
"Toussaint",
"Marc",
""
]
] |
new_dataset
| 0.998429 |
2205.04404
|
Firoj Alam
|
Rabindra Nath Nandi, Firoj Alam, Preslav Nakov
|
TeamX@DravidianLangTech-ACL2022: A Comparative Analysis for Troll-Based
Meme Classification
|
Accepted at DravidianLangTech-ACL2022 (Colocated with ACL-2022).
disinformation, misinformation, factuality, harmfulness, fake news,
propaganda, multimodality, text, images, videos, network structure,
temporality
| null | null | null |
cs.CL cs.AI cs.CV cs.MM cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The spread of fake news, propaganda, misinformation, disinformation, and
harmful content online raised concerns among social media platforms, government
agencies, policymakers, and society as a whole. This is because such harmful or
abusive content leads to several consequences to people such as physical,
emotional, relational, and financial. Among different harmful content
\textit{trolling-based} online content is one of them, where the idea is to
post a message that is provocative, offensive, or menacing with an intent to
mislead the audience. The content can be textual, visual, a combination of
both, or a meme. In this study, we provide a comparative analysis of
troll-based memes classification using the textual, visual, and multimodal
content. We report several interesting findings in terms of code-mixed text,
multimodal setting, and combining an additional dataset, which shows
improvements over the majority baseline.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 16:19:28 GMT"
}
] | 2022-05-10T00:00:00 |
[
[
"Nandi",
"Rabindra Nath",
""
],
[
"Alam",
"Firoj",
""
],
[
"Nakov",
"Preslav",
""
]
] |
new_dataset
| 0.993014 |
1810.04298
|
Jaros{\l}aw B{\l}asiok
|
Jaros{\l}aw B{\l}asiok, Venkatesan Guruswami, Madhu Sudan
|
Polar Codes with exponentially small error at finite block length
|
17 pages, Appeared in RANDOM'18. arXiv admin note: substantial text
overlap with arXiv:1802.02718
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that the entire class of polar codes (up to a natural necessary
condition) converge to capacity at block lengths polynomial in the gap to
capacity, while simultaneously achieving failure probabilities that are
exponentially small in the block length (i.e., decoding fails with probability
$\exp(-N^{\Omega(1)})$ for codes of length $N$). Previously this combination
was known only for one specific family within the class of polar codes, whereas
we establish this whenever the polar code exhibits a condition necessary for
any polarization. Our results adapt and strengthen a local analysis of polar
codes due to the authors with Nakkiran and Rudra [Proc. STOC 2018]. Their
analysis related the time-local behavior of a martingale to its global
convergence, and this allowed them to prove that the broad class of polar codes
converge to capacity at polynomial block lengths. Their analysis easily adapts
to show exponentially small failure probabilities, provided the associated
martingale, the ``Arikan martingale'', exhibits a corresponding strong local
effect. The main contribution of this work is a much stronger local analysis of
the Arikan martingale. This leads to the general result claimed above. In
addition to our general result, we also show, for the first time, polar codes
that achieve failure probability $\exp(-N^{\beta})$ for any $\beta < 1$ while
converging to capacity at block length polynomial in the gap to capacity.
Finally we also show that the ``local'' approach can be combined with any
analysis of failure probability of an arbitrary polar code to get essentially
the same failure probability while achieving block length polynomial in the gap
to capacity.
|
[
{
"version": "v1",
"created": "Tue, 9 Oct 2018 23:35:26 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Błasiok",
"Jarosław",
""
],
[
"Guruswami",
"Venkatesan",
""
],
[
"Sudan",
"Madhu",
""
]
] |
new_dataset
| 0.994317 |
2101.12001
|
Thanasis Vergoulis
|
Thanasis Vergoulis, Ilias Kanellos, Claudio Atzori, Andrea Mannocci,
Serafeim Chatzopoulos, Sandro La Bruzzo, Natalia Manola, Paolo Manghi
|
BIP! DB: A Dataset of Impact Measures for Scientific Publications
| null |
WWW (Companion Volume) 2021: 456-460
|
10.1145/3442442.3451369
| null |
cs.DL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The growth rate of the number of scientific publications is constantly
increasing, creating important challenges in the identification of valuable
research and in various scholarly data management applications, in general. In
this context, measures which can effectively quantify the scientific impact
could be invaluable. In this work, we present BIP! DB, an open dataset that
contains a variety of impact measures calculated for a large collection of more
than 100 million scientific publications from various disciplines.
|
[
{
"version": "v1",
"created": "Thu, 28 Jan 2021 13:59:55 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2022 13:03:19 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Vergoulis",
"Thanasis",
""
],
[
"Kanellos",
"Ilias",
""
],
[
"Atzori",
"Claudio",
""
],
[
"Mannocci",
"Andrea",
""
],
[
"Chatzopoulos",
"Serafeim",
""
],
[
"La Bruzzo",
"Sandro",
""
],
[
"Manola",
"Natalia",
""
],
[
"Manghi",
"Paolo",
""
]
] |
new_dataset
| 0.994685 |
2105.08693
|
Sriram Bhyravarapu
|
Sriram Bhyravarapu, Tim A. Hartmann, Hung P. Hoang, Subrahmanyam
Kalyanasundaram and I. Vinod Reddy
|
Conflict-Free Coloring: Graphs of Bounded Clique Width and Intersection
Graphs
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given an undirected graph $G$, a conflict-free coloring CFON* (resp. CFCN*)
is an assignment of colors to a subset of the vertices of the graph, such that
for every vertex there exists a color that is assigned to exactly one vertex in
its open neighborhood (resp. closed neighborhood). The conflict-free coloring
problem asks to find the minimum number of colors required for such a CFON*
(resp. CFCN*) coloring, called the conflict-free chromatic number, denoted by
$\chi^*_{ON}(G)$ (resp. $\chi^*_{CN}(G)$). The decision versions of the
problems are NP-complete in general.
In this paper, we show the following results on the conflict-free coloring
problem under open and closed neighborhood settings. Both versions of the
problem are fixed-parameter tractable parameterized by the combined parameters
clique width and the solution size. We also show the existence of graphs that
have bounded clique width and unbounded conflict-free chromatic numbers (on
both versions). We show that $\chi^*_{CN}(G)\leq 3$, for a distance hereditary
graph $G$. On the contrary, we show the existence of a distance hereditary
graph that has an unbounded $\chi^*_{ON}(G)$. On the positive side, we show
that block graphs and cographs (which are subclasses of distance hereditary
graphs) have bounds of three and two respectively for $\chi^*_{ON}(G)$, and
show that both problems are polynomial time solvable on block graphs and
cographs.
We show that $\chi^*_{ON}(G)\leq 3$, for an interval graph $G$, improving the
bound by Reddy (2018) and also prove that the above bound is tight. Moreover,
we give upper bounds for $\chi^*_{ON}(G)$ on unit square and unit disk graphs
and show NP-completeness results. For split graphs, we show that the CFON*
problem is NP-complete and the CFCN* problem is polynomial time solvable. We
study the problems on Kneser graphs and give upper and lower bounds.
|
[
{
"version": "v1",
"created": "Tue, 18 May 2021 17:29:26 GMT"
},
{
"version": "v2",
"created": "Wed, 19 May 2021 12:14:19 GMT"
},
{
"version": "v3",
"created": "Fri, 6 May 2022 14:33:35 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Bhyravarapu",
"Sriram",
""
],
[
"Hartmann",
"Tim A.",
""
],
[
"Hoang",
"Hung P.",
""
],
[
"Kalyanasundaram",
"Subrahmanyam",
""
],
[
"Reddy",
"I. Vinod",
""
]
] |
new_dataset
| 0.983893 |
2108.05921
|
Scott A. Hale
|
Hannah Rose Kirk and Bertram Vidgen and Paul R\"ottger and Tristan
Thrush and Scott A. Hale
|
Hatemoji: A Test Suite and Adversarially-Generated Dataset for
Benchmarking and Detecting Emoji-based Hate
| null |
2022 Annual Conference of the North American Chapter of the
Association for Computational Linguistics (NAACL 2022)
| null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting online hate is a complex task, and low-performing models have
harmful consequences when used for sensitive applications such as content
moderation. Emoji-based hate is an emerging challenge for automated detection.
We present HatemojiCheck, a test suite of 3,930 short-form statements that
allows us to evaluate performance on hateful language expressed with emoji.
Using the test suite, we expose weaknesses in existing hate detection models.
To address these weaknesses, we create the HatemojiBuild dataset using a
human-and-model-in-the-loop approach. Models built with these 5,912 adversarial
examples perform substantially better at detecting emoji-based hate, while
retaining strong performance on text-only hate. Both HatemojiCheck and
HatemojiBuild are made publicly available. See our Github Repository
(https://github.com/HannahKirk/Hatemoji). HatemojiCheck, HatemojiBuild, and the
final Hatemoji Model are also available on HuggingFace
(https://huggingface.co/datasets/HannahRoseKirk/).
|
[
{
"version": "v1",
"created": "Thu, 12 Aug 2021 18:42:06 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Aug 2021 07:55:12 GMT"
},
{
"version": "v3",
"created": "Fri, 6 May 2022 16:12:05 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Kirk",
"Hannah Rose",
""
],
[
"Vidgen",
"Bertram",
""
],
[
"Röttger",
"Paul",
""
],
[
"Thrush",
"Tristan",
""
],
[
"Hale",
"Scott A.",
""
]
] |
new_dataset
| 0.999885 |
2109.06250
|
Tianrui Guan
|
Tianrui Guan, Zhenpeng He, Ruitao Song, Dinesh Manocha, Liangjun Zhang
|
TNS: Terrain Traversability Mapping and Navigation System for Autonomous
Excavators
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a terrain traversability mapping and navigation system (TNS) for
autonomous excavator applications in an unstructured environment. We use an
efficient approach to extract terrain features from RGB images and 3D point
clouds and incorporate them into a global map for planning and navigation. Our
system can adapt to changing environments and update the terrain information in
real-time. Moreover, we present a novel dataset, the Complex Worksite Terrain
(CWT) dataset, which consists of RGB images from construction sites with seven
categories based on navigability. Our novel algorithms improve the mapping
accuracy over previous SOTA methods by 4.17-30.48% and reduce MSE on the
traversability map by 13.8-71.4%. We have combined our mapping approach with
planning and control modules in an autonomous excavator navigation system and
observe 49.3% improvement in the overall success rate. Based on TNS, we
demonstrate the first autonomous excavator that can navigate through
unstructured environments consisting of deep pits, steep hills, rock piles, and
other complex terrain features.
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 18:37:36 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Dec 2021 21:38:11 GMT"
},
{
"version": "v3",
"created": "Sun, 1 May 2022 18:31:39 GMT"
},
{
"version": "v4",
"created": "Thu, 5 May 2022 19:08:28 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Guan",
"Tianrui",
""
],
[
"He",
"Zhenpeng",
""
],
[
"Song",
"Ruitao",
""
],
[
"Manocha",
"Dinesh",
""
],
[
"Zhang",
"Liangjun",
""
]
] |
new_dataset
| 0.999465 |
2111.04798
|
Cristina Menghini
|
Wasu Piriyakulkij and Cristina Menghini and Ross Briden and Nihal V.
Nayak and Jeffrey Zhu and Elaheh Raisi and Stephen H. Bach
|
TAGLETS: A System for Automatic Semi-Supervised Learning with Auxiliary
Data
|
Paper published at MLSys 2022. It passed the artifact evaluation
earning two ACM badges: (1) Artifacts Evaluated Functional v1.1 and (2)
Artifacts Available v1.1
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning practitioners often have access to a spectrum of data:
labeled data for the target task (which is often limited), unlabeled data, and
auxiliary data, the many available labeled datasets for other tasks. We
describe TAGLETS, a system built to study techniques for automatically
exploiting all three types of data and creating high-quality, servable
classifiers. The key components of TAGLETS are: (1) auxiliary data organized
according to a knowledge graph, (2) modules encapsulating different methods for
exploiting auxiliary and unlabeled data, and (3) a distillation stage in which
the ensembled modules are combined into a servable model. We compare TAGLETS
with state-of-the-art transfer learning and semi-supervised learning methods on
four image classification tasks. Our study covers a range of settings, varying
the amount of labeled data and the semantic relatedness of the auxiliary data
to the target task. We find that the intelligent incorporation of auxiliary and
unlabeled data into multiple learning techniques enables TAGLETS to match, and
most often significantly surpass, these alternatives. TAGLETS is available as an
open-source system at github.com/BatsResearch/taglets.
|
[
{
"version": "v1",
"created": "Mon, 8 Nov 2021 20:08:45 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Nov 2021 15:33:24 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2022 23:49:23 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Piriyakulkij",
"Wasu",
""
],
[
"Menghini",
"Cristina",
""
],
[
"Briden",
"Ross",
""
],
[
"Nayak",
"Nihal V.",
""
],
[
"Zhu",
"Jeffrey",
""
],
[
"Raisi",
"Elaheh",
""
],
[
"Bach",
"Stephen H.",
""
]
] |
new_dataset
| 0.991852 |
2205.01133
|
Idris Abdulmumin
|
Idris Abdulmumin, Satya Ranjan Dash, Musa Abdullahi Dawud, Shantipriya
Parida, Shamsuddeen Hassan Muhammad, Ibrahim Sa'id Ahmad, Subhadarshi Panda,
Ond\v{r}ej Bojar, Bashir Shehu Galadanci, Bello Shehu Bello
|
Hausa Visual Genome: A Dataset for Multi-Modal English to Hausa Machine
Translation
|
Accepted at Language Resources and Evaluation Conference 2022
(LREC2022)
| null | null | null |
cs.CL cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-modal Machine Translation (MMT) enables the use of visual information
to enhance the quality of translations. The visual information can serve as a
valuable piece of context information to decrease the ambiguity of input
sentences. Despite the increasing popularity of such a technique, good and
sizeable datasets are scarce, limiting the full extent of their potential.
Hausa, a Chadic language, is a member of the Afro-Asiatic language family. It
is estimated that about 100 to 150 million people speak the language, with more
than 80 million indigenous speakers. This is more than any of the other Chadic
languages. Despite a large number of speakers, the Hausa language is considered
low-resource in natural language processing (NLP). This is due to the absence
of sufficient resources to implement most NLP tasks. While some datasets exist,
they are either scarce, machine-generated, or in the religious domain.
Therefore, there is a need to create training and evaluation data for
implementing machine learning tasks and bridging the research gap in the
language. This work presents the Hausa Visual Genome (HaVG), a dataset that
contains the description of an image or a section within the image in Hausa and
its equivalent in English. To prepare the dataset, we started by translating
the English description of the images in the Hindi Visual Genome (HVG) into
Hausa automatically. Afterward, the synthetic Hausa data was carefully
post-edited considering the respective images. The dataset comprises 32,923
images and their descriptions that are divided into training, development,
test, and challenge test sets. The Hausa Visual Genome is the first dataset of
its kind and can be used for Hausa-English machine translation, multi-modal
research, and image description, among various other natural language
processing and generation tasks.
|
[
{
"version": "v1",
"created": "Mon, 2 May 2022 18:05:35 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2022 16:00:39 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Abdulmumin",
"Idris",
""
],
[
"Dash",
"Satya Ranjan",
""
],
[
"Dawud",
"Musa Abdullahi",
""
],
[
"Parida",
"Shantipriya",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Ahmad",
"Ibrahim Sa'id",
""
],
[
"Panda",
"Subhadarshi",
""
],
[
"Bojar",
"Ondřej",
""
],
[
"Galadanci",
"Bashir Shehu",
""
],
[
"Bello",
"Bello Shehu",
""
]
] |
new_dataset
| 0.999842 |
2205.02793
|
Waqar Hassan Khan
|
Waqar Hassan Khan, Md Al Imran, Ahmed Nafis Fuad, Mohammed Latif
Siddiq, A. B. M. Alim Al Islam
|
Shashthosheba: Dissecting Perception of Bangladeshi People towards
Telemedicine Apps through the Lens of Features of the Apps
|
12 pages
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Bangladesh, a developing country with a large and dense population, has
recently seen significant economic as well as technological developments. The
growth of technology has resulted in a dramatic increase in the number of
smartphone users in Bangladesh, and as such, mobile apps have become an
increasingly important part of people's lives, even encompassing healthcare
services. However, the apps used in healthcare (telemedicine to be specific) in
Bangladesh are yet to be studied from the perspective of their features as per
the voices of the users as well as service providers. Therefore, in this study,
we focus on the features of the telemedicine apps used in Bangladesh. First, we
evaluated the present status of existing telemedicine apps in Bangladesh, as
well as their benefits and drawbacks in the context of HCI. We analyzed
publicly accessible reviews of several Bangladeshi telemedicine apps (N = 14)
to evaluate the user impressions. Additionally, to ascertain the public opinion
of these apps, we performed a survey in which the patients (N = 87)
participated willingly. Our analysis of the collected opinions reveals what
users experience, what they appreciate, and what they are concerned about when
they use telemedicine apps. Additionally, our study demonstrates what users
expect from telemedicine apps, independent of their past experience. Finally,
we explore how to address the issues we discovered and how telemedicine may be
used to effectively offer healthcare services throughout the country. To the
best of our knowledge, this study is the first to analyze the perception of the
people of Bangladesh towards telemedicine apps from the perspective of features
of the apps.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 17:10:26 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2022 12:41:56 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Khan",
"Waqar Hassan",
""
],
[
"Imran",
"Md Al",
""
],
[
"Fuad",
"Ahmed Nafis",
""
],
[
"Siddiq",
"Mohammed Latif",
""
],
[
"Islam",
"A. B. M. Alim Al",
""
]
] |
new_dataset
| 0.999625 |
2205.02887
|
Osman Semih Kayhan
|
Osman Semih Kayhan and Jan C. van Gemert
|
Evaluating Context for Deep Object Detectors
|
4 pages, 5 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Which object detector is suitable for your context-sensitive task? Deep
object detectors exploit scene context for recognition differently. In this
paper, we group object detectors into 3 categories in terms of context use: no
context by cropping the input (RCNN), partial context by cropping the
featuremap (two-stage methods) and full context without any cropping
(single-stage methods). We systematically evaluate the effect of context for
each deep detector category. We create a fully controlled dataset for varying
context and investigate the context for deep detectors. We also evaluate
gradually removing the background context and the foreground object on MS COCO.
We demonstrate that single-stage and two-stage object detectors can and will
use the context by virtue of their large receptive field. Thus, choosing the
best object detector may depend on the application context.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 18:48:29 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Kayhan",
"Osman Semih",
""
],
[
"van Gemert",
"Jan C.",
""
]
] |
new_dataset
| 0.999071 |
2205.02971
|
Daniel Engel
|
Daniel Engel, Yingjie Xue
|
Transferable Cross-Chain Options
| null | null | null | null |
cs.CR cs.DC cs.GT cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An option is a financial agreement between two parties to trade two assets.
One party is given the right, but not the obligation, to complete the swap
before a specified termination time. In today's financial markets, an option is
considered an asset which can itself be transferred: while an option is active,
one party can sell its rights (or obligations) to another. Today's blockchains
support simple options in the form of cross-chain atomic swap protocols where
one party has the choice whether to complete the swap. The options implemented
by these cross-chain protocols are not, however, transferable. This paper
proposes novel distributed protocols for transferable cross-chain options,
where both option owners and providers can sell their positions to third
parties. The protocol ensures that none of the parties can be cheated, that no
unauthorized party can interfere, and that the transfer succeeds if the buyer
and seller faithfully follow the protocol.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 01:01:09 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Engel",
"Daniel",
""
],
[
"Xue",
"Yingjie",
""
]
] |
new_dataset
| 0.989933 |
2205.03018
|
Anoop Kunchukuttan
|
Yash Madhani, Sushane Parthan, Priyanka Bedekar, Ruchi Khapra, Vivek
Seshadri, Anoop Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra
|
Aksharantar: Towards building open transliteration tools for the next
billion users
|
19 pages, 17 tables, 1 figure
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Aksharantar, the largest publicly available transliteration
dataset for 21 Indic languages containing 26 million transliteration pairs. We
build this dataset by mining transliteration pairs from large monolingual and
parallel corpora, as well as collecting transliterations from human annotators
to ensure diversity of words and representation of low-resource languages. We
introduce a new, large, diverse testset for Indic language transliteration
containing 103k word pairs spanning 19 languages that enables fine-grained
analysis of transliteration models.
We train the IndicXlit model on the Aksharantar training set. IndicXlit is a
single transformer-based multilingual transliteration model for roman to Indic
script conversion supporting 21 Indic languages. It achieves state-of-the-art
results on the Dakshina testset, and establishes strong baselines on the
Aksharantar testset released along with this work.
All the datasets and models are publicly available at
https://indicnlp.ai4bharat.org/aksharantar. We hope the availability of these
large-scale, open resources will spur innovation for Indic language
transliteration and downstream applications.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 05:13:12 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Madhani",
"Yash",
""
],
[
"Parthan",
"Sushane",
""
],
[
"Bedekar",
"Priyanka",
""
],
[
"Khapra",
"Ruchi",
""
],
[
"Seshadri",
"Vivek",
""
],
[
"Kunchukuttan",
"Anoop",
""
],
[
"Kumar",
"Pratyush",
""
],
[
"Khapra",
"Mitesh M.",
""
]
] |
new_dataset
| 0.99981 |
2205.03075
|
Zechen Li
|
Zechen Li and Anders S{\o}gaard
|
QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary
Visual Reasoning
|
To appear at Findings of NAACL 2022
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthetic datasets have successfully been used to probe visual
question-answering datasets for their reasoning abilities. CLEVR
(johnson2017clevr), for example, tests a range of visual reasoning abilities.
The questions in CLEVR focus on comparisons of shapes, colors, and sizes,
numerical reasoning, and existence claims. This paper introduces a minimally
biased, diagnostic visual question-answering dataset, QLEVR, that goes beyond
existential and numerical quantification and focuses on more complex quantifiers
and their combinations, e.g., asking whether there are more than two red balls
that are smaller than at least three blue balls in an image. We describe how
the dataset was created and present a first evaluation of state-of-the-art
visual question-answering models, showing that QLEVR presents a formidable
challenge to our current models. Code and Dataset are available at
https://github.com/zechenli03/QLEVR
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 08:51:13 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Li",
"Zechen",
""
],
[
"Søgaard",
"Anders",
""
]
] |
new_dataset
| 0.996993 |
2205.03081
|
Liangjun Song
|
Liangjun Song, Gang Sun, Hongfang Yu, Mohsen Guizani
|
SD-AETO: Service Deployment Enabled Adaptive Edge Task Offloading in MEC
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, edge computing, as an important pillar for future networks,
has been developed rapidly. Task offloading is a key part of edge computing
that can provide computing resources for resource-constrained devices to run
computing-intensive applications, accelerate computing speed and save energy.
An efficient and feasible task offloading scheme can not only greatly improve
the quality of experience (QoE) but also provide strong support and assistance
for 5G/B5G networks, the industrial Internet of Things (IIoT), computing
networks and so on. To achieve these goals, this paper proposes an adaptive
edge task offloading scheme assisted by service deployment (SD-AETO) focusing
on the optimization of the energy utilization ratio (EUR) and the processing
latency. In the pre-implementation stage of the SD-AETO scheme, a service
deployment scheme is invoked to assist with task offloading considering each
service's popularity. The optimal service deployment scheme is obtained by
using the approximate deployment graph (AD-graph). Furthermore, a task
scheduling and queue offloading design procedure is proposed to complete the
SD-AETO scheme based on the task priority. The task priority is generated by
the corresponding service popularity and task offloading direction. Finally, we
analyze our SD-AETO scheme and compare it with related approaches, and the
results show that our scheme has a higher edge offloading rate and lower
resource consumption for massive task scenarios in the edge network.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 08:59:53 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Song",
"Liangjun",
""
],
[
"Sun",
"Gang",
""
],
[
"Yu",
"Hongfang",
""
],
[
"Guizani",
"Mohsen",
""
]
] |
new_dataset
| 0.982435 |
2205.03085
|
Ferdi Kara
|
Ferdi Kara, Hakan Kaya, Halim Yanikomeroglu
|
Power-Time Channel Diversity (PTCD): A Novel Resource-Efficient
Diversity Technique for 6G and Beyond
|
Accepted for IEEE WCL
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diversity techniques have been applied for decades to overcome the effects of
fading, which is one of the most challenging problems in wireless
communications due to the randomness of the wireless channel. However, existing
diversity techniques are resource-inefficient due to orthogonal resource usage,
or they have high power consumption due to multiple antennas and RF-chains,
which present an insurmountable constraint for small devices. To address this,
this letter proposes a novel resource-efficient diversity technique called
power-time channel diversity (PTCD). In PTCD, interleaved copies of the
baseband symbols are transmitted simultaneously with weighted power
coefficients. The PTCD provides a diversity order of the number of copies by
implementing a successive interference canceler at the receiver. To achieve this
diversity, no additional resources are needed; hence, spectral efficient
communication is guaranteed. Additionally, the power consumption at the
transceivers is limited since the PTCD requires only one RF-chain. We provide
an information-theoretic proof that PTCD can achieve any diversity order.
Based on extensive simulations, we reveal that PTCD can also outperform
benchmarks without any additional cost.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2022 09:07:26 GMT"
}
] | 2022-05-09T00:00:00 |
[
[
"Kara",
"Ferdi",
""
],
[
"Kaya",
"Hakan",
""
],
[
"Yanikomeroglu",
"Halim",
""
]
] |
new_dataset
| 0.995642 |